Gradient Descent For Linear Regression
Note: [At 6:15 "h(x) = -900 - 0.1x" should be "h(x) = 900 - 0.1x"]
When specifically applied to the case of linear regression, a new form of the
gradient descent equation can be derived. We can substitute our actual cost
function and our actual hypothesis function and modify the equation to:
repeat until convergence: {

    \theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right)

    \theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( \left( h_\theta(x_i) - y_i \right) x_i \right)

}

where m is the size of the training set, \theta_0 a constant that will be changing simultaneously with \theta_1, and x_i, y_i are values of the given training set (data).
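
As a concrete illustration of how both parameters are updated from the same batch of errors, here is a minimal NumPy sketch of a single update step; the function name gradient_descent_step and all variable names are illustrative assumptions, not part of the original notes.

import numpy as np

def gradient_descent_step(theta0, theta1, x, y, alpha):
    # x, y: 1-D arrays holding the m training examples; alpha: learning rate.
    m = len(y)
    errors = (theta0 + theta1 * x) - y      # h_theta(x_i) - y_i for every i
    # Both gradients are computed from the same errors before either
    # parameter is changed, i.e. the update is simultaneous.
    grad0 = errors.sum() / m
    grad1 = (errors * x).sum() / m
    return theta0 - alpha * grad0, theta1 - alpha * grad1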

Note that we have separated out the two cases for \theta_j into separate equations for \theta_0 and \theta_1; and that for \theta_1 we are multiplying x_i at the end due to the derivative. The following is a derivation of \frac{\partial}{\partial \theta_j} J(\theta) for a single example:
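
One standard way to carry out that derivation, assuming the single-example cost J(\theta) = \frac{1}{2}\left(h_\theta(x) - y\right)^2 and writing the hypothesis as h_\theta(x) = \theta_0 x_0 + \theta_1 x_1 with x_0 = 1, is:

\begin{aligned}
\frac{\partial}{\partial \theta_j} J(\theta)
  &= \frac{\partial}{\partial \theta_j} \, \frac{1}{2}\left(h_\theta(x) - y\right)^2 \\
  &= 2 \cdot \frac{1}{2}\left(h_\theta(x) - y\right) \cdot \frac{\partial}{\partial \theta_j}\left(h_\theta(x) - y\right) \\
  &= \left(h_\theta(x) - y\right) \cdot \frac{\partial}{\partial \theta_j}\left(\theta_0 x_0 + \theta_1 x_1 - y\right) \\
  &= \left(h_\theta(x) - y\right) x_j
\end{aligned}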

The point of all this is that if we start with a guess for our hypothesis and then
repeatedly apply these gradient descent equations, our hypothesis will become
more and more accurate.
So, this is simply gradient descent on the original cost function J. This method looks at every example in the entire training set on every step, and is called batch gradient descent. Note that, while gradient descent can be susceptible to local minima in general, the optimization problem we have posed here for linear regression has only one global optimum and no other local optima; thus gradient descent always converges (assuming the learning rate α is not too large) to the global minimum. Indeed, J is a convex quadratic function. Here is an example of gradient descent as it is run to minimize a quadratic function.

The ellipses shown above are the contours of a quadratic function. Also shown is
the trajectory taken by gradient descent, which was initialized at (48,30). The x’s
in the figure (joined by straight lines) mark the successive values of θ that gradient
descent went through as it converged to its minimum.
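
Putting the pieces together, the following is a small self-contained sketch of batch gradient descent run to (approximate) convergence; the synthetic data, learning rate, and stopping tolerance are illustrative assumptions rather than values from the notes.

import numpy as np

# Synthetic data (assumed for this sketch): y is roughly 2 + 3x plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5, size=50)

theta0, theta1 = 0.0, 0.0          # initial guess for the hypothesis
alpha, m = 0.02, len(y)            # learning rate and training-set size
prev_cost = float("inf")

for step in range(10_000):
    errors = (theta0 + theta1 * x) - y          # uses every example: "batch"
    cost = (errors ** 2).sum() / (2 * m)        # J at the current parameters
    if prev_cost - cost < 1e-9:                 # J has stopped decreasing
        break
    prev_cost = cost
    # Simultaneous update of both parameters.
    theta0 -= alpha * errors.sum() / m
    theta1 -= alpha * (errors * x).sum() / m

print(theta0, theta1)   # should end up close to the intercept 2 and slope 3

Because J is a convex quadratic, any sufficiently small α drives this loop to the single global minimum regardless of the initial guess.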


