
Section 5: The Jacobian matrix and applications.

S1: Motivation
S2: Jacobian matrix + differentiability
S3: The chain rule
S4: Inverse functions

Images from "Thomas' Calculus" by Thomas, Weir, Hass & Giordano, 2008, Pearson Education, Inc.

S1: Motivation. The main aim of this section is to consider "general" functions, to define a general derivative, and to look at its properties. In fact, we have slowly been doing this. We first considered vector-valued functions of one variable, $f : \mathbb{R} \to \mathbb{R}^n$,
$$f(t) = (f_1(t), \ldots, f_n(t)),$$
and defined the derivative as
$$f'(t) = (f_1'(t), \ldots, f_n'(t)).$$
We then considered real-valued functions of two and three variables, $f : \mathbb{R}^2 \to \mathbb{R}$ and $f : \mathbb{R}^3 \to \mathbb{R}$, and (as we will see later) we may think of the derivatives of these functions, respectively, as
$$\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right), \qquad \nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right).$$

There are still more general functions than the two or three types above. If we combine the elements of each, then we can form "vector-valued functions of many variables". A function $f : \mathbb{R}^m \to \mathbb{R}^n$ ($n > 1$) is a vector-valued function of $m$ variables.

Example 1
$$f\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x + y + z \\ xyz \end{pmatrix}$$
defines a function from $\mathbb{R}^3$ to $\mathbb{R}^2$.

When it comes to these vector-valued functions, we should write vectors as column vectors (essentially because matrices act on column vectors); however, we will use both vertical columns and horizontal $m$-tuple notation. Thus, for example, for the vector $\mathbf{x} \in \mathbb{R}^3$ we will write both
$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \quad \text{or} \quad (x, y, z) \quad \left(\text{and } x\,\mathbf{i} + y\,\mathbf{j} + z\,\mathbf{k}\right),$$
and so we could write $f : \mathbb{R}^3 \to \mathbb{R}^2$ as
$$f\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} f_1(x, y, z) \\ f_2(x, y, z) \end{pmatrix}$$
and
$$f(x, y, z) = (f_1(x, y, z), f_2(x, y, z)) = f_1(x, y, z)\,\mathbf{i} + f_2(x, y, z)\,\mathbf{j},$$
or combinations of columns and $m$-tuples.

In Example 1, the real-valued functions
$$f_1\begin{pmatrix} x \\ y \\ z \end{pmatrix} = x + y + z \quad \text{and} \quad f_2\begin{pmatrix} x \\ y \\ z \end{pmatrix} = xyz$$
are called the co-ordinate or component functions of $f$, and we may write
$$f = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}.$$
Generally, any $f : \mathbb{R}^m \to \mathbb{R}^n$ is determined by $n$ co-ordinate functions $f_1, \ldots, f_n$, and we write
$$f = \begin{pmatrix} f_1(x_1, \ldots, x_m) \\ f_2(x_1, \ldots, x_m) \\ \vdots \\ f_n(x_1, \ldots, x_m) \end{pmatrix}. \tag{1}$$

We shall be most interested in the cases where $f : \mathbb{R}^2 \to \mathbb{R}^2$ or $f : \mathbb{R}^3 \to \mathbb{R}^3$, because this is where the most applications occur and because it will prove extremely useful in our topic on multiple integration. For these special cases we can use the following notation:
$$f(\mathbf{x}) = f(x, y) = (f_1(x, y), f_2(x, y)) = f_1(x, y)\,\mathbf{i} + f_2(x, y)\,\mathbf{j},$$
$$f(\mathbf{x}) = f(x, y, z) = (f_1(x, y, z), f_2(x, y, z), f_3(x, y, z)) = f_1(x, y, z)\,\mathbf{i} + f_2(x, y, z)\,\mathbf{j} + f_3(x, y, z)\,\mathbf{k}.$$

One way of visualizing $f$, say $f : \mathbb{R}^2 \to \mathbb{R}^2$, is to think of $f$ as a transformation between co-ordinate planes, so that $f$ may stretch, compress, rotate, etc., sets in its domain. This viewpoint will be particularly useful when dealing with multiple integration and change of variables.

S2: Jacobian matrix + differentiability. Our first problem is how to define the derivative of a vector-valued function of many variables. Recall that if $f : \mathbb{R}^2 \to \mathbb{R}$ then we can form the directional derivative, i.e.,
$$D_{\mathbf{u}} f = u_1 \frac{\partial f}{\partial x} + u_2 \frac{\partial f}{\partial y} = \nabla f \cdot \mathbf{u},$$
where $\mathbf{u} = (u_1, u_2)$ is a unit vector. Thus, knowledge of the gradient of $f$ gives information about all directional derivatives. Therefore it is reasonable to assume
$$\nabla_{\mathbf{p}} f = \left(\frac{\partial f}{\partial x}(\mathbf{p}), \frac{\partial f}{\partial y}(\mathbf{p})\right)$$
is the derivative of $f$ at $\mathbf{p}$. (The story is more complicated than this, but when we say $f$ is "differentiable" we mean $\nabla f$ represents the derivative, to be discussed a little later.)

More generally, if $f : \mathbb{R}^m \to \mathbb{R}$ then we take the derivative at $\mathbf{p}$ to be the row vector
$$\nabla_{\mathbf{p}} f = \left(\frac{\partial f}{\partial x_1}(\mathbf{p}), \frac{\partial f}{\partial x_2}(\mathbf{p}), \ldots, \frac{\partial f}{\partial x_m}(\mathbf{p})\right).$$
Now take $f : \mathbb{R}^m \to \mathbb{R}^n$, where $f$ is as in equation (1); then the natural candidate for the derivative of $f$ at $\mathbf{p}$ is
$$J_{\mathbf{p}} f = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_m} \\ \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_m} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial f_n}{\partial x_1} & \dfrac{\partial f_n}{\partial x_2} & \cdots & \dfrac{\partial f_n}{\partial x_m} \end{pmatrix},$$
where the partial derivatives are evaluated at $\mathbf{p}$. This $n \times m$ matrix is called the Jacobian matrix of $f$. Writing the function $f$ as a column helps us to get the rows and columns of the Jacobian matrix the right way round. Note that the "Jacobian" usually means the determinant of this matrix when the matrix is square, i.e., when $m = n$.

Example 2 Find the Jacobian matrix of f from Example 1 and evaluate it at (1, 2, 3).

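The working is left for lectures, but as a quick check the matrix can be computed symbolically. A minimal sketch using sympy (an aside, not part of the original notes):

```python
import sympy as sp

# f from Example 1: f(x, y, z) = (x + y + z, xyz)
x, y, z = sp.symbols('x y z')
f = sp.Matrix([x + y + z, x*y*z])

J = f.jacobian([x, y, z])          # the 2 x 3 Jacobian matrix
print(J)                           # Matrix([[1, 1, 1], [y*z, x*z, x*y]])
print(J.subs({x: 1, y: 2, z: 3}))  # Matrix([[1, 1, 1], [6, 3, 2]])
```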

Most of the cases we will be looking at have $m = n = 2$ or $m = n = 3$. Suppose $u = u(x, y)$ and $v = v(x, y)$. If we define $f : \mathbb{R}^2 \to \mathbb{R}^2$ by
$$f\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} u(x, y) \\ v(x, y) \end{pmatrix} \equiv \begin{pmatrix} f_1 \\ f_2 \end{pmatrix},$$
then the Jacobian matrix is
$$Jf = \begin{pmatrix} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} \\ \dfrac{\partial v}{\partial x} & \dfrac{\partial v}{\partial y} \end{pmatrix}$$
and the Jacobian (determinant) is
$$\det(Jf) = \begin{vmatrix} \dfrac{\partial u}{\partial x} & \dfrac{\partial u}{\partial y} \\ \dfrac{\partial v}{\partial x} & \dfrac{\partial v}{\partial y} \end{vmatrix} = \frac{\partial u}{\partial x}\frac{\partial v}{\partial y} - \frac{\partial v}{\partial x}\frac{\partial u}{\partial y}.$$
We often denote $\det(Jf)$ by
$$\frac{\partial(u, v)}{\partial(x, y)}.$$

Example 3 Consider the transformation from polar to Cartesian co-ordinates, where $x = r\cos\theta$ and $y = r\sin\theta$. We have
$$\frac{\partial(x, y)}{\partial(r, \theta)} = \begin{vmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\ \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{vmatrix} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r.$$
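As an aside (not in the original notes), the determinant in Example 3 is quick to confirm with sympy:

```python
import sympy as sp

# Polar-to-Cartesian map of Example 3: (r, theta) -> (r cos(theta), r sin(theta))
r, theta = sp.symbols('r theta', positive=True)
f = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta)])

print(sp.simplify(f.jacobian([r, theta]).det()))  # prints: r
```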

We have already noted that if $f : \mathbb{R}^m \to \mathbb{R}^n$ then the Jacobian matrix at each point $\mathbf{a} \in \mathbb{R}^m$ is an $n \times m$ matrix. Such a matrix $J_{\mathbf{a}} f$ gives us a linear map $D_{\mathbf{a}} f : \mathbb{R}^m \to \mathbb{R}^n$ defined by
$$(D_{\mathbf{a}} f)(\mathbf{x}) := J_{\mathbf{a}} f \cdot \mathbf{x} \quad \text{for all } \mathbf{x} \in \mathbb{R}^m.$$
Note that $\mathbf{x}$ is a column vector here. When we say $f : \mathbb{R}^m \to \mathbb{R}^n$ is differentiable at $\mathbf{q}$ we mean that the affine function
$$A(\mathbf{x}) := f(\mathbf{q}) + J_{\mathbf{q}} f \cdot (\mathbf{x} - \mathbf{q})$$
is a "good" approximation to $f(\mathbf{x})$ near $\mathbf{x} = \mathbf{q}$, in the sense that
$$\lim_{\mathbf{x} \to \mathbf{q}} \frac{\| f(\mathbf{x}) - f(\mathbf{q}) - (J_{\mathbf{q}} f)(\mathbf{x} - \mathbf{q}) \|}{\| \mathbf{x} - \mathbf{q} \|} = 0,$$
where $\| \mathbf{x} - \mathbf{q} \| = \sqrt{(x_1 - q_1)^2 + \cdots + (x_m - q_m)^2}$.

You should compare this to the one-variable case: a function $f : \mathbb{R} \to \mathbb{R}$ is differentiable at $a$ if
$$\lim_{h \to 0} \frac{f(a + h) - f(a)}{h}$$
exists, and we call this limit $f'(a)$. But we could equally well say: $f : \mathbb{R} \to \mathbb{R}$ is differentiable at $a$ if there is a number, written $f'(a)$, for which
$$\lim_{h \to 0} \frac{|f(a + h) - f(a) - f'(a) \cdot h|}{|h|} = 0,$$
because a linear map $L : \mathbb{R} \to \mathbb{R}$ can only operate by multiplication with a number. How do we easily recognize a differentiable function? If all of the entries of the Jacobian matrix of $f$ are continuous functions, then $f$ is differentiable.
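As a numerical illustration (an addition, not part of the notes), one can watch the ratio in the limit above shrink for the function of Example 1 at $\mathbf{q} = (1, 2, 3)$; the direction d and the step sizes h below are arbitrary choices:

```python
import numpy as np

def f(v):
    x, y, z = v
    return np.array([x + y + z, x*y*z])

q = np.array([1.0, 2.0, 3.0])
Jq = np.array([[1.0, 1.0, 1.0],   # partials of x + y + z
               [6.0, 3.0, 2.0]])  # (yz, xz, xy) evaluated at (1, 2, 3)

d = np.array([1.0, -2.0, 0.5])
d /= np.linalg.norm(d)            # an arbitrary unit direction
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    v = q + h*d
    ratio = np.linalg.norm(f(v) - f(q) - Jq @ (v - q)) / np.linalg.norm(v - q)
    print(h, ratio)               # the ratio tends to 0 as h -> 0
```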

Example 4 Write the derivative of the function in Example 1 at (1, 2, 3) as a linear map.

Suppose $f$ and $g$ are two differentiable functions from $\mathbb{R}^m$ to $\mathbb{R}^n$. It is easy to see that the derivative of $f + g$ is the sum of the derivatives of $f$ and $g$. We can take the dot product of $f$ and $g$, obtaining a function from $\mathbb{R}^m$ to $\mathbb{R}$, and then differentiate that. The result is a sort of product rule, but I'll leave you to work out what happens. Since we cannot divide vectors, there cannot be a quotient rule; so, of the standard differentiation rules, that leaves the chain rule.
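If you want to check your answer for that product rule, the following sympy sketch may help; the two maps are chosen purely for illustration, and the rule is written in one possible form:

```python
import sympy as sp

x, y = sp.symbols('x y')
X = [x, y]
f = sp.Matrix([x*y, x + y])       # illustrative choices, not from the notes
g = sp.Matrix([sp.sin(x), y**2])

# Sum rule: J(f + g) = Jf + Jg
diff_sum = (f + g).jacobian(X) - f.jacobian(X) - g.jacobian(X)
assert diff_sum.applyfunc(sp.simplify) == sp.zeros(2, 2)

# One form of the dot-product rule: the row-vector derivative of f . g
dot = sp.Matrix([f.dot(g)])
lhs = dot.jacobian(X)
rhs = g.T * f.jacobian(X) + f.T * g.jacobian(X)
assert (lhs - rhs).applyfunc(sp.simplify) == sp.zeros(1, 2)
```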

S3: The chain rule. Now suppose that $g : \mathbb{R}^m \to \mathbb{R}^s$ and $f : \mathbb{R}^s \to \mathbb{R}^n$. We can form the composition $f \circ g$ by mapping with $g$ first and then following with $f$:
$$\mathbf{x} \mapsto g(\mathbf{x}) \mapsto f(g(\mathbf{x})), \qquad (f \circ g)(\mathbf{x}) := f(g(\mathbf{x})) \tag{2}$$
for all $\mathbf{x} \in \mathbb{R}^m$.

Example 5 Let $g : \mathbb{R}^2 \to \mathbb{R}^2$ and $f : \mathbb{R}^2 \to \mathbb{R}^3$ be defined, respectively, by
$$g\begin{pmatrix} x \\ y \end{pmatrix} := \begin{pmatrix} x + y \\ xy \end{pmatrix} \quad \text{and} \quad f\begin{pmatrix} x \\ y \end{pmatrix} := \begin{pmatrix} \sin x \\ x - y \\ xy \end{pmatrix}.$$
Then $f \circ g$ is defined by
$$(f \circ g)\begin{pmatrix} x \\ y \end{pmatrix} = f\left(g\begin{pmatrix} x \\ y \end{pmatrix}\right) = f\begin{pmatrix} x + y \\ xy \end{pmatrix} = \begin{pmatrix} \sin(x + y) \\ x + y - xy \\ (x + y)(xy) \end{pmatrix}.$$

Let $\mathbf{b} = g(\mathbf{p}) \in \mathbb{R}^s$. If $f$ and $g$ in (2) above are differentiable, then the linear maps $J_{\mathbf{p}}\, g : \mathbb{R}^m \to \mathbb{R}^s$ and $J_{\mathbf{b}}\, f : \mathbb{R}^s \to \mathbb{R}^n$ are defined, and we have the following general result.

Theorem 1 (The Chain Rule) Suppose that $g : \mathbb{R}^m \to \mathbb{R}^s$ and $f : \mathbb{R}^s \to \mathbb{R}^n$ are differentiable. Then
$$J_{\mathbf{p}}(f \circ g) = J_{g(\mathbf{p})} f \cdot J_{\mathbf{p}}\, g.$$
This is again just like the one-variable case, except that now we are multiplying matrices (see below).

Example 6 Consider Example 5:
$$g\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x + y \\ xy \end{pmatrix} \quad \text{and} \quad f\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \sin x \\ x - y \\ xy \end{pmatrix}.$$
Find $J_{\mathbf{p}}(f \circ g)$ where $\mathbf{p} = \begin{pmatrix} a_1 \\ a_2 \end{pmatrix}$. We have
$$J_{\mathbf{p}}\, g = \begin{pmatrix} 1 & 1 \\ y & x \end{pmatrix}\bigg|_{\mathbf{p}} = \begin{pmatrix} 1 & 1 \\ a_2 & a_1 \end{pmatrix}.$$
Also,
$$J_{g(\mathbf{p})} f = \begin{pmatrix} \cos x & 0 \\ 1 & -1 \\ y & x \end{pmatrix}\bigg|_{x = a_1 + a_2,\; y = a_1 a_2} = \begin{pmatrix} \cos(a_1 + a_2) & 0 \\ 1 & -1 \\ a_1 a_2 & a_1 + a_2 \end{pmatrix},$$
and, differentiating the composition found in Example 5 directly,
$$J_{\mathbf{p}}(f \circ g) = \begin{pmatrix} \cos(x + y) & \cos(x + y) \\ 1 - y & 1 - x \\ 2xy + y^2 & x^2 + 2xy \end{pmatrix}\bigg|_{\mathbf{p}}.$$
We observe that
$$\begin{pmatrix} \cos(a_1 + a_2) & \cos(a_1 + a_2) \\ 1 - a_2 & 1 - a_1 \\ 2a_1 a_2 + a_2^2 & a_1^2 + 2a_1 a_2 \end{pmatrix} = \begin{pmatrix} \cos(a_1 + a_2) & 0 \\ 1 & -1 \\ a_1 a_2 & a_1 + a_2 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ a_2 & a_1 \end{pmatrix},$$
as the chain rule predicts.
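As an aside (not in the notes), the whole of Example 6 can be checked in a few lines of sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')
g = sp.Matrix([x + y, x*y])
f = lambda u, v: sp.Matrix([sp.sin(u), u - v, u*v])

J_direct = f(*g).jacobian([x, y])        # J(f o g), computed directly
Jf = f(x, y).jacobian([x, y])            # Jf in f's own variables
Jf_at_g = Jf.subs({x: g[0], y: g[1]}, simultaneous=True)
J_chain = Jf_at_g * g.jacobian([x, y])   # J_{g(p)} f . J_p g

assert (J_direct - J_chain).applyfunc(sp.simplify) == sp.zeros(3, 2)
```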

The one-variable chain rule is a special case of the chain rule that we have just met; the same can be said for the chain rules we saw in earlier sections. Let $x : \mathbb{R} \to \mathbb{R}$ be a differentiable function of $t$ and let $u : \mathbb{R} \to \mathbb{R}$ be a differentiable function of $x$. Then $(u \circ x) : \mathbb{R} \to \mathbb{R}$ is given by $(u \circ x)(t) = u(x(t))$. In the notation of this chapter,
$$J_t(u \circ x) = J_{x(t)} u \cdot J_t x,$$
i.e.,
$$\left.\frac{d}{dt}(u \circ x)\right|_t = \left.\frac{du}{dx}\right|_{x(t)} \left.\frac{dx}{dt}\right|_t.$$
We usually write this as
$$\frac{du}{dt} = \frac{du}{dx}\frac{dx}{dt},$$
keeping in mind that when we write $\frac{du}{dt}$ we are thinking of $u$ as a function of $t$, i.e., $u(x(t))$, and when we write $\frac{du}{dx}$ we are thinking of $u$ as a function of $x$.

Now suppose we have $x = x(t)$, $y = y(t)$ and $z = f(x, y)$. Writing $\mathbf{x}(t) = (x(t), y(t))$, the chain rule gives
$$J_t(f \circ \mathbf{x}) = J_{\mathbf{x}(t)} f \cdot J_t\, \mathbf{x}.$$
Therefore
$$\frac{d}{dt} f(x(t), y(t)) = \begin{pmatrix} \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} \end{pmatrix} \cdot \begin{pmatrix} \dfrac{dx}{dt} \\ \dfrac{dy}{dt} \end{pmatrix},$$
so that
$$\frac{df}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial y}\frac{dy}{dt},$$
which is just what we saw in earlier sections.
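A small sketch (with $x(t)$, $y(t)$ and $f$ chosen only for illustration, not taken from the notes) confirming this formula:

```python
import sympy as sp

t, X, Y = sp.symbols('t X Y')
xt, yt = sp.cos(t), t**2      # illustrative x(t) and y(t)
f = X*Y + sp.sin(Y)           # illustrative f(x, y)

lhs = sp.diff(f.subs({X: xt, Y: yt}), t)                 # df/dt directly
rhs = (sp.diff(f, X).subs({X: xt, Y: yt})*sp.diff(xt, t)
       + sp.diff(f, Y).subs({X: xt, Y: yt})*sp.diff(yt, t))
assert sp.simplify(lhs - rhs) == 0
```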

S4: Inverse functions. In first year (or earlier) you will have met the inverse function theorem, which says essentially that if $f'(a)$ is not zero, then there is a differentiable inverse function $f^{-1}$ defined near $f(a)$, with
$$\left.\frac{d}{dt}\left(f^{-1}\right)\right|_{f(a)} = \frac{1}{f'(a)}.$$
What happens in the multi-variable case?

Let us consider a case where we can write down the inverse. For polar coordinates we have
$$x = r\cos\theta, \qquad y = r\sin\theta,$$
$$r = \sqrt{x^2 + y^2}, \qquad \theta = \arctan\left(\frac{y}{x}\right).$$
Now, differentiating, we obtain
$$\frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2 + y^2}} = \frac{r\cos\theta}{r} = \cos\theta \qquad \text{and} \qquad \frac{\partial x}{\partial r} = \cos\theta,$$
i.e.,
$$\frac{\partial r}{\partial x} \neq \frac{1}{\partial x / \partial r}.$$
We see that the one-variable inverse function theorem does not apply to partial derivatives. However, there is a simple generalisation if we use the multivariable derivative, that is, the Jacobian matrix.
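A quick symbolic confirmation of this (an addition, not in the notes):

```python
import sympy as sp

r, theta, x, y = sp.symbols('r theta x y', positive=True)
dr_dx = sp.diff(sp.sqrt(x**2 + y**2), x)   # = x / sqrt(x^2 + y^2)
dx_dr = sp.diff(r*sp.cos(theta), r)        # = cos(theta)

# In polar form dr/dx is also cos(theta) ...
print(sp.simplify(dr_dx.subs({x: r*sp.cos(theta), y: r*sp.sin(theta)})))
# ... whereas 1/(dx/dr) = 1/cos(theta), a different thing entirely.
print(1/dx_dr)
```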

To continue with the polar coordinate example, define
$$f\begin{pmatrix} r \\ \theta \end{pmatrix} = \begin{pmatrix} x(r, \theta) \\ y(r, \theta) \end{pmatrix} = \begin{pmatrix} r\cos\theta \\ r\sin\theta \end{pmatrix} \tag{3}$$
and
$$g\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} r(x, y) \\ \theta(x, y) \end{pmatrix} = \begin{pmatrix} \sqrt{x^2 + y^2} \\ \arctan(y/x) \end{pmatrix}. \tag{4}$$
Consider
$$(f \circ g)\begin{pmatrix} x \\ y \end{pmatrix} = f\left(g\begin{pmatrix} x \\ y \end{pmatrix}\right) = f\begin{pmatrix} r \\ \theta \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix} = \mathrm{Id}\begin{pmatrix} x \\ y \end{pmatrix}.$$
Therefore $f \circ g = \mathrm{Id}$, the identity operator on $\mathbb{R}^2$. Similarly $g \circ f = \mathrm{Id}$. Recall
$$\mathrm{Id}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x \\ y \end{pmatrix},$$
so that
$$J(\mathrm{Id}) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \text{the } 2 \times 2 \text{ identity matrix}.$$

Thus, by the chain rule,
$$Jf \cdot Jg = J(\mathrm{Id}) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = Jg \cdot Jf,$$
so that $(Jf)^{-1} = Jg$. (For simplicity the points of evaluation have been left out.) Therefore
$$\begin{pmatrix} \dfrac{\partial r}{\partial x} & \dfrac{\partial r}{\partial y} \\ \dfrac{\partial \theta}{\partial x} & \dfrac{\partial \theta}{\partial y} \end{pmatrix} = \begin{pmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\ \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{pmatrix}^{-1}.$$
We can check this directly by substituting $\dfrac{\partial r}{\partial x} = \dfrac{x}{\sqrt{x^2 + y^2}} = \cos\theta$, etc. The same idea works in general:
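As a check (not in the notes), sympy confirms that $Jg$ is the inverse matrix of $Jf$ for the polar pair (3) and (4); atan2 is used in place of $\arctan(y/x)$ so that the derivatives come out cleanly:

```python
import sympy as sp

r, theta, x, y = sp.symbols('r theta x y', positive=True)
f = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta)])      # equation (3)
g = sp.Matrix([sp.sqrt(x**2 + y**2), sp.atan2(y, x)])  # equation (4)

Jf = f.jacobian([r, theta])
Jg = g.jacobian([x, y]).subs({x: r*sp.cos(theta), y: r*sp.sin(theta)})

assert (Jg - Jf.inv()).applyfunc(sp.simplify) == sp.zeros(2, 2)
```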

Theorem 2 (The Inverse Function Theorem) Let $f : \mathbb{R}^n \to \mathbb{R}^n$ be differentiable at $\mathbf{p}$. If $J_{\mathbf{p}} f$ is an invertible matrix then there is an inverse function $f^{-1} : \mathbb{R}^n \to \mathbb{R}^n$ defined in some neighbourhood of $\mathbf{b} = f(\mathbf{p})$, and
$$J_{\mathbf{b}}\, f^{-1} = (J_{\mathbf{p}} f)^{-1}.$$
Note that the inverse function may only exist in a small region around $\mathbf{b} = f(\mathbf{p})$.

Example 7 We saw earlier that for polar coordinates, with the notation of equation (3),
$$Jf = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix},$$
with determinant $r$. So it follows from the inverse function theorem that the inverse function $g$ is differentiable if $r \neq 0$.

Example 8 The function $f : \mathbb{R}^2 \to \mathbb{R}^2$ is given by
$$f\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} x^2 - y^2 \\ x^2 + y^2 \end{pmatrix}.$$
Where is $f$ invertible? Find the Jacobian matrix of $f^{-1}$ where $f$ is invertible.

SOLN: We have
$$Jf = \begin{pmatrix} 2x & -2y \\ 2x & 2y \end{pmatrix} \quad \text{and} \quad \det Jf = 8xy,$$
so $f$ is (locally) invertible everywhere except on the axes. Then
$$J f^{-1} = \begin{pmatrix} 2x & -2y \\ 2x & 2y \end{pmatrix}^{-1} = \frac{1}{8xy}\begin{pmatrix} 2y & 2y \\ -2x & 2x \end{pmatrix} = \frac{1}{4}\begin{pmatrix} x^{-1} & x^{-1} \\ -y^{-1} & y^{-1} \end{pmatrix}.$$
Translating to $(u, v)$ coordinates (taking $x, y > 0$, so that $x = \sqrt{(u+v)/2}$ and $y = \sqrt{(v-u)/2}$), this is
$$J f^{-1} = \frac{\sqrt{2}}{4}\begin{pmatrix} (u+v)^{-1/2} & (u+v)^{-1/2} \\ -(v-u)^{-1/2} & (v-u)^{-1/2} \end{pmatrix}.$$
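And once more as an aside, sympy reproduces the solution, working in the quadrant $x, y > 0$ so that the square-root branch below is the right one:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', positive=True)
F = sp.Matrix([x**2 - y**2, x**2 + y**2])

Jinv = F.jacobian([x, y]).inv().applyfunc(sp.simplify)
print(Jinv)   # entries 1/(4x), 1/(4x), -1/(4y), 1/(4y)

# Translate to (u, v): x = sqrt((u + v)/2), y = sqrt((v - u)/2)
print(Jinv.subs({x: sp.sqrt((u + v)/2), y: sp.sqrt((v - u)/2)})
          .applyfunc(sp.simplify))
```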

Finally, let us apply the inverse function theorem to the Jacobian determinants. We recall that
$$\frac{\partial(r, \theta)}{\partial(x, y)} = \det Jg = \begin{vmatrix} \dfrac{\partial r}{\partial x} & \dfrac{\partial r}{\partial y} \\ \dfrac{\partial \theta}{\partial x} & \dfrac{\partial \theta}{\partial y} \end{vmatrix} \quad \text{and} \quad \frac{\partial(x, y)}{\partial(r, \theta)} = \det Jf = \begin{vmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\ \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{vmatrix}.$$
Since $Jg$ and $Jf$ are inverse matrices, their determinants are reciprocals:
$$\frac{\partial(r, \theta)}{\partial(x, y)} = \frac{1}{\partial(x, y)/\partial(r, \theta)}.$$
This sort of result is true for any change of variables, in any number of dimensions, and will prove very useful in integration.
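As a last aside (not in the original notes), this reciprocal relationship for the polar pair is easy to confirm with sympy:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
f = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta)])

detJf = sp.simplify(f.jacobian([r, theta]).det())  # d(x,y)/d(r,theta) = r
print(detJf, 1/detJf)                              # so d(r,theta)/d(x,y) = 1/r
```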
