- 10 Jul '16 14:28 / 5 edits

In light of recent topics on infinitesimals and such, I was hoping to get some clarification on modeling using differentials.

For example, let's look at the derivation of velocity in polar coordinates for curvilinear motion (r, Theta coordinates).

Position: **r** = r **u_r**

Velocity: d/dt(**r**) = d/dt(r) **u_r** + r d/dt(**u_r**)

The model has to determine the rate of change in direction:

*d/dt*(**u_r**)

They do this by saying a small change in the angle Theta results in a change in **u_r** in the direction of **u_Theta**.

They then define a new vector **u_r'** that follows the relationship:

**u_r'** = **u_r** + **δu_r**

Then, using the small-angle approximation:

**δu_r** = δTheta **u_Theta**
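The small-angle step can be sanity-checked numerically; a minimal Python sketch (the angle value is just a made-up small number):

```python
import math

dtheta = 1e-3                         # a made-up small rotation angle (stand-in for dTheta)
u_r = (1.0, 0.0)                      # u_r at Theta = 0
u_r_new = (math.cos(dtheta), math.sin(dtheta))   # u_r after rotating by dtheta

# exact change in u_r vs. the model's approximation dTheta * u_Theta, with u_Theta = (0, 1)
du_exact = (u_r_new[0] - u_r[0], u_r_new[1] - u_r[1])
du_model = (0.0, dtheta)
err = math.hypot(du_exact[0] - du_model[0], du_exact[1] - du_model[1])
print(err / dtheta**2)                # ~ 0.5: the error is second order in dtheta
```

The discrepancy between the exact change and the model's orthogonal increment shrinks like dtheta squared, which is why the approximation survives the limit.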

What I'm trying to wrap my head around:

How does **u_r'** have the same magnitude as **u_r** (= 1) if **δu_r** is some small quantity orthogonal to **u_r** (in the direction of **u_Theta**)?

Philosophically, is this correct because of the nature of infinitesimals? Because it doesn't seem to hold absolutely true in mathematics.

- 10 Jul '16 22:52 / 1 edit

*Originally posted by joe shmo*

The key word is *orthogonal*. Would you expect a force along the x-direction to change the position along the y or z directions? Similarly, in polar coordinates there is no reason that a force along the radial direction would affect what direction a vector points in. That the moment is at right angles to the radial direction should tell you that it does not affect the radial component of the vector. It's like putting a force on a spinning top: the top just keeps spinning (apart from internal losses), but the direction the axis is pointing can change when you try to turn it.

- 11 Jul '16 00:49

*Originally posted by DeepThought*

Sorry, but I'm not quite following. My real concern (at least I believe so) is with the relationship:

**u_r'** = **u_r** + **δu_r**

In other words, because of the necessary orthogonality: 1 = 1 + something.

Why is that OK for the purpose of developing the model?

- 12 Jul '16 01:44

*Originally posted by DeepThought*

Maybe you've given up on me already and I'm just spinning my wheels here, but I see what you are saying. However, it still leaves me with some questions.

We derive this relationship: **u_r'** = **u_r** + **δu_r**

I see what you mean now about "the key word is orthogonal". If **u_r'** = **u_r** were to hold absolutely true, the angle between **δu_r** and **u_r** would necessarily need to be obtuse. This would create a component of **δu_r** in the opposite direction of **u_r**. I believe you are saying this is not "physically consistent", and I agree: a force applied at right angles (causing the rotation of **r**) will not change **u_r**.

However, **u_r'** = **u_r** + **δu_r** is not mathematically true if **u_r'** = **u_r** and **δu_r** is perpendicular to **u_r**.

And, even further, the small-angle approximation (|**δu_r**| = |**u_r'**| sin(δTheta) ≈ |**u_r'**| δTheta) is just that as well... another approximation.
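The size of that second approximation can also be seen numerically; a quick Python sketch with an arbitrary small angle:

```python
import math

dtheta = 1e-3                    # arbitrary small angle
exact = 1.0 * math.sin(dtheta)   # |u_r'| * sin(dTheta), with |u_r'| = 1
approx = dtheta                  # the small-angle replacement sin(dTheta) ~ dTheta
rel_err = (approx - exact) / exact
print(rel_err)                   # ~ dtheta**2 / 6: vanishes much faster than dtheta
```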

I guess my point is that two mathematical approximations are necessary for the "true" physical theory. Are there any repercussions for the theory if no mathematical approximations are made?

- 12 Jul '16 10:32

*Originally posted by joe shmo*

My main problem is that I don't know what you mean by **u_r'**: is this meant to be a derivative or just another vector? Also, is **u_r** the radial component of a vector, or is it meant to be the entire vector?

Assuming **u_r** is a vector, u_r its magnitude, and that **δu_r** is infinitesimal, then:

u_r' = |**u_r'**| = |**u_r** + **δu_r**|

so,

u_r'^2 = **u_r'**.**u_r'** (understood as the vector inner product)

u_r'^2 = **u_r**.**u_r** + 2 **u_r**.**δu_r** + **δu_r**.**δu_r**

u_r'^2 = u_r^2 + 2 **u_r**.**δu_r** + δu_r^2
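This expansion can be illustrated numerically with a finite orthogonal increment; a Python sketch, where d is an arbitrary small number standing in for |**δu_r**|:

```python
import math

d = 1e-4                              # arbitrary small number standing in for |du_r|
u_r = (1.0, 0.0)                      # unit vector
du_r = (0.0, d)                       # orthogonal increment, so u_r . du_r = 0
u_r_prime = (u_r[0] + du_r[0], u_r[1] + du_r[1])

mag = math.hypot(*u_r_prime)          # |u_r + du_r| = sqrt(1 + d**2)
print(mag - 1.0)                      # ~ d**2 / 2: second order, hence negligible
```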

If **δu_r** is orthogonal to **u_r** then the middle term is zero. The last term is second order in **δu_r**, and the notation you are using implies to me that it can be neglected, so we get:

u_r' = u_r

- 13 Jul '16 01:40
*Originally posted by DeepThought*

**r** is the vector; **u_r** is the unit vector of **r**.

**u_r'** is the unit vector for **r'**, describing the position of a particle after it moves through some angle δTheta in time δt.

What I'm saying is that the second term in the inner product cannot actually be zero: because **u_r'** and **u_r** are both unit vectors with the same magnitude (namely 1 unit), the only way the relationship can hold true mathematically is if **δu_r** is NOT orthogonal to **u_r**.

So I guess what I'm trying to ask is why the condition of **δu_r**'s orthogonality outweighs |**u_r'**| = |**u_r**| = 1, where the only strict way of satisfying the latter is to impose that **δu_r** makes an obtuse angle with **u_r**.

Then, what are the consequences of neglecting higher-order differentials for the remainder of the derivation in general? How do we know when and where it's OK to neglect them?

Then there is the business of the small-angle approximation in the derivation. What type of equation can be developed without any approximations?

- 13 Jul '16 02:53

*Originally posted by joe shmo*

You are treating what you call theta as a large angle, and it is not only small, but we are going to take the limit that it goes to zero. Let's choose our axes so that the vector **u_r** points along the x-axis, so its components in Cartesian coordinates are (1, 0). Then the vector **u_r'** has components (cos(δw), sin(δw)), where δw is the small angle (intended to be read as delta omega, but actually the roman letter w) between the vectors (you call this theta, but it should really be δw). It is now easy to calculate **δu_r'**:

**δu_r'** = (cos(δw), sin(δw)) - (1, 0) = (cos(δw) - 1, sin(δw)) ~ (-δw^2/2 + O(δw^4), δw)

Now let's take the dot product with **u_r**:

**δu_r'**.**u_r** = (-δw^2/2 + O(δw^4), δw) . (1, 0) = -δw^2/2 + O(δw^4)
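A quick numerical check that this dot product really has no first-order part; a Python sketch with arbitrary sample angles:

```python
import math

def dot_with_u_r(dw):
    # du_r' = (cos(dw) - 1, sin(dw)); u_r = (1, 0), so only the first
    # component survives the dot product
    return (math.cos(dw) - 1.0) * 1.0 + math.sin(dw) * 0.0

for dw in (1e-1, 1e-2, 1e-3):
    print(dw, dot_with_u_r(dw) / dw)   # ratio -> 0: the dot product has no first-order part
```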

So, to first order in δw, the dot product is zero. Yes, you are right that we are relying on an approximation - but you are trying to do calculus, and the "error term" goes to zero as δw goes to zero if it is anything more than first order. Let w' be the angular velocity, so that w' = limit (δt -> 0) δw/δt. We have:

d**u_r**/dt = limit (δt -> 0) **δu_r'**/δt = limit (δt -> 0) (-δw^2/2 + O(δw^4), 0)/δt = limit (δt -> 0) (-(δw/δt)δw/2, 0) = (-limit (δt -> 0) δt w'^2/2, 0) = (0, 0)

So the rate of change of the unit vector is zero, because the change in the vector is second order in the angle and only first order terms survive.

A more rigorous argument can be made by actually using polar coordinates (what you present above has the components of the vectors in Cartesian coordinates parameterised by the radial and angular coordinates). In a properly constructed coordinate system the unit vectors in the radial direction and in the direction of increasing angle*are automatically orthogonal*.

The components of the vector in plane polar coordinates are (r, w). The components of the *same* vector in Cartesian coordinates are (r cos(w), r sin(w)). The rate of change of the vector can be worked out just by differentiating (using r' and w' to mean the derivatives):

cartesian_components = (dx/dt, dy/dt) = (d(r cos(w))/dt, d(r sin(w))/dt) = (r'cos(w) - r sin(w)w', r'sin(w) + r cos(w)w')

polar_components = (dr/dt, dw/dt) = (r', w' )

If we set r' = 0 then:

cartesian_components = (-y, x) w'

polar_components = (0, w' )
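The differentiation above can be checked numerically; a minimal Python sketch using a made-up trajectory with r constant (so r' = 0) and w = 0.3 t (both values arbitrary):

```python
import math

def pos(t):
    # made-up trajectory: r(t) = 2 (constant, so r' = 0), w(t) = 0.3 * t
    r, w = 2.0, 0.3 * t
    return (r * math.cos(w), r * math.sin(w))

t0, h = 1.0, 1e-6                  # sample time and finite-difference step
x0, y0 = pos(t0)
x1, y1 = pos(t0 + h)
vx, vy = (x1 - x0) / h, (y1 - y0) / h
wdot = 0.3                         # w' for this trajectory
print(vx, -y0 * wdot)              # these agree: dx/dt ~ -y w' when r' = 0
print(vy, x0 * wdot)               # and dy/dt ~ x w'
```

The finite-difference velocity matches (-y, x) w', the pure-rotation case of the formula above.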

I'm guessing you're looking at a book on elementary vector calculus. The problem with the treatment in those books is that they do not cover changes of coordinate system properly (their vector spaces have unit vectors that correspond to Cartesian axes even when they are working in polar or cylindrical coordinates). More advanced books, which call themselves things like "Introduction to Differential Geometry", explain this aspect far better.

- 13 Jul '16 23:21

*Originally posted by DeepThought*

Thanks.

This kind of derivation is in at least three of my texts: Physics for Scientists and Engineers; Engineering Mechanics - Dynamics; and, the most advanced of the three, Classical Mechanics by John R. Taylor.

I've never actually taken Classical Mechanics... I purchased the text a year or so after college graduation just to peruse what I missed. They all use similar notation, mainly just changing the coordinate variable names (Classical Mechanics uses (r, phi); Dynamics uses (r, Theta); etc.).

None of the derivations parameterize from the Cartesian coordinates as you did here (as far as I can tell).

Since you put it out there I would like to ask about this:

**δu_r'** = (cos(δw), sin(δw)) - (1, 0) = (cos(δw) - 1, sin(δw)) ~ (-δw^2/2 + O(δw^4), δw)

cos(δw) - 1 ~ -δw^2/2 + O(δw^4). This component is derived from the Taylor series, correct? I've never really used this "O" notation. Is it a way of saying "neglecting the remainder of the series"?

And the δw is just the small-angle approximation.

Can you elaborate on this: **δu_r'**.**u_r** = (-δw^2/2 + O(δw^4), δw) . (1, 0) = -δw^2/2 + O(δw^4)

"So, to first order in δw the dot product is zero"... I'm missing the "aha", so to speak. Is it because δw goes to zero, so we are left with (0, 0), or 0? Trying a stab at it.

I'm still digesting the rest, but again, thanks.

- 14 Jul '16 17:35 / 1 edit

*Originally posted by joe shmo*

Yes, that's the Taylor series for cos(δw). The purpose of big-O notation is just to give an indication of the size of the next term; it's conventional to drop constant factors. I'd suggest looking at the Wikipedia page [1].

Regarding the second question - why the dot product goes to zero - let's differentiate y = x^3 by hand. To save having to use Unicode I'll use h for δx.

dy/dx = lim (h -> 0) (y(x + h) - y(x))/h

When I did the calculation above, I first just focused on the numerator; here the numerator is:

y(x + h) - y(x) = (x + h)^3 - x^3 = (x^3 + 3x^2h + 3xh^2 + h^3) - x^3 = 3x^2h + 3xh^2 + h^3 = 3hx^2 + O(h^2)

If we set h to zero at this stage we just get zero, which is not right; we need to divide through by h first:

dy/dx = lim(h -> 0) (3x^2 + 3xh + h^2) = lim (h -> 0) 3x^2 + O(h) = 3x^2

The only difference between this and the calculation above is that there we were differentiating a vector, so there are two components. The dot product term vanishes because it is linear in δw and this is taken to zero. A vector which is zero is still a vector; it means both components are zero. Technically one must write (0, 0), but as an abuse of notation 0 will do fine.
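The same limit can be watched numerically; a quick Python sketch (x = 2 is an arbitrary sample point):

```python
def numerator(x, h):
    # y(x + h) - y(x) for y = x**3
    return (x + h)**3 - x**3

x = 2.0
for h in (1e-1, 1e-3, 1e-5):
    print(numerator(x, h) / h)     # approaches dy/dx = 3*x**2 = 12 as h -> 0
```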

[1] https://en.wikipedia.org/wiki/Big_O_notation

- 23 Jul '16 21:45 / 3 edits

*Originally posted by DeepThought*

Sorry it took me a while to get back to this; I went on vacation. I don't want you to think I just gave up on understanding you.

I think I'm mixing a few things up. Just for clarification: the dot product **δu_r'**.**u_r** has nothing to do with the derivative d**u_r**/dt... correct? The whole reason you brought that up is to show that, in the limit as δw goes to zero, **δu_r'** is perpendicular to **u_r**.

**δu_r'**.**u_r** = (-δw^2/2 + O(δw^4), δw) . (1, 0) = -δw^2/2 + O(δw^4)

which ends as:

limit (δw -> 0) (**δu_r'**.**u_r**) = **(0, 0)**.**(1, 0)** = 0 + 0 = 0, which shows that **δu_r'** ⊥ **u_r**

As a side note: does that imply that in reality motion between A and B along any path is not smooth and continuous, but the summation of a series of infinitesimally small orthogonal changes in direction?

Next, I'm missing something that is leading me to this contradiction: you derive

d**u_r**/dt = limit (δt -> 0) **δu_r'**/δt = limit (δt -> 0) (-δw^2/2 + O(δw^4), 0)/δt = limit (δt -> 0) (-(δw/δt)δw/2, 0) = (-limit (δt -> 0) δt w'^2/2, 0) = (0, 0)

The texts, however, indicate the following (changing the variables to match yours):

d**u_r**/dt = limit (δt -> 0) (δw/δt) **u_w** = (dw/dt) **u_w** ≠ **(0, 0)**

What am I missing here?