- 30 Jun '16 08:56 / 14 edits

I know the standard theory is that the universe is finite in size but unbounded, but just suppose you were told by a reliable source that the universe is infinite and unbounded and that space is infinite in all directions.

Now, does it make any sense to say there might exist, say, a planet, that is literally an *infinite* distance from your current location?

Or does it only make sense to talk about, say, a planet, being a finite distance away from you despite it being correct to say space is 'infinite'?

If it makes no sense to say something can exist literally 'infinitely' far away, what argument can you give to convincingly explain *why* it makes no sense?

Also, does it make any sense to say there might exist, say, a planet, that is literally an infinite distance from your current location *in a particular direction*, such as south? I wonder in particular whether that is nonsense because, when we imagine the direction of something away from us, we imagine the shortest line, specifically a straight line, that can be drawn from point A to point B, and note the direction that line radiates away from point A. But if that something is infinitely far from A, you cannot imagine that straight line, since the line would have to be infinitely long and you cannot imagine something infinitely long.

Alternatively, the direction could be viewed as the direction light must take from point A to reach point B in the least time; but if A and B are infinitely far apart, that light will take an infinite amount of time to get there, which I assume is the same as saying it will never get there. In that case, because the light will not get there from any direction, you have nothing to discriminate between the different directions, and thus it is meaningless to say B is in any particular direction, such as 'south'. Does that really make sense?

The reason I want to know is that I discovered it has some relevance to the research I am doing into probability or, to be more specific, some work I am currently doing on a problem of defining the prior probability (assuming that those particular prior probabilities are definable, i.e. exist in this case, which I haven't yet worked out) of there existing at least n things in some category c.

- 30 Jun '16 09:42 / 13 edits

-on a related topic;

I have recently become pretty suspicious of the concept of infinity and I now very seriously wonder if the whole concept is total nonsense (especially since the last thread, with this business of 'infinitesimals' being " 1/∞ "; I particularly don't like this business of " 1/∞ ≠ 0 ", and that is assuming " 1/∞ " makes sense!), perhaps even total nonsense within valid pure mathematics, although I have yet to form any firm opinion on whether infinity really *is* nonsense; for now, I merely wonder, that is all.

Perhaps there isn't such a thing as infinity!?

Does anyone here actually believe there isn't such a thing as infinity?

If so, have you got any argument that you think explains why the concept of infinity is nonsense?

Do you think you can show some inconsistency/self-contradiction in the concept of infinity? If so, I would like to see your explanation for such inconsistency/self-contradiction here.

I suppose if there isn't such a thing as infinity then, assuming f(x) increases with x,

" Lim x→∞ f(x) = y "

, because infinity doesn't exist and therefore nothing can "tend to infinity", shouldn't be interpreted as;

"f(x) tends to y as x tends to infinity"

but rather;

"there exists a finite x that is such that f(x) ≈ y and there exists no finite z where z is such that it is both z>x and f(z) < f(x) "
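As a very rough numerical sketch of that reading (hypothetical Python: f(x) = 1 − 1/x stands in for an increasing bounded function, a tolerance stands in for the vague "≈", and only finitely many sample points are checked):

```python
# Sketch of the finite-x reading of a limit: find a finite x with
# f(x) within a tolerance of y, then check that no sampled z > x
# has f(z) < f(x). The function, tolerance and sample grid are all
# illustrative choices, not anything fixed by the discussion above.

def f(x):
    return 1 - 1 / x  # increasing and bounded above by 1

def finite_limit_reading(f, y, tol=1e-9):
    xs = [10 ** k for k in range(1, 13)]  # finite sample points only
    for i, x in enumerate(xs):
        if abs(f(x) - y) < tol:           # "f(x) ≈ y" made concrete
            # no later sampled z may fall back below f(x)
            return all(f(z) >= f(x) for z in xs[i + 1:])
    return False

print(finite_limit_reading(f, 1.0))  # True: f levels off near 1
print(finite_limit_reading(f, 3.0))  # False: f never gets near 3
```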

-just a bit awkward perhaps but doable I think.

- 30 Jun '16 10:52 / 1 edit

*Originally posted by humy***I know the standard theory is that the universe is finite in size but unbounded ....**

That is a surprisingly popular guess, but it is not a theory and there is no evidence whatsoever to support it. It certainly doesn't qualify as 'the standard' either.

**Now, does it make any sense to say there might exist, say, a planet, that is literally an *infinite* distance from your current location?**

No.

For simplicity, just think of one dimension - the real number line. It is impossible to find two real numbers whose difference is infinity.

- 30 Jun '16 11:00 / 1 edit

*Originally posted by humy***I have recently become pretty suspicious of the concept of infinity and I do now very seriously wonder if the whole concept of infinity is total nonsense...**

I do not know if this is what you are doing, but the most common error is to think infinity is a number. It isn't.

**Perhaps there isn't such a thing as infinity!?**

There isn't a number that is infinity. The concept is real.

**because infinity doesn't exist and therefore nothing can "tend to infinity", shouldn't be interpreted as;**

"f(x) tends to y as x tends to infinity"

but rather;

"there exists a finite x that is such that f(x) ≈ y and there exists no finite z where z is such that it is both z>x and f(z) < f(x) "

-just a bit awkward perhaps but doable I think.

Simply say "f(x) tends to y as x grows larger". Problem solved.
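That rewording can be illustrated numerically; in this sketch (the function f(x) = 1/x, the tolerances and the sample points are my own illustrative choices) only finite values of x are ever inspected:

```python
# "f(x) tends to 0 as x grows larger" for f(x) = 1/x: for any
# tolerance eps we can name a finite threshold N = 1/eps beyond
# which |f(x) - 0| stays below eps. No infinite quantity appears.

def f(x):
    return 1.0 / x

for eps in (1e-2, 1e-4, 1e-6):
    N = 1.0 / eps                      # finite threshold for this eps
    samples = [N * k for k in (2, 10, 100)]
    assert all(abs(f(x) - 0.0) < eps for x in samples)
    print(f"eps={eps}: |f(x)| < eps for all sampled x > {N}")
```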

The whole 'limit' system in calculus is precisely to get around actually using infinity as a number.

- 30 Jun '16 17:03

*Originally posted by vivify***Since scientists believe that time is an actual physical thing, wouldn't that be an example of infinity? Or is time finite, and doomed to end when the universe does?**

Scientists don't believe time is an actual physical thing. They believe it is a dimension very similar to the space dimensions. But yes, if time is infinite (an unknown at this point) then the same issue would arise; as I say above, you still would not be able to find two points in time between which an infinite amount of time had passed.

An interesting side issue is that if time and/or space are infinitely divisible, then it is entirely possible that there are an infinite number of intervals between two points in space-time. So humy could then have his 'infinity between points' on a standard ruler. But one could not pick a scale on which that infinity could be measured to be infinite; all possible scales would still show the distance to be finite.

- 30 Jun '16 18:20 / 2 edits

*Originally posted by twhitehead***Simply say "f(x) tends to y as x grows larger". Problem solved.**

If

f(x) = 1 – 1/x

then

"f(x) tends to y as x grows larger"

would be true for y = 0.99999999999999 *at least until* we reach a certain arbitrary 'high' finite value of x.

I see a need here to define the meaning of " Lim x→∞ f(x) = y " in such a way as to rule out that "*at least until*" caveat above!

Hence the reason why I said it in the more cumbersome and complex way of;

"there exists a finite x that is such that f(x) ≈ y and there exists no finite z where z is such that it is both z>x and f(z) < f(x) "

as that implies we are not to permit that "at least until" above; that was my thinking, although I haven't worked out how to avoid the vague " f(x) ≈ y " bit above, which I don't like.

Also, it isn't actually totally clear to me what is meant by "tends to" in "f(x) tends to y as x grows larger", and I see the need either to clarify what "tends to" means, which I think is just a bit vague, or, as I did with;

"there exists a finite x that is such that f(x) ≈ y and there exists no finite z where z is such that it is both z>x and f(z) < f(x) "

, sidestep the problem by avoiding the use of the term 'tends to' altogether.

Why can't we say, for example, "f(x) tends to 3 as x grows larger" for f(x) = 1 – 1/x ? I don't know of any *formal* definition of "tends to" that would imply to me that this is wrong, or why it is wrong, even though I accept it is wrong.

- 30 Jun '16 18:47 / 3 edits
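Regarding that "tends to 3" question: one concrete way to see why it fails is that |f(x) − 3| can never fall below 2, whereas |f(x) − 1| eventually falls below any tolerance. A quick Python check (the sample points are arbitrary illustrative choices):

```python
# f(x) = 1 - 1/x is always strictly below 1 for x > 0, so
# |f(x) - 3| = 2 + 1/x > 2 for every such x, while
# |f(x) - 1| = 1/x can be made as small as we like.

def f(x):
    return 1 - 1 / x

xs = [10 ** k for k in range(1, 9)]       # arbitrary large samples

gap_to_3 = min(abs(f(x) - 3) for x in xs)
gap_to_1 = min(abs(f(x) - 1) for x in xs)

print(gap_to_3 > 2)      # True: the gap to 3 never shrinks below 2
print(gap_to_1 < 1e-7)   # True: the gap to 1 shrinks below any tolerance
```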

*Originally posted by humy***...**

"there exists a finite x that is such that f(x) ≈ y and there exists no finite z where z is such that it is both z>x and f(z) < f(x) "

...

-that was my thinking although haven't worked out how to avoid the vague " f(x) ≈ y " bit above which I don't like.

...

I just thought of a solution to that, and it's so simple I fail to see why I didn't see it before!

It is:

" Lim x→∞ f(x) = y "

means

"For any two finite real numbers x and z where x>z, |y – f(x)| < |y – f(z)| "

or, more formally;

Lim x→∞ f(x) = y ⇒ ∀x ∈ ℝ ∧ ∀z ∈ ℝ where x>z: |y – f(x)| < |y – f(z)|

-that's a LOT better and for several reasons!

(if someone can see a better way of expressing that more formally, esp. to avoid the word "where", please show me here)

- 30 Jun '16 19:30

*Originally posted by humy***Hence the reason why I said it in the more cumbersome and complex way of;**

**I don't know of any *formal* definition of "tends to" that would imply to me that is wrong or why it is wrong even though I accept it is wrong.**

I think you will find, if you look into it, that what you are trying to say has already been said and put into the definition of limits, i.e. infinity itself is not actually used in calculus other than as a symbol representing what you are trying to say. Even my shortened version would need a rigorous definition similar to what you have said, but you don't need to actually say it all every time you want to write a limit. You say it once, then thereafter use the symbols. But no need to invent new symbols; that would merely cause unnecessary confusion.

http://math.stackexchange.com/questions/1429212/meaning-of-tends-rightarrow

"Definition: To say that x tends to zero is to say that x varies in such a way that its numerical value becomes and remains less than any positive number that we may choose, no matter how small."

Higher Algebra by Barnard and Child

- 30 Jun '16 19:32 / 3 edits

*Originally posted by humy***(if someone can see a better way of expressing that more formally esp to avoid the word "where", please show me here)**

Not sure if it's different, but you can find one on Wikipedia:

https://en.wikipedia.org/wiki/Limit_of_a_function#Functions_of_a_single_variable

(notice that infinity is just a special case).
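To make the "special case" concrete: at infinity the definition reads "for every ε > 0, there exists a finite N such that x > N implies |f(x) − L| < ε". Here is a small Python sanity-checker of that shape (the finite ε list, candidate thresholds and sample grid are my own simplifications; a real proof quantifies over all ε and all x):

```python
# Sanity-check "lim_{x->inf} f(x) = L" in epsilon-N form: for each
# tested eps > 0, look for a finite threshold N such that every
# sampled x > N satisfies |f(x) - L| < eps. Infinity never appears
# as a value; only finite N and finite x are used.

def limit_at_infinity(f, L,
                      eps_list=(1e-1, 1e-3, 1e-6),
                      N_candidates=(10.0, 1e3, 1e6, 1e9)):
    for eps in eps_list:
        if not any(
            all(abs(f(N * k) - L) < eps for k in (2, 5, 50))
            for N in N_candidates
        ):
            return False  # no tested threshold works for this eps
    return True

print(limit_at_infinity(lambda x: 1 - 1 / x, 1.0))  # True
print(limit_at_infinity(lambda x: 1 - 1 / x, 3.0))  # False
```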

Notice also that infinity itself doesn't actually occur in the definition other than as a symbol. Mathematicians have a healthy respect for infinity and avoid it like the plague.

- 30 Jun '16 21:15 / 4 edits

*Originally posted by twhitehead***Not sure if its different, but you can find one on Wikipedia:**

https://en.wikipedia.org/wiki/Limit_of_a_function#Functions_of_a_single_variable

(notice that infinity is just a special case).

...

Ah, I forgot about the requirement that f(x) need not necessarily be well defined (or defined at all) at the very input x = p that gives the limit L as output.

Your link, where it says;

" For every real ε > 0, there exists a real δ > 0 such that for all real x, 0 < | x − p | < δ implies | f(x) − L | < ε. ",

with "{...}" indicating subscript, as I cannot write subscripts here, can be more formally written as:

f(x) : ( ∀ε ∈ ℝ{> 0} : ( ∃δ ∈ ℝ{> 0} : ( ∀x ∈ ℝ, 0 < | x − p | < δ ⇒ | f(x) − L | < ε ) ) )

(at least I hope I have written that down with exactly correct conventional notation; please will somebody correct me if I haven't)

But I find that an odd way they expressed it and would naturally prefer:

" For every real x ≠ p, there exists a real z such that z is closer to p than x is to p and | f(z) − L | < | f(x) − L | "

which can be more formally written as;

∀x ∈ ℝ{≠p} ( ∃z ∈ ℝ : |z − p| < |x − p| ∧ |f(z) − L| < |f(x) − L| )

So now we can write;

lim x→p f(x) = L ⇒ ∀x ∈ ℝ{≠p} ( ∃z ∈ ℝ : |z − p| < |x − p| ∧ |f(z) − L| < |f(x) − L| )

adapting that for p = +infinity by making all the necessary adjustments, I can write:

lim x→∞ f(x) = L ⇒ ∀x ∈ ℝ ( ∃z ∈ ℝ : z>x ∧ |f(z) − L| < |f(x) − L|)

(please will somebody correct me if I haven't expressed that conventional notation exactly correctly or made a logical error)

- 01 Jul '16 07:24 / 1 edit

*Originally posted by humy***adapting that for p = +infinity by making all the necessary adjustments, I can write:**

**lim x→∞ f(x) = L ⇒ ∀x ∈ ℝ ( ∃z ∈ ℝ : z>x ∧ |f(z) − L| < |f(x) − L|)**

**(please will somebody correct me if I haven't expressed that conventional notation exactly correctly or made a logical error)**

Unless I am mistaken, your form does not work for oscillating functions that converge to a point, whereas the Wikipedia version does.
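The oscillation point can be made concrete with f(x) = sin(x)/x, which does converge to 0: at a zero of f (e.g. x = π, up to floating point) we have |f(x) − 0| ≈ 0, so no z > x can make |f(z) − 0| strictly smaller, and the proposed condition fails even though the limit exists. A Python sketch (the example function and sample grid are my own choices):

```python
import math

# f(x) = sin(x)/x tends to 0 as x grows, but passes through 0
# infinitely often, so |f(x) - 0| does not shrink monotonically.
def f(x):
    return math.sin(x) / x

L = 0.0
x = math.pi                  # f(pi) is ~0 (about 4e-17 in floats)

# Proposed condition: some z > x has |f(z) - L| < |f(x) - L|.
# Since |f(pi) - 0| is essentially zero, no sampled z beats it.
zs = [x + k for k in range(1, 10000)]
beats = [z for z in zs if abs(f(z) - L) < abs(f(x) - L)]
print(len(beats))            # 0: the condition fails at x = pi

# Yet the epsilon-N definition holds: |f(x)| <= 1/x < eps once x > 1/eps.
eps = 1e-3
assert all(abs(f(k / eps) - L) < eps for k in (2, 5, 50))
```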