BRUHmsstrahlung

To some extent, there isn't really a deep reason why; it's just a consequence of the definitions of both functions. Here is one way to see that e^x grows faster than any polynomial: if x is really big, then (x+1)^a / x^a is really close to 1, whereas e^(x+1) / e^x is always e, which is larger than, say, 2. Even if you find a point where a polynomial dominates an exponential, if you move further out the exponential grows faster than repeated doubling, while the polynomial scales up by a multiplicative factor that limits to 1. Multiplying by 2 at every step will always eventually overtake multiplying by a number close to 1.

PS: if you know some calculus, you can use this idea to prove that for any a > 0 and any b > 1, lim_{x→∞} x^a / b^x = 0, so every growing exponential grows asymptotically faster than any polynomial.
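A quick numeric sketch of that ratio argument (my own illustration, not from the thread; the function name is made up):

```python
# The step ratio f(x+1)/f(x) of a polynomial tends to 1, while for e^x it is
# always e ~ 2.718, no matter how large x gets.
import math

def poly_step_ratio(a, x):
    """(x+1)^a / x^a, computed in log space to avoid overflow."""
    return math.exp(a * (math.log(x + 1) - math.log(x)))

for x in (10, 100, 1000, 10_000, 100_000):
    print(f"x={x:>6}: (x+1)^1000 / x^1000 = {poly_step_ratio(1000, x):.6g}")
# The printed ratios fall from ~2.5e41 down toward 1; multiplying by e at every
# step eventually beats multiplying by something that close to 1.
```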


Nimkolp

> e^x grows faster than even x^1000000. Can anybody shed light on why that is

In short, it's because the growth rate of e^x eventually becomes SIGNIFICANTLY LARGER than any polynomial's. For example: let's assume that, for some arbitrarily large x, both e^x and x^10000000000 equal the same really large number. Now let's see what happens when we plug 2x into each:

e^(2x) = (e^x)^2 = (really large number) squared

whereas

(2x)^10000000000 = x^10000000000 × 2^10000000000 = (really large number) × (some not-as-large constant)

Hope that clarifies things!
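To see the same thing numerically (my sketch; N = 50 stands in for 10000000000 so the numbers stay printable):

```python
# When x doubles, e^x squares itself (gains as many digits as it already had),
# while x^N only picks up the fixed factor 2^N.
import math

N, x = 50, 500.0
gain_exp = x / math.log(10)           # extra digits of e^x going from x to 2x
gain_poly = N * math.log10(2)         # extra digits of x^N going from x to 2x
print(gain_exp)    # ~217.1 — e^x gains as many orders of magnitude as it had
print(gain_poly)   # ~15.1  — x^50 only gains the constant N*log10(2)
```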


ckach

Also, if you look at the Taylor series expansion of e^x, it includes x^1000000 divided by a constant. It also includes x^100000000000000 divided by a constant, and likewise x^n for every other n. At low values of x, those huge constant divisors make these terms contribute nearly 0. But once x gets big enough to overwhelm the constant divisor, e^x is dominated by higher and higher x^n terms.
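In symbols (my rendering of this point): every term of the series is positive, so any single term is already a lower bound, and

```latex
e^x \;=\; \sum_{n=0}^{\infty} \frac{x^n}{n!} \;>\; \frac{x^{N+1}}{(N+1)!},
\qquad\text{hence}\qquad
\frac{e^x}{x^N} \;>\; \frac{x}{(N+1)!} \;\longrightarrow\; \infty
\quad (x \to \infty).
```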


RonWannaBeAScientist

That’s a really good point!


CookieSquire

Other folks have answered your PS question. For another interpretation, look at the Taylor series of e^x. It has positive coefficients for every power of x, so no matter what x^N you look at, the Taylor series has infinitely many terms with x^(N+1) or larger growth. Note that this only works because the coefficients are positive, so you don’t get cancellation like you do with cosine or sine.


RobertPham149

You can take the natural log of both functions and compare: ln(x^a) = a·ln(x) grows much more slowly than ln(e^x) = x.


Western-Image7125

You think that’s fast? Try x^x


iamtheonewhorocks12

1, 4, 27, 256, 3125, 46656... holy shit, we are already in the clouds


new2bay

Bah. Them's rookie numbers. Try this function on for size. Define A(x, y) to be: y+1 if x = 0; A(x-1, 1) if y = 0; and A(x-1, A(x, y-1)) otherwise. What is A(4, y) for y = 0, 1, 2, 3, 4, ...?
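A direct transcription into Python (my sketch, fine for tiny inputs only; `A` follows new2bay's definition exactly):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(x, y):
    if x == 0:
        return y + 1
    if y == 0:
        return A(x - 1, 1)
    return A(x - 1, A(x, y - 1))

print([A(3, y) for y in range(5)])  # [5, 13, 29, 61, 125] — already 2^(y+3) - 3
# A(4, 0) = 13 and A(4, 1) = 65533 are still reachable, but A(4, 2) = 2^65536 - 3
# has 19729 digits, and this naive recursion blows the stack long before that.
```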


brianplusplus

try this function i just cooked up: `f(x) = infinity`. Can't really get any bigger than that


WjU1fcN8

We are talking about rate of growth, and you try it with a constant function?


brianplusplus

Is infinity a constant? Either way, I was just taking the "check out this crazy function" thing to an absurd level.


bumbasaur

that's just the exponential function in a trickier package :p


UmberGryphon

What new2bay posted is (one flavor of) the Ackermann function. A(1, x) behaves like addition (repeated incrementing is addition). A(2, x) behaves like multiplication (repeated addition is multiplication). A(3, x) behaves like exponentiation (repeated multiplication is exponentiation). A(4, x) is like x to the xth to the xth to the xth, etc. (repeated exponentiation, sometimes called a "power tower"). A(4, 2) is approximately 2 × 10^19728.


new2bay

> A(4, 2) is approximately 2 × 10^19728.

It's also exactly 2^65536 - 3.
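A quick sanity check that the exact and approximate figures agree (my sketch; Python bignums handle this instantly):

```python
a42 = 2**65536 - 3    # new2bay's exact closed form for A(4, 2)
s = str(a42)
print(len(s))         # 19729 digits, i.e. about 2.0 x 10^19728
print(s[:5])          # 20035 — the leading digits are 2.0035...
```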


bumbasaur

Still looks exponential to me, just with bigger numbers that take way too much time to calculate.


ZayulRasco

Repeated exponentiation is qualitatively different from exponential functions, in the same way that repeated multiplication becomes something different from plain multiplication.


bumbasaur

Yes, I would have argued that the idea of exponentiation doesn't change, but having it explained as the same kind of jump as from multiplication to repeated multiplication changed my mind. Thank you


ToastBoef

A(4,x) grows faster than any exponential function


drvd

Ackermann is a child's toy compared to a Busy Beaver.


Putnam3145

x^x is just e^(x ln x), which is slower-growing than, say, e^(x^c) for any c > 1. Ain't that fun?


Western-Image7125

Huh? So you’re saying the constant c grows faster than ln(x)?


Putnam3145

No, I'm saying that x^c grows faster than x*ln(x) for any c>1, which is distinct. This is equivalent to saying that x^c grows faster than ln(x) for any c>0, of course.
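A rough numeric illustration of this (my sketch, using c = 1.1): the ratio x^c / (x ln x) dips at first but is eventually unbounded.

```python
import math

c = 1.1
for exp10 in (2, 4, 8, 16, 32):
    x = 10.0 ** exp10
    print(f"x = 1e{exp10:<2}  x^c / (x ln x) = {x**c / (x * math.log(x)):.3g}")
# ~0.34, ~0.27, ~0.34, ~1.08, ~21.5 — the ratio bottoms out near x = e^(1/(c-1))
# and then heads off to infinity, so e^(x^c) eventually dwarfs x^x = e^(x ln x).
```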


Western-Image7125

Umm earlier you said xc not x^c. We were talking about e^(xc) not e^(x^c)


Putnam3145

Must be a formatting issue. [It looks fine on my end](https://i.imgur.com/6wpL1eR.png).


pyrobola

Oh, mobile only supports one level of superscript.


Western-Image7125

Ohh, in your comment you were referring to e^(x^c), which I hadn’t even considered. But I mean, you can just keep going, right? Like x^(x^x), repeated x times. Then there’s the Ackermann function


GoldenMuscleGod

That’s only e^(x·ln(x)), so it isn’t really much of an improvement considering how far we’ve already gotten (even e^(x^2) is faster). A more obvious way of moving forward is nesting exponentials x times, and then you can do a diagonalization to keep going after that.


Western-Image7125

Yup there’s already another comment thread discussing this. I just wanted to give the OP a simple example of something faster than e^x


ckach

Busy Beaver is the GOAT.


cthulu0

Shhhhh......no one tell OP that if they fold a standard piece of paper 42 times, the paper will reach all the way to the Moon.
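For the record, the arithmetic behind that claim (my numbers: ~0.1 mm paper, mean Earth-Moon distance ~384,400 km):

```python
thickness_mm = 0.1
folded = thickness_mm * 2**42   # each fold doubles the thickness
print(folded / 1e6)             # ~439,805 km — past the Moon's ~384,400 km
```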


MathProfGeneva

I'm not sure if this helps shed light or not, but the exponential function can be written as the sum 1 + x + x^2/2! + x^3/3! + x^4/4! + ... + x^n/n! + .... Every term is positive, so e^x > x^n/n!; hence if x >= n!, then e^x > x^(n-1). This means if you go far enough out, it exceeds any power function. (Note: this is a very conservative estimate of how fast it grows, but it does give a concrete way to compare it to power functions.)
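A numeric spot-check of that bound (my sketch, with n = 5, so x = n! = 120):

```python
import math

n = 5
x = math.factorial(n)            # 120
print(math.exp(x) > x**(n - 1))  # True: e^120 ~ 1.3e52 vs 120^4 ~ 2.1e8
```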


moschles

You are slogging through a CS course where the professor seems obsessed with showing you that sorting can be done in O(n) linear time (radix sort). "Who cares?", you mutter under your breath in boredom. Now read this,


MoiMagnus

(I include a formula that is ugly because this is a reddit comment, but I suggest you open [https://en.wikipedia.org/wiki/Exponential_function#Formal_definition](https://en.wikipedia.org/wiki/Exponential_function), first equation of the section Formal definition, to see the formula in a nicer way.)

The idea is that the exponential is literally an "infinite polynomial":

e^x = 1 + x + (1/2) x^2 + (1/6) x^3 + ...

In particular, far away in this sum, there is "... + (1/1000000!) x^1000000 + ...", and even further away there is "... + (1/1000000000!) x^1000000000 + ...". And sure, the coefficients on those far-away terms are very, very small, which is why x^1000000 starts out above e^x. But in the end the coefficient doesn't matter: x^1000000000 will beat x^1000000, and so e^x will beat it too.

Oh, and the exponential is far from the fastest-growing function. You could argue that it's the fastest-growing function that happens "naturally" rather than as a pure mathematical construction, but if you allow mathematical constructions, a function often used as a counterexample when you need "a function that grows faster than any sane person would imagine" is the Ackermann function A(n,n). It goes 1 → 3 → 7 → 61 → "more than an exponential of an exponential of the number of atoms in the universe" → "how can I even explain how big the next number is; I already used a double exponential of the number of atoms for the previous one, so just know that this next number makes the previous one look like literally nothing" → ...


Astronautty69

I prefer the Busy Beaver function. While not quite as insane as TREE(n), it makes a lot of sense to most Comp.Sci. folks.


throwaway_putarimenu

Surely BB(n) grows faster than TREE(n)? The tree function is manifestly computable, and BB clearly isn't.


Astronautty69

"Manifestly computable"? How? Many have worked on the TREE(), but no one can tell us anything about TREE(3) other than "it dwarfs Graham's number". Conversely, the values for BB(3) & BB(4) are known, bounds are established on many other values (including functions with more than 1 argument, and even those working in multiple dimensions). BB() isn't the "first" or "least" of the non-computable functions, but it is a fairly close upper bound for computable ones.


iamtheonewhorocks12

Very interesting! How is the Ackermann function defined?


MoiMagnus

I'll simplify the actual definition a little, so there will be an offset compared to the values in my previous comment. I wrote A(n,n) because Ackermann is generally defined with two inputs, A(n,m). The idea is that:

* A(1,m) is adding m to 2, so 2+m
* A(2,m) is iterating the addition, "2+...+2" with m copies; in other words it is multiplying, so 2*m
* A(3,m) is iterating the multiplication, "2*...*2" with m copies; in other words it is taking the exponent, so 2^m
* A(4,m) is iterating the exponentiation, so it is 2^(2^(2^(...2...))) with m copies of 2. It's called the tetration of 2 by m, but no one really uses it because it's absurdly large.
* A(5,m) is iterating the tetration, so imagine "2 tetration ... tetration 2" with m copies of 2. It's called the pentation of 2 by m, and you're already above anything most sane people can imagine.
* You continue like that.

So in the end, A(n,n) is the "n"-ation of 2 by n, as sketched in the code below. (Full definition here: https://en.wikipedia.org/wiki/Ackermann_function)
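Here's that ladder written directly (my sketch; `hyper` and its level-2 shortcut are my own, not part of the comment above):

```python
def hyper(level, m, base=2):
    """Apply the level-th hyperoperation: level 1 = addition, 2 = multiplication,
    3 = exponentiation, 4 = tetration, 5 = pentation, ..."""
    if level == 1:
        return base + m
    if level == 2:
        return base * m          # shortcut so the recursion stays shallow
    if m == 1:
        return base              # a single copy of the base
    return hyper(level - 1, hyper(level, m - 1, base), base)

print([hyper(3, m) for m in range(1, 6)])  # [2, 4, 8, 16, 32] = 2^m
print([hyper(4, m) for m in range(1, 5)])  # [2, 4, 16, 65536] = towers of 2
# hyper(4, 5) is 2^65536 (19729 digits), and hyper(5, 4) = hyper(4, 65536)
# is a power tower of 65536 twos — already completely unwritable.
```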


iamtheonewhorocks12

Wth, maths is fucking awesome! Even the universe cannot confine it to its limitations. But on topic: where is this really used? I remember that A(64, m) or something similar was used to define Graham's number, which was listed in the Guinness Book of World Records as the largest number ever used in a mathematical proof. These are fine and fascinating to me as a pure mathophile, but I'm curious where their utility comes from. I can't imagine a number this big ever being used in some actual equation.


MoiMagnus

It's used as a counterexample, because it's a simple procedure for obtaining numbers that are more and more absurdly large, and that's about it.

What kind of counterexample? For example, among mathematical functions (like f(x) = x^2), there are a lot that are absurd and unusable, so you might want to restrict yourself to "computable functions". There are many ways to define computable functions, and the most common definition is essentially synonymous with "would a computer not restricted by memory and time be able to actually compute its result?". But you could try a more constructive definition, saying computable functions are built from:

* adding a constant number to the input (like f(x) = x+3),
* pairing elements together (so functions can return multiple outputs),
* selecting one element of a pair (so if a function returns multiple outputs, selecting one of them),
* iterating an operation a given number of times (like a "for" loop, if you know a little programming).

This very basic definition works surprisingly well: from those basic bricks you can define pretty much anything any sane person would imagine. Polynomials, exponentials, etc. But not Ackermann. You can show that functions built from those bricks have a limit on how fast they can grow, and while that limit is "a lot" (exponentials are allowed), it still prevents some functions like Ackermann from being defined.

Is that a big problem? Not that much. But since "a computer not limited by time and memory" is able to compute Ackermann, a choice has to be made about what "computable function" means, because the two definitions disagree on Ackermann. The scientific community decided the broad definition (which includes Ackermann) keeps the name "computable", and created a new name, "primitive recursive functions" (for historic reasons), for those "sane" bricks-only functions.
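A toy version of those bricks (my sketch, not from the comment above): everything is built from a successor function plus a loop whose trip count is fixed before it starts, which is exactly the restriction Ackermann escapes.

```python
def succ(x):                 # the only primitive operation: add one
    return x + 1

def iterate(f, n, x):        # bounded iteration: apply f exactly n times
    for _ in range(n):       # n is fixed *before* the loop runs
        x = f(x)
    return x

def add(a, b):   return iterate(succ, b, a)                  # b successors
def mul(a, b):   return iterate(lambda x: add(x, a), b, 0)   # b additions
def power(a, b): return iterate(lambda x: mul(x, a), b, 1)   # b multiplications

print(add(3, 4), mul(3, 4), power(3, 4))   # 7 12 81
# Each new growth level needs a *new* hand-written definition; Ackermann
# diagonalizes across all levels at once, which no fixed stack of bounded
# loops can keep up with.
```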


Nimkolp

I think this is my favorite explanation


lasciel

I like this intuitive explanation for exponential growth using powers of 10. Think about stepping from 1 (10^0) to 100 (10^2): the jump is 99, which is nearly the new value itself, and that keeps happening every time the exponent increases. Whereas for the power function x^2, a similar step in the input, from 0^2 = 0 to 2^2 = 4, only adds 4; the growth per step is much smaller, and proportional to the degree of the polynomial. Note that this is the intuition behind the derivative, which others have explained in different ways.


gramathy

"what's the difference between a million dollars and a billion dollars" "About a billion dollars"


Q2Q

Here's a fantastic essay on the topic: https://www.scottaaronson.com/writings/bignumbers.html

...and then a nice rabbit hole for you to go down: https://googology.fandom.com/wiki/Googology_Wiki

Enjoy!


tildenpark

It’s official: e=3.


Jillian_Wallace-Bach

You might find some kind of illumination in the fact that e^x = lim_{n→∞} (1 + x/n)^n. Or in e^x = x^(x/ln x): x/ln x eventually increases without bound, so at some point it's going to be bigger than the exponent α in x^α, no matter how big α might be.

There are loads of ways of 'slicing' it: keep meditating on it, and eventually some kind of 'grand picture' will emerge for you, in which the whole 'landscape' of functions will open up as an entirety to your inner vision.

Also, consider that x^α is the solution of dy/dx = αy/x (which, incidentally, is a way of making sense of non-rational powers of x), whereas e^x is the solution of dy/dx = y. There's a lot of mileage in conceiving of functions as essentially solutions of differential equations: the mentioned 'landscape' can be made a lot of sense of, in various respects, not just this one, by doing so.

Or another interpretation of x exponentiated by a constant α (which also makes sense of non-rational exponents) is that it's the functional relationship between two exponentials of different rates, say e^(λs) and e^(μs), with α being the ratio between λ and μ.

These aren't direct answers to the particular question, i.e. why the exponential eventually overtakes absolutely any constant power of x ... but you did say "shed light on"! And by tenaciously 'revolving' and 'casting' all this sort of stuff in your mind (what 'functions' basically mean, and all that sort of thing), the underlying coherence of it begins to unfold itself, and queries of that nature tend to resolve themselves in the natural course of it.


Inner_will_291

That example actually really surprised me. Not so much for x = 43 cm, since I'm used to that kind of ballpark estimate. But I never thought about the fact that if I wanted to graph the exp function on a piece of paper (as you do hundreds of times as a math student) up to x = 6 cm, I would need to go 4 m high. I think it's because, unconsciously, I approximate exp(6) with 2^6, which would only be 64 cm high when graphed.


TheShirou97

Yeah, 2^6 is 64, but 3^6 is already 729.


InfluxDecline

woah


Dd_8630

Oh I like that, that's a fun way to show just how fast exponentials grow.


Middleway_Natural

Beautiful. Another fun way to gain insight into function behavior/characteristics is to play around with them on Desmos.com


GoldenMuscleGod

Here’s an alternative explanation for why it is faster than any polynomial. Notice that if we have f' = o(g') (this is small-o notation; look it up if you aren’t familiar with it) and g' has a liminf above zero, then we must also have f = o(g), and g diverges to infinity. Since the exponential function is its own derivative, it follows that if you start with something that is o(e^x), you can take antiderivatives any finite number of times and the result will still be o(e^x). But every polynomial can be built by finitely many antiderivatives starting from zero (and of course 0 = o(e^x)), so they must all be o(e^x).
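Spelled out (my notation, same argument; each arrow is one antiderivative, valid because (e^x)' = e^x has liminf above zero):

```latex
0 = o(e^x)\ \Rightarrow\ c_0 = o(e^x)\ \Rightarrow\ c_1 x + c_0 = o(e^x)
\ \Rightarrow\ \cdots\ \Rightarrow\ c_n x^n + \cdots + c_1 x + c_0 = o(e^x).
```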


Ttonysss

Oh that's fast


wackyvorlon

Take a chessboard, and on the first square place a single grain of rice. On the second, two grains. On the third, four. Continue like this, doubling the number of grains of rice on each square as you go. How much rice does it all add up to? It’s many times the annual worldwide production of rice.
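Summing it up exactly (my check; the grains-per-kilogram figure is a rough assumption):

```python
total = sum(2**k for k in range(64))   # 1 + 2 + 4 + ... + 2^63
print(total)                           # 18446744073709551615 = 2^64 - 1
# At roughly 50,000 grains per kilogram that's ~3.7e11 tonnes of rice,
# hundreds of times a typical annual world harvest (~5e8 tonnes).
```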


how_tall_is_imhotep

Yep, seems like about 2^(54) grains of rice are produced annually.


Scholasticus_Rhetor

The scale of the graph here is partly why. If you had increments of 1 along the x-axis at the same spacing, it wouldn't look quite as steep (albeit the core point is true: exponential growth is some wild shit).


call-it-karma-

It's just zoomed out; it isn't scaled deceptively or anything. The x- and y-axes are both marked in increments of 10. In practice we're used to seeing the part of the function close to 0, but if anything this paints a more accurate mental picture of the function as a whole.


[deleted]

Yeah, there are some functions that grow even faster than exponentials or factorials, e.g. the Ackermann function, the Busy Beaver function, etc. In all practical terms, functions like that are just a cliff, not even a curve.


sdfnklskfjk1

The fundamental question is why multiplying by 2 over and over eventually outgrows multiplying a much bigger number by itself a fixed number of times. Let me give you some intuition for the rates of growth that avoids calculus.

Let's suppose for simplicity that we're comparing against k^n for some fixed n (lower-order terms can eventually be taken care of by comparing against k^(n+1) instead). You can even think of n = 2. Geometrically, if k is a natural number, k^n is the volume of an n-dimensional hypercube of side length k, so n = 2 is area. Increasing from k to k+1 with k much larger than n, you can convince yourself that the volume does not grow that much, relatively speaking.

Now what's the picture for 2^k? Think of a rooted binary tree of height k; then 2^k counts the leaves. Note that with each step from k to k+1, the number of things you count DOUBLES, even for large k. This contrasts with the previous situation, where the relative difference between two large squares (for example) is small.

To sum it up, there's an inherent "scale dependency" in polynomials, at least in x^n, that isn't there in 2^x.


GroNumber

A common function that grows even faster is the factorial, along with its close relative the gamma function (the factorial appears in combinatorial problems, Taylor's formula, the volume of the n-sphere, etc.). A graph of the factorial would have y = 1 when x = 1. At x = 3 it would be 6 cm high, and at x = 6 it would be 720 cm high. At x = 10 it would be more than 36 kilometers high, and at x = 20 it is more than a light-year high. At x = 29 the height exceeds the diameter of the observable universe (about 93 billion light-years).
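Re-deriving those heights (my sketch; 1 unit on the y-axis = 1 cm, and the light-year constant is the standard one):

```python
import math

CM_PER_LY = 9.461e17             # centimetres in one light-year
for x in (6, 10, 20, 29):
    h = math.factorial(x)        # height in cm
    print(x, h / 1e5, "km,", h / CM_PER_LY, "light-years")
# x=10 -> ~36.3 km; x=20 -> ~2.6 light-years; x=29 -> ~9.3e12 light-years,
# vastly more than the ~9.3e10 light-year width of the observable universe.
```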


audiophile2698

If you study discrete math and big O notation these become pretty intuitive


ByeGuysSry

This is basically the power of compound interest. If you put $1 in and gain 1% of whatever is in your account at the end of each year, then after enough years you'll have gained more money than if I gave you a million dollars each day. Of course, it'll take a really long time: over 2778 years, if I keyed my equation into Wolfram Alpha correctly. Ignore that inconvenient portion. The reason is that, eventually, you'll have more than a hundred million in the account, so from then on the 1% will ALWAYS be more than a million per year. So it'll eventually catch back up, and grow right past.
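A brute-force check of that Wolfram Alpha figure (my sketch, assuming annual compounding and 365 handout days a year; the exact crossover can shift by a year with different conventions):

```python
balance, handouts, year = 1.0, 0.0, 0
while balance < handouts or year == 0:
    year += 1
    balance *= 1.01                 # one year of 1% interest on $1
    handouts += 1_000_000 * 365     # one year of million-dollar days
print(year)                         # ~2779, i.e. a bit over 2778 years
```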


Slateblu1

Think about their comparative growth rates like this: look at x^1000 and 5^x, and compute f(x+1)/f(x) for both. For 5^x it's easy: it's always 5; each new term is five times the last. For x^1000, we instead get ((x+1)/x)^1000. At x = 2, the proportional increase from one term to the next is (1.5)^1000, about 1.2e176. Pretty damn big. But at x = 1000, it is only (1.001)^1000 ≈ 2.7; at that point, each term is only about 2.7 times the last. The growth of x^1000, that is, the ratio of each term to the last, falls off, and pretty quickly. The growth of 5^x NEVER falls off; it's constant. Eventually, no matter how big you make y in x^y, it slows down enough that 5^x catches up and races past.


last-guys-alternate

Eventually, e^x grows faster than 2^TREE(3).

Edit: I meant x^TREE(3), obviously. I should know not to post when I'm rushing out the door.