Talk:Asymptotic analysis

Untitled

Notation query: f ~ g can be written with the keyboard character ~. HTML isn't needed. For my browser this is fine.

Charles Matthews 11:15, 20 Oct 2004 (UTC)

On my current browser it looks fine, but whatever I was using before displayed it too high. I guess it is pot luck according to the font one is using. Now I see there is an HTML entity: ∼ aka ∼ --Zero 11:43, 20 Oct 2004 (UTC)


To Michael's question: "Do some people prefer \ldots to \cdots between plus signs? Why?", my answers are (1) they have no artistic taste, (2) they are so old that they learned their typography on typewriters where temporarily shifting the baseline by a fraction of a line was too much trouble, (3) none of the above cos I just made them all up. ;-) -- Zero 03:59, 6 Nov 2004 (UTC)


Could somebody please include some layman's language in this article at the top; just a sentence saying that asymptotic analysis covers how long something takes to be processed, or whatever the definition is. Thanks. -- Ian Howlett 19:30, 4 Jul 2005 (UTC)


I would like to second the idea of a layman's explanation of asymptotic "equality." As an outsider looking in on the math community, a plain-English explanation of the concept would go a long way toward understanding what you're trying to describe. I recommend that you ask someone outside the math community whether they understand what you're writing before you post it; that way you've taken care of your "lowest common denominator." 76.111.8.135 (talk)
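One plain-language illustration of the kind being asked for (offered here only as an example, not as proposed article text): two quantities are "asymptotically equal" when their ratio can be made as close to 1 as you like by taking the variable large enough. For instance,

\[ n^2 + 3n \sim n^2 \quad (n \to \infty), \qquad \text{because} \qquad \frac{n^2 + 3n}{n^2} = 1 + \frac{3}{n} \to 1 . \]

The two expressions are never equal for any particular n, but the relative error 3/n shrinks to zero.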

≃

ISO 31-11 uses the Unicode character ≃ to denote "is asymptotically equal to". Is this character actually in use? Wouldn't it then be a good addition to this article? --Abdull 21:16, 28 May 2006 (UTC)

I don't think there is consensus on the use of several such "equivalence relation symbols" - some use ~ for asymptotic equivalence, some use it for proportionality, some use a kind of alpha for the latter. I think this is a fashion phenomenon under constant evolution. You might compare the HTML entity codes vs the amstex commands for the following symbols (from http://www.acronet.net/tags/characters.html):
& sim;   ∼	tilde operator = varies with = similar to = proportional = asympt.equiv.
& cong;  ≅	approximately equal to / (geometrically) congruent
& asymp; ≈	almost equal to / asymptot.equal (\approx in latex)
& equiv; ≡	identical to (used by mathematicians for (arithmetic) congruence)
(and the symbol of this section's title is displayed as a division sign in the "edit summary" line of my browser...)— MFH:Talk 22:19, 12 February 2007 (UTC)

I agree there are different "fashions", but it would be good to clarify that. For example, what is the LaTeX symbol \asymp good for? -- Christian.Mercat (talk) 15:22, 15 February 2010 (UTC)

Just wanted to note that this usage (which is fine with me, but I'm no expert) conflicts with the discussion of the ≈ character over at approximation (in the Mathematics section). People with the knowledge and the refs may want to help bring the treatments together. /ninly(talk) 05:14, 11 August 2010 (UTC)

Definition of asymptotic equivalence

Someone did not like my more general definition of asymptotic equivalence, which I took to be f = g(1 + o(1)). He argued that "my" definition (which is common in asymptotic analysis) was not more general, because asymptotic equivalence only deals with limits... Please explain to me, by using the present definition of the article (lim f/g = 1), whether or not 0 is asymptotically equivalent to 0, and then explain to me why my definition is not more general than the present definition. According to my definition, 0 is asymptotically equivalent to 0. ASlateff 128.131.37.74 05:27, 11 February 2007 (UTC)

I agree that the current definition (f ~ g iff lim f/g = 1) is not good. Among other things, the function f/g may not be defined even though the function g/f is defined, so there is clearly a problem. I prefer the f = (1 + o(1)) g definition. (I prefer putting it that way round because many non-specialists would get confused by the other order, thinking that 1 + o(1) is an argument to g.) Else, one can also put it as f - g = o(g) (which is of course the same). — MFH:Talk 15:37, 12 February 2007 (UTC)
May I suggest reestablishing my version of the article? ASlateff 128.131.37.74 00:43, 17 February 2007 (UTC)

The current text "This defines an equivalence relation (on the set of functions being nonzero for all n large enough (most mathematicians prefer the definition f-g=o(g) in terms of Landau notation, which avoids this restriction))." is incorrect. There is no function which is o(g) unless g is nonzero for large enough argument, and then f would need to have that property too. Actually the version f = (1 + o(1))g is the only one mentioned here that allows f and g to be 0 infinitely often (provided all but finitely many of these places are in common). I'm changing it. Zerotalk 10:46, 16 September 2009 (UTC)

That is not correct: if g equals 0 infinitely often, then f - g = o(g) just implies that f also equals 0 at the same places (maybe with finitely many exceptions); it is strictly the same statement as f = (1 + o(1))g or f = g + o(g). But the WP definition of little-o notation was incorrect in this respect until I corrected it just now, which probably led you astray. I'll try to see if I can get things right in this article as well. Marc van Leeuwen (talk) 12:38, 14 September 2011 (UTC)
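To make the zero-set point concrete, here is a small example of my own (not taken from the article), judged with the ε-form of little-o just described. Over the integers, as n → ∞, take

\[ g(n) = \begin{cases} n, & n \text{ even}, \\ 0, & n \text{ odd}, \end{cases} \qquad f = g . \]

Then f/g is undefined at every odd n, so the ratio definition cannot even be applied, while

\[ |f(n) - g(n)| = 0 \le \varepsilon \, |g(n)| \quad \text{for all } n \text{ and every } \varepsilon > 0 , \]

so f - g = o(g) holds (and likewise f = (1 + o(1))g, with the o(1) term identically 0).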
Yes, my comment was with respect to the definition of o() existing at that time. Your new definition of o(1) is mathematically better, but I didn't want to change it since the version with ε is harder for novices to understand. Same with your definition of f ~ g. But I won't argue. On the other hand, I wonder why you want f - g = o(g) at all, rather than just the equivalent f = (1 + o(1))g. The latter has several clear advantages: (1) it fits well with both the new and the old definitions of o(), (2) it requires no thought at all about what o() means when its argument is 0, (3) it is transparent that g = (1 + o(1))f, whereas g - f = o(f) needs the triangle inequality to see that. Zerotalk 14:45, 14 September 2011 (UTC)
I'm not sure I can appreciate your remarks fully, but that's because I'm not entirely comfortable with the meaning of expressions with buried little-o expressions. As far as I have seen, their meaning is defined operationally by algebraically manipulating until the little-o expressions are isolated. Then f = (1 + o(1))g means exactly the same as f = g + o(g), f - g = o(g) and f - g = o(1)g, so whatever follows from one follows from any other of them. In particular, I cannot easily see the symmetry of the relation for any one of them without some special consideration for the set of values where f and g vanish (their symmetric difference should be finite), contrary to what you say in (3). The main point seems to be that f = (1 + o(1))g does not exactly mean that f is the product of g and a function that is in 1 + o(1): that would force f to share every single zero of g, no exceptions allowed (and it would not define a symmetric relation!). Also, since one wants to have g ~ g for any g (I think), I don't think I agree with your points (1) and (2), in spite of appearances. But maybe the point of view that any statement involving little-o notation can only have an asymptotic meaning (it says nothing away from infinity) can somehow resolve these subtleties. Marc van Leeuwen (talk) 16:56, 14 September 2011 (UTC)
My main concern is not with correctness (except that we need to avoid incorrectness) but with clarity for the likely readership of this page. Someone coming here is likely to be a complete novice wrt this sort of notation and certainly won't understand the subtleties that fascinate us mathematicians (we have met, btw). So what we decide here in this detailed discussion is not necessarily what is best for the article. Leaving that aside, I am a specialist in asymptotics but I don't recall ever seeing "their meaning is defined operationally by algebraically manipulating until the little-o expressions are isolated". I wouldn't know how to define that anyway; what if the o()s are buried inside other functions, even inside other o()s? In my neck of the woods, an equation involving O() or o() on the right side (not on the left side, to make this simpler) is interpreted to mean that each o() and O() can be replaced by a function in that class so that the equation becomes true for large enough n. So f = (1 + o(1))g means that there is some h in "o(1)" such that f(n) = (1 + h(n))g(n) for large enough n. People consider in practice that the predicate "for sufficiently large n" applies to any expression with o() in it, but I'm not sure I could find a written source which states that explicitly. There is a related matter which I think is a more serious notational issue than zeros. I'm reading a paper that says some function f(n) derived from random cubic graphs of n vertices is o(1) as n goes to infinity, but f(n) is not even defined if n is odd. Very few people would bother writing this in a pedantically precise manner and would probably answer "everyone knows what I mean" if challenged. Zerotalk 00:58, 15 September 2011 (UTC)
Btw, to complete the interpretation as I understand it, consider an equation with o(), O() etc on both sides. This means that no matter which actual functions the Evil Demon substitutes for the o()s and O()s on the left side, there are functions which can be substituted for the o()s and O()s on the right side so that the equation is true for large enough n, where "large enough" might depend on the Evil Demon's choices. Zerotalk 01:04, 15 September 2011 (UTC)
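A concrete instance of that reading (my own example, not quoted from a source): under this convention the statement

\[ e^{o(1)} = 1 + o(1) \qquad (n \to \infty) \]

means that for every function h with h(n) → 0 there is a function k with k(n) → 0 such that

\[ e^{h(n)} = 1 + k(n) \quad \text{for all sufficiently large } n , \]

which indeed holds; one can simply take k(n) = e^{h(n)} - 1.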


I come long after the discussion, and I understand your arguments in favour of clarity, but let me stress that when I came and had a look at this article, I saw many completely wrong statements. One may wish to have a simple definition like  . But then the next paragraph was a complete confusion between   and   (in your definition, this means you do not divide by the same thing). The worst help one can give to non-mathematicians who will read the article is to write wrong, contradictory things in the hope that it will require less effort from readers. I hence changed the section "Asymptotic expansion", and I would also strongly appreciate it if the consensual mathematical definition of equivalence (ie  ) were added to the article. Sleurent (talk) 14:27, 22 August 2015 (UTC)

Why single out computer science?

I understand that asymptotic analysis is an important tool in the analysis of computational complexity, but there are other fields besides CS which also regularly use this. (Physics is an obvious example, but you could also include all physical sciences, theoretical biology and ecology, etc.) I suggest removing the emphasis on CS and leaving Applied Mathematics as the principal area of application. Tim 136.186.1.187 (talk) 02:50, 4 May 2010 (UTC)

Right, and add statistics as an example at least as big as CS. Plus lots of other stuff. I'm not sure about your version, but I agree the existing text needs revision. Zerotalk 10:36, 11 August 2010 (UTC)

Method of dominant balance

Does anyone else find the example in the "Method of dominant balance" section to be hard to follow? Assumptions seem to be pulled out of a hat; also the constant A that appears along the way is mysteriously discarded. If y(x) is a solution, then so is Ay(x), so this is an actual error. Zerotalk 09:11, 17 October 2010 (UTC)


I agree that this section has big issues. To start with, what was written there was completely wrong, containing mis-uses of the \sim notation. I edited it to at least remove some obviously wrong statements, and I'm in favor of removing this section, or rewriting it completely, as I do not regard it as satisfactory.

Explanation of why it was wrong: the statements   and   were claimed to simultaneously hold. That would imply  , which is false if  .

Another issue with this section is that it seems to take the derivative of an asymptotic equivalence, which is not a legitimate operation at all! Sleurent (talk) 14:17, 22 August 2015 (UTC)
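For the differentiation point, a standard counterexample (added here for reference, not taken from the article): as x → ∞,

\[ f(x) = x + \sin(x^2) \sim x , \qquad \text{since } \frac{\sin(x^2)}{x} \to 0 , \]

but

\[ f'(x) = 1 + 2x \cos(x^2) \]

is unbounded and oscillating, so f'(x) is not asymptotically equivalent to 1. In general, an asymptotic equivalence cannot be differentiated without additional hypotheses.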

Assessment comment

The comment(s) below were originally left at Talk:Asymptotic analysis/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.

The article gives the impression that asymptotic analysis is mostly about the big-O notation but there's much more to it. -- Jitse Niesen (talk) 03:24, 2 June 2007 (UTC)

Last edited at 03:24, 2 June 2007 (UTC). Substituted at 01:47, 5 May 2016 (UTC)

Construction

I just added a section on "Construction", because I've thought several times in the past about how to construct such a function and finally identified a solution.

Some of the primary contributors to this page may know a better solution to this problem than what I supplied and / or may know some literature that could be cited here. If so, I hope they will improve what I wrote.

In particular, can a hyperbola be constructed asymptotic to two different but arbitrary straight lines, one as x → +∞, and a different one as x → −∞?

Thanks, DavidMCEddy (talk) 21:18, 2 February 2020 (UTC)Reply

I don't think that this section is quite correct concerning the construction   where   and   are real-valued analytic functions, and   is a Cumulative distribution function. I think that it is more complicated because it depends upon the convergence of   relative to   and  . For example, consider  ,  , and  . In this case, the limit as   of   is   rather than 0. So   must be carefully chosen to not only exhibit the correct end behavior, but also to dominate the convergence of   and  .
Rogue Whistler (talk) 19:58, 30 November 2022 (UTC)
Also, in reply to your query about a hyperbola: given two lines y = m_1 x + b_1 and y = m_2 x + b_2, with m_1 ≠ m_2, the hyperbola (y - m_1 x - b_1)(y - m_2 x - b_2) = 1 should have the prescribed asymptotes.
Rogue Whistler (talk) 22:43, 30 November 2022 (UTC)
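A quick check of the branch asymptotics, assuming the product form suggested just above (this working is mine, not part of the original comment). Write ℓ_i(x) = m_i x + b_i and suppose m_1 > m_2. The upper branch of (y - ℓ_1)(y - ℓ_2) = 1 is

\[ y(x) = \frac{\ell_1(x) + \ell_2(x)}{2} + \sqrt{\frac{\bigl(\ell_1(x) - \ell_2(x)\bigr)^2}{4} + 1} , \]

and writing d(x) = ℓ_1(x) - ℓ_2(x),

\[ y - \ell_1 = \frac{1}{\tfrac{d}{2} + \sqrt{\tfrac{d^2}{4} + 1}} \to 0 \quad (x \to +\infty), \qquad y - \ell_2 = \frac{1}{\sqrt{\tfrac{d^2}{4} + 1} - \tfrac{d}{2}} \to 0 \quad (x \to -\infty), \]

so a single branch is asymptotic to one line in one direction and to the other line in the other direction.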
I recommend removing this section about "Construction" until a result with supporting citation can be provided. It is not a bad topic to include, but the section does not currently contain correct mathematics or support from literature. Rogue Whistler (talk) 19:30, 1 December 2022 (UTC)

Revision proposal

Hello. The current introduction only partially defines asymptotic analysis. I propose this edit below. The definition closely corresponds to the definition provided in the references, whose authors are recognized authorities in this field. Please discuss if you disagree. Thanks.

In mathematical analysis, asymptotic analysis focuses on the development and application of methods to approximate analytical solutions to a wide range of mathematical problems. Asymptotic analysis has been applied to computer science, analysis of algorithms, differential equations, integrals, functions, series, partial sums, and difference equations.[1][2][3]

Citations

References

  • Murray, J. D. (2012). Asymptotic Analysis. Springer Science & Business Media. ISBN 978-1-4612-1122-8.
  • Paulsen, William (2013). Asymptotic Analysis and Perturbation Theory. CRC Press. ISBN 978-1-4665-1512-3.
  • Estrada, Ricardo; Kanwal, Ram P. (2012). Asymptotic Analysis: A Distributional Approach. Springer Science & Business Media. ISBN 978-1-4684-0029-8.

TMM53 (talk) 19:04, 8 April 2024 (UTC)

Proposed revision

I have a proposed revision. The revision contains almost everything in the current document but adds significant content. If you have concerns, please send me a message or comment below. I will revise the document in the future.

Revisions

  • Revision contains a more complete asymptotic analysis definition, as defined by J.D. Murray.
  • Revision contains a much simpler example in the introduction, accompanied by a graph. The current document also uses equivalence notation before it is defined; it is better for the reader to define notation and then apply it.
  • Revision contains a history section.
  • Revision contains more rigorous definitions of asymptotic notation, as provided by J.D. Murray, Paulsen, Bleistein, Handelsman and Estrada.
  • Revision provides properties of asymptotic equivalence accompanied by sources. The current document's section lacks the required sources. One property in the current document was not retained because there was no source for it.
  • Revision places the "Examples" and "worked example" material in collapsible tables, which makes the article less cluttered.
  • Revision proves the worked example in a briefer, easier way that is more understandable to the reader.
  • Revision does not include some content on asymptotic expansion because this content was not sourced. Also, it is unclear to the reader exactly what "abusively" means.
  • Revision places asymptotic distribution in the applications section because this is an application to statistics.
  • Revision divides the applications section into logical categories.
  • Revision retains "Asymptotic versus numerical analysis", but I am concerned that copying about one page verbatim may be a copyright violation.
  • Revision contains additional content that reflects the chapters seen in the major textbooks on this topic.
  • Revision provides sources for all substantive content; the current document does not.

Thanks. TMM53 (talk) 22:50, 9 May 2024 (UTC)