with Calculus

[Thomas] Hobbes characterizes his completely empirical way of thinking very remarkably by the fact that, in his book *De Principiis Geometrarum*, he denies the whole of really pure mathematics, and obstinately asserts that the point has extension and the line breadth. Yet we cannot show him a point without extension or a line without breadth; hence we can just as little explain to him the *a priori* nature of mathematics as the *a priori* nature of right, because he pays no heed to any knowledge that is not empirical.

Arthur Schopenhauer, *The World as Will and Representation*, Volume I, §62, p. 342 [Dover Publications, 1966, E.F.J. Payne translation]

...between 1830 and 1870 a serviceable approach to calculus was worked out, based on the concepts of function and limit.

This is the mainstream approach to calculus used today. It denies the existence of infinitesimals, and interprets the word "infinitesimal" as a mere figure of speech in statements that are properly made using limits. For example "let dx be infinitesimal" would be restated as "let Δx tend to zero." However, even the mainstream approach uses the Leibniz notations dy/dx and ∫y dx, because they are so concise and suggestive. This leads to some awkward moments. It has to be explained that dy/dx is *not* the ratio of infinitesimal differences dy and dx -- since infinitesimals do not exist -- but is rather a symbol for the *limit of the ratio Δy/Δx* as Δx tends to zero, where Δx is a finite change in x and Δy is the corresponding change in the function y of x. Likewise, ∫y dx is not an actual sum of terms y dx, but the limit of a sum of terms y Δx. Thus avoidance of infinitesimals came at the cost of a strange dual notation: Δ for actual differences and d (the ghost of Leibniz!) for the limits of their quotients and sums. To many this is a compromise solution which fails to explain why infinitesimals work. Is it possible to define and use genuine infinitesimals?....

The first to solve this problem completely was the American mathematician Abraham Robinson, in the 1960s. His system is called *nonstandard analysis*, and it has been successful enough to yield some new results. However, nonstandard analysis is not yet as simple as the old Leibniz calculus of infinitesimals, and there is a continuing search for a really natural system that uses infinitesimals in a consistent way.

John Stillwell, *Yearning for the Impossible, The Surprising Truths of Mathematics* [A K Peters, Ltd., Wellesley, Massachusetts, 2006], pp. 99-100

[Nonstandard analysis is not] a fad of mathematical logicians [but is destined to become] the analysis of the future... In coming centuries it will be considered a great oddity... that the first exact theory of infinitesimals was developed 300 years after the invention of differential calculus.

Abraham Robinson (1918-1974), quoted in *Incompleteness, The Proof and Paradox of Kurt Gödel*, by Rebecca Goldstein [W.W. Norton & Company, 2005, p. 241]

Isaac Newton's mathematics of "fluxions" was the first form of differential calculus. Newton himself used geometrical methods; the following algebraic method came later, though not much later, if at all, by way of Leibniz. There were objections to calculus at the time, and philosophical problems with it, or at least questions about it, continue. The statement by John Stillwell, above, goes to show that some mathematicians also continue to have questions.

The philosophical objections, then and now, have made no difference in the success or application of the mathematics. This is due to the abstract nature of mathematics and the logically sufficient nature of scientific method, as understood by Karl Popper (i.e. that scientific theories are sufficient conditions, but not necessary conditions, of the phenomena of observation, prediction, and experiment). Developments in mathematics, however, do not eliminate the philosophical issues, although it is common to think so just because the math **works** -- that is a form of the "Sin of Galileo." The metaphysics of mathematics, which is what is at issue when serious consideration is given here to the implications of infinitesimals or of division by zero, is part of meta-mathematics, not mathematics proper.

The following method of developing a derivative is the classic version involving limits. This replaced, as Stillwell notes, the earlier approach, used by both Newton and Leibniz, of *infinitesimals*. However, this classic approach now also has tended to fall out of favor. More recently, discussion of either infinitesimals or limits can be replaced by descriptions in terms of *functions* in the most abstract sense -- i.e. *dy/dx* is simply a formal operation that turns some equations into other equations. Thus, even expressions like "let Δx tend to zero" can be eliminated and the whole sense can be excised that *x* and *y* in calculus are about changing quantities -- i.e. we may not see Δx or Δy, as indeed we did not when infinitesimals alone were used.

There is nothing mathematically wrong with that, since we can define things any way to do whatever we want, but a function notation (like "*f*(x)") is already more abstract than an equation that is simply between variables like "y" and "x." In "*f*(x)," "*f*" itself is a variable, a *predicate* variable for the form of the equation. Where we are interested in ratios of changing quantities, which is what the derivative is all about, this is not particularly revealing. The traditional notation of *dy/dx* is still used these days, but its derivation and meaning, whether from infinitesimals or limits, is less clear from the new approach. Depending on the text, it may be introduced rather arbitrarily after functional analysis is developed [note].

Thus I have chosen to state this traditional method of taking a derivative. Limits still keep us within conceptual grasp of infinitesimals. If the purpose of the analysis in terms of functions was to *obscure* the philosophical questions about infinitesimals, it was not a good idea. If the idea is that the philosophical questions about infinitesimals *don't exist* because derivatives can be analyzed merely in terms of functions, it is deceptive, for that implication does not follow.

Actually, the following seven steps (an application of the "four step rule") can be used without the slightest attention to either infinitesimals or limits. Once we get Δy/Δx, relying on no more than ordinary algebra, we have the derivative by nothing more mysterious than setting Δy and Δx to zero. We can happily go on our way, if we wish, without looking back. The complications, then, are all philosophical. We notice that if Δy and Δx are zero, then Δy/Δx is 0/0, about which we may feel uneasy. This is where philosophical questions will start, and I found the problem of 0/0, in its own right, most intriguing. And this is where infinitesimals can be brought in to solve the mystery: Δy and Δx are not really set to *zero*, just to something so small, "right before zero," that it cannot be written as a number and is *effectively* zero. As dy/dx, Δy/Δx is not *really* 0/0. But, since there really isn't a number "right before zero," limits will solve the paradox in a different way, without worrying as much about metaphysics; but if we then ask what the limit will *look like*, it will look just like the equation where Δy and Δx have been set to zero. So we're really back to the beginning. Kant would say that we have returned to the same ignorance whence we began; and in Zen, the mountain is just a mountain again. But none of this matters to the *use* of calculus, which shows that the metaphysical questions are part of meta-mathematics, not mathematics proper -- although the whole business is probably related to the Continuum Problem, about which there seems to have been little progress in decades.

Thus, while the philosophical discussion about calculus seems to be consumed with the alternatives of infinitesimals or limits, they both go back to the paradox of 0/0, whose awkwardness both wish to resolve, sometimes with approaches that manage to avoid mention of the problem at all. Because of this, I would say that the philosophers should pay more attention to 0/0 *first*, since that is where the trouble starts.

| TAKING A DERIVATIVE | |
|---|---|
| I. y = 3x^{2} + 4x + 5 | Given an equation, where y is a function of x: y = *f*(x). |
| II. y + Δy = 3(x + Δx)^{2} + 4(x + Δx) + 5 | If the value of x changes, then the value of y will change. We add the changes in the values (Δx and Δy) to the original values. |
| III. y + Δy = 3(x^{2} + 2xΔx + Δx^{2}) + 4(x + Δx) + 5 | (x + Δx)^{2} is multiplied out. |
| IV. y + Δy = 3x^{2} + 6xΔx + 3Δx^{2} + 4x + 4Δx + 5 | The constants are multiplied into all the terms. |
| V. (y + Δy) − (3x^{2} + 4x + 5) gives Δy = 6xΔx + 3Δx^{2} + 4Δx | The original equation (I) is now subtracted from equation IV. This gives us an equation for the change in y (or Δy). |
| VI. Δy/Δx = (6xΔx + 3Δx^{2} + 4Δx)/Δx = 6x + 3Δx + 4 | Now both sides of the equation are divided by the change in x (or Δx), giving us an expression for the ratio between the change in y and the change in x (i.e. Δy/Δx). |
| VII. If Δx becomes small and approaches zero as a limit, Δy also approaches zero; and (6x + 3Δx + 4) approaches (6x + 4). This is expressed as dy/dx = 6x + 4, which is the "derivative" of the original equation (I: y = 3x^{2} + 4x + 5). Note that, from the original equation, each x variable drops one power, the constant on each variable is multiplied by the previous power, and the lone constant is simply lost. These are general characteristics of derivatives. Since constants are lost in derivatives, the opposite of derivation, integration, always (for indefinite integrals) introduces a constant (whose value will then be unknown, though it may = 0). | |
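The limit in step VII can also be watched numerically. The following sketch (in Python, with function names of my own devising) computes the difference quotient Δy/Δx for the equation in the table and shows it closing in on 6x + 4 as Δx shrinks:

```python
# Numerically check step VII: for y = 3x^2 + 4x + 5, the difference
# quotient (Delta y)/(Delta x) should approach 6x + 4 as Delta x -> 0.

def f(x):
    return 3 * x**2 + 4 * x + 5

def difference_quotient(x, dx):
    # By steps I-VI of the table, this equals 6x + 3*dx + 4 exactly.
    return (f(x + dx) - f(x)) / dx

x = 2.0   # the derivative here should be 6*2 + 4 = 16
for dx in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    print(dx, difference_quotient(x, dx))
# The printed quotients close in on 16 as dx shrinks toward zero.
```

No appeal to infinitesimals is made here: every dx used is an ordinary finite number, which is exactly the point of the limit approach.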

If y is in units of distance (s) and x in units of time (t), the derivative (ds/dt) is the velocity, indeed, the "instantaneous" velocity of a moving object, at a point in time and space. This in itself was philosophically paradoxical, hearkening back to the paradoxes of motion described by Zeno of Elea, since an object that does not move a finite distance might be said to have no velocity, since it is not moving.
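How a finite velocity emerges from vanishing differences can be seen in a small numerical sketch; the position function s(t) = 4.9t² (roughly, free fall in meters) is my own example, not one from the text:

```python
# Instantaneous velocity as the limit of average velocities Ds/Dt.
# s(t) = 4.9 * t**2 is a made-up illustrative position function.

def s(t):
    return 4.9 * t**2

def average_velocity(t, dt):
    # distance covered over the interval, divided by the interval
    return (s(t + dt) - s(t)) / dt

t = 1.0   # ds/dt at t = 1 should be 9.8
for dt in [0.1, 0.01, 0.001]:
    print(dt, average_velocity(t, dt))
# Both the distance and the time interval go to zero separately,
# yet their ratio settles on the finite value 9.8.
```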

For 3Δx in VI to be zero, Δx would have to be zero; but then Δy would also be zero. dy/dx therefore would seem to represent zero/zero, which is ordinarily a useless or meaningless relationship in mathematics: zero divided by anything is zero; and anything divided by zero is often said to be "undefined," which is a polite (or wimpy) way of saying "infinite." How a difference could be both zero and infinite is a good question. As it happens, it looks like zero divided by zero can be any quantity, so it is not really undefined but *indefinite*. It does not give us a particular quantity.
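That 0/0 is *indefinite* rather than simply undefined can be illustrated numerically: ratios whose numerator and denominator both vanish can settle on any value at all, depending on the functions involved. A minimal sketch, with the functions chosen purely for illustration:

```python
# 0/0 is "indefinite": ratios f(x)/g(x), where both f and g tend to
# zero, can tend to any value whatever, depending on f and g.

def ratio_near_zero(f, g, x=1e-8):
    # crude numeric stand-in for the limit of f(x)/g(x) as x -> 0
    return f(x) / g(x)

print(ratio_near_zero(lambda x: 5 * x, lambda x: x))   # about 5
print(ratio_near_zero(lambda x: x, lambda x: 5 * x))   # about 0.2
print(ratio_near_zero(lambda x: x**2, lambda x: x))    # about 0
# Every numerator and denominator vanishes at zero, yet the ratios
# settle on different values: 0/0 fixes no particular quantity.
```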

The explanation of this in Newton and Leibniz was that Δy and Δx were not *really* zero, but that dy/dx represents the quantity they have *just before* they reach zero. That quantity "just before" zero is an "infinitesimal" -- a number smaller than any number that can be written, but not yet nothing. But, it is objected, there is no quantity "just before" zero. Either there is a finite number, in which case 3Δx is not zero, or there is zero, in which case dy/dx is either meaningless or indeterminate.

Infinitesimals already existed before calculus. They were useful as an approach to areas within curves. Thus, the area within a circle can be divided into wedges, whose areas as triangles can easily be determined. However, they do not then cover the area of the circle, since there is space left between the straight outer edge of each triangle and the curved boundary of the circle. This problem can be avoided if the length of the straight outer edge is "infinitesimal," i.e. infinitely small but not exactly zero. Indeed, as the length approaches zero, the sum of the areas of the triangles approaches the area within the circle.
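This classical procedure can be imitated numerically. The sketch below (my own, using the standard formula for the area of a regular polygon inscribed in a circle) shows the sum of the wedge areas approaching π for a unit circle as the wedges multiply and their outer edges shrink:

```python
# A regular n-gon inscribed in a circle of radius r splits into n
# triangular wedges, each of area (1/2) * r^2 * sin(2*pi/n).
import math

def wedge_area_sum(r, n):
    return n * 0.5 * r**2 * math.sin(2 * math.pi / n)

for n in [6, 60, 600, 6000]:
    print(n, wedge_area_sum(1.0, n))
# As the straight outer edges shrink toward zero length, the sum of
# the wedge areas approaches pi, the area of the unit circle.
```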

A purely conceptual and philosophical objection to this was already lodged in 1656 by Thomas Hobbes against the mathematician John Wallis. As this objection was ignored by Newton and Leibniz, calculus only made things worse; and subsequent British philosophers, like George Berkeley and David Hume, not only could repeat the earlier logical objection, but, as Empiricists, they could add their own epistemological objection, that as infinitesimals would be invisible to perception, they could not be a matter of empirical, and so any, knowledge. Since later positivistic and analytic philosophy tended to view Hume as its spiritual forebear, neither the logical nor epistemological objections could be easily dismissed.

It is natural for mathematicians to respond by dealing with the matter in such a way that the metaphysical or epistemological issues don't arise. This is not difficult, and the replacement of infinitesimals first by limits and then by a use of functions that doesn't even need Δx and Δy are ways of doing that. But, as John Stillwell says in the epigraph, "...nonstandard analysis is not yet as simple as the old Leibniz calculus of infinitesimals, and there is a continuing search for a really natural system that uses infinitesimals in a consistent way." So even some mathematicians are left with a desire for something better.

Approaches to infinitesimals were discussed in a popular article in *Scientific American*, "Resolving Zeno's Paradoxes," by William I. McLaughlin, November 1994. McLaughlin mentions how talk about infinitesimals is usually replaced by talk about limits:

When analysts thought about rigorously justifying the existence of these small quantities, innumerable difficulties arose. Eventually, mathematicians of the 19th century invented a technical substitute for infinitesimals: the so-called theory of limits. So complete was its triumph that some mathematicians spoke of the "banishment" of infinitesimals from their discipline.

In terms of limits, (6x + 4) is never actually reached; but as Δx approaches zero, (6x + 3Δx + 4) *approaches* (6x + 4) "as a limit," i.e. the quantity it would be when Δx is zero, even though Δx will never quite get there. In this case, however, 3Δx is never quite zero but then is treated as zero, which sounds like the definition of an infinitesimal after all.

McLaughlin then continues to explain how infinitesimals have come to be included in mathematics anyway:

By the 1960s, though, the ghostly tread of infinitesimals in the corridors of mathematics became quite real once more, thanks to the work of the logician Abraham Robinson of Yale University [see "Nonstandard Analysis," by Martin Davis and Reuben Hersh; *Scientific American*, June 1972]. Since then, several methods in addition to Robinson's approach have been devised that make use of infinitesimals.... Edward Nelson of Princeton University created the tool we [McLaughlin & Sylvia Miller] found most valuable in our attack [on infinitesimals], a brand of nonstandard analysis known by the rather arid name of internal set theory (IST)....

Nelson adopted a novel means of defining infinitesimals. Mathematicians typically expand existing number systems by tacking on objects that have desirable properties, much in the same way that fractions were sprinkled between the integers. Indeed, the number system employed in modern mathematics, like a coral reef, grew by accretion onto a supporting base: "God made the integers, all the rest is the work of man," declared Leopold Kronecker (1823-1891). Instead the way of IST is to "stare" very hard at the existing number system and note that it already contains numbers that, quite reasonably, can be considered infinitesimals.

Technically, Nelson finds nonstandard numbers on the real line by adding three rules, or axioms, to the set of 10 or so statements supporting most mathematical systems. (Zermelo-Fraenkel set theory is one such foundation.) These additions introduce a new term, standard, and help us to determine which of our old friends in the number system are standard and which are nonstandard. Not surprisingly, the infinitesimals fall in the nonstandard category, along with some other numbers I will discuss later [i.e. the reciprocals of infinitesimals, which are indefinitely large, but not infinite, quantities].

Nelson defines an infinitesimal as a number that lies between zero and every positive standard number. At first, this might not seem to convey any particular notion of smallness, but the standard numbers include every concrete number (and a few others) you could write on a piece of paper or generate in a computer: 10, pi, 1/1000 and so on. Hence, an infinitesimal is greater than zero but less than any number, however small, you could ever conceive of writing. It is not immediately apparent that such infinitesimals do indeed exist, but the conceptual validity of IST has been demonstrated to a degree commensurate with our justified belief in other mathematical systems.

"Commensurate with our justified belief in other mathematical systems" means that the three extra axioms of IST do not produce any contradictions with any other axioms or with any known theorems in Set Theory. This is what could be expected from Gödel's proof of the incompleteness of mathematics: New branches of mathematics may involve the addition of new axioms to the existing logical system. That the new branches can be constructed is the kind of thing that mathematicians like to do anyway; but whether they are good for anything is another question.

That infinitesimals address originally philosophical objections to calculus is an interesting case. Calculus has actually *worked* for several centuries despite the philosophical problems with it. In scientific terms, that is good enough; and most mathematicians, physicists, engineers, etc. have thought so. The need for a seemingly superfluous philosophical explanation is not always apparent to non-philosophers. However, what often happens with new branches of mathematics is that they turn out to be applicable to unanticipated things. That may not have happened yet with infinitesimals, but it is really the practical justification for pure research in mathematics: We don't know what is going to happen in the future.

With infinitesimals, we can avoid the "zero divided by zero" paradox. An infinitesimal divided by an infinitesimal can easily be a finite quantity. 3Δx in the example above may not be zero; but if it is itself merely an infinitesimal, then it can be ignored without harm in any calculations where all we want are finite numbers. An infinitesimal will not matter in building a bridge -- though one wonders: Chaos Theory is about the sensitivity of systems, whether natural or mathematical, to small variations in initial conditions. If infinitesimal variations in initial conditions produce macroscopic differences in the world, then infinitesimals would suddenly be an important part of physics.
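The sensitivity in question can be illustrated with the logistic map, a standard chaotic system; the example and its parameters are my own choice, not anything from the text:

```python
# Sensitivity to initial conditions in the logistic map x -> 4x(1 - x).
# Two trajectories starting a "practically zero" distance apart (1e-12)
# eventually differ by a macroscopic amount.

def max_divergence(x0, eps=1e-12, r=4.0, steps=60):
    a, b = x0, x0 + eps
    biggest = 0.0
    for _ in range(steps):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        biggest = max(biggest, abs(a - b))
    return biggest

print(max_divergence(0.3))
# The gap between the trajectories, initially 1e-12, grows roughly
# geometrically, reaching a macroscopic size within a few dozen steps.
```

The initial perturbation here is of course still a finite number, not a true infinitesimal; the sketch only shows why vanishingly small differences are not always negligible.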

It has been objected by a correspondent that infinitesimals cannot possibly figure in a physical application of Chaos Theory because infinitesimals cannot become finite numbers merely by being multiplied by other infinitesimals or by finite numbers. An infinitesimal would have to be multiplied by an infinite number to give a finite result. However, this overlooks the circumstance that finite velocities even in Newton were the result of an infinitesimal being **divided** by an infinitesimal. An infinitesimal distance divided by an infinitesimal time is a finite velocity. Infinitesimal changes in either quantity would thus make a finite difference.

Thus, questions which at the time might be dismissed as the absurd rantings of philosophers often return in more respectable garb and don't seem so absurd after all.

The real philosophical question, the metaphysical question, about infinitesimals, however, is just that they appear to involve a contradiction. They are zero without being zero and number without being number. The extra axioms of nonstandard analysis may provide a breathing space for them, but, as Stillwell says, it would be nice to have "a really natural system" that can define infinitesimals in a way that would not offend the intelligence of Thomas Hobbes or anyone else. This may not be possible, but there may nevertheless be a reasonable logical principle that can be employed, whether Hobbes or Berkeley would like it or not. The principle would be similar to what we find with imaginary numbers, i.e. we have an entity that exists in representation, that does not correspond to a real object, perhaps because of some contradiction, but which *does enable us to derive results for real objects*.

This would not be strange to anyone who would have thought, like Aristotle, that mathematics is just a device for calculation. It is just disturbing in terms of the mathematical realism of someone like Plato. My own basic sympathies in the matter are Platonic, but I am not actually a Platonist, but a Kantian. In Kant, our knowledge is both real and representational, with characteristics that may apply to reality, or to representation, but not both. A kind of compromise is then possible. The mathematical results that apply to the world are real, and the Platonist can be happy. But it may be that calculation can generate entities that work in representation, but cannot apply to reality. Imaginary numbers and infinitesimals can fall into that category. Indeed, the very concept of "nothing" may fall into that category. Parmenides argued that a concept of something must be, in truth, of something, while the concept of "nothing," by definition, is not *something*. So speaking of nothing treats it, incongruously, as something. While such an argument might strike many as silly, since even Parmenides obviously talks about "nothing," this consideration produced the ontological principle *ex nihilo nihil fit*, "from nothing, nothing comes," which survives in the very non-silly context of the principles of the conservation of mass and the conservation of energy in modern physics. The concept of "nothing" is thus certainly useful in our representation, and gives real results, despite its birth in paradox. Suitably, infinitesimals themselves are things which are nothing, yet nevertheless something, in much the same way as concerned Parmenides.

As it happens, not only can we write an infinitesimal quantity in ordinary mathematical notation, but we can then derive zero from it. We can do this by considering, first, how an infinitesimal would have to be written. Since it is the smallest possible quantity before zero, we could only write it with a decimal point, followed by an infinite number of zeros, followed by a 1 (the smallest positive integer). Writing a *finite* number of zeros, we could always write a smaller number by introducing an extra zero. We cannot write, of course, an infinite number of zeros. We can consider, however, what would happen if we subtracted such a number from the number 1. This would produce a **repeating decimal** with the number 9: i.e. **0.999...**, where the ellipsis indicates that the repeating digit continues without end.

Now, repeating decimals are rational numbers and can be expressed as a ratio of integers. There is also a simple technique for discovering such a ratio. Randomly make up a repeating decimal, say x = 0.75674674674..., where the group 674 repeats. Multiplying by powers of ten lines up the repeating tails:

100,000x = 75,674.674674... and 100x = 75.674674...

Subtracting the second from the first, the repeating tails cancel exactly: 99,900x = 75,599, so x = 75,599/99,900. Applied to x = 0.999..., the same technique gives

10x = 9.999...

and subtracting x = 0.999... leaves 9x = 9, so that x = 1 exactly.

So it turns out that our infinitesimal quantity, the decimal point followed by an infinite number of zeros followed by 1, is actually equal to 1 − 0.999... = 1 − 1 = 0. This curious effect would seem to imply that even if there are infinitesimals, which would explain a finite quantity for a derivative, they actually do equal zero, so that the factor 3Δx, in the example above, can be put equal to zero in all good conscience. At the same time, 0.999... does give us a well-defined and clear notational difference, strictly addressing an infinitesimal difference, with the number 1.
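The subtraction technique for repeating decimals can be carried out with exact rational arithmetic; the function and its interface below are my own devices for illustration:

```python
# The subtraction trick for converting a repeating decimal into a
# ratio of integers, done exactly with the fractions module.
from fractions import Fraction

def repeating_to_fraction(prefix, group):
    """x = 0.<prefix><group><group><group>...

    Multiply x by 10^(len(prefix)+len(group)) and by 10^len(prefix);
    subtracting one from the other cancels the repeating tails,
    leaving an integer equation that can be solved for x."""
    p, n = len(prefix), len(group)
    big = int((prefix + group) or "0")     # integer part after the big shift
    small = int(prefix or "0")             # integer part after the small shift
    return Fraction(big - small, 10 ** (p + n) - 10 ** p)

print(repeating_to_fraction("75", "674"))  # the made-up example: 75599/99900
print(repeating_to_fraction("", "9"))      # 0.999... reduces to exactly 1
```

Note that the second call returns the integer 1, not something "just short" of 1: the exact arithmetic confirms that 0.999... and 1 are the same rational number.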

Exchange with Correspondent on Calculus and Imaginary Numbers

What I have in mind when I contrast the old method of using limits and the new method of a more abstract emphasis on functions is what I see in the contrast between two basic but closely related textbooks. These are *College Mathematics*, by Kaj L. Nielsen, from the Barnes & Noble College Outline Series, copyrighted 1958 and bought by me in the late 60's for $1.95, and *Modern College Mathematics*, by John R. Sullivan, also from the Barnes & Noble College Outline Series, copyrighted 1980 and bought by me sometime in the 80's for $5.95 (there are no further editions of this book, and the College Outline Series itself seems to have lapsed, e.g. Frederic Wheelock's classic *Latin*, formerly in the series, has been updated with a new publisher).

Each of the books is about 300 pages long, but the differences amount to rather more than the prices. One telling feature is in their treatment of Set Theory. In Nielsen's book, Sets are only mentioned once, in the last chapter (#15, "Additional Topics"), under the subtitle "Foundation." Indeed, Set Theory had by then become foundational for arithmetic, and Nielsen goes a little bit into the logic of axiomatic systems and Sets. In Sullivan's book, however, we get multiple references to Sets, beginning on the very first page. Where Nielsen began with a brief Introduction and then a section on "Real Numbers," Sullivan puts off the Real Numbers and rewrites the Introduction as "Definitions and Notation of Sets."

Why this dramatic shift in emphasis? Well, notions about teaching arithmetic changed after Nielsen was writing. If Set Theory, after all, was foundational for arithmetic, then perhaps arithmetic should be *taught* by way of Set Theory. Thus, Set Theory was no longer treated as *higher* mathematics, but as *elementary* mathematics. The introduction of this into primary and secondary education of mathematics in the late 60's was called the "New Math." Confidence in this approach probably derived from the historical experience with geometry, which had been taught as an axiomatic system since Euclid. Why should arithmetic be any different?

Unfortunately, it was. I don't know that American mathematics education has ever really recovered. Much of the reason for *that* has been the disinclination of mathematics educators simply to go back to the old methods. The New Math thus was followed by the "New New Math," "Fuzzy Math," and a succession of other educational fads, including attempts to get children to discover the rules of mathematics for themselves. The problem with the New Math, of course, was that it *was higher mathematics*, something that it had taken mathematicians *themselves* two thousand years longer to discover than the axioms of geometry. Children had no more chance of understanding it, let alone discovering it for themselves, than the Babylonians.

This does not bode well for Sullivan's treatment of Calculus. What we get, indeed, is a switch similar, on a smaller scale, to that between elementary arithmetic and advanced Set Theory. Thus, Nielsen begins with a discussion of "Limits" and a look at an example of y = *f*(x), e.g. y = x^{2} + x. If the basic notation of a derivative is *dy/dx*, reflecting a relationship between two variables, this is a reasonable way to get started. Nielsen then goes through steps I to VII in the table above, to the extent of introducing and discussing the concept of limits. The derivative is then introduced under a new heading ("The Derivative"), and the general form of the derivative given in terms of y = *f*(x), i.e. *dy/dx* = lim [Δx -> 0] (*f*(x + Δx) - *f*(x))/Δx. To take the derivative of an equation, Nielsen then first introduces the "Four-Step Rule," which details the method already shown in the table. Since this is admitted to be "too cumbersome," other methods are then discussed.
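Carried out on a polynomial, Nielsen's four steps collapse algebraically into the power rule already noticed in step VII of the table. The sketch below computes that result; representing a polynomial as a list of coefficients, lowest power first, is my own device, not Nielsen's:

```python
# Nielsen's "Four-Step Rule", carried out exactly on polynomials.
# (1) Expand p(x + dx) by the binomial theorem.
# (2) Subtract p(x): the terms free of dx cancel.
# (3) Divide by dx: every surviving term loses one power of dx.
# (4) Let dx -> 0: only terms now free of dx remain.
# What survives is the coefficient of the first power of dx in
# step (1), namely k * a_k * x^(k-1) for each term a_k * x^k.

def four_step_derivative(coeffs):
    # coeffs[k] is the coefficient of x^k
    return [k * a for k, a in enumerate(coeffs)][1:]

# y = 3x^2 + 4x + 5, as in the table: coefficients [5, 4, 3]
print(four_step_derivative([5, 4, 3]))   # [4, 6], i.e. dy/dx = 6x + 4
```

Each coefficient is multiplied by its power and then shifted down one place, which is exactly the "drops one power, multiplied by the previous power, lone constant lost" pattern noted in step VII.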

This all looks very different in Sullivan. After a brief Introduction, which features graphs whose significance cannot be understood until later, we plunge right into "The Derivative," where, instead of the expression y = *f*(x), we get the example of *f*(x) = x^{2}. This is analyzed in terms of two values and then done abstractly, so that we end up with the derivative defined as *f'*(x) = lim [Δx -> 0] (*f*(x + Δx) - *f*(x))/Δx. Having avoided any sense that the derivative might be a relationship between two variables, Sullivan then says, "If the function is denoted by *y* instead of by *f(x)*, then the derivative is designated by the symbol *y'* or by *dy/dx*" [p.157]. Sullivan thus avoids introducing the concept by watching the behavior of an equation with changing variables. This is why I say above, "The traditional notation of *dy/dx* is still used these days, but its derivation and meaning, whether from infinitesimals or limits, is less clear from the new approach. Depending on the text, it may be introduced rather arbitrarily after functional analysis is developed." Sullivan does eventually give us the kinds of examples that Nielsen does, but in reverse order, and the Four-Step Rule seems to have been lost.

What bothers me about this is that one would get the impression that calculus is a branch of the theory of functions, just as the New Math gave the impression that arithmetic was a branch of Set Theory. In the theory of functions, indeed, we may analyze calculus in new and elegant ways, but that reflects neither its historical origin, nor what may be the best pedagogy for it, nor the issues relevant to the metaphysics of mathematics. Where Nielsen uses the equation y = x^{2} + x as a way to introduce and explain limits and the derivative [p.212], Sullivan only introduces the equation s = t^{3} - t as an example for all the abstract analysis that he has already done [p.158]. The loss of the Four-Step Rule seems to reflect this cart-before-the-horse transformation, as though such a procedure no longer has anything to do with the derivation. As one of the earliest correspondents to respond to this page put it, calculus is simply "not taught that way anymore."

If the new approach makes calculus a branch of the theory of functions, then it loses touch with what calculus was about originally, which was quantities. Since the *original* quantities in calculus were infinitesimals, I would say there is a certain tendency here, to move further and further away from the original focus. Stillwell's book strikes me as of interest, not just for its return to an examination of infinitesimals, but because he overlooks the difference between the analysis of limits and the analysis in which the nature of functions has become the original and primary focus. From the quotation, one would assume that the approach of mathematicians to calculus is exactly the same now as it was when the theory of limits was settled by 1870 or so. My sense is that the change between the treatment of Kaj Nielsen in the 1950's and that of John Sullivan by 1980 is at least as important, especially when we compare it with the ideas behind the debacle of the New Math.