\documentclass[10pt,letterpaper]{article}
% DESTDIR="calculus"
% table of color names: https://en.wikibooks.org/wiki/LaTeX/Colors
\newcommand{\basepath}{/netbackup/data/latex_related/introduction_to_calculus/resources}
\usepackage[absolute,overlay]{textpos}
\usepackage{listings,color}
\usepackage[usenames,dvipsnames,table]{xcolor}
\definecolor{CodeColor}{rgb}{0.9,0.9,0.9}
\definecolor{LightGray}{rgb}{0.95,0.95,0.95}
\definecolor{DarkGreen}{rgb}{0,0.375,0}
\definecolor{DarkRed}{rgb}{0.5,0,0}
\lstset{basicstyle=\footnotesize,
language=Python,
backgroundcolor=\color{LightGray},
numbersep=5pt,
xleftmargin=3em,
framexleftmargin=4pt,}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{scrextend}
\usepackage[utf8]{inputenc}
% this formats numbers in sci notation
\usepackage[scientific-notation=true]{siunitx}
% this increases the spacing between table rows
\usepackage{tabularx}
\setlength{\extrarowheight}{1.5pt}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage[unicode]{hyperref}
\setlength{\parskip}{0.3\baselineskip}%
% strikethrough in math
\usepackage{xcolor,cancel}
\newcommand\Ccancel[2][black]{\renewcommand\CancelColor{\color{#1}}\cancel{#2}}
% highlight box in math
\newcommand{\highlight}[1]{%
\colorbox{green!20}{$\displaystyle#1$}}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{listings}
\usepackage{float}
\usepackage{endnotes}
\usepackage{enumitem}
\setlist{nosep}
\newcommand\Hrule{\hrule\vspace{.5\baselineskip}}
\renewcommand\notesname{References}
\hypersetup{
colorlinks=true,
linkcolor=OliveGreen,
urlcolor=blue,
linktoc=all
}
\numberwithin{equation}{section}
\usepackage[margin=.75in]{geometry}
% this allows table rows & columns to have colors
%\usepackage{colortbl}
%\definecolor{TableHead}{rgb}{0.85,0.95,0.85}
%\definecolor{TableRow}{rgb}{0.95,0.95,0.85}
\rowcolors{2}{white}{gray!12}
\usepackage{perpage}
\MakePerPage{footnote}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\newcommand\ParDivider{\vspace{.5\baselineskip}\centerline{\tiny \textbullet\hspace{2em}\textbullet\hspace{2em}\textbullet}\vspace{.5\baselineskip}}
\author{\href{http://arachnoid.com/administration}{Paul Lutus}\\
\href{http://arachnoid.com}{http://arachnoid.com}}
\title{Introduction to Calculus}
\date{September 1, 2016 \linebreak \linebreak Most recent revision: \today}
\begin{document}
\maketitle
\begin{abstract}
\noindent\rule{.9\textwidth}{.1px}
This article provides an overview of, and introduction to, calculus. It's intended for general readers and nonspecialists, and presents the topic's key concepts in a transparent, approachable way.
The article's purpose is to help readers see that calculus is not only relatively easy to understand, but is a useful and accessible intellectual skill without which some aspects of the modern world are indecipherable -- population growth, space travel, climate change, epidemics and control of diseases, economic policy, investment strategies and others -- all of which require a knowledge of how and why things change, the kind of knowledge calculus provides.
Some familiarity with secondary-school algebra should be enough to follow most of the article's content. Some new topics, such as functions, are introduced and explained before being used.
The article includes tutorial exercises that show concepts without overwhelming the reader in detail, and covers related topics such as differential equations and computer algebra systems.
\noindent\rule{.9\textwidth}{.1px}
\end{abstract}
\tableofcontents
\listoffigures
\listoftables
\section{Basic Concepts}
\subsection{Math Mythology}
While introducing calculus, this article tries to demolish a pervasive American myth -- that calculus is outside the intellectual grasp of everyday people. In contrast to the practice in many other countries, American students are told that calculus is an advanced mathematical topic, one neither suitable for, nor within the mental abilities of, ordinary people. The author believes this is a crippling and misleading belief, one easily falsified by some exposure to the topic itself. In this article, the author strives to explain the mathematical concepts in the simplest, most accessible way.
\subsubsection{Stationary Math}
The mathematical topics taught before calculus produce static results, unchanging outcomes for provided arguments. Calculus differs by describing dynamic systems, systems that are moving and changing. If we want to mathematically describe a rock we can weigh it, compare its volume to its mass and derive a density, compute a ratio of volume to surface area, things like that -- fixed outcomes, no motion or change. But imagine that we throw the rock and try to predict the rock's path. Because the rock is now moving, we need calculus.
\subsection{Car Example}
\label{Car Example}
The key idea of calculus, on which the entire field stands, is the relationship between a quantity and a \textit{rate of change} in that quantity. For example, we might want to describe the position of a moving car as time passes:
\begin{itemize}
\item The car's acceleration, controlled by the accelerator pedal, is a \textit{rate of change} in the car's velocity.
\item The car's velocity, recorded by the speedometer, is a \textit{rate of change} in the car's position.
\item Expressed in everyday language, position is ``where you are'', velocity is ``how fast you're moving'', and acceleration represents ``how velocity is changing.''
\item These three quantities -- position, velocity and acceleration -- are bound together in such a way that, with the methods of calculus, we can use any one of them to find the other two.
\item For example, we can note a car's speedometer (velocity) readings over time and use that to reconstruct the car's position, without any other information.\footnote{In principle, if the car were equipped with an accelerometer\endnote{\href{https://en.wikipedia.org/wiki/Accelerometer}{Accelerometer} -- a device for measuring changes in velocity.}, we could use that to reconstruct both velocity and position over time (this is one way astronauts find out where they are).}
\end{itemize}
\subsubsection{Car motion data}
Here's an example. Let's say we're in a car that's stationary at time zero and that begins to accelerate along a road. Here are a table and a plot that show the car's acceleration, velocity and position at different times:
\begin{table}[H]
\begin{center}
\parbox{0.45\linewidth}{
\begin{center}
\resizebox{0.45\columnwidth}{!}{
\input{\basepath/table1.txt}
}
\captionof{table}{Acceleration/Velocity/Position}
\label{table:AVPTable}
\end{center}
}
\parbox{0.45\linewidth}{
\includegraphics[width=0.45\columnwidth]{\basepath/car_example_graph.png}
\captionof{figure}{Acceleration/Velocity/Position}
\label{fig:AVPChart}
}
\end{center}
\end{table}
(For a technical explanation of this table's design, see the ``\nameref{Data Table Design}'' appendix on page \pageref{Data Table Design}.)
\subsubsection{Differentiation and integration}
Three quantities are listed in Table \ref{table:AVPTable} -- acceleration, velocity and position. Here's what we can do with this information:
\begin{itemize}
\item As explained above, using calculus methods, \textit{we can use any one of the values to find the other two.} For example, if we only have a list of accelerations over time, we can use it to compute velocity, and we can use velocity to compute position.
\item Also, by performing the opposite or \textit{inverse}\endnote{\label{inverse}\href{https://en.wikipedia.org/wiki/Inverse_(mathematics)}{Inverse (mathematics)} -- two functions or operations that perform opposite actions are said to be \textit{inverses} of each other.} operation, and with only a list of the car's positions over time, we can compute the car's velocity and acceleration over time.
\item We can do this because velocity is a \textit{rate of change} in position, and acceleration is a \textit{rate of change} in velocity. These values are said to be \textit{interdependent}.
\end{itemize}
\subsubsection{Concepts}
Before we move on, here are some important concepts:
\begin{itemize}
\item Finding a rate of change in a quantity is called \textit{differentiation}, and the result is called a \textit{derivative}\endnote{\label{Derivative}\href{https://en.wikipedia.org/wiki/Derivative}{Derivative (mathematics)} -- ``The derivative of a function of a real variable measures the sensitivity to change of a quantity (a function value or dependent variable) which is determined by another quantity (the independent variable).'' }.
\item The inverse operation, summing rates of change to obtain a quantity (like acquiring position from a series of velocities), is called \textit{integration}\endnote{\href{https://en.wikipedia.org/wiki/Integral}{Integral (mathematics)} -- ``In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data.''}.
\item Integration and differentiation are inverse\textsuperscript{\ref{inverse}} operations -- if we take a derivative of a quantity, then integrate the derivative, we get back the original quantity. This very important idea is called the \textit{Fundamental theorem of calculus}\endnote{\label{FundamentalTheorem}\href{https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus}{Fundamental theorem of calculus} -- The idea that differentiation and integration are inverse operations.}.
\end{itemize}
\subsubsection{Computing a derivative}
\label{Computing a derivative}
To show how this works, let's produce some results using the data from Table \ref{table:AVPTable} (page \pageref{table:AVPTable}). Let's first use position values to acquire a velocity value. To say this another way, let's \textit{differentiate} positions to get velocity:
\begin{itemize}
\item Four seconds into the journey the car was located at position 32, and one second later the car was located at position 50. The average velocity between these positions can be acquired by subtracting the first position from the second (or \textit{taking the difference}): 50 - 32 = 18. The difference between two position values is a velocity, a \textit{rate of change} in position.
\item Notice that each value in the \textit{average velocity} column represents the difference between the two nearest \textit{position} values.
\end{itemize}
Expressed simply, \textit{differentiating} a changing quantity means subtracting adjacent values of that quantity to acquire a rate of change (with more refinements explained below).
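This subtraction scheme is easy to try in a few lines of Python (a sketch; the position values follow Table \ref{table:AVPTable}, and the variable names are mine):

\begin{lstlisting}
# the car's positions at t = 0, 1, 2 ... 8 seconds (Table 1)
positions = [0, 2, 8, 18, 32, 50, 72, 98, 128]

# differentiate: subtract each position from the one that follows it
velocities = [b - a for a, b in zip(positions, positions[1:])]

print(velocities)  # [2, 6, 10, 14, 18, 22, 26, 30]
\end{lstlisting}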
\subsubsection{Computing an integral}
\label{Computing an integral}
Now let's use the data from the table on page \pageref{table:AVPTable} to perform the inverse operation -- let's \textit{integrate} velocity to acquire position:
\begin{itemize}
\item We know the car's position at 4 seconds and our goal is to use average velocities to find its position at 8 seconds.
\item To get the desired result, we take the position at 4 seconds and add to it the sum of the average velocities on the interval from 4 to 8 seconds:
\begin{itemize}
\item The initial position at 4 seconds is 32.
\item The sum of the average velocities between 4 and 8 seconds is 18 + 22 + 26 + 30 = 96.
\item Add the initial position to the sum of velocities: 32 + 96 = 128.
\item The result position at 8 seconds is 128.
\end{itemize}
\end{itemize}
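The steps above can be sketched in Python (the numerical values are from Table \ref{table:AVPTable}; the variable names are mine):

\begin{lstlisting}
# integrate: start from the known position at t = 4 seconds and
# add the sum of the average velocities for the interval 4..8 s
p4 = 32
velocities = [18, 22, 26, 30]
p8 = p4 + sum(velocities)

print(p8)  # 128
\end{lstlisting}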
\subsubsection{Relationship between integration and differentiation}
Notice that integration is the opposite of differentiation:
\begin{itemize}
\item In differentiation we subtract positions from each other to get a \textit{rate of change} in position (a velocity).
\item In integration we start with an initial position and add to it a sum of velocities to acquire a new position.
\end{itemize}
Careful thought will show that differentiation and integration are exact inverses of each other.
\subsection{Incremental vs. Continuous Processes}
\label{Incremental vs. Continuous Processes}
The above example uses a table of stepwise numerical values to roughly approximate a continuous process -- a moving car. The table's incremental values are useful in showing a relationship between a process and its rate of change, but unlike the steps in the table, many natural processes are continuous. In the following sections we learn how to describe a continuous\endnote{\label{ContinuousProcess}\href{https://en.wikipedia.org/wiki/Continuous_function}{Continuous function} -- roughly speaking, a function whose output smoothly and continuously follows its input, with no discontinuities or interruptions.} process -- a process like those found in nature.
Remember this idea going forward -- many natural processes are continuous, therefore our equations should be continuous too.
\section{Variables, Equations, Functions}
\subsection{Variables}
In algebra, a letter (or variable\endnote{\href{https://en.wikipedia.org/wiki/Variable_(mathematics)}{Variable (mathematics)} -- a symbol that stands for a numerical quantity.}) stands for a number -- \textit{one letter, any number}. It's a powerful idea, one that allows us to think about symbols that apply to any numbers. In preparation for writing equations able to generate the values listed in Table \ref{table:AVPTable} on page \pageref{table:AVPTable}, let's choose letters to stand for each of the table's quantities:
\begin{itemize}
\item The leftmost table column contains values for time in seconds. Let's use the letter $t$ for this quantity. This is called the \textit{independent variable}\endnote{\href{https://en.wikiversity.org/wiki/Independent_variable}{Independent variable} -- a variable that doesn't result from a computation and whose value doesn't depend on another variable.} because its value doesn't depend on, or result from computations of, the other variables.
\item Moving to the right, we have columns for acceleration, velocity and position. Let's use letters $a$ for acceleration, $v$ for velocity, and $p$ for position. These are called \textit{dependent variables}\endnote{\href{https://en.wikiversity.org/wiki/Dependent_variable}{Dependent variable} -- a variable whose value depends on the outcome of a mathematical operation that in turn depends on the value of an independent variable.} because their value depends on a calculation involving the independent variable $t$.
\end{itemize}
Now we can write equations that define, and provide values for, these variables.
\subsection{Equations}
In the example problem above, we computed results using a table of numerical values, but for most mathematical work an equation is much preferred over a table of numbers. An equation is more powerful and general than any list of numbers, and because of its universality and simplicity of expression an equation might describe, or even lead to, a physical theory, thus contributing to our understanding of nature.
Having assigned variable names for each of the data table's columns, we can write equations that describe and predict the table's numerical values:
\begin{itemize}
\item In the data table on page \pageref{table:AVPTable}, acceleration is the same for any time argument -- this is called a \textit{constant}\endnote{\href{https://en.wikipedia.org/wiki/Mathematical_constant}{Mathematical constant} -- an unchanging numerical value.}, an unchanging value. Its very simple equation is:
\begin{equation}
a = 4
\end{equation}
\item By examining the table's velocity values, readers may be able to guess the corresponding equation -- it appears to be the acceleration constant times time, or:
\begin{equation}
v = a t
\end{equation}
\item The position equation's form isn't self-evident, but we'll add it to this list while promising to provide a definition below. It is:
\begin{equation}
p = \frac{1}{2} a t^2
\end{equation}
\end{itemize}
Here are some facts about these equations:
\begin{itemize}
\item Just like the numerical quantities described earlier, any of the equations in the above list can be obtained from the others.
\item Moving \textit{up} the list from position to acceleration, each equation is said to be the \textit{derivative with respect to time} of the equation below it.
\item Moving \textit{down} the list from acceleration to position, each equation is said to be the \textit{integral with respect to time} of the equation above it.
\item When expressed in technical language, velocity is the time integral of acceleration, and position is the time integral of velocity\footnote{Position is also described as the ``second integral'' of acceleration with respect to time.}.
\item When expressed in everyday language, the velocity at time $t$ is the \textit{sum of all prior accelerations}, and the position at time $t$ is the \textit{sum of all prior velocities}.
\item As before and consistent with the Fundamental Theorem of calculus\textsuperscript{\ref{FundamentalTheorem}} introduced earlier, if we take the derivative of an equation and then integrate the derivative, we get back what we started with.
\end{itemize}
The calculus idea from the prior section -- that all the numerical quantities can be acquired from any one of them -- applies to equations as well: using the methods of calculus, we can acquire all the above equations from any one of them:
\begin{itemize}
\item Provided only with the position equation, we can \textit{differentiate} to acquire velocity and acceleration.
\item Provided only with the acceleration equation, we can \textit{integrate} to acquire velocity and position.
\end{itemize}
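As a sketch of these ideas, the three equations can be checked against Table \ref{table:AVPTable} in a few lines of Python (the function names are mine; $a = 4$ as in the car example):

\begin{lstlisting}
a = 4                                    # constant acceleration

def velocity(t): return a * t            # v = a t
def position(t): return a * t**2 / 2     # p = 1/2 a t^2

print(position(8))                # 128.0, as in Table 1
print(position(5) - position(4))  # 18.0, the table's average
                                  # velocity between 4 and 5 s
\end{lstlisting}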
But to do this, we need to understand functions.
\subsection{Functions}
\label{Functions}
In a powerful idea from algebra, letters stand for numbers -- \textit{one letter, any number}. In calculus (and other mathematical fields) there's a similar idea for equations: a \textit{function}\endnote{\href{https://en.wikipedia.org/wiki/Function_(mathematics)}{Function (mathematics)} -- a function is a symbol that stands for an equation.} stands for an equation -- \textit{one function, any equation}. This ``symbolic equation'' idea is essential for performing calculus operations. Here's an example of functional notation:
\begin{equation}
\label{PositionFunction}
p(t) = \frac{1}{2} a t^2
\end{equation}
In this example we define $p(t)$, a position function that accepts a time argument and produces a position result. But, just as a variable can stand for any number, \textit{a function can stand for any equation}. This means we can write expressions with functions in much the same way we do with variables.
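Functional notation maps directly onto programming. In this Python sketch (the names are mine), a function accepts a time argument, produces a position result, and can itself be passed around like a variable:

\begin{lstlisting}
a = 4

def p(t):
    """Position function: p(t) = 1/2 a t^2."""
    return a * t**2 / 2

# a function can be handled much like a variable -- here we pass
# p itself to another function that evaluates it at several times
def tabulate(f, times):
    return [f(t) for t in times]

print(tabulate(p, [0, 1, 2, 3, 4]))  # [0.0, 2.0, 8.0, 18.0, 32.0]
\end{lstlisting}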
\section{Numerical and Symbolic Differentiation}
\subsection{Numerical Differentiation}
Let's use the above-described function notation to perform numerical differentiation, a calculation based on numerical differences. Specifically, let's perform an operation on the position function shown in equation \ref{PositionFunction} above to acquire an approximation of its derivative, velocity.
\subsubsection{A small step in time}
In section \ref{Computing a derivative}, ``\nameref{Computing a derivative}'' on page \pageref{Computing a derivative}, we acquired a velocity estimate, a numerical derivative, by subtracting two adjacent positions listed in a data table. Our purpose was to acquire a \textit{rate of change} in position over time. Let's create a more flexible version of that approach by (a) producing results using position function \ref{PositionFunction} defined above, rather than data from a table, and (b) defining a variable representing a \textit{small step in time}. Let's give our time-step this symbol: $\Delta t$, pronounced ``delta-t''. Using this approach and with functional notation we can write:
\begin{equation}
\label{FirstDerivativePositionNumerical}
p'(t) \approx \frac{p(t+\Delta t) - p(t)}{\Delta t}
\end{equation}
Some explanation:
\begin{itemize}
\item The function $p(t)$, defined in \ref{PositionFunction} above, generates positions for time arguments.
\item The expression $p'(t)$ (note the apostrophe) means ``the first derivative\textsuperscript{\ref{Derivative}} of $p(t)$'', a \textit{rate of change} in position, or velocity\footnote{The second derivative, acceleration, would be $p''(t)$.}.
\item The symbol ``$\approx$'' means ``approximately equal to''. It's present because this numerical method cannot produce exact results.
\item The symbol ``$\Delta t$'', pronounced ``delta-t'', is assigned a small numerical value, sufficient to produce a small step in time and acquire a numerical derivative of position, which is velocity.
\end{itemize}
To summarize, equation \ref{FirstDerivativePositionNumerical} uses a method much like that shown in the ``\nameref{Computing a derivative}'' section above -- it computes two position values and subtracts one from the other to get a numerical derivative, an estimate of velocity. The result of the subtraction is then divided by $\Delta t$ to make the result independent of the chosen step size.
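Equation \ref{FirstDerivativePositionNumerical} translates almost literally into Python. This sketch (the names are mine) assumes the car example's $a = 4$ and estimates the velocity at $t = 8$, whose exact value is $a t = 32$:

\begin{lstlisting}
a = 4

def p(t):
    return a * t**2 / 2          # position function

def derivative(f, t, dt):
    """Numerical derivative: (f(t+dt) - f(t)) / dt."""
    return (f(t + dt) - f(t)) / dt

# approximate the velocity at t = 8 s; the exact value is a*t = 32
print(derivative(p, 8, 0.1))     # about 32.2 -- close, not exact
\end{lstlisting}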
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.6\columnwidth]{\basepath/derivative_graph.png}
\caption{Numerical Differentiation}
\label{NumericalDifferentiation}
\end{center}
\end{figure}
Figure \ref{NumericalDifferentiation} shows how equation \ref{FirstDerivativePositionNumerical} looks when plotted. It shows two example rates of change, or velocities (dashed red lines), generated from the coordinate pairs equation \ref{FirstDerivativePositionNumerical} creates, superimposed on a plot of positions over time. The dashed lines that intersect the position plot are technically known as \textit{tangent lines}\endnote{\href{http://mathworld.wolfram.com/TangentLine.html}{Tangent Line} -- a line that touches a function's plot at a point and matches the function's slope at that point.}. The blue intersection points are intentionally spaced far apart to remind the reader how imprecise numerical differentiation can be.
\subsubsection{Approximate Numerical Results}
I emphasize again that this method can only produce approximate results. Here's a table of results using equation \ref{FirstDerivativePositionNumerical} with a time argument of 8 and different $\Delta t$ values:
\begin{table}[H]
\begin{center}
\input{\basepath/table2.txt}
\captionof{table}{Numerical Differential Results}
\label{table:NumericalResults}
\end{center}
\end{table}
\subsubsection{Division by Zero}
Table \ref{table:NumericalResults} shows that, as we make $\Delta t$ smaller, the results become more accurate. So why not just set $\Delta t$ to zero and get perfect results? We can't do that because we must divide by $\Delta t$ in equation \ref{FirstDerivativePositionNumerical}, and in mathematics division by zero\endnote{\href{https://en.wikipedia.org/wiki/Division_by_zero}{Division by zero} -- an undefined mathematical operation.} is undefined. Here's why -- consider this equation:
\begin{equation}
\label{DivZero}
y = \frac{x}{0}
\end{equation}
The result of equation \ref{DivZero} is undefined because dividing $x$ by zero (assuming $x \neq 0$) implies the existence of a number $y$ that, when multiplied by zero, would produce $x$, but there's no such number\footnote{Any number, multiplied by zero, equals zero.}. Therefore, when using numerical differentiation methods, remember these constraints:
\begin{itemize}
\item $\Delta t$ cannot equal zero, but
\item Nonzero $\Delta t$ values produce errors proportional to their value\footnote{Also see the ``\nameref{Computer Processing Limitations}'' appendix on page \pageref{Computer Processing Limitations} for a discussion of the special problems created by computer processing of numerical derivatives.}.
\end{itemize}
These problems are addressed by the idea of a \textit{limit}.
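Both constraints can be demonstrated with a short Python sketch (the names are mine; for this quadratic position function, expanding the difference quotient shows the error equals exactly $\frac{a}{2}\Delta t$):

\begin{lstlisting}
a = 4

def p(t):
    return a * t**2 / 2

def derivative(f, t, dt):
    return (f(t + dt) - f(t)) / dt

# the error shrinks in proportion to dt ...
for dt in (0.1, 0.01, 0.001):
    print(dt, derivative(p, 8, dt) - 32)   # error = (a/2) * dt

# ... but dt = 0 is not allowed
try:
    derivative(p, 8, 0)
except ZeroDivisionError:
    print("division by zero")
\end{lstlisting}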
\subsection{The Limit}
\label{The Limit}
In the above example we assigned a small numerical value to $\Delta t$ to produce an approximate derivative and discovered that the result's numerical errors decreased as $\Delta t$ decreased in size (table \ref{table:NumericalResults}). The idea of examining small intervals is central to calculus, and the limit operator\endnote{\href{https://en.wikipedia.org/wiki/Limit_(mathematics)}{Limit (mathematics)} -- a method to approach values that cannot be directly assigned.} refines this idea -- it's a way to make statements about infinitesimally small intervals. Another way to say this is that we can use the limit operator to approach values that cannot be assigned directly, like zero or infinity. For example:
\begin{equation}
\lim_{x \to 0^{+}} \frac{1}{x} = \infty
\end{equation}
If the above equation were to be expressed in words, it would be something like, ``As $x$ \textit{approaches} zero from the positive side, $\frac{1}{x}$ \textit{approaches} infinity.'' This shows how a limit can be used to make statements that aren't constrained by practical or logical limitations such as (in this example) the inconvenient facts that division by zero is undefined and infinity isn't a number\textsuperscript{\ref{Infinity}}. Again, by using a limit, we \textit{approach} specified values but do not \textit{reach} them.
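The ``approaches'' language can be made concrete numerically (a minimal sketch): as $x$ takes ever-smaller positive values, $\frac{1}{x}$ grows without bound, yet $x$ never actually reaches zero:

\begin{lstlisting}
# as x takes ever-smaller positive values, 1/x grows without bound
for x in (1.0, 0.001, 0.000001):
    print(x, 1 / x)
# x never actually reaches zero, so 1/x remains defined
\end{lstlisting}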
Among many uses, and in concert with the functional notation introduced above, the limit operator can be used to define a symbolic derivative in terms of a limit:
\begin{equation}
f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}
\end{equation}
In this example, $f'(x)$ (note the apostrophe) is the derivative of $f(x)$ with respect to $x$, and $\Delta x$ is a small numerical step (already introduced) that the limit operator makes infinitesimally small.
The limit has a number of applications in both differential and integral calculus, so it merits close study. Applications for this operator are shown in both the \nameref{Symbolic Differentiation} and \nameref{Symbolic Integration} sections.
More importantly, it is by means of the limit operator that we make the transition from stepwise processes to the continuous ones described in section \ref{Incremental vs. Continuous Processes} above, and write equations that can describe nature.
\subsection{Symbolic Differentiation}
\label{Symbolic Differentiation}
The most important reason to prefer symbolic differentiation over the numerical methods described above is that the outcome is a symbolic function (and equation) rather than a numerical result. For all the reasons that an equation is more powerful and versatile than a numerical result, the equations and functions produced by symbolic differentiation are much more useful than simple numerical results like those shown above.
Remember that, in both the numerical method described above and in the symbolic method presented here, the expected outcome of the activity is a function that defines a \textit{rate of change} in the original function with respect to a particular variable. Expressed another way, the resulting function will be the symbolic derivative\textsuperscript{\ref{Derivative}} of the original function.
Interestingly, we start the process of symbolic differentiation with almost the same function definition shown in the numerical method (\ref{FirstDerivativePositionNumerical}), but with some changes:
\begin{equation}
\label{FirstDerivativePositionSymbolic}
p'(t) = \lim_{\Delta t \to 0} \frac{p(t+\Delta t) - p(t)}{\Delta t}
\end{equation}
One change is the replacement of ``$\approx$'' (meaning \textit{approximately equal to}) with ``$=$'' (meaning \textit{exactly equal to}), because for the majority of functions, symbolic differentiation is exact. The other change is the presence of the limit operator, which is key to a process of symbolic manipulation that transforms one function into another.
Here are steps in the symbolic differentiation of our example function $p(t)$:
\begin{itemize}
\item For clarity we temporarily abandon functional notation, so the equation within the position function $p(t)$ is revealed:
\begin{equation}
\frac{p(t+\Delta t) - p(t)}{\Delta t} \mapsto \frac{a {\left({\Delta t} + t\right)}^{2} - a t^{2}}{2 \, {\Delta t}}
\end{equation}
\item Now we perform a limit operation, which eliminates $\Delta t$ and terms multiplied by it and reduces the result to its simplest form (the full process for acquiring this result is shown in the ``\nameref{Detailed Symbolic Differentiation}'' appendix on page \pageref{Detailed Symbolic Differentiation}):
\begin{equation}
\lim_{\Delta t \to 0} \frac{a {\left({\Delta t} + t\right)}^{2} - a t^{2}}{2 \, {\Delta t}} = a t
\end{equation}
\end{itemize}
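Expanding the squared term shows why the limit equals $a t$:
\begin{equation*}
\frac{a {\left(t + {\Delta t}\right)}^{2} - a t^{2}}{2 \, {\Delta t}} = \frac{2 \, a t {\Delta t} + a {\Delta t}^{2}}{2 \, {\Delta t}} = a t + \frac{a}{2} {\Delta t}
\end{equation*}
As ${\Delta t} \to 0$, the residual term $\frac{a}{2} {\Delta t}$ vanishes, leaving $a t$.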
The above process creates a new function that represents velocity and is the derivative of the position function \ref{PositionFunction} defined above. It has this formal definition:
\begin{equation}
p'(t) = \lim_{\Delta t \to 0} \frac{a {\left({\Delta t} + t\right)}^{2} - a t^{2}}{2 \, {\Delta t}} = a t
\end{equation}
The ``second'' derivative, which defines acceleration, is derived in much the same way:
\begin{itemize}
\item Abandon functional notation:
\begin{equation}
\frac{p'(t+\Delta t) - p'(t)}{\Delta t} \mapsto \frac{a {\left({\Delta t} + t\right)} - a t}{{\Delta t}}
\end{equation}
\item Perform the limit operation and assign the result to a new function identifier:
\begin{equation}
p''(t) = \lim_{\Delta t \to 0} \frac{a {\left({\Delta t} + t\right)} - a t}{{\Delta t}} = a
\end{equation}
\end{itemize}
The symbolic first and second derivatives acquired by this method agree with the definitions established earlier. Again, a more thorough exposition of this method appears in the ``\nameref{Detailed Symbolic Differentiation}'' appendix on page \pageref{Detailed Symbolic Differentiation}.
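These symbolic results can also be cross-checked numerically by differencing twice -- a difference of differences -- following the stepwise method of section \ref{Computing a derivative}. A Python sketch (the names are mine):

\begin{lstlisting}
a = 4

def p(t):
    return a * t**2 / 2

def derivative(f, t, dt):
    return (f(t + dt) - f(t)) / dt

def second_derivative(f, t, dt):
    # a difference of differences: differentiate the derivative
    d1 = derivative(f, t, dt)
    d2 = derivative(f, t + dt, dt)
    return (d2 - d1) / dt

# for a quadratic position function the result is the constant a
print(second_derivative(p, 8, 0.1))   # approximately 4
\end{lstlisting}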
\section{Numerical and Symbolic Integration}
\subsection{Numerical Integration}
\subsubsection{Functions without integrals}
Numerical integration plays a larger part in applied mathematics than numerical differentiation does. The reason is that most well-behaved functions have easily acquired symbolic derivatives, but this isn't true for symbolic integrals. Many commonly used functions have no symbolic integrals and must be integrated numerically. Among these are the error function\endnote{\href{https://en.wikipedia.org/wiki/Error_function}{Error function} -- a very widely used function that cannot be integrated symbolically.} widely used in statistics and other fields --
\begin{equation}
\text{erf}(x) = \frac{1}{\sqrt{\pi}} \int_{-x}^{x} e^{-t^2} \, \mathrm{d}t
\end{equation}
-- any orbital problem involving more than two bodies\endnote{\href{https://en.wikipedia.org/wiki/Three-body_problem}{Three-body problem (orbital mechanics)} -- the class of orbital problems lacking closed-form solutions.}, and many others. As a result, much study has been devoted to finding accurate numerical algorithms for these kinds of functions.
\subsubsection{Summation of rectangles}
The premise of numerical integration is that we can approximate an integral by summing the areas of a series of rectangles. The original function (in this example $f(x)$) provides each rectangle's height, and the summation operator at the right in equation \ref{NumericalIntegrationFunction} provides the summed areas:
\begin{equation}
\label{NumericalIntegrationFunction}
\int_{a}^{b} f(x) \, \mathrm{d}x \approx \frac{b-a}{m} \sum_{n = 1}^{m} f\left(\frac{n}{m} (b-a)+a\right)
\end{equation}
Equation \ref{NumericalIntegrationFunction} says that the symbolic integral at the left can be approximated by the summation operation at the right\footnote{The integration symbol $\int$ refers to a continuous summation of infinitesimal quantities, an ``integral,'' while the summation symbol $\sum$ represents summation of discrete quantities on finite intervals.}, using the same function in both cases. This is the same idea we applied earlier in section \ref{Computing an integral}, ``\nameref{Computing an integral}'' on page \pageref{Computing an integral} -- we took the sum of a series of velocities to acquire a position.
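The summation in equation \ref{NumericalIntegrationFunction} is nearly a one-liner in Python. The sketch below (the names are mine) integrates the car's velocity $v = a t$ (with $a = 4$) from 4 to 8 seconds; the exact answer, from the position equation, is $p(8) - p(4) = 128 - 32 = 96$:

\begin{lstlisting}
def integrate(f, a, b, m):
    """Approximate the integral of f on [a, b] with m rectangles."""
    width = (b - a) / m
    return width * sum(f(a + n * (b - a) / m) for n in range(1, m + 1))

def v(t):
    return 4 * t          # the car's velocity, v = a t with a = 4

print(integrate(v, 4, 8, 1000))   # about 96.03 -- near the exact 96
\end{lstlisting}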
\subsubsection{Approximate results}
It's important to emphasize that numerical integration rarely produces an exact result. The charts in Table \ref{NumericalIntegrationOutcomes} show that the outcome's accuracy depends on the value chosen for $m$ in equation \ref{NumericalIntegrationFunction}, with higher values (i.e. more rectangles) leading to greater accuracy, but -- as with numerical differentiation -- there are limits imposed by finite computer variable resolution. An additional factor in computer numerical integration is that program runtime increases in direct proportion to the chosen $m$ value.
\begin{table}[H]
\rowcolors{2}{white}{white}
\begin{center}
\begin{tabular}{| c | c | }
\hline
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\columnwidth]{\basepath/numerical_integration_graph_4.png}
\end{subfigure}&
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\columnwidth]{\basepath/numerical_integration_graph_8.png}
\end{subfigure}\\
\hline
\newline
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\columnwidth]{\basepath/numerical_integration_graph_16.png}
\end{subfigure}&
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\columnwidth]{\basepath/numerical_integration_graph_32.png}
\end{subfigure}\\
\hline
\end{tabular}
\caption{Numerical Integration Outcomes}
\label{NumericalIntegrationOutcomes}
\end{center}
\end{table}
The plots in Table \ref{NumericalIntegrationOutcomes} show a green line that represents a continuous process, and a series of rectangles that approximate the process. As explained in section \ref{Incremental vs. Continuous Processes}, the goal is to produce a result that describes a continuous\textsuperscript{\ref{ContinuousProcess}} process, rather than approximating it with a series of discrete steps.
\subsection{Symbolic Integration}
\label{Symbolic Integration}
\subsubsection{Relation between definite and indefinite integrals}
The car example described in section \ref{Car Example} on page \pageref{Car Example} leads to a solution in terms of \textit{definite integrals}\endnote{\label{DefiniteIntegralDefinition}\href{http://mathworld.wolfram.com/DefiniteIntegral.html}{Definite Integral} -- definition in terms of indefinite integrals.}, integrals having a defined interval, for example between $a$ and $b$:
\begin{equation}
\int_{a}^{b} f(x) \, \mathrm{d} x
\end{equation}
Contrast this with an \textit{indefinite} integral, one having no defined interval:
\begin{equation}
\int f(x) \, \mathrm{d} x
\end{equation}
Because the following discussion describes \textit{indefinite} integrals and because our car example relies on \textit{definite} integrals, we show how a definite integral can be obtained from an indefinite one. Let $F(x)$ be the \textit{antiderivative}\textsuperscript{\ref{Antiderivative}} of $f(x)$, such that:
\begin{equation}
F'(x) = f(x)
\end{equation}
A definite integral is obtained from an indefinite integral like this\textsuperscript{\ref{DefiniteIntegralDefinition}}:
\begin{equation}
\int_{a}^{b} f(x) \, \mathrm{d} x = F(b) - F(a)
\end{equation}
In applied problems, a constant $C$ that establishes an initial value is often added to the result:
\begin{equation}
\int_{a}^{b} f(x) \, \mathrm{d} x = F(b) - F(a) + C
\end{equation}
The reader may recall that, in the ``\nameref{Computing an integral}'' section on page \pageref{Computing an integral}, a new position was acquired from a series of velocities by summing (integrating) the velocities, then adding a starting position to the result.
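The relation between definite and indefinite integrals is easy to demonstrate with a computer algebra system (SymPy is described in the ``\nameref{Computer Algebra Systems}'' section below). A small sketch, using $f(x) = x^2$ over the interval $[0, 2]$ as my example:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2

# indefinite integral: the antiderivative F(x), constant of integration omitted
F = sp.integrate(f, x)

# definite integral over [a, b] computed as F(b) - F(a)
a, b = 0, 2
by_antiderivative = F.subs(x, b) - F.subs(x, a)

# the same result, computed directly as a definite integral
direct = sp.integrate(f, (x, a, b))
```

Both computations produce $\frac{8}{3}$, as the definition requires.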
\subsubsection{Relation between summation and integration}
In contrast with symbolic differentiation, where nearly all functions have symbolic derivatives that are usually easy to obtain, not all functions have symbolic integrals; indeed, many very useful functions lack them and must be integrated numerically.
Equation \ref{NumericalIntegrationFunction} in the numerical integration section shows an approximate relation between summation and integration, but for symbolic integration we can use the limit operator to show the equivalence of a particular kind of summation and an integral\endnote{\href{http://www.mathcentre.ac.uk/resources/Engineering\%20maths\%20first\%20aid\%20kit/latexsource\%20and\%20diagrams/8_12.pdf}{Integration as summation (PDF)} -- a concise summary showing use of the limit operator to show the equivalence of a particular kind of summation and an integral.}:
\begin{equation}
\label{SymbolicIntegrationFunction}
\lim_{\Delta x \to 0} \sum_{x = a}^{b} f(x)\Delta x = \int_{a}^{b} f(x) \, \mathrm{d}x
\end{equation}
This is only meant to symbolize the process in the abstract. Unlike the numerical form shown in equation \ref{NumericalIntegrationFunction}, this form is not meant to be applied to real functions as written, and symbolic conversion from functions to their integrals is not so straightforward.
There are several approaches to finding a symbolic integral:
\subsubsection{Antiderivative}
If function $f(x)$ is the derivative of function $F(x)$:
\begin{equation}
F'(x) = f(x)
\end{equation}
Then $F(x)$ is said to be the \textit{antiderivative}\endnote{\label{Antiderivative}\href{https://en.wikipedia.org/wiki/Antiderivative}{Antiderivative} -- the opposite of a derivative, an integral.} of $f(x)$. This suggests that one way to find an integral for $f(x)$ is to look for a function whose derivative is $f(x)$ -- according to the Fundamental Theorem of Calculus\textsuperscript{\ref{FundamentalTheorem}}, such a function is the integral of $f(x)$.
Using this method to find integrals can sometimes be difficult and complex, but it has the advantage that a result can be easily verified -- we can differentiate a candidate function to see if its derivative is the original function.
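The verification step can even be done numerically: if a candidate $F(x)$ is a true antiderivative of $f(x)$, its measured slope at any point should match $f(x)$ there. A small sketch (the choice of $F(x) = -\cos(x)$ and $f(x) = \sin(x)$ is mine, for illustration):

```python
import math

def F(x):
    # candidate antiderivative
    return -math.cos(x)

def f(x):
    # the original function
    return math.sin(x)

# central-difference estimate of F'(x), which should approximate f(x)
h, x = 1e-6, 0.7
slope = (F(x + h) - F(x - h)) / (2 * h)
```

At $x = 0.7$ the measured slope of $F$ agrees with $f(0.7)$ to many decimal places, supporting the claim that $F$ is an antiderivative of $f$.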
\subsubsection{Applying Rules}
Another approach to finding integrals is to apply established rules. For the car example described in section \ref{Car Example} on page \pageref{Car Example}, we can apply rules to the process of acquiring velocity from acceleration and position from velocity. Earlier we established the identities of the first and second integrals of acceleration by examining a table of numerical values and guessing, but these integrals can also be acquired by the application of rules.
We earlier acquired equations by guessing their identities:
\begin{itemize}
\item From acceleration to velocity:
\begin{equation}
\label{AccelerationToVelocity}
\int a \, \mathrm{d} t = a t
\end{equation}
\item From velocity to position:
\begin{equation}
\label{VelocityToPosition}
\int a t \, \mathrm{d} t = \frac{1}{2} a t^2
\end{equation}
\end{itemize}
As it turns out, these pragmatic conversions apply established rules. The first rule is identical to that shown in equation \ref{AccelerationToVelocity}, while equation \ref{VelocityToPosition} applies this general exponential integral rule:
\begin{equation}
\int a x^n \, \mathrm{d} x = \frac{1}{n+1} a x^{n+1} \text{,} \quad n \ne -1
\end{equation}
This rule is applied to equation \ref{VelocityToPosition} by setting $n = 1$ and noting that $x^1 = x$. The ``\nameref{Rules for Integration}'' appendix on page \pageref{Rules for Integration} lists more such rules and sources for more rules.
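SymPy (described in the ``\nameref{Computer Algebra Systems}'' section below) applies rules like these automatically. A quick sketch confirming the conversions shown in equations \ref{AccelerationToVelocity} and \ref{VelocityToPosition}:

```python
import sympy as sp

a, t = sp.symbols('a t')

# from acceleration to velocity: the integral of a dt is a*t
velocity = sp.integrate(a, t)

# from velocity to position: the integral of a*t dt is a*t**2/2
# (the power rule with n = 1)
position = sp.integrate(a * t, t)
```

SymPy reproduces both results we earlier obtained by guessing.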
\subsection{Analyzing a Sphere}
This section shows how one can use successive integrations to analyze a three-dimensional object. The analysis adds one dimension per integration -- integrating a one-dimensional profile function creates a two-dimensional area, then integrating the area creates a three-dimensional volume.
Apart from their usefulness in teaching calculus, these equations have practical applications. The author uses them to analyze the properties of liquid storage tanks\endnote{\href{http://arachnoid.com/storage_container_mathematics}{Storage Container Mathematics} -- mathematics applied to storage vessels.} -- their surface areas and their partial and full content volumes.
\subsubsection{Circle Diameter Function}
For this exercise I've chosen a sphere -- simple enough to fully understand, but complex enough to be interesting. Let's start with a simple function that generates a bisecting diameter line for a circle with radius $r$. This simple equation is the basis for all the other forms described in this section:
\begin{equation}
\label{SphereAnalysis1}
f(t) = 2 \, \sqrt{-{\left(r - t\right)}^{2} + r^{2}}
\end{equation}
For all the functions in this section, the independent variable $t$ has the range $0 \le t \le 2 r$. Here's a plot of function \ref{SphereAnalysis1}:
\begin{figure}[H]
\centering
\includegraphics[width=0.6\columnwidth]{\basepath/sphere1.png}
\caption{Circle Diameter Function (one dimension)}
\end{figure}
\subsubsection{Circle Partial Area Function}
To move from one to two dimensions, let's write a function based on \ref{SphereAnalysis1} that, for a given $t$ argument, integrates a series of infinitesimal lines extending across a circle, in the range $0 \le y \le t$. The result should be the partial area of a circle of radius $r$ bisected at $t$:
\begin{equation}
\label{SphereAnalysis2}
f(t) = \int_{0}^{t} 2 \, \sqrt{-{\left(r - y\right)}^{2} + r^{2}} \, \mathrm{d} y = \frac{1}{2} \, \pi r^{2} - r^{2} \arcsin\left(\frac{r - t}{r}\right) -
\sqrt{2 \, r t - t^{2}} {\left(r - t\right)}
\end{equation}
Function \ref{SphereAnalysis2} is often applied to the task of determining the partial volume of a horizontal cylindrical storage tank. A result from \ref{SphereAnalysis2}, which in that case would represent the partial surface area of one end of the tank, is multiplied by the tank's length to acquire a partial volume:
\begin{figure}[H]
\centering
\includegraphics[width=0.6\columnwidth]{\basepath/sphere1a.png}
\caption{Circle Partial Area Example (two dimensions)}
\end{figure}
Here's a plot of function \ref{SphereAnalysis2}, showing the area profile as a function of $t$:
\begin{figure}[H]
\centering
\includegraphics[width=0.6\columnwidth]{\basepath/sphere2.png}
\caption{Circle Partial Area Function (two dimensions)}
\end{figure}
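Closed forms like the one in \ref{SphereAnalysis2} are easy to sanity-check numerically. Here's a small Python sketch of the closed form, checked against two known areas (the function name and the test radius are my own choices):

```python
import math

def circle_partial_area(t, r):
    # the partial-area closed form
    return (0.5 * math.pi * r**2
            - r**2 * math.asin((r - t) / r)
            - math.sqrt(2 * r * t - t**2) * (r - t))

r = 3.0
half = circle_partial_area(r, r)        # bisected at t = r: half the circle
full = circle_partial_area(2 * r, r)    # t = 2r: the whole circle
```

At $t = r$ the result equals $\frac{1}{2} \pi r^2$ and at $t = 2r$ it equals $\pi r^2$, as expected.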
\subsubsection{Sphere Partial Surface Area Function}
Moving now to three dimensions, the full surface area of a sphere is equal to $4 \pi r^2$. Let's add a function, very much like \ref{SphereAnalysis2}, to compute partial sphere surface areas based on an argument that bisects the sphere in three dimensions:
\begin{equation}
\label{SphereAnalysis3}
f(t,r) = \int_{0}^{t} 8 \, \sqrt{-{\left(r - y\right)}^{2} + r^{2}} \, \mathrm{d} y = 2 \, \pi r^{2} - 4 \, r^{2} \arcsin\left(\frac{r - t}{r}\right) - 4 \,
\sqrt{2 \, r t - t^{2}} {\left(r - t\right)}
\end{equation}
\subsubsection{Sphere Partial Volume Function}
Finally, a surprisingly simple function computes partial sphere volume. A full sphere's volume should be $\frac{4}{3} \pi r^3$; let's see how this turns out:
\begin{equation}
\label{SphereAnalysis4}
f(t) = \pi \int_{0}^{t} \left( -{\left(r - y\right)}^{2} + r^{2} \right) \mathrm{d} y = \pi r t^{2} - \frac{1}{3} \, \pi t^{3}
\end{equation}
Here's a sphere volume profile for function \ref{SphereAnalysis4}:
\begin{figure}[H]
\centering
\includegraphics[width=0.6\columnwidth]{\basepath/sphere3.png}
\caption{Sphere Partial Volume Function (three dimensions)}
\end{figure}
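As a check on \ref{SphereAnalysis4}, the closed form should reproduce the full sphere volume $\frac{4}{3} \pi r^3$ at $t = 2r$, and half that volume at the equator $t = r$. A quick sketch (function name and radius are my own choices):

```python
import math

def sphere_partial_volume(t, r):
    # closed form: pi*r*t**2 - (1/3)*pi*t**3
    return math.pi * r * t**2 - math.pi * t**3 / 3

r = 2.0
half = sphere_partial_volume(r, r)       # filled to the equator
full = sphere_partial_volume(2 * r, r)   # filled completely
```

Both checks agree with the known sphere volume formula.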
Given a basis like the circle diameter function shown in \ref{SphereAnalysis1} and with the assistance of \nameref{Computer Algebra Systems} (page \pageref{Computer Algebra Systems}) to acquire efficient integrals, it's not difficult to assemble a full analysis of a given three-dimensional shape.
\section{Differential Equations}
\label{Differential Equations}
A differential equation\endnote{\href{https://en.wikipedia.org/wiki/Differential_equation}{Differential equation} -- an equation that relates a function to its derivatives.} is one that relates a function to its derivatives. Differential equations are very important to modern science and technology. It's only a small exaggeration to say that modern physical theory is assembled from a large set of differential equations.
\subsection{First-Order}
\subsubsection{Rate of Decline}
Let's start with a very common first-order\endnote{\href{http://mathworld.wolfram.com/First-OrderOrdinaryDifferentialEquation.html}{First-Order Ordinary Differential Equation}} differential equation (hereafter DE) that models natural growth and decay. It looks like this:
\begin{equation}
\label{DiffEq1}
y'(t) + y(t) \, r = 0
\end{equation}
\begin{equation}
y(0) = C
\end{equation}
Where:
\begin{itemize}
\item $t$ = time, the independent variable.
\item $y(t)$ = an unknown function of $t$.
\item $y'(t)$ = the first derivative of $y$ with respect to time\footnote{There are many derivative notations, some described in the ``\nameref{Calculus Notations}'' appendix on page \pageref{Calculus Notations}.}.
\item $r$ = a value that determines the function's rate of change with respect to time.
\item $C$ = a constant that establishes an \textit{initial value}\endnote{\href{https://en.wikipedia.org/wiki/Initial_value_problem}{Initial value problem} -- a differential equation that includes one or more initial values.} at time zero.
\end{itemize}
The solution to this DE is:
\begin{equation}
\label{Desol1}
y(t) = C e^{-r t}
\end{equation}
Where:
\begin{itemize}
\item $e$ = the base of natural logarithms\endnote{\href{https://en.wikipedia.org/wiki/Natural_logarithm}{Natural logarithm}}.
\item $C$ = a constant of integration whose value depends on specific circumstances.
\end{itemize}
When plotted as a function of time, the result looks like this:
\begin{figure}[H]
\begin{center}
\includegraphics[width=.6\textwidth]{\basepath/diffeq1.png}
\caption{Plot of Natural Decay Equation}
\end{center}
\end{figure}
Here are some notes on this example:
\begin{itemize}
\item A DE consists of one or more statements that describe an unknown function.
\item Solving a DE is much like integrating a function; in fact, symbolic or numerical integration is usually involved in the solution.\footnote{Because this article is an introduction to calculus, it doesn't cover the specifics of solving DEs, except by way of computer algebra systems described below.}
\item Not all DEs can be solved symbolically, but as with integrals, numerical methods can be used to acquire approximate results.
\item This DE is called ``first-order'' because it only includes a first derivative. See below for a second-order DE.
\item This specific DE is widely applied in fields such as physics, biology, electrical engineering and thermodynamics among others, to describe natural processes of decline -- the rate of change in temperature between two connected reservoirs of heat energy, the rate of water flow between two connected bodies of water, the rate of gas pressure change between two connected, pressurized gas reservoirs, the rate of voltage change in an electrical resistor-capacitor circuit, the rate of decline in sources of radioactivity, and many others.
\item Intuition into the behavior of this DE can be had by noting that the function's rate of change $y'(t)$ is proportional to $y(t)$ itself (with $-r$ as the constant of proportionality), therefore as the value of $y(t)$ declines, the \textit{amount} of change declines in proportion. This means that in theory, for a nonzero initial value $C$, the value of $y(t)$ never becomes zero.
\end{itemize}
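The notes above can be illustrated numerically: stepping the rate equation $y'(t) = -r\,y(t)$ forward in small time increments should reproduce the symbolic solution $y(t) = C e^{-r t}$. A minimal Euler-method sketch (the constants and step count are arbitrary choices for the test):

```python
import math

def euler_decay(C, r, t_end, steps):
    # step y'(t) = -r*y(t) forward from y(0) = C in increments of dt
    dt = t_end / steps
    y = C
    for _ in range(steps):
        y += -r * y * dt
    return y

C, r, t_end = 100.0, 0.5, 4.0
numeric = euler_decay(C, r, t_end, 100000)
exact = C * math.exp(-r * t_end)   # the symbolic solution
```

With small enough steps, the stepped result converges to the symbolic solution, just as the limit-based definition of integration suggests.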
\subsubsection{Rate of Growth}
In this example, we make a very small change in the unknown function's definition to track rates of growth:
\begin{equation}
y'(t) - y(t) \, r = 0
\end{equation}
\begin{equation}
y(0) = C
\end{equation}
Notice that the only change from definition \ref{DiffEq1} above is the sign change from $y'(t) + y(t) \, r$ to $y'(t) - y(t) \, r$, which significantly changes the behavior of the solution.
Here is the solution to this DE:
\begin{equation}
y(t) = C e^{r t}
\end{equation}
Note that the only change from solution \ref{Desol1} above is that a negative exponent becomes a positive one.
Here is a plot of the solution as a function of time:
\begin{figure}[H]
\begin{center}
\includegraphics[width=.6\textwidth]{\basepath/diffeq2.png}
\caption{Plot of Natural Growth Equation}
\label{Plot of Natural Growth Equation}
\end{center}
\end{figure}
Here are some notes on this DE:
\begin{itemize}
\item This DE differs from the first in that the rate of change is positive, so the amount of change $y'(t)$ over time \textit{increases} and is proportional to the value of $y(t) \, r$. As before, the rate of change with respect to time is determined by a constant value assigned to $r$.
\item This DE has the property that, for a value $y(t)$ at any time $t$, that value will have doubled after $\frac{\ln(2)}{r}$ units of time\footnote{We can say that, for a given rate of increase $r$, the value will have increased $N$ times in $\frac{\ln(N)}{r}$ units of time.}.
\item This DE is widely applied to problems involving biological growth, population studies\endnote{\href{http://arachnoid.com/lutusp/populati.html}{The Mathematics of Population Increase} -- differential equations applied to measurement of population growth.}, compound interest, and other systems that involve patterns of natural growth.
\item Figure \ref{Plot of Natural Growth Equation} shows that, for a growth rate $r$ of 1\% per unit of time (i.e. $r = 0.01$), the function's value doubles in $\frac{\ln(2)}{r}$ or 69.3 units of time, and quadruples in $\frac{\ln(4)}{r}$ or 138.6 units of time.
\end{itemize}
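The doubling-time claim can be verified directly from the solution $y(t) = C e^{r t}$: after $\ln(2)/r$ units of time, the exponential factor equals exactly 2. A one-line check:

```python
import math

# after ln(2)/r time units, exp(r*t) should equal exactly 2
r = 0.01                     # 1% growth per unit of time
t_double = math.log(2) / r   # about 69.3 time units
growth_factor = math.exp(r * t_double)
```

This reproduces the 69.3-unit doubling time quoted above for $r = 0.01$.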
\subsection{Second-Order}
\subsubsection{Harmonic Oscillator}
\label{Harmonic Oscillator}
Second-order DEs, DEs that involve second derivatives, are somewhat more complex than first-order, so we'll show just one relatively simple and well-studied example. Here is its definition:
\begin{equation}
\label{SecondOrder1}
y''(t) + y(t) = 0
\end{equation}
\begin{equation}
\label{SecondOrder2}
y(0) = 0
\end{equation}
\begin{equation}
\label{SecondOrder3}
y'(0) = 1
\end{equation}
\begin{equation}
\label{SecondOrder4}
y''(0) = 0
\end{equation}
Before we solve this DE, I ask my readers to think about the system described in equations \ref{SecondOrder1} through \ref{SecondOrder4}:
\begin{itemize}
\item As \ref{SecondOrder1} is written, $y''(t)$ must always be the negation of $y(t)$: when $y(t) = 1$, $y''(t) = -1$, and vice versa.
\item In summary, this DE defines a closed loop, with $y(t)$ getting its rate of change from $y'(t)$ and $y'(t)$ getting its rate of change from ... wait for it ... $-y(t)$.
\item On that basis, we can predict that this DE will oscillate endlessly, and that is exactly what happens.
\end{itemize}
Here's the solution to the DE defined by \ref{SecondOrder1} - \ref{SecondOrder4}:
\begin{equation}
\label{SecondOrderSol}
y(t) = \sin(t)
\end{equation}
Because $y'(t)$ plays a part in the result, here is the first derivative of solution \ref{SecondOrderSol}:
\begin{equation}
\label{SecondOrderDeriv}
y'(t) = \cos(t)
\end{equation}
Here is a plot of this second-order DE, including traces for both $y(t)$ and $y'(t)$:
\begin{figure}[H]
\begin{center}
\includegraphics[width=.6\textwidth]{\basepath/diffeq3.png}
\caption{Plot of Second-order DE (harmonic oscillator)}
\label{Plot of Second-order DE (harmonic oscillator)}
\end{center}
\end{figure}
With a bit of insight, the reader can see that the brown first-derivative trace ($y'(t)$) represents a rate of change in the primary trace $y(t)$. Not plotted, but implied by equation \ref{SecondOrderDeriv}, is that $y''(t)$ equals $-\sin(t)$.
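The oscillation predicted above can also be seen numerically, without solving the DE symbolically. The sketch below steps $y''(t) = -y(t)$ forward with the semi-implicit Euler method, starting from the initial values that produce $\sin(t)$, and compares the result with the exact solution (step count and sample point are my own choices):

```python
import math

def oscillate(t_end, steps):
    # semi-implicit Euler: update velocity from y''(t) = -y(t), then position
    dt = t_end / steps
    y, v = 0.0, 1.0   # y(0) = 0, y'(0) = 1
    for _ in range(steps):
        v += -y * dt
        y += v * dt
    return y

# the exact solution is sin(t); at t = pi/2 it equals 1
approx = oscillate(math.pi / 2, 100000)
```

The stepped system oscillates endlessly, and its value closely tracks $\sin(t)$, confirming the closed-loop intuition described above.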
\section{Computer Algebra Systems}
\label{Computer Algebra Systems}
\subsection{Rationale}
Many of the procedures described in this article can be automated or greatly simplified by use of a special kind of computer software called a \textit{Computer Algebra System} (CAS)\endnote{\href{https://en.wikipedia.org/wiki/Computer_algebra_system}{Computer algebra system} -- a class of computer software able to perform relatively high-level mathematics.}. A modern CAS can provide a variety of numerical and symbolic results that were not practical until recently. Better still, such programs are becoming less expensive, and in many cases are free.
The objection is sometimes made that a CAS shields its users from the underlying mathematics and may produce results while providing no insight into the reasoning behind them, but there's a counterargument -- that by its ease of use, accessibility and visual aids a CAS may teach mathematics that users might not otherwise understand. To support this point, let me ask how many people would learn calculus if they were obliged to perform long division with a pencil and a pad of paper?
\subsection{CAS Examples}
The example environments in this section are limited to those able to perform symbolic calculus, and because many of my readers will be students, I favor the least expensive solutions. For the sake of comparison I include a Mathematica example, even though, because of its high cost, I don't recommend it to people learning mathematics.
\subsubsection{Python/SymPy}
A Python\endnote{\href{https://en.wikipedia.org/wiki/Python_(programming_language)}{Python (programming language)} -- a very popular programming language with libraries able to perform numerical and symbolic mathematics.} library named ``SymPy''\endnote{\href{https://en.wikipedia.org/wiki/SymPy}{SymPy} -- a Python library for symbolic computation.} is able to provide symbolic solutions to many kinds of mathematical problems and is a lightweight solution -- only a small amount of computer power is required\footnote{A Raspberry Pi can easily handle the processing required by Python/SymPy, but not necessarily the Jupyter notebook environment shown in this example.}.
Here's a symbolic solution to the \nameref{Harmonic Oscillator} second-order differential equation example (page \pageref{Harmonic Oscillator}), performed using SymPy's symbolic functions, with this example carried out in the (optional) Jupyter\endnote{\label{Jupyter}\href{http://jupyter.org/about.html}{Jupyter} -- an interactive mathematical environment.} interactive notebook environment:
\begin{figure}[H]
\begin{lstlisting}
from sympy import *
var('a:z C1 C2');
y = Function('y')
de = diff(y(t),t,t) + y(t)
f = Lambda((t,C1,C2) , dsolve(de,y(t)).args[1])
p = plot(f(t,1,0),f(t,0,1),(t,0,4*pi),
show=False,title='$'+latex(de)+'=0$')
p[0].line_color = 'DarkGreen'
p[1].line_color = 'DarkRed'
p.show()
\end{lstlisting}
\caption{Python/SymPy second-order DE listing (harmonic oscillator)}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.6\textwidth]{\basepath/jupyter_python_example1.png}
\caption{Python/SymPy second-order DE plot (harmonic oscillator)}
\label{CAS1}
\end{center}
\end{figure}
Some notes about Figure \ref{CAS1}:
\begin{itemize}
\item Notice the second-order derivative notation in the title of the plot: $y(t) + \frac{d^2}{dt^2}y(t)=0$. It seems that SymPy uses a different notation for derivatives than that used in this article. I emphasize again that all such notations are equivalent\textsuperscript{\ref{Calculus Notations}} and a matter of taste.
\item This example was carried out in Jupyter\textsuperscript{\ref{Jupyter}}, a rather nice open-source interactive notebook environment that greatly speeds up mathematical explorations and development. But the same code placed in a plain-text Python source file produces the same result, so a lightweight computing environment can produce this result as well.
\item The above result can be acquired using a command-shell based embodiment of SymPy called ISymPy\endnote{\href{http://docs.sympy.org/0.6.7/tutorial.html\#isympy-console}{ISymPy Console} -- a command-shell-based SymPy environment.} which reduces resource requirements to an absolute minimum.
\item At the time of writing there's an online interactive version of SymPy\endnote{\href{http://live.sympy.org/}{SymPy Online Shell} -- an interactive ISymPy command-line environment in a Web page.} in a web page, a truly minimal-resource solution.
\item Each CAS has a different notation for expressing calculus ideas -- there's a learning curve for each of them. Some environments can solve more equations than others.
\item Of the CAS examples in this section, this Python/SymPy example requires the least computer power (and even less if the command-line option is chosen).
\end{itemize}
The Jupyter notebook environment is a powerful tool that speeds up mathematical and technical work, and it supports any number of different ``kernels'' or language processors\endnote{\href{https://github.com/ipython/ipython/wiki/IPython-kernels-for-other-languages}{IPython kernels for other languages} -- essentially, Jupyter supported kernels.}, even simultaneously. This example could have been created without Jupyter, but it would have taken more time.
\subsubsection{SageMath}
The SageMath project\endnote{\label{SageMath}\href{http://www.sagemath.org/}{SageMath} -- a free, open-source CAS based on Python.}, formerly known as Sage, is one I've written about in the past\endnote{\href{http://arachnoid.com/sage}{Exploring Mathematics with Sage} -- the author's Sage tutorial.}. SageMath is an ambitious project and has a much larger storage requirement than Python/SymPy, typically 4-5 gigabytes installed.
SageMath is a bundle of different mathematical resources\footnote{Including NumPy, SciPy, matplotlib, SymPy, Maxima, GAP, FLINT, R ``and many more'' according to the SageMath website\textsuperscript{\ref{SageMath}}.} configured to be accessible through an interface based on Python. Unlike the Python/SymPy environment, SageMath performs more preprocessing of entered expressions, so mathematical expressions are easier to enter and manipulate.
In a recent development SageMath has been made compatible with the Jupyter notebook environment, and this example, like the first, produces a result using Jupyter.
\begin{figure}[H]
\begin{lstlisting}
var(map(chr,xrange(ord('a'),ord('z')+1)));
y = function('y')
de = diff(y(t),t,t)+y(t)==0
f(t,_K1,_K2) = desolve(de,y(t),t)
p = plot(f(t,0,1),(t,0,4*pi),figsize=5,color='#800000'
,gridlines=True,title='$'+latex(de)+'$')
p += plot(f(t,1,0),(t,0,4*pi),color='#006000')
p
\end{lstlisting}
\caption{SageMath second-order DE listing (harmonic oscillator)}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.6\textwidth]{\basepath/jupyter_python_example2.png}
\caption{SageMath second-order DE plot (harmonic oscillator)}
\label{CAS2}
\end{center}
\end{figure}
I want to emphasize that Figure \ref{CAS2} sells SageMath short. I wanted to show the same symbolic-processing example for all the environments, but this makes SageMath look much less capable than it really is. The truth is that, in typical mathematical work, SageMath is much easier to use than Python/SymPy and other similar environments like Maxima\endnote{\href{https://en.wikipedia.org/wiki/Maxima_(software)}{Maxima} -- a CAS based on an earlier project named Macsyma and used in SageMath for symbolic results.} (on which SageMath relies internally for its symbolic results). SageMath has been in active development for about 12 years at the time of writing and includes many packages meant to satisfy the needs of those involved in high-level mathematical research and education. It can also solve more kinds of equations than Python/SymPy can.
\subsubsection{Mathematica}
I include this entry out of a desire for completeness, even though I don't recommend Mathematica\endnote{\href{https://en.wikipedia.org/wiki/Wolfram_Mathematica}{Wolfram Mathematica} -- a.k.a. Mathematica, a powerful, and powerfully expensive, CAS.} for most students or mathematicians because of its great cost (at the time of writing, a one-seat non-student license is approximately US\$2500.00). Mathematica can solve a subset of problems out of reach of most other CAS solutions, but for most people the cost makes it unacceptable.
\subsubsection{Skydiver equation}
For balance I'll show a problem that only Mathematica can solve -- neither SymPy nor SageMath can -- the ``Skydiver equation''. It's a second-order DE that models the dynamic behavior of a body falling through the atmosphere. For this problem the DE must model the fact that a high initial acceleration declines as air resistance builds up at higher velocities, until gravitational acceleration is exactly balanced by air resistance and velocity becomes constant -- a ``terminal velocity''. Here's a statement of the DE:
\begin{equation}
\label{skydiver1}
y''(t) = y'(t)^2 k - g
\end{equation}
\begin{equation}
\label{skydiver2}
y'(0) = 0
\end{equation}
\begin{equation}
\label{skydiver3}
y(0) = a
\end{equation}
Where:
\begin{itemize}
\item $t$ = time, seconds.
\item $y(t)$ = position, altitude in this system, at time $t$.
\item $g$ = Earth's gravitational acceleration, approximately 9.8 $m/s/s$, known as ``little-g''\endnote{\href{https://en.wikipedia.org/wiki/Gravity_of_Earth}{Gravity of Earth} -- also known as little-g, approximately 9.8 $m/s/s$}.
\item $a$ = initial altitude, meters.
\item $k$ = a constant that accounts for the falling object's external shape, drag properties, and ratio of mass to surface area.
\end{itemize}
Equation \ref{skydiver1} describes an acceleration profile that balances $g$, Earth's gravitational acceleration, against air resistance, which increases as the square of velocity, the latter represented by $y'(t)^2 k$.
Equation \ref{skydiver2} says that the initial velocity (at time zero) is zero.
Equation \ref{skydiver3} says that the initial position is $a$, a provided altitude, presumably the height of the airplane.
This is the most complex DE in this article, and to date it is among the most complex equations I've submitted to CAS methods for solution. And as already pointed out, SymPy and SageMath cannot solve it. Here's a narrative of Mathematica's solution:
\begin{itemize}
\item The statement of the DE as submitted to Mathematica:
\begin{equation}
\text{de}=\left\{y''(t)=k y'(t)^2-g,y(0)=a,y'(0)=0\right\}
\end{equation}
\item The Mathematica expression to solve it\footnote{The ``Quiet'' function suppresses a long list of cautionary messages Mathematica produces about its solution.}:
\begin{equation}
\text{Quiet}[\text{sol}=\text{DSolveValue}[\text{de},y(t),t]]
\end{equation}
\item Mathematica's solution for position:
\begin{equation}
y(t) = \frac{a k-\log \left(\cosh \left( {\sqrt{\mathstrut g} \sqrt{\mathstrut k} \, t } \right) \right)}{k}
\end{equation}
\item Velocity:
\begin{equation}
y'(t) = -\frac{\sqrt{g} \tanh \left( { \sqrt{\mathstrut g} \sqrt{\mathstrut k} \, t } \right) }{\sqrt{k}}
\end{equation}
\item Acceleration:
\begin{equation}
y''(t) = -g \, \text{sech}^2\left( { \sqrt{\mathstrut g} \sqrt{\mathstrut k} \, t } \right)
\end{equation}
\item ``Jerk''\footnote{Honestly, that's what it's called\endnote{\label{Jerk}\href{https://en.wikipedia.org/wiki/Jerk_(physics)}{Jerk (physics)} -- The semi-official name for the third derivative of position.}.} (rate of change in acceleration):
\begin{equation}
y'''(t) = 2 g^{3/2} \sqrt{k} \tanh \left( { \sqrt{\mathstrut g} \sqrt{\mathstrut k} \, t } \right) \text{sech}^2\left( { \sqrt{\mathstrut g} \sqrt{\mathstrut k} \, t } \right)
\end{equation}
\end{itemize}
Here are plots of the solutions for each derivative:
\begin{figure}[H]
\rowcolors{2}{white}{white}
\begin{center}
\begin{tabular}{| c | c | }
\hline
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\columnwidth]{\basepath/skydiver_position.png}
\caption{Skydiver Position $y(t)$}
\end{subfigure}&
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\columnwidth]{\basepath/skydiver_velocity.png}
\caption{Skydiver Velocity $y'(t)$}
\end{subfigure}\\
\hline
\newline
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\columnwidth]{\basepath/skydiver_acceleration.png}
\caption{Skydiver Acceleration $y''(t)$}
\end{subfigure}&
\begin{subfigure}{0.4\textwidth}
\centering
\includegraphics[width=1\columnwidth]{\basepath/skydiver_jerk.png}
\caption{Skydiver ``Jerk''\textsuperscript{\ref{Jerk}} $y'''(t)$}
\end{subfigure}\\
\hline
\end{tabular}
\caption{Skydiver Result Plots}
\label{SkydiverResultPlots}
\end{center}
\end{figure}
The plots in Figure \ref{SkydiverResultPlots} show that, after falling for about 15 seconds, a skydiver has nearly reached terminal velocity. These plots provide insight into the relationship between a function and its derivatives -- for a function $y(t)$ whose value changes at a steady rate (like position after about 15 seconds), its first derivative $y'(t)$ should be a constant, and its second derivative $y''(t)$ should be zero.
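A symbolic solution like Mathematica's can also be cross-checked numerically. The sketch below steps equation \ref{skydiver1} forward with the Euler method and compares the velocity after 15 seconds with the closed form for $y'(t)$ (the drag constant $k$ here is an arbitrary test value of my choosing):

```python
import math

g, k = 9.8, 0.005   # little-g and an arbitrary drag constant for testing
dt, v = 1e-4, 0.0   # time step; initial velocity y'(0) = 0

# Euler steps of the acceleration y''(t) = k*y'(t)**2 - g
for _ in range(150000):   # 15 seconds of simulated time
    v += (k * v * v - g) * dt

# Mathematica's closed form for velocity at t = 15
exact = -math.sqrt(g) * math.tanh(math.sqrt(g) * math.sqrt(k) * 15.0) / math.sqrt(k)
```

The stepped velocity agrees closely with the closed form, and both are near the terminal velocity $-\sqrt{g/k}$, matching the behavior seen in the plots.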
Highly regarded as Mathematica is, and despite much effort, I can't persuade it of something as simple as the idea that $\sqrt{\mathstrut g} \sqrt{\mathstrut k} = \sqrt{g k}$ for positive, real $g$ and $k$ values. For US\$2500.00, one would think something that trivial would be achievable.\footnote{In fairness, neither SymPy nor SageMath can be persuaded of this either.}
Just to further complicate things, a free copy of Mathematica is included with Raspbian\endnote{\href{https://www.raspbian.org/}{Raspbian} -- a Linux distribution designed for the Raspberry Pi computers.}, a Linux distribution designed for the small, inexpensive but powerful Raspberry Pi\endnote{\href{https://en.wikipedia.org/wiki/Raspberry_Pi}{Raspberry Pi} -- a series of small, inexpensive but surprisingly powerful computers.} computers. It seems that Wolfram Research\endnote{\href{http://www.wolfram.com/}{Wolfram} -- the source of Mathematica.} (the source of Mathematica) would like to entice younger, more impressionable minds into eventually buying a more expensive version of their product.
\subsection{Conclusion}
For those seeking a CAS solution and after re-evaluating all the choices available at the time of writing, my advice for the majority of readers is to install Jupyter and SymPy. An alternative choice, for a few with particular needs, is to install Jupyter with Sage and SymPy (a much larger installation). Here are installation notes for each option:
\subsubsection{Jupyter/SymPy}
For Windows, Linux and the Macintosh, readers may acquire an installer from the Anaconda download page located at \texttt{https://www.continuum.io/downloads}\endnote{\href{https://www.continuum.io/downloads}{Anaconda Download} -- the download page for all versions of Anaconda.}. The installation procedure is much the same on all platforms. At the time of writing, Anaconda installs Python 3.5 with SymPy, SciPy, NumPy and a number of other useful libraries, plus the Jupyter notebook interface shown in the examples above. Linux users can install Jupyter/SymPy without relying on Anaconda, but if the system has adequate storage space, there's no need to fine-tune the process.
\subsubsection{Jupyter/SymPy/Sage}
Because Sage is so much larger than the other CAS packages, and because of its well-known connection with the Linux platform, for those running Windows it's simpler to acquire and install it as a virtual machine\endnote{\href{https://en.wikipedia.org/wiki/Virtual_machine}{Virtual machine} -- a scheme in which what is essentially a computer in software is hosted on a different machine.}. As it happens, the people behind SageMath have anticipated this and offer something they call the SageAppliance\endnote{\href{https://wiki.sagemath.org/SageAppliance}{SageAppliance} -- a package that hosts SageMath on any target machine by way of a virtual machine.}, which, along with VirtualBox\endnote{\href{https://www.virtualbox.org/wiki/Downloads}{VirtualBox} -- a popular cross-platform virtual machine manager/hosting environment.}, allows Windows users to run SageMath locally by creating a Linux virtual machine to host SageMath.
Those running Linux have more options -- they can download an installable SageMath binary\textsuperscript{\ref{SageMath}} for most Linux distributions. And on Linux, Jupyter is easier to install as well -- there are several approaches apart from the Anaconda installation method described above.
\section{Numerical Methods}
To re-emphasize a point made earlier, there are many equations and functions whose solutions cannot be expressed analytically and must instead be obtained numerically. Even though there's a long list of such functions, for this example we'll look at a problem that can be solved analytically, in order to validate the numerical result by comparing it to the analytical one.
\subsection{Numerical Harmonic Oscillator}
The operation of a well-designed numerical algorithm should be as self-documenting as practical, and this example meets that requirement. Here's a result from a Jupyter\textsuperscript{\ref{Jupyter}} notebook session (using the Python kernel) showing a numerical solution to the harmonic oscillator DE presented earlier in analytical form:
\begin{figure}[H]
\begin{lstlisting}[mathescape=true]
from matplotlib import pyplot as plt
import numpy as np
$\pi$ = np.pi
plt.clf()
y = 0
vy = 1
m = 1000
$\Delta$t = 1.0 / m
f = 2
$\omega$ = 2 * $\pi$ * f
ta = []
ya = []
yva = []
for t in range(m):
    vy += y * $\omega$ * $\Delta$t
    y -= vy * $\omega$ * $\Delta$t
    ta += [t/m]
    ya += [y]
    yva += [vy]
plt.plot(ta,yva,color='DarkRed')
plt.plot(ta,ya,color='DarkGreen')
plt.xlabel('Time seconds')
plt.ylabel('Amplitude')
plt.title('$\$$y(t) + \\frac{d^2}{dt^2} y(t) = 0$\$$')
plt.grid()
plt.show()
\end{lstlisting}
\caption{Jupyter/Python second-order DE listing (harmonic oscillator)}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.6\textwidth]{\basepath/numerical_solutions1.png}
\caption{Numerical second-order DE solution (harmonic oscillator)}
\label{NumSol1}
\end{center}
\end{figure}
\subsubsection{Explanation}
Here's the essence of the algorithm shown in Figure \ref{NumSol1}:
\begin{addmargin}[2em]{2em}
\begin{enumerate}
\item \texttt{vy += y * $\omega$ * $\Delta$t}
\item \texttt{y -= vy * $\omega$ * $\Delta$t}
\end{enumerate}
\end{addmargin}
In line (1), the velocity variable \texttt{vy} is incremented by the position variable \texttt{y} times $\Delta$t, a small time step whose purpose was explained earlier, and $\omega$, a value that sets the frequency of oscillation. In line (2), the position variable \texttt{y} is decremented by \texttt{vy}, again times $\Delta$t and $\omega$. Just as with the analytical harmonic oscillator DE $y(t) + \frac{d^2}{dt^2}y(t) = 0$, this produces an oscillating system.
\subsubsection{Parallel with Numerical Integration/Differentiation}
There's a strong parallel between a numerical equation solver like this and numerical integration and differentiation -- indeed, many of the same methods are used. And just as with those methods, a smaller value for $\Delta$t produces more accuracy at the cost of longer running times, with the caveat that a very small value will begin to produce numerical errors, for reasons given in the \nameref{Computer Processing Limitations} appendix on page \pageref{Computer Processing Limitations}.
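This trade-off is easy to demonstrate. Here's a small sketch (not the notebook listing above -- plain ASCII names, and parameters chosen for illustration) that runs the same two-line update with two different step counts and compares the result against the exact solution $y(t) = -\sin(\omega t)$ implied by the listing's initial conditions:

```python
import math

def simulate(m, f=2.0, T=1.0):
    """Run the two-line oscillator update for m steps over T seconds;
    return the maximum error against the exact solution -sin(w*t)."""
    w = 2.0 * math.pi * f
    dt = T / m
    y, vy = 0.0, 1.0
    max_err = 0.0
    for n in range(m):
        vy += y * w * dt   # line (1): update velocity from position
        y -= vy * w * dt   # line (2): update position from velocity
        t = (n + 1) * dt
        max_err = max(max_err, abs(y + math.sin(w * t)))
    return max_err

print(simulate(1000))   # dt = 1e-3
print(simulate(10000))  # dt = 1e-4 -- roughly ten times more accurate
```

With this scheme, shrinking $\Delta$t by a factor of ten reduces the maximum error by roughly the same factor, as expected of a first-order method.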
\subsection{Numerical Elliptical Orbit}
\label{Numerical Elliptical Orbit}
This section discusses a numerical algorithm that, by modeling a planetary orbit, can be used to test a physical theory -- energy conservation\endnote{\href{https://en.wikipedia.org/wiki/Conservation_of_energy}{Conservation of energy} -- a physical theory in which energy is neither created nor destroyed, only changed in form}. According to this theory, energy is neither created nor destroyed, only changed in form. If this theory is true, it should be verifiable by examining elliptical planetary orbits.
A planetary orbit has two kinds of energy -- kinetic energy\endnote{\href{https://en.wikipedia.org/wiki/Kinetic_energy}{Kinetic energy} -- the energy of motion.} (KE), the energy of motion, and gravitational potential energy\endnote{\href{https://en.wikipedia.org/wiki/Potential_energy\#Gravitational_potential_energy}{Gravitational potential energy} -- an energy that is proportional to position.} (GPE), an energy proportional to position.
\subsubsection{Kinetic Energy}
Kinetic energy has this definition:
\begin{equation}
\label{KE equation}
KE = \frac{1}{2} m v^2
\end{equation}
Where:
\begin{itemize}
\item $KE$ = Kinetic energy, Joules.
\item $m$ = mass, kilograms.
\item $v$ = velocity, meters/second.
\end{itemize}
\subsubsection{Gravitational Potential Energy}
Gravitational potential energy has this definition:
\begin{equation}
\label{GPE equation}
GPE = -G \frac{m_1 m_2}{r}
\end{equation}
Where:
\begin{itemize}
\item $GPE$ = Gravitational potential energy, Joules.
\item $G$ = Universal gravitational constant\endnote{\href{https://en.wikipedia.org/wiki/Gravitational_constant}{Gravitational constant} -- an important physical constant having to do with gravitation.}.
\item $m_1,m_2$ = Masses of orbiting bodies, kilograms.
\item $r$ = The distance between the orbiting bodies, meters.
\end{itemize}
\subsubsection{Negative Value for GPE}
GPE values calculated in physics, and in equation \ref{GPE equation}, are by convention either negative or zero -- here's why:
\begin{enumerate}
\item At an infinite separation between bodies, GPE should equal zero.
\item Lifting a mass in a gravitational field, or increasing the radius $r$ between $m_1$ and $m_2$ in equation \ref{GPE equation}, requires an expenditure of energy, so with increasing separation the GPE value should become more positive, or less negative (which amounts to the same thing).
\item To simultaneously satisfy points (1) and (2) above, $GPE$ must be negative everywhere but at infinity, where it should be zero.
\end{enumerate}
\subsubsection{Constant Energy}
This orbital problem is sufficiently complex that numerical methods must be used to model it. For an elliptical orbit, as time passes the distance between the orbiting bodies changes, as do their velocities, so both $KE$ and $GPE$ are changing. If conservation of energy is a valid theory and barring friction and other energy losses, these two changing energies should sum to a constant at all points in the orbit, i.e. $KE+GPE = C$. That's the first theory we'll be testing in this model -- is orbital energy a constant?
\subsubsection{Kepler's Second Law}
This simulation tests another theory -- Kepler's Second Law\endnote{\label{KeplersLaws}\href{https://en.wikipedia.org/wiki/Kepler\%27s_laws_of_planetary_motion}{Kepler's laws of planetary motion} -- an early effort to systematize our understanding of planetary orbits.}: ``A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time.'' It turns out that, as written, this model can test both theories at once.
\subsubsection{Program Listing}
% source: /home/lutusp/Math/ipython_jupyter_notebooks/equal_energy_elliptical_orbit.ipynb
The model was written in Python, using the Jupyter notebook environment to speed up the process of testing and revision. Here's the program listing:
\begin{figure}[H]
\begin{lstlisting}[mathescape=true]
from matplotlib import pyplot as plt
G = 6.673e-11 # universal gravitational constant
sm = 2e30 # solar mass kilograms
m = int(2e6) # steps in simulation
md = m/16
$\Delta$t = 1.2e7/m # time step
def polygon_area(array):
    a = 0
    ox,oy = 0,0
    for x,y in array:
        a += y * ox - x * oy # determinant
        ox,oy = x,y
    return a/2.0
plt.clf()
plt.figure(figsize=(8,4))
op = complex(149e9,0) # initial orbital position
ov = complex(0,1e4) # initial orbital velocity
gpr = []; gpi = []
poly = [[0,0]]
polgx = [0]; polgy = [0]
state = 0
print(" %-16s %-16s %-16s %-16s" % ('KE','GPE','KE+GPE','Area'))
print("-" * 70)
for n in range(m):
    poly += [[op.real,op.imag]]
    if(n % 1000 == 0):
        polgx += [op.real]; polgy += [op.imag]
        gpr += [op.real]; gpi += [op.imag]
    # compute gravitational acceleration
    a = -G * sm * op * (abs(op)**2)**-1.5
    ov += a * $\Delta$t # increment velocity
    op += ov * $\Delta$t # increment position
    if(n % md == 0):
        state += 1
        if(state % 2 == 0):
            poly += [[0,0]]
            area = polygon_area(poly)
            polgx += [0]; polgy += [0]
            plt.plot(polgx,polgy,color='gray')
            plt.fill(polgx,polgy,alpha=0.2)
            # plot polygon label
            tx = (polgx[1] + polgx[-2])/2.5
            ty = (polgy[1] + polgy[-2])/2.5
            lbl = chr(ord('@') + int(state/2))
            plt.text(tx,ty,lbl)
            opt = op - ov * $\Delta$t * 0.5 # intermediate position
            ke = (abs(ov)**2.0)/2.0 # compute kinetic energy
            pe = -G*sm/abs(opt) # compute potential energy
            print("%c %16.9e %16.9e %16.9e %16.9e"
                % (lbl,ke,pe,ke+pe,area))
        else:
            poly = [[0,0]]
            polgx = [0]; polgy = [0]
plt.plot(gpr,gpi,color='gray') # plot orbit
plt.plot(0,0, 'oy') # plot solar position
plt.axis('equal') # assure equal figure proportions
plt.title('Elliptical Orbit Analysis')
plt.grid()
plt.show()
\end{lstlisting}
\caption{Elliptical Orbit Numerical Model Program Listing}
\end{figure}
\subsubsection{Results}
Here is the program's results table:
\begin{figure}[H]
\begin{lstlisting}
KE GPE KE+GPE Area
----------------------------------------------------------------------
A 5.915025527e+07 -9.048549533e+08 -8.457046980e+08 5.587455300e+20
B 1.423514785e+08 -9.880561765e+08 -8.457046980e+08 5.587455300e+20
C 3.919651862e+08 -1.237669884e+09 -8.457046980e+08 5.587455300e+20
D 1.510712220e+09 -2.356416918e+09 -8.457046980e+08 5.587455300e+20
E 1.628050668e+09 -2.473755366e+09 -8.457046980e+08 5.587455300e+20
F 4.076256220e+08 -1.253330320e+09 -8.457046980e+08 5.587455300e+20
G 1.475607164e+08 -9.932654144e+08 -8.457046980e+08 5.587455300e+20
H 6.055581909e+07 -9.062605171e+08 -8.457046980e+08 5.587455300e+20
\end{lstlisting}
\caption{Elliptical Orbit Model Results}
\label{Elliptical Orbit Model Results}
\end{figure}
Note in Figure \ref{Elliptical Orbit Model Results} that the values in the KE+GPE column, showing the sum of kinetic and potential energy, agree with each other to ten significant digits, which confirms the theory of conservation of energy, and that the Area column values also agree, which confirms Kepler's Second Law.
This example is meant to show how mathematics can reveal connections between theories and objectively quantify predictions of theories, with an efficiency unequaled by other kinds of reasoning. We used mathematics to test a corollary of the theory of energy conservation and our result is perfectly clear: orbits conserve energy.
\subsubsection{Orbital Plot}
Here's the orbital plot the program generates:
\begin{figure}[H]
\begin{center}
\includegraphics[width=.7\textwidth]{\basepath/elliptical_orbit_plot1.png}
\caption{Elliptical Orbit Analysis Diagram}
\label{OrbitPlot1}
\end{center}
\end{figure}
In Figure \ref{OrbitPlot1} the system's central body is shown as a yellow circle at the left. Also shown are the polygons included to test Kepler's Second Law\textsuperscript{\ref{KeplersLaws}} -- in spite of their varied shapes, all their areas are equal.
\section{Infinity}
\label{Infinity}
The idea of infinity appears frequently in various mathematical contexts, so it deserves an explanation. First, infinity is not a number -- one cannot multiply or divide a number by infinity and acquire a meaningful result. To see why this is so, consider the Infinity Hotel.
\subsection{The Infinity Hotel}
Welcome to the Infinity Hotel -- we always have room for more guests; no one is turned away. If you show up without a reservation on a busy Saturday night, no problem -- we'll just put you in room one. But the Infinity Hotel always appears to be full, so how can we put you in room one? That's easy -- we first move the occupant of room one to room two. But to do that, we must first move the occupant of room two to room three, and so on, \textit{ad infinitum}.
After you've occupied room one, after the other guests have been moved to make room for you, the Infinity Hotel is the same size it was before -- it's still infinitely large.
This is why we can't multiply a number by infinity and expect a meaningful result. Just as with the idea of dividing by zero, to multiply a number $n$ by infinity implies the existence of a number that, if divided by infinity, would produce $n$, but no such number exists.
\subsection{Infinite Summation}
\label{Infinite Summation}
One may ask, if infinity isn't a number and can't appear in an equation in a numerical context, then what purpose does it serve in mathematics? Easily explained -- consider this equation\footnote{Remember that the operator $\sum$ takes the sum of the values generated to its right.}:
\begin{equation}
\label{Summation}
y = \sum_{n = 1}^{m} 2^{-n}
\end{equation}
Here are some example results from equation \ref{Summation} for chosen $m$ arguments and resulting $y$ values:
\begin{table}[H]
\begin{center}
\input{\basepath/infinity_table1.txt}
\caption{Discrete Summation}
\label{Discrete Summation}
\end{center}
\end{table}
Table \ref{Discrete Summation} shows that for any chosen $m$ value, equation \ref{Summation} will produce a sum that equals $\displaystyle \frac{2^{m}-1}{2^{m}}$. This means a sum of $2^{-n}$ values can approach 1, but for finite $m$ values can never equal it.
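The claim in Table \ref{Discrete Summation} is easy to verify with exact rational arithmetic -- here's a short sketch using Python's standard \texttt{fractions} module:

```python
from fractions import Fraction

def partial_sum(m):
    """Exact partial sum of 2**-n for n = 1 .. m."""
    return sum(Fraction(1, 2**n) for n in range(1, m + 1))

for m in (1, 2, 3, 10, 50):
    s = partial_sum(m)
    assert s == Fraction(2**m - 1, 2**m)  # matches the closed form
    assert s < 1                          # but never reaches 1

print(partial_sum(10))  # 1023/1024
```

Because \texttt{Fraction} arithmetic is exact, no floating-point rounding can disguise the gap between each partial sum and 1.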
\subsection{Infinite Pie}
\label{Infinite Pie}
To show this in a more graphic way, here's a pie chart showing the result for $m$ = 5 in equation \ref{Summation}:
\begin{figure}[H]
\begin{center}
\includegraphics[width=.6\textwidth]{\basepath/infinity_piechart1.png}
\caption{Discrete Summation Pie Chart}
\label{Discrete Summation PieChart}
\end{center}
\end{figure}
See the empty slice at the top of Figure \ref{Discrete Summation PieChart}? Think about how we can fill that gap, so that the sum generated by equation \ref{Summation} equals 1 -- \textit{not approximately, but exactly}.
Consider these points:
\begin{itemize}
\item Each new term in the equation \ref{Summation} series adds a slice to the pie that fills half the remaining space, so regardless of the numerical value assigned to $m$, there will always be an unfilled gap between the sum generated by equation \ref{Summation} and 1.
\item This problem is solved by assigning to $m$, \textit{not a number, but an idea} -- infinity, which has this symbol: $\infty$. The problem is solved \textit{precisely because} infinity is not a number, but an idea.
\item Our infinity solution looks like this:
\begin{equation}
\label{InfinitySum}
y = \sum_{n=1}^{\infty} 2^{-n} = 1
\end{equation}
\item Remember again that infinity is an idea, not a number, and equation \ref{InfinitySum} produces a numerical result based on \textit{the idea of infinity}, not a computer program that produces a sum of $2^{-n}$ values for $n$ arguments between 1 and infinity.
\end{itemize}
\subsection{Lots of Nines}
Here's a similar problem -- does this equation make a true statement?:
\begin{equation}
\label{DecimalNines}
0.9999 ... = 1
\end{equation}
This particular equation has created much online discussion\endnote{\href{https://en.wikipedia.org/wiki/0.999...}{0.999...} -- a discussion of the identity $0.999... = 1$}. The ellipsis (...) in equation \ref{DecimalNines} means ``Extend this sequence out to infinity'' and, as with the \nameref{Infinite Pie} example above, it's resolved with the idea of infinity, not a number. Here's this example in summation form:
\begin{equation}
\label{NineSum}
y = \sum_{n = 1}^{m} 9 \times 10^{-n}
\end{equation}
Here are some example results for equation \ref{NineSum} and finite $m$ values:
\begin{table}[H]
\begin{center}
\input{\basepath/infinity_table2.txt}
\caption{Discrete Summation 2}
\label{Discrete Summation 2}
\end{center}
\end{table}
Each $m$ value applied to the equation \ref{NineSum} series produces the sum $ \displaystyle \frac{10^{m}-1}{10^{m}}$, which, as before, means for finite $m$ values the result can never equal 1. And as before, only by applying the idea of infinity can we make this sum equal 1:
\begin{equation}
\label{NineInfiniteSum}
\sum_{n = 1}^{\infty} 9 \times 10^{-n} = 1
\end{equation}
So it seems this statement is true:
\begin{equation}
0.9999... = 1
\end{equation}
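As with the earlier series, the finite partial sums of equation \ref{NineSum} can be verified with exact rational arithmetic -- a short sketch:

```python
from fractions import Fraction

def nines_sum(m):
    """Exact partial sum of 9 * 10**-n for n = 1 .. m,
    i.e. the m-digit decimal 0.999...9."""
    return sum(Fraction(9, 10**n) for n in range(1, m + 1))

for m in (1, 4, 20):
    s = nines_sum(m)
    assert s == Fraction(10**m - 1, 10**m)  # matches the closed form
    assert s < 1                            # never exactly 1 for finite m

print(nines_sum(4))  # 9999/10000
```

Only the idea of infinity, not any finite \texttt{m}, closes the gap.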
\section{The Platonic Realm}
\subsection{Plato's Third Realm}
Greek philosopher Plato\endnote{\href{https://en.wikipedia.org/wiki/Plato}{Plato} -- influential Greek philosopher.} argued for the existence of three realms:
\begin{enumerate}
\item The external, physical world.
\item The internal world of conscious experience.
\item The realm of abstract forms and ideas, more clearly described in Plato's Theory of Forms/ideas\endnote{\href{https://en.wikipedia.org/wiki/Theory_of_Forms}{Theory of Forms} -- a Platonic realm of ideas and abstract forms.}.
\end{enumerate}
In the theory of forms/ideas, Plato argued that the non-physical realm of forms and ideas was the most accurate reality, ascending above our sense perceptions and conscious perception of the external world. This concept addresses the problem of universals\endnote{\href{https://en.wikipedia.org/wiki/Problem_of_universals}{Problem of universals} -- the philosophical problem addressing the relation between specific things and what abstract forms they represent}, which considers the relation between specific things and the degree to which they represent universal forms.
The Theory of Forms argues that all perceptible things are imperfect representations of universal, abstract forms and ideas, and that those abstract forms exist independent of their manifestations.
It seems to me that mathematics, even though it sometimes addresses everyday reality, resides in Plato's third realm, and many mathematical concepts represent abstract forms/ideas that sometimes have examples in physical reality. This is why, even though reality can't get along without mathematics, mathematics can easily get along without reality.
\subsection{Mathematical Reality}
Because of parallel developments in modern mathematics and physics, the idea is taking hold that reality is innately mathematical\endnote{\href{https://en.wikipedia.org/wiki/Mathematical_universe_hypothesis}{Mathematical universe hypothesis} -- the hypothesis that the universe is innately mathematical.} and that the abstract realm of mathematics is the source for much of our understanding of physical reality.
Examples that support this idea appear in unexpected places. For example there's a species of cicada that hatches out at two different intervals -- 13 or 17 years, no other periods. These long periods between mass births was once a mystery, but it is now thought\endnote{\href{https://en.wikipedia.org/wiki/Periodical_cicadas}{Periodical cicadas} -- a species of cicada that, for survival reasons, reproduce at prime-numbered intervals.} that these intervals result from the fact that both 13 and 17 are prime numbers, and because they are prime these particular reproduction intervals reduce the chance for synchronization with the life cycles of predators. Computer mathematical models show the survival advantage of these specific intervals\endnote{\href{http://arachnoid.com/prime_numbers/\#Mathematical_Locusts}{Mathematical Locusts} -- how mathematics plays a part in species survival.}, which are thought to have arisen by the chance process of natural selection.
Modern physics has evolved in such a way that it's almost entirely described mathematically, with very little content not reliant on equations. Most modern scientific research has become a search for empirically testable equations that successfully explain past observations, make reliable predictions about observations not yet made, and more importantly provide a unifying conceptual framework that may lead to further discoveries.
In the same way, modern technology (applied science) relies on a deep understanding of mathematical ideas. For example, semiconductor technology, and the computers built from it, require an intimate understanding of quantum theory, a theory defined by mathematics.
Our ability to explore space also requires a thorough understanding of mathematics, both conceptually and in the details. A Mars-bound spacecraft was once lost because of a single wrong number\endnote{\href{https://en.wikipedia.org/wiki/Mars_Climate_Orbiter}{Mars Climate Orbiter} -- a spacecraft lost due to one wrong number.} that resulted from confusion about which units of measurement were to be used.
\subsection{Unreasonable Effectiveness}
In a now-famous paper entitled ``The Unreasonable Effectiveness of Mathematics in the Natural Sciences\endnote{\href{https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html}{The Unreasonable Effectiveness of Mathematics in the Natural Sciences} -- an article that discusses the power of mathematics to describe reality.},'' physicist Eugene Wigner ponders how well mathematics, which has no essential connection with reality, nevertheless describes it. Wigner concludes his article by saying, ``The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.''
It's not an exaggeration to say that understanding the modern world means understanding mathematics, and we have a simple choice -- we can learn mathematics ourselves, or be perpetually lied to by those who have learned it.
\section{Appendices}
\subsection{Data Table Design}
\label{Data Table Design}
The \nameref{table:AVPTable} data table on page \pageref{table:AVPTable} is meant to introduce certain calculus ideas in a simple, accessible way. Each column is the stepwise \textit{numerical derivative} of the column to its right, and the stepwise \textit{numerical integral} of the column to its left, where such columns exist.
\subsubsection{Average Velocity}
I draw the reader's attention to the ``Average Velocity'' column of the car data table. This column exists so that any two positions in the rightmost column will have an exact numerical derivative value available. This means that any position can be acquired by summing average velocities, and any average velocity can be acquired by subtracting two positions.
The velocity column (to the left of the average velocity column) can't be used to directly acquire positions, because each stepwise change in position results from the \textit{average} velocity over the interval from one position to the next, and neither the initial nor the final velocity for that interval can serve this purpose. Because the velocity is changing over time, the distinction between average and final velocity must be taken into account; otherwise the exact numerical correspondence between the velocity and position columns would not exist.
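Here's a minimal sketch of the scheme just described, using illustrative values rather than the actual table data: positions are generated from $p(t) = \frac{1}{2} a t^2$, average velocities are acquired by subtracting adjacent positions, and the positions are then recovered by summing average velocities:

```python
a = 9.8   # illustrative acceleration, m/s^2 (not the table's value)
dt = 1.0  # one-second steps, as in the table

times = [n * dt for n in range(11)]
positions = [0.5 * a * t**2 for t in times]

# stepwise numerical derivative: the average velocity over each interval
avg_velocities = [(positions[n + 1] - positions[n]) / dt
                  for n in range(len(positions) - 1)]

# stepwise numerical integral: recover each position by summing
p = 0.0
for n, v in enumerate(avg_velocities):
    p += v * dt
    assert abs(p - positions[n + 1]) < 1e-9
```

Each column is thus recoverable from its neighbor, exactly as the table is designed.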
\subsubsection{No Average Acceleration}
There's no average acceleration column in this table because acceleration is a constant. A strictly designed table would include an average acceleration column, but because acceleration is a constant such a column would contain the same values as the existing column, so for simplicity in design it was eliminated.
As the reader advances through this article and gets to the idea of infinitesimal steps, the distinction between average and instantaneous rates of change evaporates, a fact that may occur to the astute reader as another reason to learn calculus.
\subsection{Detailed Symbolic Differentiation}
\label{Detailed Symbolic Differentiation}
This appendix provides a step-by-step algebraic procedure for acquiring a symbolic derivative. We begin by showing the basic strategy for moving from a function $f(x)$ to its derivative, denoted by $f'(x)$:
\begin{equation}
\label{DeltaLimitExpression}
f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}
\end{equation}
In equation \ref{DeltaLimitExpression} a step variable $\Delta x$ is created which (in numerical versions of this method) produces a small interval on the slope of the function's curve as shown in Figure \ref{NumericalDifferentiation} on page \pageref{NumericalDifferentiation}. In the symbolic strategy this variable, apart from its theoretical significance, serves to eliminate algebraic terms, in a process explained below.
\subsubsection{Strategy}
The basic strategy for converting a function to its derivative relies on a combination of algebraic manipulation and removal of terms, including use of a limit operation to remove all $\Delta x$ terms as well as any terms they multiply. For this example we will use the position function first presented as equation \ref{PositionFunction} above:
\begin{equation}
p(t) = \frac{1}{2} a t^2
\end{equation}
\subsubsection{Sequence}
We begin with this differential limit form:
\begin{equation}
\label{LimitExpression}
p'(t) = \lim_{\Delta t \to 0} \frac{p(t + \Delta t) - p(t)}{\Delta t}
\end{equation}
To facilitate manipulation of terms, we temporarily set aside functional notation and show the equation the function defines:
\begin{equation}
\frac{p(t + \Delta t) - p(t)}{\Delta t} \mapsto \frac{a {\left({\Delta t} + t\right)}^{2} - a t^{2}}{2 \, {\Delta t}}
\end{equation}
Using the identity $(a+b)^2 = a^2 + 2 a b + b^2$, we expand terms for more flexibility in term elimination:
\begin{equation}
\frac{a \highlight{{\left({\Delta t} + t\right)}^{2}} - a t^{2}}{2 \, {\Delta t}} \mapsto \frac{a \highlight{{\left({\Delta t}^{2} + 2 \, {\Delta t} \, t + t^{2}\right)}} - a t^{2}}{2 \, {\Delta t}}
\end{equation}
Solely for clarity of expression in this example, not to satisfy a mathematical requirement, we apply the factor $\frac{2}{a}$ to temporarily remove some terms:
\begin{equation}
\label{ApplyFactor}
\frac{a {\left({\Delta t}^{2} + 2 \, {\Delta t} \, t + t^{2}\right)} - a t^{2}}{2 \, {\Delta t}} \, \frac{2}{a} = \frac{{\Delta t}^{2} + 2 \, {\Delta t} \, t + t^{2} - t^{2}}{{\Delta t}}
\end{equation}
We eliminate terms:
\begin{equation}
\frac{\Ccancel[red]{{\Delta t}}^{2} + 2 \, \Ccancel[red]{{\Delta t}} \, t + \Ccancel[blue]{t^{2}} - \Ccancel[blue]{t^{2}}}{\Ccancel[red]{{\Delta t}}} \mapsto {\Delta t} + 2 t
\end{equation}
We perform a limit operation to eliminate the remaining ${\Delta t}$ term:
\begin{equation}
\lim_{\Delta t \to 0} \left( {\Delta t} + 2 t \right) = 2 t
\end{equation}
We apply the inverse of the factor applied in equation \ref{ApplyFactor} above:
\begin{equation}
\label{DerivativeResult}
2 t \, \frac{a}{2} = a t
\end{equation}
\subsubsection{Outcome}
The result shown in \ref{DerivativeResult} is the first derivative of $p(t)$, which has this notation: $p'(t)$. The above process may be concisely expressed as:
\begin{equation}
\label{SymbolicDerivativeResult}
p'(t) = \lim_{\Delta t \to 0} \frac{a {\left({\Delta t} + t\right)}^{2} - a t^{2}}{2 \, {\Delta t}} = a t
\end{equation}
\subsubsection{Conclusion}
Notice about this process that the explicit differential approach shown in equation \ref{LimitExpression} above produces two avenues for transitioning from a function to its derivative: (a) algebraic elimination of terms, and (b) use of the limit operation to eliminate $\Delta t$ and any terms multiplied by $\Delta t$. The order of these operations may be changed depending on circumstances.
For the nontechnical reader, the take-away from this explanation should be that a function is converted to its derivative by applying the limit operator to $\Delta t$ as shown in \ref{SymbolicDerivativeResult} above, which cancels all $\Delta t$ terms and any terms multiplied by them. This has the effect of transforming a function into a symbolic derivative form of that function, one that tracks \textit{rates of change} in the original function with respect to its independent variable.
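For readers with SymPy installed, the entire derivation can be checked mechanically -- a sketch (the symbol names are mine):

```python
import sympy as sp

# the position function p(t) = (1/2) a t^2, with positive symbols
a, t, dt = sp.symbols('a t Delta_t', positive=True)
p = sp.Rational(1, 2) * a * t**2

quotient = (p.subs(t, t + dt) - p) / dt   # the difference quotient
derivative = sp.limit(quotient, dt, 0)    # the limit operation
print(derivative)  # a*t
```

SymPy performs in one step the expansion, cancellation and limit operations carried out by hand above, arriving at the same result $p'(t) = a t$.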
\subsection{Rules for Integration}
\label{Rules for Integration}
For many common equations, transformation rules can be applied to obtain integrals. Here are some examples from an old textbook\endnote{These examples come from ``A New Table of Indefinite Integrals -- Computer Processed'', 1971, Melvin Klerer \& Fred Grossman, Dover, ISBN 0-486-62714-4.}:
\begin{equation}
\label{IntegralRule1}
\int A \, \mathrm{d} x = A x
\end{equation}
\begin{equation}
\label{IntegralRule2}
\int x \, \mathrm{d} x = \frac{x^2}{2}
\end{equation}
\begin{equation}
\label{IntegralRule3}
\int x^2 \, \mathrm{d} x = \frac{x^3}{3}
\end{equation}
\begin{equation}
\label{IntegralRule4}
\int x^n \, \mathrm{d} x = \frac{x^{n+1}}{n+1}, \quad (n \ne -1)
\end{equation}
\begin{equation}
\int (A+Bx)^n \, \mathrm{d} x = \frac{(A+Bx)^{n+1}}{(n+1) B}, \quad (n \ne -1)
\end{equation}
It should be clear that equation \ref{IntegralRule4} is the general form of \ref{IntegralRule2} and \ref{IntegralRule3}.
Here are some non-overlapping examples from a popular math website\endnote{\href{http://integral-table.com/}{Table of Basic Integrals} -- a compendium of conversion rules for indefinite integrals.}:
\begin{equation}
\int \frac{1}{x} \, \mathrm{d} x = \ln | x |
\end{equation}
\begin{equation}
\int \frac{1}{ax+b} \, \mathrm{d} x = \frac{1}{a} \ln | ax + b |
\end{equation}
There are many similar sites online; these are just examples to give a sense of the notation and to show the convenience of an established rule. A worthwhile general resource is Wolfram Alpha's Integral Calculator\endnote{\href{http://www.wolframalpha.com/calculators/integral-calculator/}{Online Integral Calculator} -- A subset of Wolfram Alpha's online offering that calculates definite and indefinite integrals.}, which interactively accepts and solves submitted expressions.
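Such tabulated rules can also be spot-checked locally. This sketch (again using SymPy, an assumed tool choice) differentiates the claimed antiderivative of $1/(ax+b)$ and confirms that the rule is consistent:

```python
# A sketch using SymPy: check the 1/(ax+b) table entry by
# differentiating the claimed antiderivative and comparing.
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)

antiderivative = sp.log(a*x + b) / a  # ln|ax+b| / a, for ax+b > 0
check = sp.simplify(sp.diff(antiderivative, x) - 1 / (a*x + b))
print(check)  # 0
```

The \texttt{positive=True} assumption lets us drop the absolute value; the check differentiates the table's right-hand side and verifies that the original integrand comes back.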
\subsection{Computer Processing Limitations}
\label{Computer Processing Limitations}
When using a computer to produce numerical differentials, a limitation in how computers represent numbers means that, beyond a certain point, choosing smaller step sizes produces a \textit{decline} in accuracy:
\begin{itemize}
\item Typical modern computer variables\endnote{\href{https://en.wikipedia.org/wiki/Floating_point\#IEEE_754:_floating_point_in_modern_computers}{IEEE 754: floating point in modern computers} -- a description of a typical modern computer numerical variable.} can resolve about 15 digits.
\item Consequently, if we subtract one such number from another and their magnitudes differ by a factor of more than $1 \times 10^{15}$, the result will simply equal the larger of the two numbers -- as if the operation never took place.
\item As the difference in magnitudes approaches this limit, the accuracy of the result declines rapidly because more and more significant digits are dropped from the calculation.
\end{itemize}
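The effect is easy to demonstrate. The sketch below (a forward-difference estimate of the derivative of $\sin x$ at $x = 1$; the function and step sizes are illustrative choices) shows the error shrinking and then growing again as the step size falls:

```python
# Demonstration of the step-size limitation in numerical differentiation.
# Forward-difference estimate of d/dx sin(x) at x = 1, compared against
# the exact derivative cos(1), for shrinking step sizes h.
import math

x = 1.0
exact = math.cos(x)

# A double cannot even register an addend 10^16 times smaller:
print((1.0 + 1e-16) - 1.0)  # 0.0

for h in (1e-2, 1e-5, 1e-8, 1e-11, 1e-14):
    approx = (math.sin(x + h) - math.sin(x)) / h
    print(f"h = {h:.0e}   error = {abs(approx - exact):.1e}")
```

Running this shows the error improving as $h$ shrinks toward roughly $10^{-8}$, then worsening again as the subtraction $\sin(x+h) - \sin(x)$ loses significant digits.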
The conclusion is that we should only use numerical methods (a) to learn how these methods work, or (b) if the nature of the problem leaves us with no alternative. For the majority of problems, we should apply a symbolic differentiation method.
\subsection{Calculus Notations}
\label{Calculus Notations}
There are a great many conventions for symbolizing differentials and derivatives. The author uses a common notation for functions -- for a given function $f(x)$, $f'(x)$ denotes the first derivative, $f''(x)$ the second derivative and so forth -- this so-called ``prime'' notation originated with Lagrange\endnote{\href{https://en.wikipedia.org/wiki/Notation_for_differentiation\#Lagrange.27s_notation}{Lagrange's Notation} -- the ``prime'' notation for function derivatives used in this article.}. One drawback to this convention is that it doesn't explicitly distinguish dependent and independent variables (the function argument is assumed to represent the independent variable). For purposes other than functional notation, where both dependent and independent variables are present, Gottfried Leibniz's derivative notation\endnote{\href{https://en.wikipedia.org/wiki/Leibniz\%27s_notation}{Leibniz's notation} -- mathematical notation for derivatives.} may be preferable -- for independent and dependent variables $x$ and $y$:
\begin{equation}
\frac{dy}{dx} \; \text{(first derivative),} \quad \frac{d^2y}{dx^2} \; \text{(second derivative), etc.}
\end{equation}
Leibniz's derivative notation for functions has a number of variations, some of which are:
\begin{equation}
\frac{df}{dx}(x) \quad \text{or} \quad \frac{df(x)}{dx} \quad \text{or} \quad \frac{d}{dx}f(x)
\end{equation}
But it's not clear that this offers an advantage over the Lagrange ``prime'' notation. Many other notations are in current use or are of historical importance\endnote{\href{https://en.wikipedia.org/wiki/Notation_for_differentiation}{Notation for Differentiation} -- a summary of many notations, current and historical.}.
\addcontentsline{toc}{section}{References}
\theendnotes
\end{document}