Theory

Basic ideas of the CNS

Clean Numerical Simulation (CNS) [T1] is a set of strategies whose purpose is to control “numerical noises” to an arbitrarily small level. Its main idea is to reduce both the truncation error and the round-off error below a required threshold.

Truncation errors come from the discretization of continuous systems. Numerical methods have the following general form:

\[f(t+h) = f(t) + h \cdot RHS(t)\]

where \(RHS(t)\) is the right-hand side, which varies with the numerical method. For the N-th order Taylor series method, the right-hand side can be written as:

\[RHS(t) = \sum\limits_{i=1}^{N}\dfrac{f^{(i)}(t)}{i!}h^{i-1} + \mathcal{O}(h^{N})\]

where \(\mathcal{O}(h^{N})\) denotes the order of the global truncation error.

The CNS suggests that reducing the timestep \(h\) or increasing the order \(N\) of a high-order method can decrease truncation errors to a level so small that they no longer damage the long-term prediction. With the help of extended precision formats to store extra digits in computers, round-off errors can also be kept small. Consequently, the numerical noises of a simulation can be made arbitrarily small.
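As a quick illustration of the truncation-error side, the sketch below (plain Python, with an illustrative one-step example for \(\dot{x} = x\), whose exact solution is \(x_0 e^{h}\)) shows the single-step error shrinking rapidly as the order \(N\) grows; the function name `taylor_step` is ours, not from any CNS code.

```python
import math

def taylor_step(x, h, N):
    # One N-th order Taylor step for x' = x: every derivative equals x,
    # so x(t+h) ≈ x * sum_{i=0}^{N} h^i / i!
    return x * sum(h ** i / math.factorial(i) for i in range(N + 1))

h, x0 = 0.1, 1.0
exact = x0 * math.exp(h)
for N in (2, 4, 8, 16):
    print(N, abs(taylor_step(x0, h, N) - exact))  # error drops with N
```

Once the error reaches the round-off floor of double precision, increasing \(N\) further no longer helps, which is exactly why the extended precision formats mentioned above are needed.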

Based on this ability to make numerical noises arbitrarily small, the CNS can be used to estimate the critical predictable time \(T_c\) of a chaotic system, defined as the time at which the averaged level of numerical noise reaches a given threshold \(\varepsilon_c\).

The averaged level of numerical noise increases exponentially within the temporal interval \([0, T_c]\), i.e.

\[\varepsilon(t) = \varepsilon_0 \exp(\kappa t), \quad 0 \leq t \leq T_c\]

So the critical predictable time \(T_c\) can be estimated by:

\[T_c = \dfrac{1}{\kappa}\ln\left(\dfrac{\varepsilon_c}{\varepsilon_0}\right)\]
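For a concrete feel, here is a minimal Python sketch of this estimate; the numbers are purely illustrative (double-precision initial noise \(\varepsilon_0 = 10^{-16}\), threshold \(\varepsilon_c = 10^{-2}\), and an assumed noise-growing exponent \(\kappa = 0.9\)):

```python
import math

def critical_time(eps0, eps_c, kappa):
    # T_c = (1/kappa) * ln(eps_c / eps0)
    return math.log(eps_c / eps0) / kappa

# predictable horizon for the assumed, illustrative values
print(critical_time(1e-16, 1e-2, 0.9))
```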

For more information, please refer to Liao’s review article [T2].

The old way

In past studies, using the CNS to solve ordinary differential equations (ODEs) meant differentiating both sides of the equations to obtain recursion formulas for the variables. For example, consider the Lorenz equations:

\[ \begin{align}\begin{aligned}\dot{x}(t) & = \sigma (y(t) - x(t))\\\dot{y}(t) & = R x(t) - y(t) -x(t)z(t)\\\dot{z}(t) & = x(t)y(t) - b z(t)\end{aligned}\end{align} \]

where \(\sigma, R, b\) are physical parameters and the dot denotes differentiation with respect to the time \(t\).

Then, differentiating both sides of the Lorenz equations \(n\) times with respect to \(t\) and dividing by \((n+1)!\), we obtain the recursions for the high-order temporal derivatives:

\[ \begin{align}\begin{aligned}x^{[n+1]} & = \dfrac{\sigma}{n+1}(y^{[n]}-x^{[n]})\\y^{[n+1]} & = \dfrac{1}{n+1}(Rx^{[n]}-y^{[n]}-\sum\limits_{j=0}^{n}x^{[j]}z^{[n-j]})\\z^{[n+1]} & = \dfrac{1}{n+1}(\sum\limits_{j=0}^{n}x^{[j]}y^{[n-j]}-bz^{[n]})\end{aligned}\end{align} \]

where \(x^{[n]}, y^{[n]}, z^{[n]}\) are the coefficients of the Taylor series:

\[ \begin{align}\begin{aligned}x^{[n]} & = \dfrac{1}{n!}\dfrac{\partial^n x(t)}{\partial t^n}\\y^{[n]} & = \dfrac{1}{n!}\dfrac{\partial^n y(t)}{\partial t^n}\\z^{[n]} & = \dfrac{1}{n!}\dfrac{\partial^n z(t)}{\partial t^n}\end{aligned}\end{align} \]
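The recursions above translate directly into code. The following is a minimal Python sketch (function names are ours; the parameter defaults are the classic Lorenz values \(\sigma = 10\), \(R = 28\), \(b = 8/3\)); note that the \(z\) recursion uses \(\sum_j x^{[j]}y^{[n-j]} - b z^{[n]}\), as required by \(\dot{z} = xy - bz\):

```python
def lorenz_taylor_coeffs(x0, y0, z0, N, sigma=10.0, R=28.0, b=8.0 / 3.0):
    # Taylor coefficients x^{[n]}, y^{[n]}, z^{[n]} up to order N,
    # from the recursion obtained by differentiating the Lorenz equations.
    x, y, z = [x0], [y0], [z0]
    for n in range(N):
        conv_xz = sum(x[j] * z[n - j] for j in range(n + 1))
        conv_xy = sum(x[j] * y[n - j] for j in range(n + 1))
        x.append(sigma * (y[n] - x[n]) / (n + 1))
        y.append((R * x[n] - y[n] - conv_xz) / (n + 1))
        z.append((conv_xy - b * z[n]) / (n + 1))
    return x, y, z

def step(coeffs, h):
    # Evaluate the truncated Taylor series at t + h.
    return sum(c * h ** n for n, c in enumerate(coeffs))
```

One integration step then sums each coefficient list at the chosen stepsize, e.g. `step(x, h)`.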

For simple equations, deriving these recursions by hand may suffice. For more complex equations, however, the manual derivation becomes tedious, and it is necessary to adopt automated methods for finding the derivatives.

A new way

Automatic Differentiation (AD) is a set of techniques for evaluating the derivatives of a function specified by a computer program. In the Taylor series method (TSM), AD defines the basic arithmetic of Taylor series; using this arithmetic, one can automatically obtain derivatives of the function to arbitrary order.

The basic arithmetic of Taylor series is derived from the Leibniz rule:

for \(a(t) = b(t)c(t)\), it gives the following formula:

\[a^{[n]}(t) = \sum\limits_{j=0}^{n}b^{[j]}(t)c^{[n-j]}(t)\]

where \(a^{[n]}, b^{[n]}, c^{[n]}\) are the coefficients of the Taylor series defined in the last section.

Besides, the derivatives of \(a(t), b(t), c(t)\) can be written as:

\[ \begin{align}\begin{aligned}a^{'}(t) = \sum\limits_{j=0}^{n-1}(j+1)a^{[j+1]}\Delta t^{j}\\b^{'}(t) = \sum\limits_{j=0}^{n-1}(j+1)b^{[j+1]}\Delta t^{j}\\c^{'}(t) = \sum\limits_{j=0}^{n-1}(j+1)c^{[j+1]}\Delta t^{j}\end{aligned}\end{align} \]

Based on these formulas, we can obtain the recursions for the different basic functions of Taylor series.

Addition and subtraction

For \(a(t) = b(t) \pm c(t)\), we have the iterative formula:

\[a^{[n]}(t) = b^{[n]}(t) \pm c^{[n]}(t)\]

Multiplication

For \(a(t) = b(t)c(t)\), we have the iterative formula:

\[a^{[n]}(t) = \sum\limits_{j=0}^{n}b^{[j]}(t)c^{[n-j]}(t)\]

Division

For \(a(t) = b(t)/c(t)\), which is equivalent to \(a(t)c(t) = b(t)\), we have the iterative formula:

\[a^{[n]}(t)=\frac{1}{c^{[0]}(t)}\left[b^{[n]}(t)-\sum_{j=0}^{n-1} a^{[j]}(t) c^{[n-j]}(t)\right]\]
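Representing a truncated series by the list of its coefficients, these three rules can be sketched in a few lines of Python (the names `add`, `mul`, `div` are ours); the division rule can be checked by verifying that `mul(div(b, c), c)` reproduces `b`:

```python
def add(b, c):
    # a = b + c: coefficients add term by term
    return [bj + cj for bj, cj in zip(b, c)]

def mul(b, c):
    # a = b*c: a^{[n]} = sum_{j=0}^{n} b^{[j]} c^{[n-j]} (Cauchy product)
    m = min(len(b), len(c))
    return [sum(b[j] * c[n - j] for j in range(n + 1)) for n in range(m)]

def div(b, c):
    # a = b/c: solve a*c = b coefficient by coefficient,
    # a^{[n]} = (b^{[n]} - sum_{j=0}^{n-1} a^{[j]} c^{[n-j]}) / c^{[0]}
    a = []
    for n in range(min(len(b), len(c))):
        s = sum(a[j] * c[n - j] for j in range(n))
        a.append((b[n] - s) / c[0])
    return a
```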

Logarithm

For \(a(t) = \ln{b(t)}\), we have the iterative formula:

\[a^{[n]}(t)=\frac{1}{n b^{[0]}(t)}\left[n b^{[n]}(t)-\sum_{j=1}^{n-1} j a^{[j]}(t) b^{[n-j]}(t) \right]\]

For the \(a(t) = \log_{c} b(t)\), we can use the logarithm change of base rule \(\log_{c} b(t) = \dfrac{\ln b(t)}{\ln c}\), which gives:

\[a^{[n]}(t)=\frac{1}{n b^{[0]}(t)}\left[\frac{n b^{[n]}(t)}{\ln c}-\sum_{j=1}^{n-1} j a^{[j]}(t) b^{[n-j]}(t) \right]\]
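A minimal Python sketch of the natural-log recursion (the function name is ours); with \(b(t) = 1 + t\) it reproduces the familiar series \(\ln(1+t) = t - t^2/2 + t^3/3 - \cdots\):

```python
import math

def log_coeffs(b):
    # a = ln(b): a^{[n]} = (1/b^{[0]}) * (b^{[n]} - (1/n) sum_{j=1}^{n-1} j a^{[j]} b^{[n-j]})
    a = [math.log(b[0])]
    for n in range(1, len(b)):
        s = sum(j * a[j] * b[n - j] for j in range(1, n))
        a.append((b[n] - s / n) / b[0])
    return a
```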

Exponential

For \(a(t) = e^{b(t)}\), we have the iterative formula:

\[a^{[n]}(t)=\frac{1}{n} \sum_{j=0}^{n-1} (n-j) a^{[j]}(t) b^{[n-j]}(t)\]

For the \(a(t) = c^{b(t)}\), we can use \(c^{b(t)} = e^{b(t)\ln{c}}\), which gives:

\[a^{[n]}(t)=\frac{\ln{c}}{n} \sum_{j=0}^{n-1} (n-j) a^{[j]}(t) b^{[n-j]}(t)\]
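A Python sketch of the exponential recursion (the function name is ours); with \(b(t) = t\) it recovers the coefficients \(1/n!\) of \(e^t\):

```python
import math

def exp_coeffs(b):
    # a = e^b: a^{[n]} = (1/n) sum_{j=0}^{n-1} (n-j) a^{[j]} b^{[n-j]}
    a = [math.exp(b[0])]
    for n in range(1, len(b)):
        a.append(sum((n - j) * a[j] * b[n - j] for j in range(n)) / n)
    return a
```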

Sine and cosine

For \(a_s(t) = \sin(b(t))\) and \(a_c(t) = \cos(b(t))\), we have the iterative formulas:

\[ \begin{align}\begin{aligned}a_s^{[n]}(t) & = \dfrac{1}{n}\sum_{j=0}^{n-1}(n-j)a_c^{[j]}(t)b^{[n-j]}(t)\\a_c^{[n]}(t) & = -\dfrac{1}{n}\sum_{j=0}^{n-1}(n-j)a_s^{[j]}(t)b^{[n-j]}(t)\end{aligned}\end{align} \]
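Since each recursion feeds on the other, sine and cosine are advanced together, as in this Python sketch (the function name is ours); with \(b(t) = t\) it recovers the Maclaurin coefficients of \(\sin t\) and \(\cos t\):

```python
import math

def sincos_coeffs(b):
    # a_s = sin(b) and a_c = cos(b), advanced jointly order by order
    s, c = [math.sin(b[0])], [math.cos(b[0])]
    for n in range(1, len(b)):
        s.append(sum((n - j) * c[j] * b[n - j] for j in range(n)) / n)
        c.append(-sum((n - j) * s[j] * b[n - j] for j in range(n)) / n)
    return s, c
```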

Hyperbolic sine and hyperbolic cosine

For \(a_{sh}(t) = \sinh(b(t))\) and \(a_{ch}(t) = \cosh(b(t))\), we have the iterative formulas:

\[ \begin{align}\begin{aligned}a_{sh}^{[n]}(t) & = \dfrac{1}{n}\sum_{j=0}^{n-1}(n-j)a_{ch}^{[j]}(t)b^{[n-j]}(t)\\a_{ch}^{[n]}(t) & = \dfrac{1}{n}\sum_{j=0}^{n-1}(n-j)a_{sh}^{[j]}(t)b^{[n-j]}(t)\end{aligned}\end{align} \]

Tangent

For \(a(t) = \tan (b(t))\), let:

\[h(b(t)) = \dfrac{1}{\cos^{2}(b(t))}\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \tan^{'} (b(t)) b^{'}(t)\\& = h(b(t)) b^{'}(t)\\h^{'}(b(t)) & = \left( \dfrac{1}{\cos^{2}(b(t))} \right)^{'}\\& = \dfrac{2\sin(b(t))b^{'}(t)}{\cos^{3}(b(t))}\\& = 2 a(t) h(b(t)) b^{'}(t)\\& = 2 a(t) a^{'}(t)\end{aligned}\end{align} \]

Note

The prime of a quantity denotes the derivative of this quantity with respect to the quantity in its parentheses. For example, \(a^{'}(t)\) means \(\dfrac{\partial a(t)}{\partial t}\) and \(\tan^{'}(b(t))\) means \(\dfrac{\partial \tan(b(t))}{\partial b(t)}\).

with the multiplication of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \dfrac{1}{n} \sum\limits_{j=0}^{n-1}(n-j)h^{[j]}(t)b^{[n-j]}(t)\\h^{[n]}(t) & = \dfrac{2}{n} \sum\limits_{j=0}^{n-1}(n-j)a^{[j]}(t)a^{[n-j]}(t)\end{aligned}\end{align} \]
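In code the auxiliary series \(h\) is carried along with \(a\), starting from \(a^{[0]} = \tan(b^{[0]})\) and \(h^{[0]} = 1/\cos^2(b^{[0]})\). A Python sketch (the function name is ours), checked against \(\tan t = t + t^3/3 + 2t^5/15 + \cdots\):

```python
import math

def tan_coeffs(b):
    # a = tan(b) with helper h = 1/cos^2(b); the two recursions advance together
    a = [math.tan(b[0])]
    h = [1.0 / math.cos(b[0]) ** 2]
    for n in range(1, len(b)):
        a.append(sum((n - j) * h[j] * b[n - j] for j in range(n)) / n)
        h.append(2.0 * sum((n - j) * a[j] * a[n - j] for j in range(n)) / n)
    return a
```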

Hyperbolic tangent

For \(a(t) = \tanh (b(t))\), let:

\[h(b(t)) = \dfrac{1}{\cosh^{2}(b(t))}\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \tanh^{'} (b(t)) b^{'}(t)\\& = h(b(t)) b^{'}(t)\\h^{'}(b(t)) & = \left( \dfrac{1}{\cosh^{2}(b(t))} \right)^{'}\\& = -\dfrac{2\sinh(b(t))b^{'}(t)}{\cosh^{3}(b(t))}\\& = -2 a(t) h(b(t)) b^{'}(t)\\& = -2 a(t) a^{'}(t)\end{aligned}\end{align} \]

with the multiplication of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \dfrac{1}{n} \sum\limits_{j=0}^{n-1}(n-j)h^{[j]}(t)b^{[n-j]}(t)\\h^{[n]}(t) & = -\dfrac{2}{n} \sum\limits_{j=0}^{n-1}(n-j)a^{[j]}(t)a^{[n-j]}(t)\end{aligned}\end{align} \]

Cotangent

For \(a(t) = \cot (b(t))\), let:

\[h(b(t)) = -\dfrac{1}{\sin^{2}(b(t))}\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \cot^{'} (b(t)) b^{'}(t)\\& = h(b(t)) b^{'}(t)\\h^{'}(b(t)) & = \left( -\dfrac{1}{\sin^{2}(b(t))} \right)^{'}\\& = \dfrac{2\cos(b(t))b^{'}(t)}{\sin^{3}(b(t))}\\& = -2 a(t) h(b(t)) b^{'}(t)\\& = -2 a(t) a^{'}(t)\end{aligned}\end{align} \]

with the multiplication of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \dfrac{1}{n} \sum\limits_{j=0}^{n-1}(n-j)h^{[j]}(t)b^{[n-j]}(t)\\h^{[n]}(t) & = -\dfrac{2}{n} \sum\limits_{j=0}^{n-1}(n-j)a^{[j]}(t)a^{[n-j]}(t)\end{aligned}\end{align} \]

Hyperbolic cotangent

For \(a(t) = \coth (b(t))\), let:

\[h(b(t)) = -\dfrac{1}{\sinh^{2}(b(t))}\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \coth^{'} (b(t)) b^{'}(t)\\& = h(b(t)) b^{'}(t)\\h^{'}(b(t)) & = \left( -\dfrac{1}{\sinh^{2}(b(t))} \right)^{'}\\& = \dfrac{2\cosh(b(t))b^{'}(t)}{\sinh^{3}(b(t))}\\& = -2 a(t) h(b(t)) b^{'}(t)\\& = -2 a(t) a^{'}(t)\end{aligned}\end{align} \]

with the multiplication of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \dfrac{1}{n} \sum\limits_{j=0}^{n-1}(n-j)h^{[j]}(t)b^{[n-j]}(t)\\h^{[n]}(t) & = -\dfrac{2}{n} \sum\limits_{j=0}^{n-1}(n-j)a^{[j]}(t)a^{[n-j]}(t)\end{aligned}\end{align} \]

Inverse sine

For \(a(t) = \arcsin (b(t))\), let:

\[h(b(t)) = \sqrt{1-b^2(t)}\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \arcsin^{'} (b(t)) b^{'}(t)\\& = \dfrac{b^{'}(t)}{h(b(t))}\\h^{'}(b(t)) & = \left( \sqrt{1-b^2(t)} \right)^{'}\\& = -\dfrac{b^{'}(t)b(t)}{\sqrt{1-b^2(t)}}\\& = -a^{'}(t)b(t)\end{aligned}\end{align} \]

with the multiplication and division of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \frac{1}{nh^{[0]}(t)}\left[nb^{[n]}(t)-\sum_{j=1}^{n-1}j a^{[j]}(t) h^{[n-j]}(t)\right]\\h^{[n]}(t) & = -\dfrac{1}{n} \sum\limits_{j=0}^{n-1}(n-j)a^{[n-j]}(t)b^{[j]}(t)\end{aligned}\end{align} \]
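A Python sketch of the coupled arcsine recursions (the function name is ours), starting from \(a^{[0]} = \arcsin(b^{[0]})\) and \(h^{[0]} = \sqrt{1-(b^{[0]})^2}\); with \(b(t) = t\) it recovers \(\arcsin t = t + t^3/6 + 3t^5/40 + \cdots\):

```python
import math

def arcsin_coeffs(b):
    # a = arcsin(b) with helper h = sqrt(1 - b^2)
    a = [math.asin(b[0])]
    h = [math.sqrt(1.0 - b[0] ** 2)]
    for n in range(1, len(b)):
        s = sum(j * a[j] * h[n - j] for j in range(1, n))
        a.append((n * b[n] - s) / (n * h[0]))
        h.append(-sum((n - j) * a[n - j] * b[j] for j in range(n)) / n)
    return a
```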

Inverse hyperbolic sine

For \(a(t) = \operatorname{arcsinh} (b(t))\), let:

\[h(b(t)) = \sqrt{1+b^2(t)}\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \operatorname{arcsinh}^{'} (b(t)) b^{'}(t)\\& = \dfrac{b^{'}(t)}{h(b(t))}\\h^{'}(b(t)) & = \left( \sqrt{1+b^2(t)} \right)^{'}\\& = \dfrac{b^{'}(t)b(t)}{\sqrt{1+b^2(t)}}\\& = a^{'}(t)b(t)\end{aligned}\end{align} \]

with the multiplication and division of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \frac{1}{nh^{[0]}(t)}\left[nb^{[n]}(t)-\sum_{j=1}^{n-1}j a^{[j]}(t) h^{[n-j]}(t)\right]\\h^{[n]}(t) & = \dfrac{1}{n} \sum\limits_{j=0}^{n-1}(n-j)a^{[n-j]}(t)b^{[j]}(t)\end{aligned}\end{align} \]

Inverse cosine

For \(a(t) = \arccos (b(t))\), let:

\[h(b(t)) = -\sqrt{1-b^2(t)}\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \arccos^{'} (b(t)) b^{'}(t)\\& = \dfrac{b^{'}(t)}{h(b(t))}\\h^{'}(b(t)) & = \left( -\sqrt{1-b^2(t)} \right)^{'}\\& = \dfrac{b^{'}(t)b(t)}{\sqrt{1-b^2(t)}}\\& = -a^{'}(t)b(t)\end{aligned}\end{align} \]

with the multiplication and division of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \frac{1}{nh^{[0]}(t)}\left[nb^{[n]}(t)-\sum_{j=1}^{n-1}j a^{[j]}(t) h^{[n-j]}(t)\right]\\h^{[n]}(t) & = -\dfrac{1}{n} \sum\limits_{j=0}^{n-1}(n-j)a^{[n-j]}(t)b^{[j]}(t)\end{aligned}\end{align} \]

Inverse hyperbolic cosine

For \(a(t) = \operatorname{arccosh} (b(t))\), let:

\[h(b(t)) = \sqrt{b^2(t)-1}\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \operatorname{arccosh}^{'} (b(t)) b^{'}(t)\\& = \dfrac{b^{'}(t)}{h(b(t))}\\h^{'}(b(t)) & = \left( \sqrt{b^2(t)-1} \right)^{'}\\& = \dfrac{b^{'}(t)b(t)}{\sqrt{b^2(t)-1}}\\& = a^{'}(t)b(t)\end{aligned}\end{align} \]

with the multiplication and division of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \frac{1}{nh^{[0]}(t)}\left[nb^{[n]}(t)-\sum_{j=1}^{n-1}j a^{[j]}(t) h^{[n-j]}(t)\right]\\h^{[n]}(t) & = \dfrac{1}{n} \sum\limits_{j=0}^{n-1}(n-j)a^{[n-j]}(t)b^{[j]}(t)\end{aligned}\end{align} \]

Inverse tangent

For \(a(t) = \arctan (b(t))\), let:

\[h(b(t)) = b^2(t)\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \arctan^{'} (b(t)) b^{'}(t)\\& = \dfrac{b^{'}(t)}{h(b(t))+1}\\h^{'}(b(t)) & = \left( b^2(t) \right)^{'}\\& = 2b(t)b^{'}(t)\end{aligned}\end{align} \]

with the multiplication and division of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \frac{1}{n(1+h^{[0]}(t))}\left[nb^{[n]}(t)-\sum_{j=1}^{n-1}j a^{[j]}(t) h^{[n-j]}(t)\right]\\h^{[n]}(t) & = \dfrac{2}{n} \sum\limits_{j=0}^{n-1}(n-j)b^{[j]}(t)b^{[n-j]}(t)\end{aligned}\end{align} \]
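A Python sketch of the arctangent pair (the function name is ours), with the helper \(h = b^2\) carried alongside; with \(b(t) = t\) it recovers \(\arctan t = t - t^3/3 + t^5/5 - \cdots\):

```python
import math

def arctan_coeffs(b):
    # a = arctan(b) with helper h = b^2
    a = [math.atan(b[0])]
    h = [b[0] ** 2]
    for n in range(1, len(b)):
        s = sum(j * a[j] * h[n - j] for j in range(1, n))
        a.append((n * b[n] - s) / (n * (1.0 + h[0])))
        h.append(2.0 * sum((n - j) * b[j] * b[n - j] for j in range(n)) / n)
    return a
```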

Inverse hyperbolic tangent

For \(a(t) = \operatorname{arctanh} (b(t))\), let:

\[h(b(t)) = -b^2(t)\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \operatorname{arctanh}^{'} (b(t)) b^{'}(t)\\& = \dfrac{b^{'}(t)}{h(b(t))+1}\\h^{'}(b(t)) & = \left( -b^2(t) \right)^{'}\\& = -2b(t)b^{'}(t)\end{aligned}\end{align} \]

with the multiplication and division of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \frac{1}{n(1+h^{[0]}(t))}\left[nb^{[n]}(t)-\sum_{j=1}^{n-1}j a^{[j]}(t) h^{[n-j]}(t)\right]\\h^{[n]}(t) & = -\dfrac{2}{n} \sum\limits_{j=0}^{n-1}(n-j)b^{[j]}(t)b^{[n-j]}(t)\end{aligned}\end{align} \]

Inverse cotangent

For \(a(t) = \operatorname{arccot} (b(t))\), let:

\[h(b(t)) = -b^2(t)\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \operatorname{arccot}^{'} (b(t)) b^{'}(t)\\& = \dfrac{b^{'}(t)}{h(b(t))-1}\\h^{'}(b(t)) & = \left( -b^2(t) \right)^{'}\\& = -2b(t)b^{'}(t)\end{aligned}\end{align} \]

with the multiplication and division of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \frac{1}{n(-1+h^{[0]}(t))}\left[nb^{[n]}(t)-\sum_{j=1}^{n-1}j a^{[j]}(t) h^{[n-j]}(t)\right]\\h^{[n]}(t) & = -\dfrac{2}{n} \sum\limits_{j=0}^{n-1}(n-j)b^{[j]}(t)b^{[n-j]}(t)\end{aligned}\end{align} \]

Inverse hyperbolic cotangent

For \(a(t) = \operatorname{arccoth} (b(t))\), let:

\[h(b(t)) = -b^2(t)\]

we have:

\[ \begin{align}\begin{aligned}a^{'}(t) & = \operatorname{arccoth}^{'} (b(t)) b^{'}(t)\\& = \dfrac{b^{'}(t)}{h(b(t))+1}\\h^{'}(b(t)) & = \left( -b^2(t) \right)^{'}\\& = -2b(t)b^{'}(t)\end{aligned}\end{align} \]

with the multiplication and division of Taylor series, we have the final iterative formulas:

\[ \begin{align}\begin{aligned}a^{[n]}(t) & = \frac{1}{n(1+h^{[0]}(t))}\left[nb^{[n]}(t)-\sum_{j=1}^{n-1}j a^{[j]}(t) h^{[n-j]}(t)\right]\\h^{[n]}(t) & = -\dfrac{2}{n} \sum\limits_{j=0}^{n-1}(n-j)b^{[j]}(t)b^{[n-j]}(t)\end{aligned}\end{align} \]

Self-Adaptive Algorithm

Using the above basic arithmetic of Taylor series, we can obtain derivatives of a function to arbitrary order. Once the total calculation time of the dynamical system is decided, the order of the Taylor series method and the word size (number of significant digits) of the program are fixed. However, as the calculation progresses, the averaged noise grows, so there is no point in keeping the order and word size at their initial values. A self-adaptive algorithm is therefore used to speed up the calculation.

Following Qin and Liao, the word size of the program is defined as:

\[N_s=\left\lceil\frac{\gamma \kappa\left(T_c-t^{\prime}\right)}{\ln 10}-\log _{10} \varepsilon_c\right\rceil\]

where \(\gamma\) is a safety factor greater than 1, \(\kappa\) is the noise-growing exponent, \(T_c\) is the critical predictable time, \(t^{\prime}\) is the current time, \(\varepsilon_c\) is the critical predictable error, and \(\lceil \cdot \rceil\) is the ceiling function.

The order of the Taylor series method is defined as:

\[M = \left\lceil -1.5\log_{10}(tol) \right\rceil = \left\lceil 1.5N_s \right\rceil\]

It should be noted that the factor 1.5 in the order of the Taylor series method is an empirical value, which can be changed according to the specific situation. According to [T3], for the case without multiple-precision arithmetic the order of the Taylor series method is defined as:

\[M = \left\lceil 1.15N_s + 1 \right\rceil\]
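These bookkeeping formulas are easy to evaluate; below is a small Python sketch (function names are ours, and the sample values of \(\gamma\), \(\kappa\), \(T_c\), \(\varepsilon_c\) are purely illustrative):

```python
import math

def word_size(gamma, kappa, T_c, t, eps_c):
    # N_s = ceil( gamma * kappa * (T_c - t) / ln(10) - log10(eps_c) )
    return math.ceil(gamma * kappa * (T_c - t) / math.log(10.0) - math.log10(eps_c))

def order(N_s, factor=1.5):
    # M = ceil(factor * N_s); factor 1.5 is the empirical value from the text
    return math.ceil(factor * N_s)

# illustrative values: gamma=1.1, kappa=0.9, T_c=100, t=0, eps_c=1e-2
N_s = word_size(1.1, 0.9, 100.0, 0.0, 1e-2)
print(N_s, order(N_s))
```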

Following Barrio et al. [T4], the optimal time stepsize is given by:

\[\Delta t=\min \left(\frac{\operatorname{tol}^{\frac{1}{M}}}{\left\|x_i^{[M-1]}(t)\right\|_{\infty}^{\frac{1}{M-1}}}, \frac{\operatorname{tol}^{\frac{1}{M+1}}}{\left\|x_i^{[M]}(t)\right\|_{\infty}^{\frac{1}{M}}}\right)\]
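A Python sketch of this stepsize selection (the function name is ours). Each argument is the list of that order's Taylor coefficients over all state components, so the `max` implements the infinity norm:

```python
def stepsize(tol, M, coeffs_M_minus_1, coeffs_M):
    # dt = min( tol^(1/M)     / ||x^{[M-1]}||_inf^(1/(M-1)),
    #           tol^(1/(M+1)) / ||x^{[M]}||_inf^(1/M) )
    n1 = max(abs(c) for c in coeffs_M_minus_1)
    n2 = max(abs(c) for c in coeffs_M)
    return min(tol ** (1.0 / M) / n1 ** (1.0 / (M - 1)),
               tol ** (1.0 / (M + 1)) / n2 ** (1.0 / M))
```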

References

[T1]

Shijun Liao. On the reliability of computed chaotic solutions of non-linear differential equations. Tellus A: Dynamic Meteorology and Oceanography, 61(4):550–564, 2008.

[T2]

Shi-jun Liao. On the clean numerical simulation (CNS) of chaotic dynamic systems. Journal of Hydrodynamics, Ser. B, 29(5):729–747, 2017.

[T3]

Àngel Jorba and Maorong Zou. A software package for the numerical integration of ODEs by means of high-order Taylor methods. Experimental Mathematics, 14(1):99–117, 2005.

[T4]

R. Barrio, F. Blesa, and M. Lara. VSVO formulation of the Taylor method for the numerical solution of ODEs. Computers & Mathematics with Applications, 50(1-2):93–111, 2005.