A very common adage about ODE solvers is that if you run into trouble with an explicit method, usually some explicit Runge-Kutta method like RK4, then you should try an implicit method. Implicit methods do more work per step, solving an implicit system via a Newton method, and in exchange have “better” stability, so they should be the thing you go to on the “hard” problems.
This is at least what I heard at first, and then I learned about the edge cases. Specifically, you hear people say “but for hyperbolic PDEs you need to use explicit methods”. You might even intuit from this that PDEs can have special properties, so sometimes special things happen with PDEs… but for ODEs, you should use implicit methods if you need more robustness. This turns out not to be true, and really understanding the ODE side will help us understand why some PDE semidiscretizations have this “special cutout”.
What I want to do in this blog post is more clearly define what “better stability” actually means, and show that it has certain consequences that can sometimes make explicit ODE solvers actually more robust on some problems. And not just some made-up problems, lots of real problems that show up in the real world.
A Quick Primer on Linear ODEs
First, let’s go through the logic of why implicit ODE solvers are considered to be more robust, which we want to define in some semi-rigorous way as “having a better chance to give an answer closer to the real answer”. In order to go from semi-rigorous to a rigorous definition, we can choose a test equation, and what better test equation to use than a linear ODE. So let’s define a linear ODE:
$$u’ = \lambda u$$
is the simplest ODE. We can even solve it analytically, $u(t) = \exp(\lambda t)u(0)$. For completeness, we can generalize this to a linear system of ODEs, where instead of having a scalar $u$ we can let $u$ be a vector, in which case the linear ODE has a matrix of parameters $A$, i.e.
$$u’ = Au$$
In this case, if $A$ is diagonalizable, $A = P^{-1}DP$, then we can replace $A$:
$$u’ = P^{-1}DP u$$
$$Pu’ = DPu$$
or if we let $w = Pu$, then
$$w’ = Dw$$
where $D$ is a diagonal matrix. This means that for every element of $w$ we have the equation:
$$w_i’ = \lambda_i w_i$$
where $w_i$ is the component of the solution along the $i$th eigenvector of $A$, and $\lambda_i$ is the $i$th eigenvalue of $A$. Thus our simple linear ODE $u’ = \lambda u$ tells us about general linear systems along the eigenvectors. Importantly, since even for real $A$ we can have $\lambda$ be a complex number, i.e. real-valued matrices can have complex eigenvalues, it’s important to allow for $\lambda$ to be complex to understand all possible systems.
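If you want to see this decomposition in code, here is a small Julia sketch (the matrix is a made-up diagonalizable example of my own) checking that the solution of the full system is exactly the sum of independently evolving scalar modes:

using LinearAlgebra

# An illustrative diagonalizable matrix for u' = A*u
A = [-1.0  2.0;
      0.0 -3.0]
u0 = [1.0, 1.0]
t = 0.5

# eigen gives A = V * Diagonal(λ) * inv(V), so inv(V) plays the role of P from above
λ, V = eigen(A)

# Exact solution of the full system via the matrix exponential
u_exact = exp(A * t) * u0

# Solve each decoupled scalar equation w_i' = λ_i * w_i, then map back to u
w0 = V \ u0               # coordinates of u0 in the eigenbasis, i.e. w(0)
w  = exp.(λ .* t) .* w0   # each component evolves independently
@show u_exact ≈ V * w     # true: the scalar modes fully describe the system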
But why is this important for any other ODE? Well by the Hartman-Grobman theorem, for any sufficiently nice ODE:
$$u’ = f(u)$$
We can locally approximate the ODE by:
$$u’ = Au$$
where $A = f'(u)$, i.e. $A$ is the linear system defined by the Jacobian local to the point. This is effectively saying that for any “sufficiently nice” system (i.e. $f$ isn’t some crazy absurd function and has properties like being differentiable), you can understand how things locally move by looking at the linear approximation given by the Jacobian. And since general linear systems boil down to the scalar linear system along their eigenvectors, understanding the behavior of a solver on the scalar linear equation tells us a lot about how it will do “for small enough h”.
Okay, there are lots of unanswered questions, such as what if $A$ is not diagonalizable? What if $f$ is not differentiable? What if the system is very nonlinear so the Jacobian changes very rapidly? But under assumptions that things are nice enough, we can say that if a solver does well on $u’ = \lambda u$ then it is probably some idea of good.
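To make the linearization story concrete, here is a sketch with a toy nonlinear system of my own (a damped pendulum), using the ForwardDiff package to compute the local Jacobian and look at its eigenvalues, which is exactly the quantity the stability analysis below cares about:

using ForwardDiff, LinearAlgebra

# Toy nonlinear system u' = f(u): a damped pendulum (illustrative, not from the article)
f(u) = [u[2], -sin(u[1]) - 0.1 * u[2]]

u_point = [0.1, 0.0]                  # the state to linearize around
A = ForwardDiff.jacobian(f, u_point)  # the Jacobian f'(u) at u_point, the A of the local linear system

@show eigvals(A)  # the λs that tell us how trajectories locally decay, grow, or oscillate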
Why implicit ODE solvers are “better”, i.e. more robust
So now we have a metric by which we can analyze ODEs: if they have good behavior on $u’ = \lambda u$, then they are likely to be good in general. So what does it mean to have good behavior on $u’ = \lambda u$? One nice property would be to at least be asymptotically correct for the most basic statement, i.e. does it go to zero when it should? If you have $u’ = \lambda u$ and $\lambda$ is negative, then the analytical solution $u(t) = \exp(\lambda t)u(0)$ goes to zero as $t$ goes to infinity. So a good question to ask is, for a given numerical method, for what values of $h$ (the time step size) does the numerical method give a solution that goes to zero, and for which $h$ does it get an infinitely incorrect answer?
To understand this, we just take a numerical method and plug in the test equation. So the first thing to look at is Euler’s method. For Euler’s method, we step forward by $h$ by assuming the derivative is constant along the interval, or:
$$u_{n+1} = u_n + hf(u_n)$$
When does this method give a solution that is asymptotically consistent? With a little bit of algebra:
$$u_{n+1} = u_n + h\lambda u_n$$
$$u_{n+1} = (1 + h\lambda) u_n$$
Let $z = h\lambda$, which means
$$u_{n+1} = (1 + z) u_n$$
This is a discrete dynamical system which has the analytical solution:
$$u_n = u_0 (1+z)^{n}$$
Note that if $|1 + z| > 1$, then $|(1+z)^n|$ keeps growing as $n$ increases, so this goes to infinity, while if $|1 + z| < 1$ it goes to zero. Since $\lambda$ can actually be a complex number, the analysis is a little bit more complex (pun intended), but it effectively means that $u_n \rightarrow 0$ exactly when $z$ lies inside the unit circle shifted to the left in the complex plane by 1. This gives us the definition of the stability region: $G(z)$ is the region for which $u_n \rightarrow 0$, and for explicit Euler this is the shifted unit circle in the complex plane.
This shows a pretty bad property for this method. For any given $\lambda$ with negative real part, there is a maximum stable $h$ (for real $\lambda < 0$ it’s $h = 2/|\lambda|$), such that for any larger step size we don’t just get a bad answer, we can get an infinitely bad answer, i.e. the analytical solution goes to zero but the numerical solution goes to infinity!
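You can see this infinite wrongness with a few lines of hand-rolled iteration (my own minimal sketch, not a library call): take $\lambda = -1$ and compare a step size below and above the $2/|\lambda|$ limit.

# Explicit Euler on u' = λ*u reduces to u_{n+1} = (1 + h*λ) * u_n
function explicit_euler(λ, h, u0, nsteps)
    u = u0
    for _ in 1:nsteps
        u = (1 + h * λ) * u
    end
    return u
end

λ = -1.0
@show explicit_euler(λ, 0.5, 1.0, 100)  # h < 2/|λ|: decays toward zero, as it should
@show explicit_euler(λ, 2.5, 1.0, 100)  # h > 2/|λ|: |1 + hλ| = 1.5, so it explodes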
So, is there a method that doesn’t have this bad property? In comes the implicit methods. If you run the same analysis with implicit Euler,
$$u_{n+1} = u_n + hf(u_{n+1})$$
$$u_{n+1} = u_n + h\lambda u_{n+1}$$
$$(1-z) u_{n+1} = u_n$$
$$u_{n+1} = \frac{1}{1-z} u_n$$
Then we have almost an “inverse” answer, i.e. $G(z)$ is everything outside the unit circle in the complex plane shifted to the right by 1. This means that for any $\lambda$ with negative real part and any $h$, the implicit Euler method has $u_n \rightarrow 0$, so it’s never infinitely wrong.
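The mirrored sketch for implicit Euler (again hand-rolled; on the scalar test equation the implicit solve is just a division) shows that even absurdly large step sizes still decay to zero:

# Implicit Euler on u' = λ*u reduces to u_{n+1} = u_n / (1 - h*λ)
function implicit_euler(λ, h, u0, nsteps)
    u = u0
    for _ in 1:nsteps
        u = u / (1 - h * λ)
    end
    return u
end

λ = -1.0
@show implicit_euler(λ, 2.5, 1.0, 100)     # the step size that blew up explicit Euler: decays
@show implicit_euler(λ, 1000.0, 1.0, 100)  # even a gigantic step size still goes to zero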
Therefore it’s just better, QED.
This then generalizes to more advanced methods. For example, the stability region of RK4 (an explicit method) has a maximum $h$, while the stability region of BDF2 (an implicit method) does not. You can even prove it’s impossible for any explicit method to have this “good” property, so “implicit methods are better”. QED times 2, done deal.
Wait a second, what about that other “wrongness”?
Any attentive student should immediately throw their hand up. “Teacher, given the $G(z)$ you described, for any $\lambda$ with positive real part (as soon as $h\lambda$ lands outside the unit circle centered at 1) you also have that $u_n \rightarrow 0$, but in reality the analytical solution has $u(t) \rightarrow \infty$, so implicit Euler is infinitely wrong! And explicit Euler has the correct asymptotic behavior since it goes to infinity!”
That is completely correct! But it can be easy to brush this off with “practical concerns”. If you have a real model which has positive real eigenvalues like that, then it’s just going to explode to infinity. Those kinds of models aren’t really realistic? Energy goes to infinity, angular momentum goes to infinity, the chemical concentration goes to infinity: whatever you’re modeling just goes crazy! If you’re in this scenario, then your model is probably wrong. Or if the model isn’t wrong, the numerical methods aren’t very good anyways. If you analyze the error propagation properties, you’ll see the error of the numerical method also increases exponentially! So this is a case you shouldn’t be modeling anyways.
Seeing this robustness in practice
Therefore, if you need a more robust result, use an implicit method. And you don’t need to go to very difficult models to see this manifest in practice. Take the linear ODE:
$$T’ = 5(300-T)$$
with $T(0) = 310$. This is a simple model of an object cooling towards a constant ambient temperature. It’s easy to solve analytically: $T(t) = 300 + (T(0) - 300)e^{-5t}$, i.e. the temperature just falls exponentially towards the steady state $T = 300$. But when we solve it with an explicit method at default tolerances, that’s not what we see:
using OrdinaryDiffEq

function cooling(du, u, p, t)
    du[1] = 5.0 * (300 - u[1])
end
u0 = [310.0]
tspan = (0.0, 10.0)
prob = ODEProblem(cooling, u0, tspan)
sol = solve(prob, Tsit5())

using Plots
plot(sol, title = "RK Method, Cooling Problem")
savefig("rk_cooling.png")
We see that the explicit method gives oscillations in the solution! Meanwhile, if we take a “robust” implicit method like the BDF method from the classic C library SUNDIALS, we can solve this:
using Sundials

sol = solve(prob, CVODE_BDF())
plot(sol, title = "BDF Method, Cooling Problem")
savefig("bdf_cooling.png")
Sure it’s not perfectly accurate, but at least it doesn’t give extremely wrong behavior. We can decrease the tolerances to make the oscillations go away, but the main point is that the explicit method is just generally “less robust”: you have to be more careful with it, and it can give results that are qualitatively wrong.
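For completeness, here is what that tolerance tightening looks like, reusing `prob` from the cooling example above (the specific `abstol`/`reltol` values are just illustrative):

# Tighter tolerances force smaller steps, keeping hλ inside the stability region
sol = solve(prob, Tsit5(), abstol = 1e-8, reltol = 1e-8)
plot(sol, title = "RK Method, Tight Tolerances")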
This means that “good tools”, tools that have a reputation for robustness, should default to implicit solvers because that’s going to be better. And you see that in the Modelica world: for example, the Modelica University playground and other tools in the space like OpenModelica and Dymola default to implicit solvers like DASSL. And you can see they do great on this problem by default!
Modelica tools give a good answer out of the box.
So QED, that’s the “right thing to do”: if you want to be robust, stick to implicit methods.
But why oscillations?
Hold up a bit… why does the explicit method give oscillations? While we know that’s wrong, it would be good to understand why it gives the qualitatively wrong behavior that it does. It turns out that this falls right out of the definition of the method. If you go back to the definition of explicit Euler on the test problem, i.e.
$$u_{n+1} = u_n + hf(u_n)$$
then substitute in:
$$u_{n+1} = (1 + h\lambda) u_{n}$$
If we think about our stability region $G(z)$ another way: the analytical solution decays monotonically to zero, but if we choose a step size $h$ large enough that extrapolating with a straight line for a time $h$ “jumps” over zero, something that never happens in the analytical solution, then the numerical solution changes sign. Think of real $\lambda < 0$: jumping over zero means $1 + h\lambda < 0$, so $u_{n+1} < 0$, and the derivative at the next step points back the other way, i.e. we’re still heading toward zero, but now from the negative side. If on top of that $|1 + h\lambda| > 1$, we have $|u_{n+1}| > |u_n|$, i.e. the norm of the solution keeps growing. So you jump from positive to negative, then negative to positive, then positive to negative, with the jumps growing each time. These are the phantom oscillations of the explicit ODE solver!
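The two regimes are easy to see numerically (a minimal hand-rolled sketch with $\lambda = -1$ and $u_0 = 1$): for $-2 < h\lambda < -1$ the iterates flip sign but shrink, which is the sort of bounded oscillation the cooling example showed, while for $h\lambda < -2$ the sign flips grow without bound.

# The first few explicit Euler iterates u_n = (1 + hλ)^n for u_0 = 1
iterates(hλ, n) = [(1 + hλ)^k for k in 0:n]

@show iterates(-1.5, 5)  # factor -0.5: oscillates around zero but still decays
@show iterates(-2.5, 5)  # factor -1.5: oscillates AND grows, i.e. blows up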
So what’s happening is that the default tolerances of the explicit ODE solver allowed chosen $h$s in the range of the phantom oscillation behavior, and so you just need to cap $h$ below that value, which depends on the real part of the eigenvalues of the Jacobian (you can do the same analysis with complex eigenvalues, but that just adds rotations in the complex plane on top of the real-part oscillation).
But if explicit methods give oscillations, what’s going on with implicit ODE solvers with large $h$? Let’s look at the update equation again:
$$u_{n+1} = \frac{1}{1-z} u_n$$
now instead of multiplying each time by $(1+z)$, we divide by $(1-z)$. This means that when $\lambda < 0$ (or $\text{Re}(\lambda) < 0$ to be more exact), then for any $h$ we have $|u_{n+1}| < |u_{n}|$. We might still jump over zero with a big enough $h$, but we are guaranteed that the “jump size” is always shrinking. Thus for any $h$ we get to zero, because we’re always shrinking in absolute value. This means that implicit methods work because they have a natural dampening effect. So:
Explicit methods introduce spurious oscillations, but implicit methods naturally damp oscillations
This explains in more detail why we saw what we saw: the explicit method, when the error tolerance is sufficiently loose, will introduce oscillations that don’t exist, while the implicit method will not have this behavior. This is a more refined version of “energy doesn’t go to infinity!”; now it’s “energy doesn’t come from nowhere in real systems”, and because of this implicit solvers give a better qualitative answer. This is why they are more robust, and why robust software for real engineers just always defaults to them.
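You can quantify this dampening on a pure oscillation with a two-line sketch (my own numbers, foreshadowing the next section): for a purely imaginary $\lambda$ the true solution keeps a constant magnitude, but implicit Euler multiplies the magnitude by $|1/(1 - h\lambda)| < 1$ every single step.

# Implicit Euler amplification factor on u' = λ*u with λ purely imaginary
λ = 500im
h = 0.01
factor = 1 / (1 - h * λ)

@show abs(factor)      # ≈ 0.196: each step throws away roughly 80% of the amplitude
@show abs(factor)^100  # after 100 steps essentially nothing is left, even though the
                       # analytical solution's magnitude never changes at all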
Wait a second… do we always want that?
You should now be the student in the front row raising your hand, “implicit methods are always dampening… is that actually a good idea? Are you sure that’s always correct?” And the answer is… well it’s not. And that then gives us exactly the failure case for which implicit methods are less robust. If you have a system that is supposed to actually oscillate, then this “hey let’s always dampen everything to make solving more robust” actually leads to very wrong answers!
To highlight this, let’s just take a simple oscillator. You can think of this as a harmonic oscillator, or you can think about it as a simple model of a planet going around a star. However you want to envision it, you can write it out as a system of ODEs:
$$u_1′ = 500u_2$$
$$u_2′ = -500u_1$$
This is the linear ODE $u’ = Au$ where $A = [0\ 500; -500\ 0]$, which has complex eigenvalues with zero real part. In other words, the analytical solution is a combination of $\sin(500t)$ and $\cos(500t)$, just a pure oscillation that keeps going around and around in circles. If we solve this with an explicit ODE solver:
function f(du, u, p, t)
    du[1] = 500u[2]
    du[2] = -500u[1]
end
u0 = [1.0, 1.0]
tspan = (0.0, 1.0)
prob = ODEProblem(f, u0, tspan)
sol = solve(prob, Tsit5())
plot(sol, title = "RK Method", idxs = (1, 2))
savefig("rk_oscillate.png")
we can see that it generally gets the right answer. Over time you get some drift where the energy is slowly increasing due to numerical error in each step, but it’s going around in circles relatively well. However, our “robust implicit method”…
sol = solve(prob, CVODE_BDF())
plot(sol, title = "BDF Method", idxs = (1, 2))
savefig("bdf_oscillate.png")
is just not even close. And you can see that even our “robust Modelica tools” completely fall apart:
It says the answer goes to zero! Even when the analytical solution is just a circle! But we can understand why this is the case: the software developers made the implicit assumption that “dampening oscillations is always good, because generally that’s what happens in models, so let’s always do this by default so people get better answers”, and the result of this choice is that if someone puts in a model of the Earth going around the sun, then oops the Earth hits the sun pretty quickly.
Conclusion: ODE solvers make trade-offs, you need to make the right ones for your domain
This gives us the conclusion: there is no “better” or “more robust” ODE solver method, it’s domain-specific. This is why the Julia ODE solver package has hundreds of methods: each domain can ask for different properties out of the method. Explicit methods are not just “generally faster”, they are also methods that tend to preserve (or generate) oscillations. Implicit methods are not generally more robust, they are methods which work by dampening transients, which is a good idea for some models but not for others. But then there’s a ton of nuance. For example, can you construct an explicit ODE solver so that on such oscillations you don’t get energy growth? You can! Anas5(w) is documented as “4th order Runge-Kutta method designed for periodic problems. Requires a periodicity estimate w which when accurate the method becomes 5th order (and is otherwise 4th order with less error for better estimates)”, i.e. if you give it the canonical frequency 500 it will do extremely well on this problem (and being a bit off in that estimate still works, it just gives a small amount of energy growth).
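One way to quantify all of this on the oscillator from above is to track the “radius” $\sqrt{u_1^2 + u_2^2}$ at the end of the integration, which the analytical solution keeps constant at $\sqrt{2}$. The sketch below reuses `prob` from the oscillator example; note that passing the frequency estimate to `Anas5` as a keyword named `w` is my assumption from its docstring, so double check the OrdinaryDiffEq documentation for the exact form.

# The true solution keeps sqrt(u1^2 + u2^2) constant; see how much each solver drifts
final_radius(sol) = sqrt(sol.u[end][1]^2 + sol.u[end][2]^2)

@show final_radius(solve(prob, Tsit5()))      # slight energy drift upward
@show final_radius(solve(prob, CVODE_BDF()))  # heavily dampened toward zero
# Assumed constructor: frequency estimate passed as the keyword w (check the docs)
@show final_radius(solve(prob, Anas5(w = 500), dt = 0.001))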
What about what was mentioned at the beginning of the article, “for hyperbolic PDEs you need to use explicit methods”? This isn’t a “special behavior” of PDEs; it’s simply because in this domain, for example advective models of fluids, you want to conserve the fluid as it moves. If you choose an implicit method, it “dampens” the solution, which means that as you integrate you get less and less fluid, breaking the conservation laws and giving qualitatively very incorrect solutions. If you use explicit methods, you don’t get this extraneous dampening, which gives a better looking solution. But you can go even further and develop methods for which, if $h$ is sufficiently small, you get little to no dampening. These are SSP (strong stability preserving) methods, which we say are “for hyperbolic PDEs (conservation laws)” but in reality what we mean is “for when you don’t want things to dampen”.
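As a toy illustration of that last point, here is a linear advection semidiscretization of my own (first-order upwind differences with a periodic boundary; none of this setup comes from the article) stepped with the SSPRK33 method from OrdinaryDiffEq. The point is only to sketch how an SSP explicit method slots in; the upwind spatial discretization itself is diffusive, so don’t read the result as a statement about dampening from the time integrator alone.

using OrdinaryDiffEq, Plots

# Linear advection u_t + c*u_x = 0 on [0,1) with a periodic boundary,
# semidiscretized with first-order upwind differences
N  = 128
dx = 1 / N
c  = 1.0
x  = range(0, 1 - dx, length = N)
u0 = exp.(-100 .* (x .- 0.5) .^ 2)  # a Gaussian pulse that should just translate

function advection!(du, u, p, t)
    c, dx = p
    n = length(u)
    @inbounds for i in 1:n
        im1 = i == 1 ? n : i - 1           # periodic wrap-around
        du[i] = -c * (u[i] - u[im1]) / dx  # upwind difference
    end
end

prob_adv = ODEProblem(advection!, u0, (0.0, 1.0), (c, dx))
# SSPRK33 is a strong-stability-preserving explicit RK method; the fixed dt respects
# the CFL-style restriction c*dt/dx ≤ 1
sol_adv = solve(prob_adv, SSPRK33(), dt = 0.5 * dx)
plot(sol_adv.u[end], label = "t = 1", title = "SSP Method, Advection Problem")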
But the point is, you can’t just say “if you want a better solution, use an implicit solver”. Maybe in some domains and for some problems that is true, but in other domains and for other problems it’s not. And many numerical issues stem from the implicit assumptions baked into the choice of integrator. Given all of this, it should be no surprise that much of the Modelica community has had problems handling fluid models: the general flow of “everything is a DAE” → “always use an implicit solver” → “fluid models always dampen” → “we need to fix the dampening” could be avoided by making different assumptions at the solver level.
So, the next time someone tells you that you should just use ode15s or scipy.integrate.radau to make things robust, without knowing anything about your problem, say “umm actually”.
Little Extra Details
The article is concluded, but here are a few points that I couldn’t fit into the narrative and still want to mention:
Trapezoidal is cool
One piece I didn’t fit in here is that the Trapezoidal method is cool. The dampening property comes from L-stability, i.e. the amplification factor (what we multiply $u_n$ by each step) goes to $0$ as $\text{Re}(z) \rightarrow -\infty$; for implicit Euler that factor is $\frac{1}{1-z}$. This is a stricter form of stability: instead of just being stable for any finite $\lambda$ with negative real part (A-stability), it also enforces that you fully damp in the limit of infinitely large negative eigenvalues. “Most” implicit solvers that are used in practice, like implicit Euler, have this property, and you can show the dampening is directly related to it. But you can have an implicit method that isn’t L-stable. Some of these methods, like the Adams-Bashforth-Moulton methods, are not even A-stable, so they tend to have weaker stability and act more like explicit methods. The Trapezoidal method, however, is A-stable without being L-stable, so it doesn’t tend to dampen while still being pretty stable. Though it’s not as stable as implicit Euler in practice, and the difference between “stable for any linear ODE” versus “actually stable for nonlinear ODEs” (i.e. B-stability) is pronounced on real-world stiff problems. What this means in human terms is that the Trapezoidal method tends to not be stable enough for hard stiff problems, but it also doesn’t artificially dampen, so it can be a good default in cases where you know you have “some stiffness” but also want to keep some oscillations. One particular case of this is some electrical circuit models with natural oscillators.
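To make that concrete, here is a sketch comparing `Trapezoid()` and `ImplicitEuler()` from OrdinaryDiffEq on the 500 rad/s oscillator from earlier (the problem setup is just repeated here so the snippet stands alone): the A-stable but not L-stable trapezoidal rule keeps the oscillation alive, while the L-stable implicit Euler dampens it just like the BDF method did.

using OrdinaryDiffEq, Plots

# The same pure oscillator as before
function osc!(du, u, p, t)
    du[1] =  500u[2]
    du[2] = -500u[1]
end
prob_osc = ODEProblem(osc!, [1.0, 1.0], (0.0, 1.0))

sol_trap = solve(prob_osc, Trapezoid())      # A-stable, not L-stable: no artificial dampening
sol_ie   = solve(prob_osc, ImplicitEuler())  # L-stable: dampens the oscillation away

plot(sol_trap, idxs = (1, 2), label = "Trapezoid", title = "Trapezoid vs ImplicitEuler")
plot!(sol_ie, idxs = (1, 2), label = "ImplicitEuler")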
Lower order methods have purposes too
“All ODE solvers have a purpose.” I give talks that justify many of the high order methods; in general, higher order is good if you solve at stricter tolerances and need more precision. But lower order methods can be better because the higher order methods require that more derivatives of $f$ exist, and if that’s not the case (like with derivative discontinuities), then lower order methods can be more efficient. So even implicit Euler has cases where it’s better than higher order BDF methods, and it comes down to “how nice” $f$ is.
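One illustrative way to feel this out (my own toy experiment, not from the article) is to count how many time points each solver needs on a problem whose right-hand side has repeated derivative discontinuities; which method comes out ahead depends on how non-smooth $f$ really is, so treat this as a probe rather than a verdict.

using OrdinaryDiffEq

# A right-hand side with kinks in time from the abs(sin) forcing term
kinked!(du, u, p, t) = (du[1] = -u[1] + abs(sin(3t)))
prob_kink = ODEProblem(kinked!, [1.0], (0.0, 10.0))

sol_low  = solve(prob_kink, ImplicitEuler())  # 1st order: only needs f to be continuous
sol_high = solve(prob_kink, FBDF())           # higher-order BDF: wants more derivatives of f

# The number of accepted time points is a rough proxy for cost at the same tolerance
@show length(sol_low.t), length(sol_high.t)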
BDF methods like DASSL are actually A(α)-stable
I said that generally the implicit methods you use are A-stable. That’s also a small lie to make the narrative simpler. The BDF methods which Sundials, DASSL, LSODE, FBDF, etc. use are actually A(α)-stable, which means they are only guaranteed stable inside a wedge of half-angle α around the negative real axis, so they miss parts of the left half of the complex plane near the imaginary axis. The stability regions look like this:
So these BDF methods are actually pretty bad for other reasons on very oscillatory problems! Meanwhile, things like Rosenbrock methods can also solve DAEs while actually being L-stable, which can make them more stable in many situations where there are oscillations towards a steady state. So there’s a trade-off there… again, every method has a purpose. But this is another “ode15s is more stable than ode23s”… “well, actually…”