Higher order schemes

The following algorithms are inspired by the paper “A higher order scheme for a tangentially stabilized plane curve shortening flow with a driving force” by Martin Balazovjech and Karol Mikula [BaMaMiKa09]. The paper presents four algorithms, one explicit and three implicit ones; you can find them in sections 3.1-3.4. The idea is to solve the evolution equation

\partial_tr=\beta n+\alpha t

which can be rewritten as

\partial_tr=\epsilon\partial_{ss}r+\alpha\partial_sr+f(\partial_sr)^\perp.
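This rewriting uses the Frenet relations for an arclength parametrization: \partial_sr=t, \partial_{ss}r=kn and n=(\partial_sr)^\perp, so that, with \beta=\epsilon k+f,

\partial_tr=(\epsilon k+f)n+\alpha t=\epsilon\partial_{ss}r+\alpha\partial_sr+f(\partial_sr)^\perp.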

So first of all we need to define all these variables, plus some more that we need for computing the next time step. The exponent of a variable denotes the time step the variable belongs to, and the index denotes the row in the array the value is taken from. Note that the indices are cyclic, i.e. r_{n+1}=r_1 and r_0=r_n; this holds for every variable.

  • r is an n\times 2 array. It represents the vertices. So r is the array we plot in every time step. We write r_i=[x,y] for the i-th vertex.
  • h is an n\times 1 array containing the Euclidean distance between r_{i-1} and r_{i}:

h_i=\sqrt{(x_i-x_{i-1})^2+(y_i-y_{i-1})^2}

  • What is called n in the paper is called normal in the code; it is the normal vector:

normal_i=\left[\frac{y_{i-1}-y_{i+1}}{2},\frac{x_{i+1}-x_{i-1}}{2}\right]

  • The step width is called \Delta t in the paper and sigmat in the code.
  • \beta is the velocity in the outer normal direction:

\beta_i=\epsilon k_i+f

  • f is an external driving force and in this case just coded as a constant
  • \epsilon > 0 is just a constant
  • H, called capitalh in the code, is an n\times 2 array which contains the difference in both components, H_i=r_i-r_{i-1} (so that h_i=|H_i|).
  • k is the curvature of the curve. It is computed as

k_i=\frac{\operatorname{sgn}(\det(H_{i-1},H_{i+1}))}{2h_i}\arccos\left(\frac{H_{i-1}\cdot H_{i+1}}{h_{i-1}h_{i+1}}\right)

  • \omega is the so-called relaxation function; it is also just implemented as a constant.
  • \alpha is the velocity in the tangential direction:

\alpha_0=0,\\
\alpha_i=\alpha_{i-1}+h_i\beta_ik_i-h_i\frac{\sum\limits_{j=1}^n \beta_jk_jh_j}{\sum\limits_{j=1}^n h_j}+\left(\frac{\sum\limits_{j=1}^n h_j}{n}-h_i\right)\omega
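As a minimal NumPy sketch, these quantities could be computed as follows (the function name and signature are hypothetical; the variable names follow the definitions above, and the cyclic indexing is handled by np.roll):

```python
import numpy as np

def geometric_quantities(r, epsilon, f, omega):
    """Compute h, normal, H, k, beta and alpha for a closed polygon r (n x 2)."""
    n = len(r)
    # H_i = r_i - r_{i-1}, the component-wise differences, and h_i = |H_i|
    H = r - np.roll(r, 1, axis=0)
    h = np.linalg.norm(H, axis=1)
    # normal_i = [(y_{i-1} - y_{i+1})/2, (x_{i+1} - x_{i-1})/2]
    x, y = r[:, 0], r[:, 1]
    normal = 0.5 * np.column_stack((np.roll(y, 1) - np.roll(y, -1),
                                    np.roll(x, -1) - np.roll(x, 1)))
    # curvature via the sign of det(H_{i-1}, H_{i+1}) and the angle between them
    Hm, Hp = np.roll(H, 1, axis=0), np.roll(H, -1, axis=0)
    hm, hp = np.roll(h, 1), np.roll(h, -1)
    det = Hm[:, 0] * Hp[:, 1] - Hm[:, 1] * Hp[:, 0]
    cosang = np.clip((Hm * Hp).sum(axis=1) / (hm * hp), -1.0, 1.0)
    k = np.sign(det) / (2.0 * h) * np.arccos(cosang)
    # normal velocity beta_i = epsilon * k_i + f
    beta = epsilon * k + f
    # tangential velocity via the recursion above, with alpha_0 = 0
    L = h.sum()
    mean_bkh = (beta * k * h).sum() / L
    alpha = np.zeros(n)
    for i in range(1, n):
        alpha[i] = (alpha[i - 1] + h[i] * beta[i] * k[i]
                    - h[i] * mean_bkh + (L / n - h[i]) * omega)
    return h, normal, H, k, beta, alpha
```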

3.1 First-order explicit forward Euler scheme

We use the following formula to get the vertices for the (m+1)-th time step:

r_i^{m+1}=r_i^m+\frac{2\Delta t}{h_{i+1}^m+h_i^m}\left(\epsilon\left(\frac{r_{i+1}^m-r_i^m}{h_{i+1}^m}-\frac{r_i^m-r_{i-1}^m}{h_i^m}\right)+\alpha_i^m\frac{r_{i+1}^m-r_{i-1}^m}{2}\right)
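One explicit step can be sketched as follows (NumPy assumed, function name hypothetical; h and alpha are precomputed from the current curve, and, as in the formula above, the driving force does not appear as a separate term):

```python
import numpy as np

def explicit_step(r, h, alpha, dt, epsilon):
    """One forward Euler update of the vertices, following the formula above."""
    rp, rm = np.roll(r, -1, axis=0), np.roll(r, 1, axis=0)  # r_{i+1}, r_{i-1}
    hp = np.roll(h, -1)                                     # h_{i+1}
    diffusion = epsilon * ((rp - r) / hp[:, None] - (r - rm) / h[:, None])
    advection = alpha[:, None] * (rp - rm) / 2.0
    return r + 2.0 * dt / (hp + h)[:, None] * (diffusion + advection)
```

For a unit circle this shrinks the radius by roughly epsilon times dt per step, as expected for curve shortening.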

3.2 First-order semi-implicit backward Euler scheme

This is an implicit algorithm: the code solves the following equation for all i.

\left(\frac{\alpha_i^m}{2}-\frac{\epsilon}{h_i^m}\right) r_{i-1}^{m+1}+\left(\frac{h_{i+1}^m+h_i^m}{2\Delta t}+\frac{\epsilon}{h_{i+1}^m}+\frac{\epsilon}{h_i^m}\right)r_i^{m+1}+\left(-\frac{\alpha_i^m}{2}-\frac{\epsilon}{h_{i+1}^m}\right)r_{i+1}^{m+1}=\frac{h_{i+1}^m+h_i^m}{2\Delta t}r_i^m

This is a linear system Ax=b. The function getmatrix produces the matrix A and the function getb the vector b. Solving this system gives the vertices for the next time step. Note that there is one linear system for the x values and one for the y values; both are implemented in a single equation in the code. The paper gives a bound on the step size: as long as the step size is smaller than this bound the algorithm is guaranteed to work, though it may also work for larger values. We therefore take 0.9 times the bound as step size in this case. The bound is:

\frac{1}{2}\frac{h_{i+1}^m+h_i^m}{|\frac{\epsilon}{h_i^m}-\frac{\alpha_i^m}{2}|+|\frac{\epsilon}{h_{i+1}^m}+\frac{\alpha_i^m}{2}|-(\frac{\epsilon}{h_i^m}+\frac{\epsilon}{h_{i+1}^m})}

Of course the bound differs for different i, so the one we are interested in is the minimum over all i.
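As a sketch, getmatrix and getb could assemble the system along these lines (combined here into one hypothetical function, using a dense matrix for clarity; the bound is the minimum just described):

```python
import numpy as np

def semi_implicit_step(r, h, alpha, dt, epsilon):
    """Assemble and solve the cyclic tridiagonal system of 3.2.

    One system per coordinate; np.linalg.solve handles both columns of b at once."""
    n = len(r)
    hp = np.roll(h, -1)                                  # h_{i+1}
    A = np.diag((hp + h) / (2 * dt) + epsilon / hp + epsilon / h)
    idx = np.arange(n)
    A[idx, (idx - 1) % n] += alpha / 2 - epsilon / h     # coefficient of r_{i-1}
    A[idx, (idx + 1) % n] += -alpha / 2 - epsilon / hp   # coefficient of r_{i+1}
    b = (hp + h)[:, None] / (2 * dt) * r                 # right-hand side
    return np.linalg.solve(A, b)

def dt_bound(h, alpha, epsilon):
    """Minimum over i of the step-size bound.

    The denominator vanishes (and the bound is infinite) wherever the
    tangential velocity is small compared to epsilon / h."""
    hp = np.roll(h, -1)
    num = 0.5 * (hp + h)
    den = (np.abs(epsilon / h - alpha / 2) + np.abs(epsilon / hp + alpha / 2)
           - (epsilon / h + epsilon / hp))
    with np.errstate(divide="ignore"):
        return np.min(num / den)
```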

3.3 First-order fully implicit backward Euler scheme

This algorithm is pretty similar to the previous one. It solves the same kind of linear system, but the difference is that not every value is accepted immediately:

\left(\frac{\alpha_i^{m(l)}}{2}-\frac{\epsilon}{h_i^{m(l)}}\right)r_{i-1}^{m(l+1)}+\left(\frac{h_{i+1}^{m(l)}+h_i^{m(l)}}{2\Delta t}+\frac{\epsilon}{h_{i+1}^{m(l)}}+\frac{\epsilon}{h_i^{m(l)}}\right)r_i^{m(l+1)}+\left(-\frac{\alpha_i^{m(l)}}{2}-\frac{\epsilon}{h_{i+1}^{m(l)}}\right)r_{i+1}^{m(l+1)}=\frac{h_{i+1}^{m(l)}+h_i^{m(l)}}{2\Delta t}r_i^{m(l)}

for l=0,1,2,\ldots. After each iteration the algorithm computes the length L, and if |L^{m(l)}-L^{m(l+1)}|\leq tol the solution r^{m(l+1)} is taken as the value for the next time step, r^{m+1}. This algorithm converges a bit faster than the previous one, but this strongly depends on the chosen value of tol: for a large tol there is only little difference to algorithm 3.2, while for a small tol the algorithm can perform some kind of jumps where it suddenly converges very fast. The step-size bound from the previous algorithm still applies; here the code checks whether the constant step size exceeds the bound and, if so, takes 0.9 times the bound as step size instead of the constant.
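The iteration can be sketched as follows (NumPy assumed, function name hypothetical; for brevity the tangential velocity alpha is kept fixed here, while the real code recomputes it from each iterate):

```python
import numpy as np

def length(r):
    """Total length of the closed polygon r."""
    return np.linalg.norm(r - np.roll(r, 1, axis=0), axis=1).sum()

def fully_implicit_step(r_m, dt, epsilon, tol, alpha=None, max_iter=50):
    """Fixed-point iteration of 3.3, stopping on the length criterion."""
    n = len(r_m)
    if alpha is None:
        alpha = np.zeros(n)
    r_l = r_m.copy()                   # iterate r^{m(l)}, starting from r^m
    L_old = length(r_l)
    for _ in range(max_iter):
        h = np.linalg.norm(r_l - np.roll(r_l, 1, axis=0), axis=1)
        hp = np.roll(h, -1)
        # same cyclic tridiagonal system as in 3.2, with all coefficients
        # taken from the current iterate r^{m(l)}
        A = np.diag((hp + h) / (2 * dt) + epsilon / hp + epsilon / h)
        idx = np.arange(n)
        A[idx, (idx - 1) % n] += alpha / 2 - epsilon / h
        A[idx, (idx + 1) % n] += -alpha / 2 - epsilon / hp
        b = (hp + h)[:, None] / (2 * dt) * r_l
        r_l = np.linalg.solve(A, b)    # r^{m(l+1)}
        L_new = length(r_l)
        if abs(L_new - L_old) <= tol:  # accept r^{m(l+1)} as r^{m+1}
            break
        L_old = L_new
    return r_l
```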

3.4 The higher order scheme

This scheme follows nearly the same idea as the algorithm in 3.3, but it uses a different matrix A and vector b for the linear system. The system is produced by the following equation:

\left(\frac{h_{i+1}^{m(l)}+h_i^{m(l)}+h_{i+1}^m+h_i^m}{2\Delta t}+\frac{\epsilon}{h_{i+1}^{m(l)}}+\frac{\epsilon}{h_i^{m(l)}}\right)r_i^{m(l+1)}+\left(\frac{\alpha_i^{m(l)}}{2}-\frac{\epsilon}{h_i^{m(l)}}\right)r_{i-1}^{m(l+1)}-\left(\frac{\alpha_i^{m(l)}}{2}+\frac{\epsilon}{h_{i+1}^{m(l)}}\right)r_{i+1}^{m(l+1)}=\\
\left(\frac{h_{i+1}^{m(l)}+h_i^{m(l)}+h_{i+1}^m+h_i^m}{2\Delta t}-\frac{\epsilon}{h_{i+1}^m}-\frac{\epsilon}{h_i^m}\right)r_i^m-\left(\frac{\alpha_i^m}{2}-\frac{\epsilon}{h_i^m}\right)r_{i-1}^m+\left(\frac{\alpha_i^m}{2}+\frac{\epsilon}{h_{i+1}^m}\right)r_{i+1}^m

Note that the h without the (l) is the h we get from our old vertices r^m; it is called hold in the code. This h is also important for computing the bound on the step size:

\frac{1}{2}\frac{h_{i+1}^m+h_i^m+h_{i+1}^{m(l)}+h_i^{m(l)}}{|\frac{\epsilon}{h_i^{m(l)}}-\frac{\alpha_i^{m(l)}}{2}|+|\frac{\epsilon}{h_{i+1}^{m(l)}}+\frac{\alpha_i^{m(l)}}{2}|-(\frac{\epsilon}{h_i^{m(l)}}+\frac{\epsilon}{h_{i+1}^{m(l)}})}

Again this algorithm can perform some kind of jumps for small values of tol, and it also needs a lot of time for small values of tol.
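The assembly of this system can be sketched as follows (hypothetical function; h_old corresponds to hold, computed from r^m, while the matrix uses quantities from the current iterate r^{m(l)}):

```python
import numpy as np

def higher_order_system(r_m, r_l, alpha_m, alpha_l, dt, epsilon):
    """Assemble A and b of the higher order scheme."""
    n = len(r_m)
    def edge_len(r):
        return np.linalg.norm(r - np.roll(r, 1, axis=0), axis=1)
    h_old, h_new = edge_len(r_m), edge_len(r_l)        # h^m and h^{m(l)}
    hp_old, hp_new = np.roll(h_old, -1), np.roll(h_new, -1)
    tsum = (hp_new + h_new + hp_old + h_old) / (2 * dt)
    # left-hand side: cyclic tridiagonal matrix built from r^{m(l)}
    A = np.diag(tsum + epsilon / hp_new + epsilon / h_new)
    idx = np.arange(n)
    A[idx, (idx - 1) % n] += alpha_l / 2 - epsilon / h_new
    A[idx, (idx + 1) % n] += -(alpha_l / 2 + epsilon / hp_new)
    # right-hand side: the analogous stencil applied explicitly to r^m
    b = ((tsum - epsilon / hp_old - epsilon / h_old)[:, None] * r_m
         - (alpha_m / 2 - epsilon / h_old)[:, None] * np.roll(r_m, 1, axis=0)
         + (alpha_m / 2 + epsilon / hp_old)[:, None] * np.roll(r_m, -1, axis=0))
    return A, b
```

One iterate is then np.linalg.solve(A, b), embedded in the same length-based stopping loop as in 3.3.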

Parameters

All of these functions take the same first five parameters, except 3.2, which does not take the step size as a parameter. The last two algorithms take one extra parameter.

  • r are the initial vertices
  • epsilon is the value plugged in for \epsilon
  • sigmat is the stepsize
  • omega is the value plugged in for \omega
  • f is the value plugged in for f
  • tol is the bound that the difference in length between two iterations must undercut in order to accept a new time step

Disadvantages and Bugs

  • None of the algorithms work for sharp angles, so the vertices have to form a sufficiently smooth body
  • These algorithms can be rather slow, because the bound on the step size is sometimes very small
  • The results depend strongly on the chosen parameter values
  • Sometimes small graphical glitches occur

Todo

Make the higher order schemes work for sharp angles.

Todo

Resolve the parameter dependence for the higher order schemes.

Todo

Resolve the graphical jumping of the curve when iterating with the higher order schemes.

Todo

Add this information to the docstrings as well.

[BaMaMiKa09] Balažovjech, Martin; Mikula, Karol (2009), “A higher order scheme for the curve shortening flow of plane curves”, Algoritmy 2009, pp. 165–175