Calibration computations

Concepts and common values

Regression functions

Results are obtained by computing a calibration function, denoted \(y\), from the analysis points \((x_i,y_i), i=1..N\) (coming from the reference measurements) obtained for a given substance. The Calibration mode (see Substance Table and Height/Area views) can be one of the following:

  • Linear-1 : linear function through the origin (can be called one-reference calibration when there is only one group of replicas) corresponding to the expression \(y=ax\)

  • Linear-2 : usual linear function corresponding to the expression \(y=a_1x+a_0\)

  • Polynomial : second-degree polynomial corresponding to the expression \(y=a_2x^2+a_1x+a_0\)

  • Mime-1 : Michaelis–Menten function going through the origin, corresponding to the expression \(y=\frac{a_1x}{a_2+x}\)

  • Mime-2 : Michaelis–Menten function corresponding to the expression \(y=\frac{a_1x}{a_2+x}+a_0\)

Regression range

The calibration function \(y\) is valid only on a regression range. With:

  • \({x_i}_{min}\) and \({x_i}_{max}\) the minimum and maximum quantities applied to the reference points

  • \(d\) the Range deviation defined in the Substance Table (expressed in \(\%\))

The regression range is:

\[\left[max\{0,{x_i}_{min}-\frac{d({x_i}_{max}-{x_i}_{min})}{100}\},{x_i}_{max}+\frac{d({x_i}_{max}-{x_i}_{min})}{100}\right]\]
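As an illustration, here is a minimal Python sketch of this range computation (the function and argument names are illustrative, not part of visionCATS):

```python
def regression_range(x_ref, d):
    """Validity range from the reference quantities x_ref and the range deviation d (in percent)."""
    x_min, x_max = min(x_ref), max(x_ref)
    margin = d * (x_max - x_min) / 100.0
    return max(0.0, x_min - margin), x_max + margin

# Example: references from 1 to 5 with d = 10 % give the interval [0.6, 5.4].
```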

These functions are (or in Polynomial mode, must be) strictly monotonically increasing on the regression range. Therefore, with \(y=f(x)\), the regression \(x=f^{-1}(y)\) gives 0 or 1 result, depending on whether \(x\) belongs to the validity area.

Additional assumptions are made:

  • By design, the regression functions are valid only if they are strictly increasing in the validity area (the first derivative must be positive).

  • The analysis points always have strictly positive coordinates.

Computation of the analysis points

An analysis point corresponds to a peak (see Peaks table) assigned to a given substance (see Substance assignment). The computation can be done using one of the following modes:

  • Height of the peak: profile value at the \(R_F\) position determined by the peak detection algorithm

  • Area of the peak on the profile values \((x_p,y_p), p=1..N_p\) where \(x_p \in [P_s,P_e]\), with \(P_s\) and \(P_e\) the start and end \(R_F\) values of the peak determined by the peak detection algorithm. The area value is computed by performing a discrete integration (sketched in Python after this list):

    \[\frac{1}{2}\sum_{p=1}^{N_p-1}\left((x_{p+1}-x_p)(y_{p+1}+y_p)\right)\]
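A minimal sketch of this discrete integration, assuming the profile points of the peak are provided as arrays (names are illustrative); this is the standard trapezoidal rule:

```python
import numpy as np

def peak_area(x_p, y_p):
    """Trapezoidal integration of the profile between the start and end of the peak."""
    x_p = np.asarray(x_p, dtype=float)
    y_p = np.asarray(y_p, dtype=float)
    return 0.5 * np.sum((x_p[1:] - x_p[:-1]) * (y_p[1:] + y_p[:-1]))
```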

Coefficient of variation of the calibration function

Formerly labeled the standard deviation, the coefficient of variation of the calibration function expresses the difference between the analysis points and the computed regression function. The result is displayed in percent. The computation considers only the y-axis because the x-axis is error-free by design. With:

  • \((x_i,y_i), i=1..N\) the analysis points (coming from the reference measurements)

  • \(y\) the regression function

  • \(\overline{y}\) the mean of \(y_i\) values

\[CV=\frac{\sqrt{\frac{\sum_{i=1}^{N}(y_i-y(x_i))^2}{N}}}{\overline{y}}\]

Correlation coefficient of the calibration function

The correlation coefficient (the square root of the coefficient of determination) is computed using the general formula, with:

  • \((x_i,y_i), i=1..N\) the analysis points (coming from the reference measurements)

  • \(y\) the regression function

  • \(\overline{y}\) the mean of \(y_i\) values

\[R=\sqrt{1-\frac{\sum_{i=1}^{N}(y_i-y(x_i))^2}{\sum_{i=1}^{N}(y_i-\overline{y})^2}}\]

Coefficient of variation of the sample results

The coefficient of variation is computed by dividing the standard deviation (applied to a sample population, sometimes called the corrected sample standard deviation) by the mean value, with:

  • \((x_i,y_i), i=1..N\) the analysis points (coming from the reference measurements)

  • \(\overline{y}\) the mean of \(y_i\) values

\[CV=\frac{\sqrt{\frac{\sum_{i=1}^{N}(y_i-\overline{y})^2}{N-1}}}{\overline{y}}\]
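A Python sketch of the three statistics above, assuming y_ref holds the measured \(y_i\) values and y_fit the calibration function evaluated at the corresponding \(x_i\) (all names are illustrative, not part of visionCATS):

```python
import numpy as np

def calibration_stats(y_ref, y_fit):
    """CV (in percent) and correlation coefficient of the calibration function."""
    y_ref, y_fit = np.asarray(y_ref, float), np.asarray(y_fit, float)
    n, y_mean = y_ref.size, y_ref.mean()
    ss_res = np.sum((y_ref - y_fit) ** 2)
    cv = 100.0 * np.sqrt(ss_res / n) / y_mean
    r = np.sqrt(1.0 - ss_res / np.sum((y_ref - y_mean) ** 2))
    return cv, r

def sample_cv(y_values):
    """CV of the sample results: corrected sample standard deviation over the mean, in percent."""
    y_values = np.asarray(y_values, float)
    return 100.0 * y_values.std(ddof=1) / y_values.mean()
```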

Computation of the calibration functions

Linear-1

Calibration function

Recall of the general formula:

\[y=ax \tag{1}\label{eq1}\]

To compute \(a\), we consider that \(y\) goes through the center of gravity of the analysis points, denoted \(\left( \overline{x} , \overline{y} \right)\), so that:

\[\overline{y}=a\overline{x}\]
\[a=\frac{\overline{y}}{\overline{x}}\]

Since the analysis points always have strictly positive coordinates, we can state that \(a>0\). The calibration function is therefore strictly increasing and strictly positive when \(x>0\), and is always valid.

Regression

The inversion of \(\eqref{eq1}\) is straightforward:

\[x=\frac{y}{a}\]
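A minimal sketch of the Linear-1 calibration and its inversion (names illustrative):

```python
import numpy as np

def fit_linear1(x, y):
    """Slope a of y = a*x, passing through the centre of gravity of the analysis points."""
    return np.mean(y) / np.mean(x)

def invert_linear1(a, y):
    """Quantity corresponding to a measured height/area y."""
    return y / a
```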

Linear-2

Calibration function

With the general formula:

\[y=a_1x+a_0 \tag{2}\label{eq2}\]

The linear regression is performed by using the well-known method of least squares. With:

  • \((x_i,y_i), i=1..N\) the analysis points (coming from the reference measurements)

  • \(\hat{y}\) the best linear function for the given analysis points (the function sought)

The square differences are given by the following formula:

\[S_y=\sum_{i=1}^{N}(y_i-\hat{y}_i)^2 \tag{4}\label{eq4}\]
\[S_y=\sum_{i=1}^{N}(y_i-a_0-a_1x_i)^2\]
\[S_y=\sum_{i=1}^{N}(y_i^2+a_0^2+a_1^2x_i^2-2y_ia_0-2y_ia_1x_i+2a_0a_1x_i)\]

\(S_y\) is a function of \(a_0\) and \(a_1\) and is a sum of squares (\(\eqref{eq4}\)), whose minimum is reached when the partial derivatives are 0:

\[\begin{split}\begin{cases} \frac{\partial{S_y}}{\partial{a_0}}=0 \\ \frac{\partial{S_y}}{\partial{a_1}}=0 \end{cases}\end{split}\]
\[\begin{split}\begin{cases} Na_0+a_1\sum_{i=1}^{N}x_i=\sum_{i=1}^{N}y_i \\ a_0\sum_{i=1}^{N}x_i+a_1\sum_{i=1}^{N}x_i^2=\sum_{i=1}^{N}x_iy_i \end{cases}\end{split}\]

Solving these equations (by isolation and substitution) leads to:

\[a_1=\frac{N\sum_{i=1}^{N}x_iy_i-\sum_{i=1}^{N}x_i\sum_{i=1}^{N}y_i}{N\sum_{i=1}^{N}x_i^2-(\sum_{i=1}^{N}x_i)^2}\]
\[a_0=\frac{\sum_{i=1}^{N}x_i^2\sum_{i=1}^{N}y_i-\sum_{i=1}^{N}x_i\sum_{i=1}^{N}x_iy_i}{N\sum_{i=1}^{N}x_i^2-(\sum_{i=1}^{N}x_i)^2}\]

Note

The calibration function is valid only if \(a_1>0\)

Regression

The inversion of \(\eqref{eq2}\) gives:

\[x=\frac{y-a_0}{a_1}\]
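A sketch of the Linear-2 fit, using the closed-form coefficients above, together with its inversion (names illustrative):

```python
import numpy as np

def fit_linear2(x, y):
    """Least-squares coefficients (a1, a0) of y = a1*x + a0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    denom = n * np.sum(x * x) - np.sum(x) ** 2
    a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / denom
    a0 = (np.sum(x * x) * np.sum(y) - np.sum(x) * np.sum(x * y)) / denom
    return a1, a0

def invert_linear2(a1, a0, y):
    """Quantity corresponding to a measured height/area y."""
    return (y - a0) / a1
```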

Polynomial

Calibration function

The process is similar to Linear-2, but extended to a second-degree polynomial:

\[y=a_2x^2+a_1x+a_0 \tag{3}\label{eq3}\]

The square differences function is defined the same way, which gives:

\[S_y=\sum_{i=1}^{N}(y_i-a_0-a_1x_i-a_2x_i^2)^2\]
\[\begin{split}S_y=\sum_{i=1}^{N}(y_i^2+a_0^2+a_1^2x_i^2+a_2^2x_i^4-2y_ia_0-2y_ia_1x_i-2y_ia_2x_i^2 \\ +2a_0a_1x_i+2a_0a_2x_i^2+2a_1a_2x_i^3)\end{split}\]

Again, the partial derivatives are 0 at the minimum:

\[\begin{split}\begin{cases} \frac{\partial{S_y}}{\partial{a_0}}=0 \\ \frac{\partial{S_y}}{\partial{a_1}}=0 \\ \frac{\partial{S_y}}{\partial{a_2}}=0 \end{cases}\end{split}\]
\[\begin{split}\begin{cases} \sum_{i=1}^{N}(2a_0-2y_i+2a_1x_i+2a_2x_i^2)=0 \\ \sum_{i=1}^{N}(2a_1x_i^2-2y_ix_i+2a_0x_i+2a_2x_i^3)=0 \\ \sum_{i=1}^{N}(2a_2x_i^4-2y_ix_i^2+2a_0x_i^2+2a_1x_i^3)=0 \end{cases}\end{split}\]
\[\begin{split}\begin{cases} a_0N+a_1\sum_{i=1}^{N}x_i+a_2\sum_{i=1}^{N}x_i^2=\sum_{i=1}^{N}y_i \\ a_0\sum_{i=1}^{N}x_i+a_1\sum_{i=1}^{N}x_i^2+a_2\sum_{i=1}^{N}x_i^3=\sum_{i=1}^{N}x_iy_i \\ a_0\sum_{i=1}^{N}x_i^2+a_1\sum_{i=1}^{N}x_i^3+a_2\sum_{i=1}^{N}x_i^4=\sum_{i=1}^{N}x_i^2y_i \end{cases}\end{split}\]

Or, in matrix form:

\[\begin{split}\begin{pmatrix} N & \sum_{i=1}^{N}x_i & \sum_{i=1}^{N}x_i^2 \\ \sum_{i=1}^{N}x_i & \sum_{i=1}^{N}x_i^2 & \sum_{i=1}^{N}x_i^3 \\ \sum_{i=1}^{N}x_i^2 & \sum_{i=1}^{N}x_i^3 & \sum_{i=1}^{N}x_i^4 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix}= \begin{pmatrix} \sum_{i=1}^{N}y_i \\ \sum_{i=1}^{N}x_iy_i \\ \sum_{i=1}^{N}x_i^2y_i \end{pmatrix}\end{split}\]

Then in condensed form:

\[Xa=Y\]
\[a=X^{-1}Y\]

With \(X^{-1}\) the inverse of the matrix \(X\).
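A sketch of this computation; solving the linear system directly is numerically preferable to forming \(X^{-1}\) explicitly, but is mathematically equivalent (names illustrative):

```python
import numpy as np

def fit_polynomial(x, y):
    """Coefficients (a0, a1, a2) of y = a2*x^2 + a1*x + a0 from the normal equations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s = [np.sum(x ** k) for k in range(5)]   # sums of x^0 .. x^4 (s[0] = N)
    X = np.array([[s[0], s[1], s[2]],
                  [s[1], s[2], s[3]],
                  [s[2], s[3], s[4]]])
    Y = np.array([np.sum(y), np.sum(x * y), np.sum(x * x * y)])
    return np.linalg.solve(X, Y)             # a0, a1, a2
```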

Regression

By design, the second-degree polynomial function is valid only when \(a_2<0\) (decreasing acceleration assumption, or \(y''(x)<0\)), and only on the increasing side of the curve, where \(y'(x)>0\), that is, when \(x<-\frac{a_1}{2a_2}\).

The inversion of \(\eqref{eq3}\) is performed by resolving the equation:

\[(a_0-y)+a_1x+a_2x^2=0\]

Solving this equation generally gives 2 solutions, but at most 1 of them can lie within the validity range.
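A sketch of this root selection, assuming \(a_2<0\); the additional check against the regression range is omitted for brevity, and all names are illustrative:

```python
import math

def invert_polynomial(a0, a1, a2, y):
    """Solve (a0 - y) + a1*x + a2*x^2 = 0 and keep the root on the
    increasing side of the parabola (x < -a1 / (2*a2) when a2 < 0)."""
    disc = a1 ** 2 - 4.0 * a2 * (a0 - y)
    if disc < 0:
        return None  # no real solution
    roots = ((-a1 + math.sqrt(disc)) / (2.0 * a2),
             (-a1 - math.sqrt(disc)) / (2.0 * a2))
    valid = [r for r in roots if r < -a1 / (2.0 * a2)]
    return valid[0] if valid else None
```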

Mime-1 and Mime-2

Principles

The Mime regression algorithms are based on saturation functions characterized by the formula:

\[y=\frac{a_1x}{a_2+x}+a_0 \tag{5}\label{eq5}\]

The algorithm is called Mime-1 when the assumption \(a_0=0\) (the function goes through the origin) is made; otherwise, it is called Mime-2. For both modes, it is expected that \(a_2>0\), so that the denominator is always positive when \(x>0\); otherwise the algorithm diverges and the regression is considered unsuccessful.

Unlike polynomial functions, the best saturation function for a given set of points cannot be directly estimated using standard algebra. Consequently, the estimation is obtained by computing a starting set of values \(a_0^{'}\), \(a_1^{'}\) and \(a_2^{'}\) and then refining them using a Taylor series, starting from the general formula:

\[f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(\delta)}{n!}(x-\delta)^n\]

By applying the expansion for \(\hat{y}\):

\[\hat{y}(x)=\hat{y}(\delta)+\frac{\hat{y}'(\delta)}{1!}(x-\delta)+\frac{\hat{y}''(\delta)}{2!}(x-\delta)^2+\cdots\]

With \(a=x-\delta\):

\[\hat{y}(x)=\hat{y}(x-a)+\frac{\hat{y}'(x-a)}{1!}a+\frac{\hat{y}''(x-a)}{2!}a^2+\cdots\]

By performing the expansion for \(\hat{y}(x+a)\):

\[\hat{y}(x+a)=\hat{y}(x)+\frac{a}{1!}\hat{y}'(x)+\frac{a^2}{2!}\hat{y}''(x)+\cdots\]

In our case, our \(\hat{y}\) saturation function will be estimated by:

  • Using \(a_0\), \(a_1\) and \(a_2\) as variables to expand.

  • Using the source points \((x_i,y_i), i=1..N\) to minimize the square differences. Therefore, the \(x\) variable remains in the following equation.

  • Restricting the expansion to the first order.

\[\begin{split}\begin{equation}\tag{7}\label{eq7}\begin{split} \hat{y}(a_0+\delta_{a_0},a_1+\delta_{a_1},a_2+\delta_{a_2},x) = \hat{y}(a_0,a_1,a_2,x) \\ +\delta_{a_0}\hat{y}'(a_0)+\delta_{a_1}\hat{y}'(a_1)+\delta_{a_2}\hat{y}'(a_2)+\cdots \end{split}\end{equation}\end{split}\]

The minimization of square differences applied to this equation gives values for \(\delta_{a_0}\), \(\delta_{a_1}\) and \(\delta_{a_2}\). These values are used to refine the values of \(a_0\), \(a_1\) and \(a_2\), and the process can be applied until a sufficient degree of precision is reached. The following sections describe how the starting values are computed, and then how the iterative process is conducted.

Starting point

Mime-1

The Mime-1 function:

\[y=\frac{a_1x}{a_2+x}\]

can be inverted:

\[y^{-1}=\left(\frac{a_1x}{a_2+x}\right)^{-1}=\frac{a_2+x}{a_1x}=\frac{a_2}{a_1}x^{-1}+\frac{1}{a_1}\]

We can simplify the computation by setting \(b_0\) and \(b_1\) as follows:

\[y^{-1}=b_1x^{-1}+b_0\]

With:

\[\begin{split}\begin{cases} b_1=\frac{a_2}{a_1} \\ b_0=\frac{1}{a_1} \\ a_1=\frac{1}{b_0} \\ a_2=b_1a_1 \end{cases}\end{split}\]

We can now compute \(b_0\) and \(b_1\) by reusing the linear regression formula \(\eqref{eq4}\):

\[S_y=\sum_{i=1}^{N}(y_i^{-1}-b_1x_i^{-1}-b_0)^2\]
\[S_y=\sum_{i=1}^{N}(y_i^{-2}+b_1^2x_i^{-2}+b_0^2-2b_1x_i^{-1}y_i^{-1}-2b_0y_i^{-1}+2b_0b_1x_i^{-1})\]

Setting the partial derivatives with respect to \(b_0\) and \(b_1\) to zero, as before:

\[\begin{split}\begin{cases} \sum_{i=1}^{N}(2b_0+2b_1x_i^{-1}-2y_i^{-1})=0 \\ \sum_{i=1}^{N}(2b_0x_i^{-1}+2b_1x_i^{-2}-2x_i^{-1}y_i^{-1})=0 \end{cases}\end{split}\]
\[\begin{split}\begin{cases} b_0N+b_1\sum_{i=1}^{N}x_i^{-1}=\sum_{i=1}^{N}y_i^{-1} \\ b_0\sum_{i=1}^{N}x_i^{-1}+b_1\sum_{i=1}^{N}x_i^{-2}=\sum_{i=1}^{N}x_i^{-1}y_i^{-1} \end{cases}\end{split}\]

Or, in matrix form:

\[\begin{split}\begin{pmatrix} N & \sum_{i=1}^{N}x_i^{-1} \\ \sum_{i=1}^{N}x_i^{-1} & \sum_{i=1}^{N}x_i^{-2} \end{pmatrix} \begin{pmatrix} b_0 \\ b_1 \end{pmatrix}= \begin{pmatrix} \sum_{i=1}^{N}y_i^{-1} \\ \sum_{i=1}^{N}x_i^{-1}y_i^{-1} \end{pmatrix}\end{split}\]

Then in condensed form:

\[Xb=Y\]
\[b=X^{-1}Y\]

With \(X^{-1}\) the inverse of the matrix \(X\). From the values of \(b_0\) and \(b_1\), the starting values of \(a_1\) and \(a_2\) can then be obtained.
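A sketch of this starting-point computation via the double-reciprocal linearisation (names illustrative):

```python
import numpy as np

def mime1_start(x, y):
    """Starting values (a1, a2) from the linear fit 1/y = b1*(1/x) + b0."""
    u, v = 1.0 / np.asarray(x, float), 1.0 / np.asarray(y, float)
    n = u.size
    X = np.array([[n, np.sum(u)],
                  [np.sum(u), np.sum(u * u)]])
    Y = np.array([np.sum(v), np.sum(u * v)])
    b0, b1 = np.linalg.solve(X, Y)
    a1 = 1.0 / b0
    a2 = b1 * a1
    return a1, a2
```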

Mime-2

The starting value for Mime-2 is itself computed by an iterative process, because a direct evaluation by minimizing the square differences and using partial derivatives is not suitable. Each iteration fixes 2 values for \(a_2\), named \(a_{2d}\) and \(a_{2u}\), that evolve during the process within the interval \([a_{2_{down}},a_{2_{up}}]\). It makes sense to define the following limits:

\[a_{2_{down}}>0\]
\[a_{2_{up}}<10y_{i_{max}}\]

And to set the initial values of \(a_{2_{down}}\) and \(a_{2_{up}}\) to:

\[a_{2_{down}}=\epsilon\]
\[a_{2_{up}}=10y_{i_{max}}-\epsilon\]

With \(\epsilon\) a value that tends to 0. In a computer program, this value is set to the smallest positive value available. Each iteration also defines:

\[d_{a_2}=\left\lvert \frac{1}{3}(a_{2_{up}}-a_{2_{down}})\right\rvert\]

And:

\[\begin{split}\begin{cases} a_{2d}=a_{2_{down}}+d_{a_2} \\ a_{2u}=a_{2_{up}}-d_{a_2} \end{cases}\end{split}\]

The goal is now to compute \(S(a_0,a_1,a_{2d})\) and \(S(a_0,a_1,a_{2u})\) by applying the square differences equation \(\eqref{eq4}\) to the Mime-2 function \(\eqref{eq5}\):

\[S(a_0,a_1,a_2)=\sum_{i=1}^{N}\left(y_i-a_0-\frac{a_1x_i}{a_2+x_i}\right)^2 \tag{6}\label{eq6}\]
\[S(a_0,a_1,a_2)=\sum_{i=1}^{N}\left(y_i^2+a_0^2+\left(\frac{a_1x_i}{a_2+x_i}\right)^2-2a_0y_i-2\frac{a_1x_iy_i}{a_2+x_i}+2\frac{a_0a_1x_i}{a_2+x_i}\right)\]

Setting the partial derivatives with respect to \(a_0\) and \(a_1\) only (with \(a_2\) fixed) to zero:

\[\begin{split}\begin{cases} \sum_{i=1}^{N}\left(2a_0+2a_1\frac{x_i}{a_2+x_i}-2y_i\right)=0 \\ \sum_{i=1}^{N}\left(2a_0\frac{x_i}{a_2+x_i}+2a_1\left(\frac{x_i}{a_2+x_i}\right)^2-2\frac{x_iy_i}{a_2+x_i}\right)=0 \end{cases}\end{split}\]
\[\begin{split}\begin{cases} a_0N+a_1\sum_{i=1}^{N}\frac{x_i}{a_2+x_i}=\sum_{i=1}^{N}y_i \\ a_0\sum_{i=1}^{N}\frac{x_i}{a_2+x_i}+a_1\sum_{i=1}^{N}\left(\frac{x_i}{a_2+x_i}\right)^2=\sum_{i=1}^{N}\frac{x_iy_i}{a_2+x_i} \end{cases}\end{split}\]

Or, in matrix form:

\[\begin{split}\begin{pmatrix} N & \sum_{i=1}^{N}\frac{x_i}{a_2+x_i} \\ \sum_{i=1}^{N}\frac{x_i}{a_2+x_i} & \sum_{i=1}^{N}\left(\frac{x_i}{a_2+x_i}\right)^2 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \end{pmatrix}= \begin{pmatrix} \sum_{i=1}^{N}y_i \\ \sum_{i=1}^{N}\frac{x_iy_i}{a_2+x_i} \end{pmatrix}\end{split}\]

Then in condensed form:

\[Xb=Y\]
\[b=X^{-1}Y\]

With \(X^{-1}\) the inverse of the matrix \(X\). By replacing \(a_2\) with respectively \(a_{2d}\) and \(a_{2u}\) in the last equation, we get, in the two cases, the corresponding values for \(a_0\) and \(a_1\). By naming those values \(a_{0d}\) and \(a_{1d}\) for \(a_{2d}\), and respectively \(a_{0u}\) and \(a_{1u}\) for \(a_{2u}\), we can use \(\eqref{eq6}\) to get \(S(a_{0d},a_{1d},a_{2d})\) and \(S(a_{0u},a_{1u},a_{2u})\). These values measure the deviation between the corresponding Mime-2 function and the analysis points. Therefore, when \(S(a_{0u},a_{1u},a_{2u})>S(a_{0d},a_{1d},a_{2d})\), we reduce \(a_{2_{up}}\) by \(d_{a_2}\); otherwise we increase \(a_{2_{down}}\) by \(d_{a_2}\). Iterations are then repeated until \(d_{a_2}<0.001\). Note that the process does not always converge; therefore it should be stopped after a fixed number of iterations or when the value of \(d_{a_2}\) exceeds what the computer program can accept.

Note

In visionCATS, the process is stopped after 25 iterations.
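A sketch of this starting-point search. The limits and the 25-iteration cap follow the text; taking the midpoint of the final interval as \(a_2\) is an assumption of this sketch, and all names are illustrative:

```python
import numpy as np

def _fit_a0_a1(x, y, a2):
    """(a0, a1) minimizing the square differences for a fixed a2, plus S(a0, a1, a2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    t = x / (a2 + x)
    X = np.array([[x.size, np.sum(t)], [np.sum(t), np.sum(t * t)]])
    Y = np.array([np.sum(y), np.sum(t * y)])
    a0, a1 = np.linalg.solve(X, Y)
    return a0, a1, np.sum((y - a0 - a1 * t) ** 2)

def mime2_start(x, y, max_iter=25):
    """Bracketing search on a2 between a2_down and a2_up."""
    eps = np.finfo(float).tiny               # smallest positive value available
    a2_down, a2_up = eps, 10.0 * float(np.max(y)) - eps
    for _ in range(max_iter):
        d = abs(a2_up - a2_down) / 3.0
        if d < 0.001:
            break
        a2d, a2u = a2_down + d, a2_up - d
        s_d = _fit_a0_a1(x, y, a2d)[2]
        s_u = _fit_a0_a1(x, y, a2u)[2]
        if s_u > s_d:
            a2_up -= d
        else:
            a2_down += d
    a2 = 0.5 * (a2_down + a2_up)             # assumption: midpoint of the final bracket
    a0, a1, _ = _fit_a0_a1(x, y, a2)
    return a0, a1, a2
```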

Calibration function

Mime-1

The way to get the starting values for \(a_1\) and \(a_2\) is described in section Starting point. The process is then iterative and consists of finding \(\delta_{a_1}\) and \(\delta_{a_2}\) in equation \(\eqref{eq7}\), written for Mime-1 and reduced to the first-order expansion:

\[\hat{y}(a_1+\delta_{a_1},a_2+\delta_{a_2},x) \approx \hat{y}(a_1,a_2,x)+\delta_{a_1}\hat{y}'(a_1)+\delta_{a_2}\hat{y}'(a_2)\]

By using the minimization of square differences \(\eqref{eq4}\) with the last equation:

\[S_y=\sum_{i=1}^{N}(y_i-\hat{y}_i)^2=\sum_{i=1}^{N}(\hat{y}_i-y_i)^2\]
\[=\sum_{i=1}^{N}\left(\frac{a_1x_i}{a_2+x_i}+\delta_{a_1}\frac{x_i}{a_2+x_i}-\delta_{a_2}\frac{a_1x_i}{(a_2+x_i)^2}-y_i\right)^2\]
\[\begin{split}=\sum_{i=1}^{N}(\left(\frac{a_1x_i}{a_2+x_i}\right)^2 + (\delta_{a_1})^2\left(\frac{x_i}{a_2+x_i}\right)^2 \\ + (\delta_{a_2})^2\left(\frac{a_1x_i}{(a_2+x_i)^2}\right)^2 + y_i^2 + 2\delta_{a_1}\frac{a_1x_i^2}{(a_2+x_i)^2} \\ - 2\delta_{a_2}\frac{a_1^2x_i^2}{(a_2+x_i)^3} - 2\frac{a_1x_iy_i}{a_2+x_i} - 2\delta_{a_1}\delta_{a_2}\frac{a_1x_i^2}{(a_2+x_i)^3} \\ - 2\delta_{a_1}\frac{x_iy_i}{a_2+x_i} + 2\delta_{a_2}\frac{a_1x_iy_i}{(a_2+x_i)^2})\end{split}\]

Applying partial differentiation with respect to \(\delta_{a_1}\) and \(\delta_{a_2}\):

\[\begin{split}\begin{cases} \sum_{i=1}^{N}\left(2\delta_{a_1}\left(\frac{x_i}{a_2+x_i}\right)^2-2\delta_{a_2}\frac{a_1x_i^2}{(a_2+x_i)^3}+2\frac{a_1x_i^2}{(a_2+x_i)^2}-2\frac{x_iy_i}{a_2+x_i}\right)=0 \\ \sum_{i=1}^{N}\left(-2\delta_{a_1}\frac{a_1x_i^2}{(a_2+x_i)^3}+2\delta_{a_2}\left(\frac{a_1x_i}{(a_2+x_i)^2}\right)^2-2\frac{a_1^2x_i^2}{(a_2+x_i)^3}+2\frac{a_1x_iy_i}{(a_2+x_i)^2}\right)=0 \end{cases}\end{split}\]
\[\begin{split}\begin{cases} \delta_{a_1}\sum_{i=1}^{N}\left(\frac{x_i}{a_2+x_i}\right)^2 + \delta_{a_2}\sum_{i=1}^{N}-\frac{a_1x_i^2}{(a_2+x_i)^3} = \sum_{i=1}^{N}\frac{x_i}{a_2+x_i}(y_i-\frac{a_1x_i}{a_2+x_i}) \\ \delta_{a_1}\sum_{i=1}^{N}-\frac{a_1x_i^2}{(a_2+x_i)^3} + \delta_{a_2}\sum_{i=1}^{N}\left(\frac{a_1x_i}{(a_2+x_i)^2}\right)^2 = \sum_{i=1}^{N}\frac{a_1x_i}{(a_2+x_i)^2}(\frac{a_1x_i}{a_2+x_i}-y_i) \end{cases}\end{split}\]

Or, in matrix form:

\[\begin{split}\begin{pmatrix} \sum_{i=1}^{N}\left(\frac{x_i}{a_2+x_i}\right)^2 & \sum_{i=1}^{N}-\frac{a_1x_i^2}{(a_2+x_i)^3} \\ \sum_{i=1}^{N}-\frac{a_1x_i^2}{(a_2+x_i)^3} & \sum_{i=1}^{N}\left(\frac{a_1x_i}{(a_2+x_i)^2}\right)^2 \end{pmatrix} \begin{pmatrix} \delta_{a_1} \\ \delta_{a_2} \end{pmatrix}= \begin{pmatrix} \sum_{i=1}^{N}\frac{x_i}{a_2+x_i}(y_i-\frac{a_1x_i}{a_2+x_i}) \\ \sum_{i=1}^{N}\frac{a_1x_i}{(a_2+x_i)^2}(\frac{a_1x_i}{a_2+x_i}-y_i) \end{pmatrix}\end{split}\]

Then in condensed form:

\[X\delta_a=Y\]
\[\delta_a=X^{-1}Y\]

With \(X^{-1}\) the inverse of the matrix \(X\). The values obtained for \(\delta_{a_1}\) and \(\delta_{a_2}\) approximate the corrections that should be added respectively to \(a_1\) and \(a_2\) for the square differences between the Mime-1 function and the analysis points to be minimal. Therefore, take \(a_1+\delta_{a_1}\) and \(a_2+\delta_{a_2}\) as the new values of \(a_1\) and \(a_2\), and repeat the iterations until:

\[max(\lvert\delta_{a_1}\rvert,\lvert\delta_{a_2}\rvert)<0.001\]

Note that the process does not always converge. Therefore, it should be stopped after a fixed number of iterations or when the value of \(\delta_a\) exceeds what the computer program can accept.

Note

In visionCATS, the process is stopped after 25 iterations.
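A sketch of this iterative refinement; the system solved at each step is exactly the matrix equation above, written here with the partial derivatives \(d_1=\frac{x}{a_2+x}\) and \(d_2=-\frac{a_1x}{(a_2+x)^2}\) (names illustrative):

```python
import numpy as np

def mime1_refine(x, y, a1, a2, tol=1e-3, max_iter=25):
    """Refine (a1, a2) of y = a1*x/(a2 + x) from the starting values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    for _ in range(max_iter):
        denom = a2 + x
        f = a1 * x / denom                  # current model values
        d1 = x / denom                      # derivative with respect to a1
        d2 = -a1 * x / denom ** 2           # derivative with respect to a2
        X = np.array([[np.sum(d1 * d1), np.sum(d1 * d2)],
                      [np.sum(d1 * d2), np.sum(d2 * d2)]])
        Y = np.array([np.sum(d1 * (y - f)), np.sum(d2 * (y - f))])
        delta_a1, delta_a2 = np.linalg.solve(X, Y)
        a1, a2 = a1 + delta_a1, a2 + delta_a2
        if max(abs(delta_a1), abs(delta_a2)) < tol:
            break
    return a1, a2
```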

Mime-2

The Mime-2 regression uses the same process as the Mime-1 regression. Based on the first-order expansion \(\eqref{eq7}\), with the minimization of the square differences:

\[S_y=\sum_{i=1}^{N}(y_i-\hat{y}_i)^2=\sum_{i=1}^{N}(\hat{y}_i-y_i)^2\]
\[=\sum_{i=1}^{N}\left(\left(a_0+\frac{a_1x_i}{a_2+x_i}\right)+\delta_{a_0}+\delta_{a_1}\frac{x_i}{a_2+x_i}-\delta_{a_2}\frac{a_1x_i}{(a_2+x_i)^2}-y_i\right)^2\]
\[\begin{split}=\sum_{i=1}^{N}( \left(a_0+\frac{a_1x_i}{a_2+x_i}\right)^2+(\delta_{a_0})^2+(\delta_{a_1})^2\left(\frac{x_i}{a_2+x_i}\right)^2+(\delta_{a_2})^2\left(\frac{a_1x_i}{(a_2+x_i)^2}\right)^2+y_i^2 \\ +2\delta_{a_0}\left(a_0+\frac{a_1x_i}{a_2+x_i}\right) + 2\delta_{a_1}\frac{x_i}{a_2+x_i}\left(a_0+\frac{a_1x_i}{a_2+x_i}\right) \\ -2\delta_{a_2}\frac{a_1x_i}{(a_2+x_i)^2}\left(a_0+\frac{a_1x_i}{a_2+x_i}\right)-2y_i\left(a_0+\frac{a_1x_i}{a_2+x_i}\right) \\ +2\delta_{a_0}\delta_{a_1}\frac{x_i}{a_2+x_i} - 2\delta_{a_0}\delta_{a_2}\frac{a_1x_i}{(a_2+x_i)^2} - 2\delta_{a_0}y_i \\ -2\delta_{a_1}\delta_{a_2}\frac{a_1x_i^2}{(a_2+x_i)^3} - 2\delta_{a_1}\frac{x_iy_i}{a_2+x_i} + 2\delta_{a_2}\frac{a_1x_iy_i}{(a_2+x_i)^2} )\end{split}\]

Applying partial differentiation with respect to \(\delta_{a_0}\), \(\delta_{a_1}\) and \(\delta_{a_2}\) (already simplified by removing the factors of 2):

\[\begin{split}\begin{cases} \sum_{i=1}^{N}\left( \delta_{a_0}+\delta_{a_1}\frac{x_i}{a_2+x_i}-\delta_{a_2}\frac{a_1x_i}{(a_2+x_i)^2}+(a_0+\frac{a_1x_i}{a_2+x_i})-y_i \right)=0 \\ \sum_{i=1}^{N}\left( \delta_{a_0}\frac{x_i}{a_2+x_i}+\delta_{a_1}\left(\frac{x_i}{a_2+x_i}\right)^2-\delta_{a_2}\frac{a_1x_i^2}{(a_2+x_i)^3}+\frac{x_i}{a_2+x_i}\left(a_0+\frac{a_1x_i}{a_2+x_i}\right)-\frac{x_iy_i}{a_2+x_i} \right)=0 \\ \sum_{i=1}^{N}\left( -\delta_{a_0}\frac{a_1x_i}{(a_2+x_i)^2}-\delta_{a_1}\frac{a_1x_i^2}{(a_2+x_i)^3}+\delta_{a_2}\left(\frac{a_1x_i}{(a_2+x_i)^2}\right)^2-\frac{a_1x_i}{(a_2+x_i)^2}\left(a_0+\frac{a_1x_i}{a_2+x_i}\right)+\frac{a_1x_iy_i}{(a_2+x_i)^2} \right)=0 \end{cases}\end{split}\]

Note

In the following expressions, \(\sum_{i=1}^{N}\) has been simplified to \(\sum\) for clarity purposes.

\[\begin{split}\begin{cases} N\delta_{a_0}+\delta_{a_1}\sum \frac{x_i}{a_2+x_i}-\delta_{a_2}\sum \frac{a_1x_i}{(a_2+x_i)^2}=\sum \left(y_i-\left(a_0+\frac{a_1x_i}{a_2+x_i}\right)\right) \\ \delta_{a_0}\sum \frac{x_i}{a_2+x_i}+\delta_{a_1}\sum \left(\frac{x_i}{a_2+x_i}\right)^2-\delta_{a_2}\sum \frac{a_1x_i^2}{(a_2+x_i)^3}=\sum \frac{x_i}{a_2+x_i}\left(y_i-\left(a_0+\frac{a_1x_i}{a_2+x_i}\right)\right) \\ -\delta_{a_0}\sum \frac{a_1x_i}{(a_2+x_i)^2}-\delta_{a_1}\sum \frac{a_1x_i^2}{(a_2+x_i)^3}+\delta_{a_2}\sum \left(\frac{a_1x_i}{(a_2+x_i)^2}\right)^2=\sum \frac{a_1x_i}{(a_2+x_i)^2}\left(a_0+\frac{a_1x_i}{a_2+x_i}-y_i\right) \end{cases}\end{split}\]

Or, in matrix form:

\[\begin{split}\begin{pmatrix} N & \sum \frac{x_i}{a_2+x_i} & -\sum \frac{a_1x_i}{(a_2+x_i)^2} \\ \sum \frac{x_i}{a_2+x_i} & \sum \left(\frac{x_i}{a_2+x_i}\right)^2 & -\sum \frac{a_1x_i^2}{(a_2+x_i)^3} \\ -\sum \frac{a_1x_i}{(a_2+x_i)^2} & -\sum \frac{a_1x_i^2}{(a_2+x_i)^3} & \sum \left(\frac{a_1x_i}{(a_2+x_i)^2}\right)^2 \end{pmatrix} \begin{pmatrix} \delta_{a_0} \\ \delta_{a_1} \\ \delta_{a_2} \end{pmatrix} = \begin{pmatrix} \sum \left(y_i-\left(a_0+\frac{a_1x_i}{a_2+x_i}\right)\right) \\ \sum \frac{x_i}{a_2+x_i}\left(y_i-\left(a_0+\frac{a_1x_i}{a_2+x_i}\right)\right) \\ \sum \frac{a_1x_i}{(a_2+x_i)^2}\left(a_0+\frac{a_1x_i}{a_2+x_i}-y_i\right) \end{pmatrix}\end{split}\]

Then in condensed form:

\[X\delta_a=Y\]
\[\delta_a=X^{-1}Y\]

With \(X^{-1}\) the inverse of the matrix \(X\). The values obtained for \(\delta_{a_0}\), \(\delta_{a_1}\) and \(\delta_{a_2}\) approximate the corrections that should be added respectively to \(a_0\), \(a_1\) and \(a_2\) for the square differences between the Mime-2 function and the analysis points to be minimal. Therefore, take \(a_0+\delta_{a_0}\), \(a_1+\delta_{a_1}\) and \(a_2+\delta_{a_2}\) as the new values of \(a_0\), \(a_1\) and \(a_2\), and repeat the iterations until:

\[max(\lvert\delta_{a_0}\rvert,\lvert\delta_{a_1}\rvert,\lvert\delta_{a_2}\rvert)<0.001\]

Note that the process does not always converge. Therefore, it should be stopped after a fixed number of iterations or when the value of \(\delta_a\) exceeds what the computer program can accept.

Note

In visionCATS, the process is stopped after 25 iterations.
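A sketch of the Mime-2 refinement; it solves the same 3×3 system as above, written compactly with the Jacobian of the model (names illustrative):

```python
import numpy as np

def mime2_refine(x, y, a0, a1, a2, tol=1e-3, max_iter=25):
    """Refine (a0, a1, a2) of y = a1*x/(a2 + x) + a0 from the starting values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    for _ in range(max_iter):
        denom = a2 + x
        f = a0 + a1 * x / denom
        J = np.column_stack([np.ones_like(x),        # derivative with respect to a0
                             x / denom,              # derivative with respect to a1
                             -a1 * x / denom ** 2])  # derivative with respect to a2
        delta = np.linalg.solve(J.T @ J, J.T @ (y - f))
        a0, a1, a2 = a0 + delta[0], a1 + delta[1], a2 + delta[2]
        if np.max(np.abs(delta)) < tol:
            break
    return a0, a1, a2
```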

Regression

The inversion of \(\eqref{eq5}\) for Mime-1 gives:

\[x=\frac{a_2y}{a_1-y}\]

The inversion of \(\eqref{eq5}\) for Mime-2 gives:

\[x=\frac{a_2(a_0-y)}{y-a_0-a_1}\]
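A sketch of these inversions in a single helper (use \(a_0=0\) for Mime-1); rejecting values at or above the saturation level via the denominator check is an assumption of this sketch, and the names are illustrative:

```python
def invert_mime(a0, a1, a2, y):
    """Quantity x corresponding to a measured value y for a Mime function."""
    denom = a1 - (y - a0)
    if denom <= 0:
        return None   # y is at or above the saturation level a0 + a1
    return a2 * (y - a0) / denom
```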