Abstract
Fitting concentric ellipses is a crucial yet challenging task in image processing, pattern recognition, and astronomy. To address this complexity, researchers have introduced simplified models by imposing geometric assumptions. These assumptions enable the linearization of the model through reparameterization, allowing for the extension of various fitting methods. However, these restrictive assumptions often fail to hold in real-world scenarios, limiting their practical applicability. In this work, we propose two novel estimators that relax these assumptions: the Least Squares method (LS) and the Gradient Algebraic Fit (GRAF). Since these methods are iterative, we provide numerical implementations and strategies for obtaining reliable initial guesses. Moreover, we employ perturbation theory to conduct a first-order analysis, deriving the leading terms of their Mean Squared Errors and their theoretical lower bounds. Our theoretical findings reveal that the GRAF is statistically efficient, while the LS method is not. We further validate our theoretical results and the performance of the proposed estimators through a series of numerical experiments on both real and synthetic data.
Notes
The regularity conditions require that differentiation can be interchanged with integration.
References
Al-Sharadqah A, Chernov N (2012) A doubly optimal ellipse fit. Comput Stat Data Anal 56(7):2771–2781. https://doi.org/10.1016/j.csda.2012.02.028
Al-Sharadqah A, Ho K (2018) Constrained Cramér-Rao lower bound in errors-in-variables (EIV) models: revisited. Stat Probab Lett 135:118–126. https://doi.org/10.1016/j.spl.2017.10.009
Al-Sharadqah A, Rulli L (2022) New methods for detecting concentric objects with high accuracy. Measurement 188:110526. https://doi.org/10.1016/j.measurement.2021.110526
Ambartsoumian G, Xie M (2010) Tomographic reconstruction of nodular images from incomplete data. AIP Conf Proc 1301(1):167–174. https://doi.org/10.1063/1.3514053
Amemiya Y, Fuller W (1988) Estimation for the nonlinear functional relationship. Ann Stat 16:147–160. https://doi.org/10.1214/aos/1176350696
Anderson T, Sawa T (1982) Exact and approximate distributions of the maximum likelihood estimator of a slope coefficient. J R Stat Soc B 44:52–62
Bennamoun M, Mamic G (2002) Object recognition: fundamentals and case studies. Springer-Verlag, London, pp 3–160. https://doi.org/10.1007/978-1-4471-3722-1
Canny J (1986) A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell 8:679–698
CASIA (2012) CASIA iris image database. http://biometrics.idealtest.org/
Chang L, Weiduo H (2019) Real-time geometric fitting and pose estimation for surface of revolution. Pattern Recognit 85:90–108. https://doi.org/10.1016/j.patcog.2018.08.002
Cheng C, van Ness J (1999) Statistical regression with measurement error. Arnold, London
Chernov N (2007) On the convergence of fitting algorithms in computer vision. J Math Imaging Vis 27:231–239
Chernov N, Lesort C (2004) Statistical efficiency of curve fitting algorithms. Comput Stat Data Anal 47:713–728. https://doi.org/10.1016/j.csda.2003.11.008
Chernov N, Wijewickrema S (2013) Algorithms for projecting points onto conics. J Comput Appl Math 251:8–21. https://doi.org/10.1016/j.cam.2013.03.031
Chojnacki W, Brooks M, van den Hengel A et al (2000) On the fitting of surfaces to data with covariances. IEEE Trans Pattern Anal Mach Intell 22:1294–1303. https://doi.org/10.1109/34.888714
Chojnacki W, Brooks M, van den Hengel A (2001) Rationalising the renormalisation method of Kanatani. J Math Imaging Vis 14:21–38. https://doi.org/10.1023/A:1008355213497
Förstner W, Wrobel B (2016) Photogrammetric computer vision: statistics, geometry, orientation, and reconstruction. Springer, Cham. https://doi.org/10.1007/978-3-319-11550-4
Fuller W (1987) Measurement error models. Wiley, New York
Gander W, Golub G, Strebel R (1994) Least squares fitting of circles and ellipses. BIT 34:558–578. https://doi.org/10.1007/BF01992944
Kanatani K (1993) Renormalization for unbiased estimation. In: 4th ICCV Proceedings (ICCV 93), Berlin, Germany, pp 599–606
Kanatani K (1998) Cramér-Rao lower bounds for curve fitting. Graph Mod Image Process 60:93–99
Kanatani K (2008) Statistical optimization for geometric fitting: theoretical accuracy bound and high order error analysis. Int J Comput Vis 80:167–188. https://doi.org/10.1007/s11263-007-0098-0
Kanatani K, Rangarajan P (2011) Hyper least squares fitting of circles and ellipses. Comput Stat Data Anal 55:2197–2208. https://doi.org/10.1016/j.csda.2010.12.012
Kanatani K, Sugaya Y, Kanazawa Y (2016) Ellipse fitting for computer vision: implementation and applications. Morgan & Claypool, San Rafael. https://doi.org/10.1007/978-3-031-01815-2
Kochenderfer M, Wheeler T (2019) Algorithms for optimization. The MIT Press, Cambridge
Kong A (2012) Iriscode decompression based on the dependence between its bit pairs. IEEE Trans Pattern Anal Mach Intell 34(3):506–520. https://doi.org/10.1109/TPAMI.2011.159
Kumar P, Belchamber E, Miklavcic S (2018) Pre-processing by data augmentation for improved ellipse fitting. PLoS One. https://doi.org/10.1371/journal.pone.0196902
Kunitomo N (1980) Asymptotic expansions of the distributions of estimators in a linear functional relationship and simultaneous equations. JASA 75:693–700
Kweon I, Kim J, Kim H (2002) A camera calibration method using concentric circles for vision applications. In: Asian Conf Comput Vision, pp 512–520
Leedan Y, Meer P (2000) Heteroscedastic regression in computer vision: problems with bilinear constraint. Int J Comput Vis 37:127–150. https://doi.org/10.1023/A:1008185619375
Ma Z, Ho K (2014) Asymptotically efficient estimators for the fittings of coupled circles and ellipses. Digit Signal Process 25(2):28–40. https://doi.org/10.1016/j.dsp.2013.10.022
Matei B, Meer P (2000) Reduction of bias in maximum likelihood ellipse fitting. In: 15th Intern. Conf. Computer Vision Pattern Recogn., pp 802–806
Mathai A, Provost S (1992) Quadratic forms in random variables. Marcel Dekker, New York
O’Leary P, Harker M, Zsombor-Murray P (2005) Direct and least square fitting of coupled geometric objects for metric vision. IEE Proc Vis Image Signal Process 152(6):687–694. https://doi.org/10.1049/ip-vis:20045206
Salo H et al (2015) Spitzer survey of stellar structure in galaxies. The pipeline 4: multi-component decomposition strategies and data release. Astrophys J Suppl Ser 219(1):4. https://doi.org/10.1088/0067-0049/219/1/4
Stoica P, Ng B (1998) On the Cramér-Rao bound under parametric constraints. IEEE Signal Process Lett 5(7):177–179. https://doi.org/10.1109/97.700921
Taubin G (1991) Estimation of planar curves, surfaces and nonplanar space curves defined by implicit equations, with applications to edge and range image segmentation. IEEE Trans Pattern Anal Mach Intell 13:1115–1138
Waibel P, Matthes J, Gröll L (2015) Constrained ellipse fitting with center on a line. J Math Imaging Vis 53:364–382. https://doi.org/10.1007/s10851-015-0584-x
Acknowledgements
The authors express their sincere gratitude to the anonymous reviewers and the associate editor for their valuable suggestions, which have significantly enhanced the quality of this paper.
Code availability
The code used to generate the results presented in this paper is available on GitHub at https://github.com/gpiga3/ConcentricEllipseFitting. The code is also available upon request from the corresponding author.
Appendices
Appendix A Overview of ellipse fitting
Although the methods proposed in this paper fit concentric ellipses to data, and this problem must be handled differently from the single-ellipse fitting problem, we devote this section to a brief discussion of the single-ellipse case.
Figure 7 shows the geometric (natural) parameters of the ellipse and the orthogonal distance \(d_i\) from the observation \((x_{i}, y_{i})\) to the true curve. If the observation is close to the curve then the algebraic distance is also close to zero, i.e.,
In terms of \(\varvec{\mathcal {A}}=\bigl (A,B,C,D,E,F\bigr )^\top \) and \(\varvec{\xi }_i=\bigl (x_{i}^2, 2x_{i}y_{i},y_{i}^2,2x_{i},2y_{i},1\bigr )^\top \), this reads \(\varvec{\xi }_{i}^\top \varvec{\mathcal {A}}\approx 0\).
The simplest approach to estimating \(\varvec{\mathcal {A}}\) is the Least Squares (LS) method, which minimizes \( {\mathcal {J}}(\varvec{\mathcal {A}})=\varvec{\mathcal {A}}^\top \varvec{M}\varvec{\mathcal {A}}\), where \(\varvec{M}=\sum _{j=1}^{n}\varvec{\xi }_j\varvec{\xi }_j^\top \). The advantage of the algebraic representation \(\varvec{\xi }_j^\top \varvec{\mathcal {A}}\) is that it leads to a closed-form minimizer of \({\mathcal {J}}\). The parametric space of the conic includes ellipses, hyperbolas, parabolas, pairs of parallel lines, imaginary lines, and other degenerate shapes, depending on the values of the parameters. For example, if \(AC-B^2=0\), then (A1) represents a parabola, while if \(AC-B^2 < 0\), it represents a hyperbola; only if \(AC-B^2>0\) does it represent an ellipse. Moreover, since \({\mathcal {J}}\) is invariant under scaling of \(\varvec{\mathcal {A}}\), a constraint must be imposed on the parametric space to remove this indeterminacy, for example \(\Vert \varvec{\mathcal {A}}\Vert ^2=1\). This yields a constrained minimization problem. Differentiating \(\varvec{\mathcal {A}}^\top \varvec{M}\varvec{\mathcal {A}}-\lambda \left( \Vert \varvec{\mathcal {A}}\Vert ^2-1\right) \) with respect to \(\varvec{\mathcal {A}}\) and setting the result equal to zero yields \(\varvec{M}\hat{\varvec{\mathcal {A}}}_{\textrm{L}}=\lambda \hat{\varvec{\mathcal {A}}}_{\textrm{L}}\), which is an eigenvalue problem. The LS estimator suffers from heavy bias, and other algebraic methods have therefore been proposed by imposing different constraints, typically of the form \(\varvec{\mathcal {A}}^\top \varvec{N}\varvec{\mathcal {A}}=1\). In this paper, we use the widely known Taubin method to obtain an initial guess for our iterative methods. The Taubin method imposes a data-dependent constraint with \( \varvec{N}_\textrm{T}=4\sum _{j=1}^{n}\varvec{V}_j\), where
Therefore, the Taubin method solves \(\varvec{M}\hat{\varvec{\mathcal {A}}}_\textrm{T}=\lambda \varvec{N}_\textrm{T}\hat{\varvec{\mathcal {A}}}_\textrm{T}\). Since \(\varvec{M}\) and \(\varvec{N}_\textrm{T}\) are positive semi-definite, \(\lambda \) is non-negative. Thus, the estimator \(\hat{\varvec{\mathcal {A}}}_\textrm{T}\) is the generalized eigenvector corresponding to the smallest generalized eigenvalue of this generalized eigenvalue problem. The Taubin method is known to provide a good estimate of the parameter vector \( {\varvec{\mathcal {A}}}\).
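Both fits reduce to (generalized) eigenvalue problems. The following is a minimal numerical sketch, not the paper's implementation; it assumes the elided matrices \(\varvec{V}_j\) make \(\varvec{N}_\textrm{T}\) the usual Taubin matrix built from the gradients of \(\varvec{\xi }_j\) (the overall scaling of \(\varvec{N}_\textrm{T}\) does not affect the eigenvector).

```python
import numpy as np
from scipy.linalg import eigh

def design_matrices(x, y):
    """Build xi_j, M = sum xi_j xi_j^T, and the Taubin normalization matrix."""
    z, o = np.zeros_like(x), np.ones_like(x)
    xi = np.column_stack([x**2, 2*x*y, y**2, 2*x, 2*y, o])
    M = xi.T @ xi
    # Gradients of xi_j with respect to x and y
    gx = np.column_stack([2*x, 2*y, z, 2*o, z, z])
    gy = np.column_stack([z, 2*x, 2*y, z, 2*o, z])
    N = gx.T @ gx + gy.T @ gy
    return xi, M, N

def fit_ls(x, y):
    """LS fit: eigenvector of M for the smallest eigenvalue (||A|| = 1)."""
    _, M, _ = design_matrices(x, y)
    _, V = np.linalg.eigh(M)
    return V[:, 0]

def fit_taubin(x, y):
    """Taubin fit: M A = lambda N A. Equivalently N A = (1/lambda) M A, so we
    take the generalized eigenvector for the largest eigenvalue of (N, M);
    M must be positive definite, which holds for generic noisy data."""
    _, M, N = design_matrices(x, y)
    _, V = eigh(N, M)          # eigenvalues in ascending order
    A = V[:, -1]
    return A / np.linalg.norm(A)
```

For a fitted vector \(\hat{\varvec{\mathcal {A}}}\), the ellipse condition \(AC-B^2>0\) can be checked directly, and the center follows from solving the two linear equations obtained by setting the conic's gradient to zero.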
Note that after \(\varvec{\mathcal {A}}=\bigl (A,B,C,D,E,F\bigr )^\top \) is estimated, the geometric parameters \(\varvec{\theta } = \bigl (c_x,c_y,a,b,\psi \bigr )^\top \) can be computed from the following relations:
Also, the conversion formulas for transforming the geometric parameters to algebraic parameters are
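The displayed conversion formulas are omitted in this version. As a sketch under the parameterization used here, \(Ax^2+2Bxy+Cy^2+2Dx+2Ey+F=0\), the algebraic-to-geometric conversion can be carried out numerically via an eigendecomposition (the function name is ours, not the paper's):

```python
import numpy as np

def algebraic_to_geometric(A_vec):
    """Convert (A, B, C, D, E, F) for the conic A x^2 + 2B xy + C y^2
    + 2D x + 2E y + F = 0 into geometric parameters (cx, cy, a, b, psi)."""
    A, B, C, D, E, F = A_vec
    Q = np.array([[A, B], [B, C]])
    # Center: the gradient of the conic vanishes there
    center = np.linalg.solve(Q, [-D, -E])
    # Constant term of the conic recentred at (cx, cy)
    G = center @ Q @ center + 2*D*center[0] + 2*E*center[1] + F
    lam, R = np.linalg.eigh(Q)            # principal-axis form x'^T Q x' = -G
    axes = np.sqrt(-G / lam)              # semi-axes; for an ellipse -G/lam > 0
    psi = np.arctan2(R[1, 0], R[0, 0])    # orientation of the major axis
    return center[0], center[1], axes[0], axes[1], psi
```

The reverse (geometric-to-algebraic) direction is the same computation run backwards: build \(\varvec{Q}\) from the rotation and semi-axes, then expand \((\varvec{x}-\varvec{c})^\top \varvec{Q}(\varvec{x}-\varvec{c})=1\).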
Appendix B Derivation of CCRB
To derive the CCRB of \(\varvec{\tilde{\phi }}\) and \(\tilde{\varvec{m}}\), we first state our problem within the framework of this formulation. Then, with the aid of some linear algebra, we derive the CCRB for our parameters. First, we have
Therefore, \(\varvec{\tilde{F}}\) can be written in the compact form \(\varvec{\tilde{F}}_{n\times m } = \left( \varvec{\tilde{C}}_{n\times 2n}\, \,\, \varvec{\tilde{S}}_{n \times d} \right) ,\) where
where \(\varvec{\tilde{C}}_i= \textrm{Diag}\bigl (\varvec{\tilde{r}}_{i1}^\top , \ldots , \varvec{\tilde{r}}_{in_i}^\top \bigr )_{n_i\times 2n_i}\) with
Here, \(\varvec{\tilde{C}}_{i}\) is an \(n_i\times 2n_i\) matrix whose \(j{\mathrm{{th}}}\) row equals \(\bigl ( \tfrac{\partial {\tilde{p}}_{ij}}{\partial \tilde{\varvec{m}}_{i1}^\top },\ldots ,\tfrac{\partial {\tilde{p}}_{ij}}{\partial \tilde{\varvec{m}}_{in_i}^\top } \bigr )_{1\times 2n_i}\). The \(n\times d\) matrix \(\varvec{\tilde{S}}\) is defined as
where the \(j{\mathrm{{th}}}\) row of \(\varvec{\tilde{V}}_{i}\) and \(\varvec{\tilde{S}}_{i}\) are, respectively, equal to
and \( \nabla _{({\tilde{A}}_i, {\tilde{B}}_i, {\tilde{\psi }}_i)}{\tilde{p}}_{ij}=\bigl ({\tilde{T}}_{ij}^2\,,\,{\tilde{T}}_{ij}^{'2}\,,\, 2{\tilde{T}}_{ij}{\tilde{T}}_{ij}'({\tilde{A}}_i-{\tilde{B}}_i) \bigr )\). For this structure of \(\varvec{\tilde{F}}\), we need to find \(\varvec{\tilde{U}}\) such that \(\varvec{\tilde{F}}\varvec{\tilde{U}}=\varvec{0}_{n\times (m-n)}\), where \(n=\sum _{i=1}^kn_i\). If we define
and \( \varvec{\tilde{C}}_i^\perp = \textrm{Diag}\bigl ((\varvec{\tilde{r}}_{i1}^\perp )^\top , \ldots , (\varvec{\tilde{r}}_{in_i}^\perp )^\top \bigr )_{{2n_i}\times n_i} \) with \( \varvec{\tilde{r}}_{ij}^\perp = 2\bigl ({\tilde{T}}_{ij}{\tilde{A}}_i\tilde{S_i}+{\tilde{T}}_{ij}'{\tilde{B}}_i\tilde{C_i}\,,\, {\tilde{T}}_{ij}'{\tilde{B}}_i\tilde{S_i}-{\tilde{T}}_{ij}{\tilde{A}}_i\tilde{C_i}\bigr ) ^\top \), then it is easy to see that \(\varvec{\tilde{C}}\varvec{\tilde{C}}^\perp =\varvec{0}_{n \times n}\). Thus, in order to show that \(\varvec{\tilde{F}}\varvec{\tilde{U}}= \varvec{0}_{n\times (m-n)}\) we need to define \(\varvec{\tilde{V}}\) such that \(\varvec{\tilde{C}}\varvec{\tilde{V}}+\varvec{\tilde{S}}= \varvec{0}_{n\times d}\). If we define
where
and \(\tilde{\varvec{u}}_{ij}=\tfrac{\varvec{\tilde{r}}_{ij}}{\Vert \varvec{\tilde{r}}_{ij}\Vert ^2}\), then \(\varvec{\tilde{C}}\varvec{\tilde{V}}= -\varvec{\tilde{S}}\). Thus, this choice of \(\varvec{\tilde{U}}\) satisfies \(\varvec{\tilde{F}}\varvec{\tilde{U}}=\varvec{0}_{n \times (m-n)}\). Now, using the definition of \(\varvec{J}\), we have
Next, we apply the following lemma in order to obtain the CCRB of \(\varvec{{\hat{\phi }}}\).
Lemma 2
(Mathai and Provost 1992) Let \({\varvec{A}}\) be a nonsingular matrix of size \((n+p) \) and \({\varvec{B}}={\varvec{A}}^{-1}\), where
If \(\varvec{A_{22}}^{-1}\) exists, then
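The displayed partition and conclusion of Lemma 2 are omitted in this version. Presumably it is the standard block-inverse identity: partitioning \({\varvec{A}}\) into blocks \({\varvec{A}}_{11},{\varvec{A}}_{12},{\varvec{A}}_{21},{\varvec{A}}_{22}\), the bottom right block of \({\varvec{B}}={\varvec{A}}^{-1}\) is \({\varvec{B}}_{22}=({\varvec{A}}_{22}-{\varvec{A}}_{21}{\varvec{A}}_{11}^{-1}{\varvec{A}}_{12})^{-1}\), the inverse Schur complement of \({\varvec{A}}_{11}\). A numerical sketch of this identity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4, 3
# Random symmetric positive definite matrix, so all blocks behave well
X = rng.standard_normal((n + p, n + p))
A = X @ X.T + (n + p) * np.eye(n + p)
A11, A12 = A[:n, :n], A[:n, n:]
A21, A22 = A[n:, :n], A[n:, n:]

B = np.linalg.inv(A)
B22 = B[n:, n:]
# Bottom-right block of the inverse = inverse of the Schur complement of A11
schur = A22 - A21 @ np.linalg.inv(A11) @ A12
assert np.allclose(B22, np.linalg.inv(schur))
```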
(a) The CCRB of \(\varvec{{\hat{\phi }}}\). We are mainly interested in the lower bound for \(\varvec{{\hat{\phi }}}\). Because of the structure of \(\varvec{\tilde{F}}\), only the bottom right block of \((\varvec{\tilde{U}}^\top \varvec{J}\varvec{\tilde{U}})^{-1}\) is needed. To obtain it, we apply Lemma 2 to (B9). Accordingly, the bottom right block of \((\varvec{\tilde{U}}^\top \varvec{J}\varvec{\tilde{U}})^{-1}\) is
where \(\varvec{\tilde{U}}=\varvec{\Sigma }_{2n}^{-1/2}\varvec{\tilde{C}}^{\perp }\). It is worth mentioning here that \(\varvec{I}_d-\varvec{\tilde{U}}\bigl (\varvec{\tilde{U}}^\top \varvec{\tilde{U}}\bigr )^{-1}\varvec{\tilde{U}}^\top \) is the projection matrix onto the orthogonal complement of the column space of \(\varvec{\tilde{U}}\). Since \(\varvec{\tilde{C}}\varvec{\tilde{C}}^{\perp }={\varvec{0}}\), the matrix \(\varvec{\tilde{U}}^{\perp }=\varvec{\tilde{C}}\varvec{\Sigma }_{2n}^{1/2}\) is orthogonal to \(\varvec{\tilde{U}}\). Hence \(\varvec{I}_d-\varvec{\tilde{U}}\bigl (\varvec{\tilde{U}}^\top \varvec{\tilde{U}}\bigr )^{-1}\varvec{\tilde{U}}^\top \) is the projection matrix onto the column space of \((\varvec{\tilde{U}}^{\perp })^\top \). This implies that
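This projection argument can be checked numerically on stand-in matrices (sizes and names chosen for illustration): any full-rank \(\varvec{\tilde{C}}\) with \(\varvec{\tilde{C}}\varvec{\tilde{C}}^{\perp }=\varvec{0}\) and any positive definite \(\varvec{\Sigma }_{2n}\) make the two projections coincide.

```python
import numpy as np
from scipy.linalg import null_space, sqrtm

rng = np.random.default_rng(2)
n = 3
Ct = rng.standard_normal((n, 2*n))       # stand-in for C~ (full rank n a.s.)
Ct_perp = null_space(Ct)                 # 2n x n basis with Ct @ Ct_perp = 0
X = rng.standard_normal((2*n, 2*n))
Sigma = X @ X.T + 2*n*np.eye(2*n)        # SPD stand-in for the covariance
S_half = np.real(sqrtm(Sigma))           # symmetric square root

U = np.linalg.solve(S_half, Ct_perp)     # Sigma^{-1/2} C~_perp
U_perp = Ct @ S_half                     # C~ Sigma^{1/2}; note U_perp @ U = 0

# Projection onto the orthogonal complement of col(U) ...
P = np.eye(2*n) - U @ np.linalg.solve(U.T @ U, U.T)
# ... equals the projection onto col((U_perp)^T)
W = U_perp.T
P2 = W @ np.linalg.solve(U_perp @ W, U_perp)
assert np.allclose(P, P2)
```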
Accordingly, substituting \(\varvec{\tilde{U}}^{\perp }=\varvec{\tilde{C}}\varvec{\Sigma }_{2n}^{1/2}\) and using the fact \(\varvec{\tilde{C}}\varvec{\tilde{V}}= -\varvec{\tilde{S}}\) in \({\varvec{B}}_{22}\) yields
and as such,
Cite this article
Al-Sharadqah, A., Piga, G. Fitting concentric elliptical shapes under general model. Comput Stat (2024). https://doi.org/10.1007/s00180-024-01460-x