
Formability classifier for a TV back panel part with machine learning

International Journal of Material Forming

Abstract

This study proposes a machine learning-based methodology for evaluating the formability of sheet metals. An XGBoost (eXtreme Gradient Boosting) classifier is developed to classify the formability of a TV back panel based on the forming limit curve (FLC). The inputs to the XGBoost model are the blank thickness and the cross-sectional dimensions of the screw holes, AC (Alternating Current) terminals, and AV (Audio Visual) terminals on the TV back panel. The training dataset is generated using finite element simulations and verified through experimental strain measurements. The trained classification model maps the panel geometry to one of three formability classes: safe, marginal, and cracked. Strain values below the FLC are classified as safe, those within a 5% margin of the FLC as marginal, and those above the FLC as cracked. The statistical accuracy and performance of the classifier are quantified using the confusion matrix and the multiclass Receiver Operating Characteristic (ROC) curve, respectively. To demonstrate the practical viability of the proposed methodology, the punch radius of the screw holes is optimized using Brent's method in a Java environment; the optimization completes in only 3.11 s. The results demonstrate that the formability of a new design can be improved based on the predictions of the machine learning model.
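To make the class-assignment rule concrete, the following minimal Python sketch illustrates one reading of it, in which the marginal band lies within 5% below the FLC; the flc function, the linear curve, and the strain values are illustrative assumptions, not the article's implementation.

    # Minimal sketch of the three-class labeling rule: 'cracked' above the
    # FLC, 'marginal' within the 5% band below it, 'safe' otherwise.
    # The FLC and the band interpretation are illustrative assumptions.
    def classify_strain(major, minor, flc, margin=0.05):
        limit = flc(minor)                    # major-strain limit at this minor strain
        if major > limit:
            return "cracked"                  # strain exceeds the forming limit
        elif major >= (1.0 - margin) * limit:
            return "marginal"                 # inside the 5% margin below the FLC
        else:
            return "safe"                     # comfortably below the FLC

    # Hypothetical linear FLC for illustration only
    flc = lambda e_minor: 0.30 + 0.5 * abs(e_minor)
    print(classify_strain(0.25, -0.05, flc))  # -> 'safe'

In the study itself, the strain values come from finite element simulations verified against experimental measurements, and the trained XGBoost model learns this mapping from the panel geometry directly.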



Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2023R1A2C2005661). The authors are grateful for the support. This work was also partially supported by the Technology Development Program (S3288770) funded by the Ministry of SMEs and Startups (MSS, Korea).

Author information


Corresponding author

Correspondence to Jeong Whan Yoon.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix 1


A. Tree Boosting

(I) Decision Tree

Decision tree learning is a supervised learning approach that uses if–else (true–false) feature questions to predict a category in a classification problem or a continuous numeric value in a regression problem. For a given dataset \(\mathcal{D}={\{\left({\mathbf{x}}_{i}, {y}_{i}\right)\}}_{i=1}^{n}\), a tree model is given by,

$$f\left({\mathbf{x}}_{i}\right)=\sum_{j=1}^{T}{w}_{j}\mathrm{I}\left({\mathbf{x}}_{i}\in {R}_{j}\right)$$
(6)

where \({w}_{j}\) is the score (prediction) of the \(j\)-th leaf, referred to as the weight of region \({R}_{j}\), \(\mathrm{I}\left(\cdot \right)\) is the indicator function, equal to 1 when \({\mathbf{x}}_{i}\) lies in region \({R}_{j}\) and 0 otherwise, and \(T\) is the total number of leaves in the tree (see Fig. 1).
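To make Equation 6 concrete, the following minimal Python sketch evaluates a depth-1 tree (a stump) as a weighted sum of leaf indicators; the regions, weights, and threshold are illustrative values, not the article's trained trees.

    # Sketch of Equation 6: the tree returns the weight w_j of the single
    # region R_j that the input x falls into.
    def tree_predict(x, regions, weights):
        # f(x) = sum_j w_j * I(x in R_j); exactly one indicator is 1
        return sum(w * r(x) for r, w in zip(regions, weights))

    regions = [lambda x: x[0] < 0.5,    # R_1: left leaf
               lambda x: x[0] >= 0.5]   # R_2: right leaf
    weights = [-0.3, 0.7]               # leaf scores w_1, w_2

    print(tree_predict([0.2], regions, weights))  # -> -0.3

Because the regions partition the input space, exactly one indicator fires per input, so the sum simply selects the weight of the matching leaf.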

(II) Boosting

Boosting is a class of machine learning algorithms that iteratively combines multiple base learners to form a prediction model. The base learners are individually weak but yield accurate predictions when combined in an ensemble, hence the term 'boosting.' With decision trees as the base learners and \(K\) trees in the ensemble, the predicted output \({\widehat{y}}_{i}\) corresponding to the input vector \({\mathbf{x}}_{i}\) of the \(i\)-th instance is given by,

$${\widehat{y}}_{i}=\sum_{k=1}^{K}{f}_{k}\left({\mathbf{x}}_{i}\right),{f}_{k}\in \mathcal{F}$$
(7)

where \({f}_{k}\) is the output of the \(k\)-th tree and \(\mathcal{F}\) is the set of all possible classification and regression tree (CART) functions. Tree boosting learns by iteratively adding a new tree \({f}_{t}\) to the ensemble such that the following objective function is minimized,

$$\begin{array}{c}{\mathcal{L}}^{\left(t\right)}=\sum\limits_{i=1}^{n}l\left({y}_{i},{\widehat{y}}_{i}^{\left(t\right)}\right)\\ =\sum\limits_{i=1}^{n}l\left({y}_{i},{\widehat{y}}_{i}^{\left(t-1\right)}+{f}_{t}\left({\mathbf{x}}_{i}\right)\right)\end{array}$$
(8)

where,

$${\widehat{y}}_{i}^{\left(t\right)}=\sum_{k=1}^{t}{f}_{k}\left({\mathbf{x}}_{i}\right)={\widehat{y}}_{i}^{\left(t-1\right)}+{f}_{t}\left({\mathbf{x}}_{i}\right)$$
(9)

the sums run over the \(n\) training examples, \({\widehat{y}}_{i}^{(t)}\) and \({y}_{i}\) are the predicted and target values at the \(t\)-th iteration, respectively, and \(l\) is a differentiable convex loss function that measures the difference between \({\widehat{y}}_{i}^{(t)}\) and \({y}_{i}\). In general settings, boosting uses a second-order Taylor approximation of the loss function, and the objective function is rewritten as follows,

$${\widetilde{\mathcal{L}}}^{\left(t\right)}=\sum_{i=1}^{n}\left[{g}_{i}{f}_{t}\left({\mathbf{x}}_{i}\right)+\frac{1}{2}{h}_{i}{f}_{t}^{2}\left({\mathbf{x}}_{i}\right)\right]$$
(10)

where \({g}_{i}\) and \({h}_{i}\) are the gradient and Hessian, respectively, defined as,

$${g}_{i}={\partial }_{{\widehat{y}}_{i}^{\left(t-1\right)}}l\left({y}_{i},{\widehat{y}}_{i}^{\left(t-1\right)}\right);\quad {h}_{i}={\partial }_{{\widehat{y}}_{i}^{\left(t-1\right)}}^{2}l\left({y}_{i},{\widehat{y}}_{i}^{\left(t-1\right)}\right)$$
(11)
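As an illustration of Equations 9 and 11, the sketch below computes \(g_i\) and \(h_i\) for the squared-error loss \(l(y,\widehat{y})=\frac{1}{2}(y-\widehat{y})^{2}\) and then applies one additive update; the loss choice and all numeric values are assumptions for the example.

    import numpy as np

    # Equation 11 for the squared-error loss l = 0.5*(y - y_hat)^2:
    # the gradient is the residual and the Hessian is constant (= 1).
    def grad_hess_squared_error(y, y_hat):
        g = y_hat - y              # g_i = dl/dy_hat
        h = np.ones_like(y)        # h_i = d^2 l / dy_hat^2
        return g, h

    y = np.array([1.0, 0.0, 2.0])        # targets
    y_hat = np.array([0.8, 0.3, 1.5])    # predictions of the first t-1 trees
    g, h = grad_hess_squared_error(y, y_hat)

    # A new tree f_t would be fit to (g, h); its outputs here are illustrative.
    f_t_output = np.array([0.1, -0.2, 0.4])
    y_hat = y_hat + f_t_output           # additive update of Equation 9
    print(g, h, y_hat)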
(III) XGBoost method

XGBoost employs Newton tree boosting to fit additive tree models. In Newton boosting the base learners are tree models, and for a given dataset with \(n\) examples, \(\mathcal{D}={\{\left({\mathbf{x}}_{i}, {y}_{i}\right)\}}_{i=1}^{n}\), a tree model from Equation 6 is given by,

$${f}_{t}\left({\mathbf{x}}_{i}\right)=\sum_{j=1}^{T}{w}_{j}\mathrm{I}\left({\mathbf{x}}_{i}\in {R}_{j}\right)$$
(12)

XGBoost learns by iteratively adding a new tree \({f}_{t}\) to the ensemble such that the objective function of Equation 10 is minimized; grouping the instances by leaf, that objective becomes,

$$\begin{array}{c}{\widetilde{\mathcal{L}}}^{\left(t\right)}=\sum\limits_{j=1}^{T}\sum\limits_{i\in {R}_{j}}\left[{g}_{i}{w}_{j}+\frac{1}{2}{h}_{i}{w}_{j}^{2}\right]\\ =\sum\limits_{j=1}^{T}\left[{G}_{j}{w}_{j}+\frac{1}{2}{H}_{j}{w}_{j}^{2}\right]\end{array}$$
(13)

where

$${G}_{j}=\sum_{i\in {R}_{j}}{g}_{i};\quad {H}_{j}=\sum_{i\in {R}_{j}}{h}_{i}$$

At each iteration, XGBoost learns the leaf weights and tree structure through the following steps:

  1. For each leaf \(j\), the optimization problem in Equation 13 is quadratic in \({w}_{j}\). Hence, the optimal leaf weight or prediction \({w}_{j}^{*}\) for a proposed (fixed) tree structure is obtained by setting \(d{\widetilde{\mathcal{L}}}^{(t)}/d{w}_{j}=0\) as follows,

    $${w}_{j}^{*}=-\frac{{G}_{j}}{{H}_{j}},j=1,\dots ,T.$$
    (14)
  2. Learning the tree structure involves searching over candidate splits of internal nodes. Substituting the optimal weights of Equation 14 into Equation 13 gives the objective value for a fixed tree structure,

    $${obj}^{*}=-\frac{1}{2}\sum_{j=1}^{T}\frac{{G}_{j}^{2}}{{H}_{j}}$$
    (15)

    Equivalently, splits are chosen to maximize the gain given by,

    $${\text{Gain}}=\frac{1}{2}\left[\frac{{G}_{L}^{2}}{{H}_{L}}+\frac{{G}_{R}^{2}}{{H}_{R}}-\frac{{G}^{2}}{H}\right]$$
    (16)

    Here, the terms with subscripts \(L\) and \(R\) correspond to the scores of the left and right child leaves, respectively, and the last term is the score of the unsplit node. Nodes with negative gain are pruned in bottom-up order.

  3. The final leaf weights \({\widehat{w}}_{j}\) are computed following Equation 14 for the learned tree structure. A minimal sketch of the computations in steps 1 and 2 follows this list.
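The following minimal Python sketch traces steps 1 and 2 for a single candidate split: it forms the per-leaf sums \(G_j\) and \(H_j\), the optimal weight of Equation 14, and the gain of Equation 16. The gradients and the split are illustrative assumptions, and no regularization terms appear, matching the equations above.

    import numpy as np

    def leaf_weight(g, h):
        # Optimal leaf weight w* = -G/H (Equation 14)
        return -g.sum() / h.sum()

    def score(g, h):
        # Leaf contribution G^2/H appearing in Equations 15 and 16
        return g.sum() ** 2 / h.sum()

    # Gradients/Hessians of the node's instances and a candidate split mask
    g = np.array([-0.2, 0.3, -0.5, 0.4])
    h = np.ones(4)
    left = np.array([True, True, False, False])   # hypothetical split

    gain = 0.5 * (score(g[left], h[left])
                  + score(g[~left], h[~left])
                  - score(g, h))
    print(leaf_weight(g[left], h[left]), gain)    # split kept if gain > 0

With these illustrative numbers the parent score is zero (the node's gradients sum to zero), so the split yields a small positive gain and would be retained.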

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Fazily, P., Cho, D., Choi, H. et al. Formability classifier for a TV back panel part with machine learning. Int J Mater Form 16, 70 (2023). https://doi.org/10.1007/s12289-023-01791-y
