Abstract
This study proposes a machine learning-based methodology for evaluating the formability of sheet metals. An XGBoost (eXtreme Gradient Boosting) classifier is developed to classify the formability of a TV back panel based on the forming limit curve (FLC). The inputs to the XGBoost model are the blank thickness and the cross-sectional dimensions of the screw holes and the AC (alternating current) and AV (audio-visual) terminals on the TV back panel. The training dataset is generated using finite element simulations and verified through experimental strain measurements. The trained classification model maps the panel geometry to one of three formability classes: safe, marginal, and cracked. Strain values below the FLC are classified as safe, those within a 5% margin of the FLC as marginal, and those above it as cracked. The statistical accuracy and performance of the classifier are quantified using the confusion matrix and the multiclass receiver operating characteristic (ROC) curve, respectively. To demonstrate the practical viability of the proposed methodology, the punch radius of the screw holes is optimized using Brent's method in a Java environment; the optimization completes in only 3.11 s. The results demonstrate that the formability of a new design can be improved based on the predictions of the machine learning model.
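The three-class labeling rule summarized above can be sketched as a small function. The interpretation of the 5% margin as a band directly below the FLC, and the numeric values used, are illustrative assumptions, not values from the paper:

```python
# Sketch of the FLC-based labeling rule described in the abstract.
# flc_major: forming-limit major strain at the measured minor strain.
# Treating the 5% margin as a band below the FLC is an assumption.

def classify_formability(major_strain, flc_major, margin=0.05):
    if major_strain > flc_major:
        return "cracked"                       # strain above the FLC
    if major_strain >= flc_major * (1.0 - margin):
        return "marginal"                      # within the 5% margin
    return "safe"                              # safely below the FLC

print(classify_formability(0.20, 0.30))  # safe
print(classify_formability(0.29, 0.30))  # marginal
print(classify_formability(0.35, 0.30))  # cracked
```

In the paper these labels are assigned to finite element strain results to build the training set for the classifier.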
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2023R1A2C2005661). The authors are grateful for the support. This work was also partially supported by the Technology Development Program (S3288770) funded by the Ministry of SMEs and Startups (MSS, Korea).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Appendix 1
A. Tree Boosting
(I) Decision Tree
Decision tree learning is a supervised learning approach that uses if–else (true/false) feature questions to predict a category in a classification problem or a continuous numeric value in a regression problem. For a given dataset \(\mathcal{D}={\{\left({\mathbf{x}}_{i}, {y}_{i}\right)\}}_{i=1}^{n}\), a tree model is given by,
$$f\left(\mathbf{x}\right)=\sum_{j=1}^{T}{w}_{j}\,\mathrm{I}\left(\mathbf{x}\in {R}_{j}\right),$$
where \({w}_{j}\) is the score (prediction) of the \(j\)-th leaf, referred to as its weight over the region \({R}_{j}\); \(\mathrm{I}\left(\cdot \right)\) is the indicator function, so that \({I}_{j}=\{i \mid {\mathbf{x}}_{i}\in {R}_{j}\}\) is the set of indices of data points assigned to the \(j\)-th leaf; and \(T\) is the total number of leaves in the tree (see Fig. 1).
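As a minimal illustration of the tree model as a piecewise-constant function, the following sketch evaluates \(f(\mathbf{x})\) for a one-dimensional feature. The regions and leaf weights are hypothetical, chosen only for illustration:

```python
# Sketch of the tree model f(x) = sum_j w_j * I(x in R_j) in one dimension.
# Regions and leaf weights below are hypothetical, not from the paper.

def tree_predict(x, regions, weights):
    """Return the weight of the leaf whose region contains x."""
    for (lo, hi), w in zip(regions, weights):
        if lo <= x < hi:
            return w
    raise ValueError("x falls outside all regions")

# Three leaves (T = 3) partitioning the feature axis [0, 3)
regions = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
weights = [0.5, -0.2, 1.3]  # leaf scores w_j

print(tree_predict(0.5, regions, weights))  # 0.5
print(tree_predict(1.5, regions, weights))  # -0.2
```

Each input falls into exactly one region, so the sum over indicator functions reduces to looking up a single leaf weight.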
(II) Boosting
Boosting is a class of machine learning algorithms that iteratively combines multiple base learners into a single prediction model. The base learners are generally weak, but they provide accurate predictions when combined in an ensemble, hence the term 'boosting.' Taking the base learners to be decision trees, with \(K\) trees the predicted output \({\widehat{y}}_{i}\) corresponding to the input vector \({\mathbf{x}}_{i}\) of the \(i\)-th instance is given by,
$${\widehat{y}}_{i}=\sum_{k=1}^{K}{f}_{k}\left({\mathbf{x}}_{i}\right), \quad {f}_{k}\in \mathcal{F},$$
where \({f}_{k}\) is the output of the \(k\)-th tree and \(\mathcal{F}\) is the set of all possible classification and regression tree (CART) functions. Tree boosting learns by iteratively adding \({f}_{t}\left({\mathbf{x}}_{i}\right)\) to the base learners, such that it minimizes the following objective function,
$${\mathcal{L}}^{(t)}=\sum_{i=1}^{D}l\left({y}_{i},{\widehat{y}}_{i}^{(t)}\right),$$
where
$${\widehat{y}}_{i}^{(t)}={\widehat{y}}_{i}^{(t-1)}+{f}_{t}\left({\mathbf{x}}_{i}\right),$$
\(D\) is the size of the training set, \({\widehat{y}}_{i}^{(t)}\) and \({y}_{i}\) are the predicted and target values at the \(t\)-th iteration, respectively, and \(l\) is a differentiable convex loss function that measures the difference between \({\widehat{y}}_{i}^{(t)}\) and \({y}_{i}\). In general settings, boosting utilizes a second-order Taylor approximation of the loss function, and the objective function is rewritten as follows,
$${\mathcal{L}}^{(t)}\simeq \sum_{i=1}^{D}\left[l\left({y}_{i},{\widehat{y}}_{i}^{(t-1)}\right)+{g}_{i}{f}_{t}\left({\mathbf{x}}_{i}\right)+\frac{1}{2}{h}_{i}{f}_{t}^{2}\left({\mathbf{x}}_{i}\right)\right],$$
where \({g}_{i}\) and \({h}_{i}\) are the gradient and Hessian of the loss, respectively, defined as,
$${g}_{i}={\partial }_{{\widehat{y}}_{i}^{(t-1)}}l\left({y}_{i},{\widehat{y}}_{i}^{(t-1)}\right), \quad {h}_{i}={\partial }_{{\widehat{y}}_{i}^{(t-1)}}^{2}l\left({y}_{i},{\widehat{y}}_{i}^{(t-1)}\right).$$
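For the common squared-error loss \(l(y,\widehat{y})=\tfrac{1}{2}(y-\widehat{y})^{2}\), these definitions reduce to \(g_i=\widehat{y}_i-y_i\) and \(h_i=1\). A minimal sketch under that assumed loss (toy values, not from the paper):

```python
# Sketch: per-instance gradients g_i and Hessians h_i, assuming the
# squared-error loss l(y, yhat) = 0.5 * (y - yhat)**2 for illustration.

def grad_hess_squared_error(y, y_hat):
    g = [yh - yt for yh, yt in zip(y_hat, y)]  # dl/dyhat   = yhat - y
    h = [1.0 for _ in y]                       # d2l/dyhat2 = 1
    return g, h

y     = [1.0, 0.0, 2.0]   # target values
y_hat = [0.5, 0.5, 1.0]   # predictions from iteration t-1
g, h = grad_hess_squared_error(y, y_hat)
print(g)  # [-0.5, 0.5, -1.0]
print(h)  # [1.0, 1.0, 1.0]
```

Other differentiable convex losses (e.g. logistic loss for classification, as used in this study's classifier) plug into the same framework with different \(g_i\) and \(h_i\).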
(III) XGBoost Method
XGBoost employs Newton tree boosting to fit additive tree models. In Newton boosting, the base learners are tree models, and for a given dataset with \(n\) examples \(\mathcal{D}={\{\left({\mathbf{x}}_{i}, {y}_{i}\right)\}}_{i=1}^{n}\), a tree model from Equation 6 is given by,
$${f}_{t}\left(\mathbf{x}\right)=\sum_{j=1}^{T}{w}_{j}\,\mathrm{I}\left(\mathbf{x}\in {R}_{j}\right).$$
XGBoost learns by iteratively adding \({f}_{t}\left({\mathbf{x}}_{i}\right)\) to the base learners, such that it minimizes the following objective function (as described in Equation 10),
$${\widetilde{\mathcal{L}}}^{(t)}=\sum_{j=1}^{T}\left[{G}_{j}{w}_{j}+\frac{1}{2}{H}_{j}{w}_{j}^{2}\right]$$(13)
where
$${G}_{j}=\sum_{i\in {I}_{j}}{g}_{i}, \quad {H}_{j}=\sum_{i\in {I}_{j}}{h}_{i},$$
and \({I}_{j}\) denotes the set of indices of data points assigned to the \(j\)-th leaf.
At each iteration, XGBoost learns the leaf weights and tree structure through the following steps:
1. For each leaf \(j\), the optimization problem in Equation 13 is quadratic in \({w}_{j}\). Hence, the optimal leaf weight (prediction) \({w}_{j}^{*}\) for a proposed (fixed) tree structure is obtained by setting \(d{\widetilde{\mathcal{L}}}^{(t)}/d{w}_{j}=0\), which yields,
$${w}_{j}^{*}=-\frac{{G}_{j}}{{H}_{j}},\quad j=1,\dots ,T.$$(14)
2. Learning the tree structure involves searching for splits of the internal nodes. To compute the optimal split, the objective reduction following Equations 13 and 14 is,
$${obj}^{*}=-\frac{1}{2}\sum_{j=1}^{T}\frac{{G}_{j}^{2}}{{H}_{j}}$$(15)
Equivalently, the split is chosen that maximizes the gain given by,
$${\text{Gain}}=\frac{1}{2}\left[\frac{{G}_{L}^{2}}{{H}_{L}}+\frac{{G}_{R}^{2}}{{H}_{R}}-\frac{{G}^{2}}{H}\right]$$(16)
Here, the terms with subscripts \(L\) and \(R\) correspond to the left and right leaf scores, respectively, and the last term is the score of the original node. Nodes with negative gain are pruned out in bottom-up order.
3. The final leaf weights \({\widehat{w}}_{j}\) are computed from Equation 14 for the learned tree structure.
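The steps above can be sketched numerically with Equations 14–16 in their unregularized form as written here. The gradient/Hessian statistics and the candidate split below are toy values for illustration only:

```python
# Sketch of Equations 14-16: optimal leaf weight w* = -G/H and split gain.
# Toy gradient/Hessian statistics; not values from the paper.

def leaf_weight(G, H):
    return -G / H                                       # Equation 14

def split_gain(GL, HL, GR, HR):
    G, H = GL + GR, HL + HR                             # parent statistics
    return 0.5 * (GL**2 / HL + GR**2 / HR - G**2 / H)   # Equation 16

# Candidate split sending two instances left and one right
gL, hL = [-0.5, -1.5], [1.0, 1.0]
gR, hR = [0.5], [1.0]
GL, HL = sum(gL), sum(hL)   # G_L = -2.0, H_L = 2.0
GR, HR = sum(gR), sum(hR)   # G_R =  0.5, H_R = 1.0

print(leaf_weight(GL, HL))            # 1.0
print(leaf_weight(GR, HR))            # -0.5
print(split_gain(GL, HL, GR, HR))     # 0.75 (> 0, so the split is kept)
```

A split with negative gain would not reduce the objective, which is exactly the pruning criterion of step 2.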
About this article
Cite this article
Fazily, P., Cho, D., Choi, H. et al. Formability classifier for a TV back panel part with machine learning. Int J Mater Form 16, 70 (2023). https://doi.org/10.1007/s12289-023-01791-y