Learning from Very Little Data: On the Value of Landscape Analysis for Predicting Software Project Health
ACM Transactions on Software Engineering and Methodology (IF 4.4) | Pub Date: 2024-03-14 | DOI: 10.1145/3630252
Andre Lustosa, Tim Menzies

When data is scarce, software analytics can make many mistakes. For example, consider learning predictors for open source project health (e.g., the number of closed pull requests in 12 months' time). The training data for this task may be very small (e.g., 5 years of data, collected monthly, yields just 60 rows of training data). Models generated from such tiny datasets can make many prediction errors.

Those errors can be tamed by a landscape analysis that selects better learner control parameters. Our niSNEAK tool (a) clusters the data to find the general landscape of the hyperparameters, then (b) explores a few representatives from each part of that landscape. niSNEAK is both faster and more effective than prior state-of-the-art hyperparameter optimization algorithms (e.g., FLASH, HYPEROPT, OPTUNA).
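To illustrate the two-step idea above, here is a minimal sketch of landscape-guided hyperparameter selection: sample many candidate configurations, group them into coarse regions of the landscape, then evaluate only one representative per region. This is an illustrative toy, not the authors' niSNEAK implementation; the configuration space, bucketing scheme, and `toy_error` loss are all invented for the example.

```python
import random

random.seed(0)

def toy_error(cfg):
    # Stand-in for "train a learner with cfg, measure prediction error".
    lr, depth = cfg
    return abs(lr - 0.03) * 10 + abs(depth - 4) * 0.05

# Sample many candidate configurations (the hyperparameter "landscape").
configs = [(random.uniform(0.001, 0.1), random.randint(1, 10))
           for _ in range(200)]

# (a) Map the landscape by grouping configurations into coarse cells.
def bucket(cfg):
    lr, depth = cfg
    return (int(lr * 100) // 2, (depth - 1) // 3)

clusters = {}
for cfg in configs:
    clusters.setdefault(bucket(cfg), []).append(cfg)

# (b) Evaluate only one representative per cell, then keep the best.
representatives = [members[0] for members in clusters.values()]
best = min(representatives, key=toy_error)
print(f"evaluated {len(representatives)} of {len(configs)} configs")
```

The payoff is in the last lines: instead of scoring all 200 candidates, only a handful of representatives (one per occupied cell) are ever evaluated, which is why a landscape-first strategy can afford expensive per-configuration training runs.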

The configurations found by niSNEAK have far lower error than those found by other methods. For example, for project health indicators such as C = number of commits, I = number of closed issues, and R = number of closed pull requests, niSNEAK's 12-month prediction errors are {I=0%, R=33%, C=47%}, whereas other methods have far larger errors of {I=61%, R=119%, C=149%}. We conjecture that niSNEAK works so well because it finds the most informative regions of the hyperparameter space, then jumps to those regions. Other methods (which do not reflect over the landscape) can waste time exploring less informative options.

Based on the preceding, we recommend landscape analytics (e.g., niSNEAK), especially when learning from very small datasets. This article only explores the application of niSNEAK to project health. That said, we see nothing in principle that prevents the application of this technique to a wider range of problems.

To assist other researchers in repeating, improving, or even refuting our results, all our scripts and data are available on GitHub at https://github.com/zxcv123456qwe/niSneak.




Updated: 2024-03-15