- Title
- On the Evaluation of Neural Code Summarization
- Creator
- Shi, Ensheng; Wang, Yanlin; Du, Lun; Chen, Junjie; Han, Shi; Zhang, Hongyu; Zhang, Dongmei; Sun, Hongbin
- Relation
- ICSE '22: Proceedings of the 44th International Conference on Software Engineering (Pittsburgh, PA, 21-29 May 2022), p. 1597-1608
- Publisher Link
- http://dx.doi.org/10.1145/3510003.3510060
- Publisher
- Institute of Electrical and Electronics Engineers (IEEE)
- Resource Type
- conference paper
- Date
- 2022
- Description
- Source code summaries are important for program comprehension and maintenance. However, many programs have missing, outdated, or mismatched summaries. Recently, deep learning techniques have been exploited to automatically generate summaries for given code snippets. To gain a deeper understanding of how far we are from solving this problem and to provide suggestions for future research, in this paper we conduct a systematic and in-depth analysis of 5 state-of-the-art neural code summarization models on 6 widely used BLEU variants, 4 pre-processing operations and their combinations, and 3 widely used datasets. The evaluation results show that several important factors have a great influence on model evaluation, especially on the performance of models and the ranking among them, yet these factors are easily overlooked. Specifically, (1) the BLEU metric widely used in existing work on evaluating code summarization models has many variants, and ignoring the differences among these variants can greatly affect the validity of the claimed results. In addition, we discover and resolve an important, previously unknown bug in the BLEU calculation of a commonly used software package. Furthermore, we conduct human evaluations and find that the metric BLEU-DC correlates best with human perception; (2) code pre-processing choices can have a large (from -18% to +25%) impact on summarization performance and should not be neglected; we also explore aggregating pre-processing combinations to boost model performance; (3) some important characteristics of datasets (corpus size, data splitting method, and duplication ratio) have a significant impact on model evaluation. Based on the experimental results, we give actionable suggestions for evaluating code summarization models and choosing the best method in different scenarios. We also build a shared code summarization toolbox to facilitate future research.
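- The BLEU-variant issue highlighted above can be illustrated with a minimal sketch (not taken from the paper; the reference and hypothesis tokens below are made up): scoring the same generated summary against the same reference with different sentence-level smoothing settings in NLTK produces noticeably different BLEU values, which is one reason the choice of variant matters when comparing models.

```python
# Minimal, self-contained sketch (illustrative only, not the paper's toolbox):
# sentence-level BLEU depends heavily on the smoothing variant used.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["return", "the", "sum", "of", "two", "numbers"]]   # ground-truth summary tokens
hypothesis = ["add", "two", "numbers", "together"]               # model-generated summary tokens

sf = SmoothingFunction()
for label, method in [("method0 (no smoothing)", sf.method0),
                      ("method1", sf.method1),
                      ("method4", sf.method4)]:
    # method0 emits a zero-count warning here because the hypothesis
    # has no matching 3-grams or 4-grams with the reference.
    score = sentence_bleu(reference, hypothesis, smoothing_function=method)
    print(f"{label}: {score:.4f}")

# The three scores differ because each smoothing method handles
# zero higher-order n-gram counts differently.
```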
- Subject
- code summarization; deep learning; empirical study; evaluation
- Identifier
- http://hdl.handle.net/1959.13/1465586
- Identifier
- uon:47307
- Identifier
- ISBN: 978-1-4503-9221-1
- Identifier
- ISSN: 0270-5257
- Language
- eng
- Reviewed