Evaluating Large Language Models for the Generation of Unit Tests with Equivalence Partitions and Boundary Values

cic.institucionOrigen: Laboratorio de Investigación y Formación en Informática Avanzada (LIFIA)
cic.isFulltext: Yes
cic.isPeerReviewed: No
cic.lugarDesarrollo: Laboratorio de Investigación y Formación en Informática Avanzada (LIFIA)
cic.parentType: Conference object
cic.version: Accepted version
dc.date.accessioned: 2026-03-20T12:04:08Z
dc.date.available: 2026-03-20T12:04:08Z
dc.identifier.uri: https://digital.cic.gba.gob.ar/handle/11746/12673
dc.title [en]: Evaluating Large Language Models for the Generation of Unit Tests with Equivalence Partitions and Boundary Values
dc.type: Conference paper
dcterms.abstract [en]: The design and implementation of unit tests is a complex task that many programmers neglect. This research evaluates the potential of Large Language Models (LLMs) to automatically generate test cases, comparing them with manually written tests. An optimized prompt was developed that integrates code and requirements, covering critical cases such as equivalence partitions and boundary values. The strengths and weaknesses of LLMs versus trained programmers were compared through quantitative metrics and manual qualitative analysis. The results show that the effectiveness of LLMs depends on well-designed prompts, robust implementation, and precise requirements. Although flexible and promising, LLMs still require human supervision. This work highlights the importance of manual qualitative analysis as an essential complement to automation in unit test evaluation.
dcterms.creator.author: Rodríguez, Martín
dcterms.creator.author: Rossi, Gustavo Héctor
dcterms.creator.author: Fernández, Alejandro
dcterms.identifier.other: arXiv:2505.09830
dcterms.identifier.url: https://arxiv.org/abs/2505.09830
dcterms.isPartOf.series: 13th Conference on Cloud Computing, Big Data & Emerging Topics (JCC-BD&ET 2025) (La Plata, June 24-26, 2025)
dcterms.issued: 2025
dcterms.language: English
dcterms.license: Attribution-NonCommercial-NoDerivatives 4.0 International (BY-NC-ND 4.0)
dcterms.subject [en]: Evaluation
dcterms.subject [en]: Unit Testing
dcterms.subject [es]: LLM
dcterms.subject.materia: Computer and Information Sciences
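
As an illustration of the testing technique named in the abstract (equivalence partitions and boundary values), the following is a minimal, hypothetical Python sketch. The is_valid_score function and its 0-100 requirement are invented for this example and do not come from the paper; they only show what tests covering each partition and each boundary look like.

# Hypothetical example (not from the paper): unit tests covering
# equivalence partitions and boundary values for a score validator.
import unittest


def is_valid_score(score: int) -> bool:
    """Assumed requirement: a score is valid iff 0 <= score <= 100."""
    return 0 <= score <= 100


class TestIsValidScore(unittest.TestCase):
    # Equivalence partitions: below range, in range, above range.
    def test_partition_below_range(self):
        self.assertFalse(is_valid_score(-50))

    def test_partition_in_range(self):
        self.assertTrue(is_valid_score(42))

    def test_partition_above_range(self):
        self.assertFalse(is_valid_score(150))

    # Boundary values: just outside and exactly on each boundary.
    def test_lower_boundary(self):
        self.assertFalse(is_valid_score(-1))
        self.assertTrue(is_valid_score(0))

    def test_upper_boundary(self):
        self.assertTrue(is_valid_score(100))
        self.assertFalse(is_valid_score(101))


if __name__ == "__main__":
    unittest.main()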

Files

Original bundle

Name: Evaluating Large Language Models.pdf-PDFA.pdf
Size: 277.24 KB
Format: Adobe Portable Document Format
Description: Full document

License bundle

Name: license.txt
Size: 3.46 KB
Description: Item-specific license agreed upon to submission