arXiv:2411.08254v2 Announce Type: replace-cross Abstract: Modern Large Language Model (LLM)-based programming agents often rely on test execution feedback to refine their generated code, with the tests themselves synthetically generated by LLMs. However, LLMs may produce invalid or hallucinated test cases, which can mislead feedback loops and degrade the agents' ability to refine and improve code. This paper introduces VALTEST, a novel framework that leverages semantic entropy to automatically validate test cases generated by LLMs. By analyzing the semantic structure of test cases and computing entropy-based uncertainty measures, VALTEST trains a machine learning model to classify test cases as valid or invalid and filters out the invalid ones. Experiments on multiple benchmark datasets and various LLMs show that VALTEST not only boosts test validity by up to 29% but also improves code generation performance, as evidenced by significant increases in pass@1 scores. Our extensive experiments also reveal that semantic entropy is a reliable indicator for distinguishing valid from invalid test cases, providing a robust solution for improving the correctness of LLM-generated test cases used in software testing and code generation.
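
To make the core idea concrete, the sketch below shows one way entropy-based test-case filtering could look in practice. It is a simplified illustration under stated assumptions, not the paper's implementation: semantic equivalence is approximated here by exact string matching, the `sample_fn`, `n_samples`, and `threshold` names are hypothetical, and VALTEST itself trains a machine learning classifier on entropy-derived features rather than applying a fixed cutoff.

```python
# Minimal sketch of entropy-based test-case filtering. This is NOT the
# authors' implementation: semantic clustering is approximated by exact
# string equality, and the fixed threshold stands in for VALTEST's
# trained classifier. sample_fn is an assumed, user-supplied helper.
from collections import Counter
import math


def semantic_entropy(sampled_outputs):
    """Entropy over clusters of equivalent sampled outputs.

    Outputs are clustered by exact string match here; the paper's notion
    of semantic equivalence is richer than this approximation.
    """
    counts = Counter(sampled_outputs)
    total = len(sampled_outputs)
    return -sum((c / total) * math.log(c / total) for c in counts.values())


def filter_test_cases(test_cases, sample_fn, n_samples=10, threshold=1.0):
    """Keep test cases whose sampled expected outputs have low entropy.

    sample_fn(test_input) -> one LLM-sampled expected output (assumption).
    Low entropy means the samples mostly agree, which we treat as a proxy
    for the test case being valid.
    """
    kept = []
    for case in test_cases:
        samples = [sample_fn(case["input"]) for _ in range(n_samples)]
        if semantic_entropy(samples) <= threshold:
            kept.append(case)
    return kept
```

In this sketch, a test case whose repeated samples all agree has entropy 0 and is kept, while one whose samples scatter across many different outputs has high entropy and is discarded; replacing the fixed threshold with a classifier trained on such entropy features mirrors the approach described in the abstract.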