The GitHub Blog · December 18, 2024
How to generate unit tests with GitHub Copilot: Tips and examples

This article explores practical approaches to unit testing with GitHub Copilot. It emphasizes the importance of unit tests in software development and explains how GitHub Copilot uses AI to help developers automatically generate test cases, saving time and improving code quality. The article also covers the several ways Copilot can generate tests, including the IDE's right-click menu, slash commands, and chat. In addition, it shares best practices for generating tests with Copilot, such as clarifying test goals, providing code context, carefully reviewing suggestions, and using test coverage tools to ensure tests are effective and comprehensive. In short, GitHub Copilot gives developers an efficient unit testing workflow that helps them write more robust, reliable code.

💡 Why unit tests matter: Unit tests are fundamental to building reliable, maintainable software. By testing individual units such as functions or classes, you can ensure each component works as expected, improve the integrity of the codebase, simplify debugging, and foster team collaboration.

🤖 AI-assisted test generation with GitHub Copilot: GitHub Copilot uses generative AI to suggest tests in real time based on your code's context or chat queries. It can cover a range of scenarios, such as edge cases, common inputs, and failure modes, leading to better code coverage and more resilient applications.

✅ Best practices for generating tests with Copilot: Highlight the code you want to test, state your testing goals clearly, provide code context, review suggestions carefully, and iterate. You can also ask Copilot whether any tests are missing, and use test coverage tools for a thorough assessment.

Developers writing enough unit tests? Sure, and my code never has bugs on a Friday afternoon.

Whether you’re an early-career developer or a seasoned professional, writing tests—or writing enough tests—is a challenge. That’s especially true with unit tests, which help developers catch bugs early, validate code, aid with refactoring, improve code quality, and play a core role in Test-Driven Development (TDD).

All of this to say, you can save a lot of time (and write better, more robust code) by automating your test generation—and AI coding tools are making that easier and quicker than ever.

GitHub Copilot, GitHub’s AI-powered coding assistant, helps generate test cases on the fly and can save you time. I’ll be honest: I heavily rely on GitHub Copilot to generate tests in my own workflows—but I still manually write a number of them to help formulate my thoughts.

In this article, I’ll walk you through why unit tests are essential, how GitHub Copilot can assist with generating unit tests, and practical tips for getting the most from Copilot’s test generation capabilities. We’ll also dive into specific examples across languages and frameworks so you can get started with using Copilot to generate unit tests.

Oh, and if you’re curious, I used Anthropic’s Claude model to generate the unit test examples you’ll find later in this article (in case you missed it, GitHub Copilot offers support for Anthropic’s Claude, Google’s Gemini, and OpenAI’s o1 models).

Let’s jump in.

Oh, and if you’re a visual learner, we have you covered.

Why unit tests matter (and what differentiates good unit tests from bad ones)

If you already know all of this, feel free to skip past this section—but just in case you don’t, unit tests are fundamental to creating reliable, maintainable software. When you’re writing code, testing individual units, such as functions or classes, can help you ensure each component works as expected. This improves the codebase’s integrity, simplifies debugging, and fosters collaboration, as other developers can understand and trust the code.

The challenge, however, is that writing unit tests is often time-consuming—and it’s all too easy to write unit tests that deliver less value than they should. Simply writing tests because you’re told to or because you’re trying to check off a box doesn’t make them useful; you need to understand their purpose and ensure they add value.

You should always start with the purpose of your unit tests, the audience they serve, and the role they’ll play in your codebase.

How GitHub Copilot helps generate unit tests

GitHub Copilot uses generative AI to provide real-time code suggestions in your IDE and via chat-based functions in your IDE and across your GitHub projects.

Based on the context in your code or chat-based queries (or even slash commands you use after highlighting specific code blocks), it can suggest relevant unit tests, covering typical scenarios like edge cases, common inputs, and failure modes. This ability to anticipate and generate test code can lead to better code coverage and more resilient applications.

So, how does this work in practice? Imagine you’re testing a piece of business logic—like validating your inputs with a regular expression. Writing unit tests can feel (and often is) repetitive and time-consuming because you need to test various edge cases to ensure the code works as expected.

Instead of manually writing every test case, you can highlight your code or logic block and let Copilot suggest unit tests that cover a range of inputs and edge cases.
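To make this concrete, here’s a small, hypothetical example of the kind of regex-based validation logic you might highlight before asking Copilot for tests (validate_username and its rules are illustrative, not from the original post):

import re

# Hypothetical business logic: usernames must be 3-16 characters long,
# start with a lowercase letter, and contain only lowercase letters,
# digits, or underscores.
USERNAME_PATTERN = re.compile(r"[a-z][a-z0-9_]{2,15}")

def validate_username(username: str) -> bool:
    # fullmatch ensures the entire string matches the pattern,
    # not just a prefix or substring.
    return USERNAME_PATTERN.fullmatch(username) is not None

Highlighting a function like this and running /tests should nudge Copilot toward edge cases such as empty strings, boundary lengths (2, 3, 16, and 17 characters), and disallowed characters.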

There are a number of ways to generate unit tests with GitHub Copilot. For instance, you can select the code you want to test, right-click in your IDE, and choose Copilot > Generate Tests. You can also use the /tests slash command in your IDE (after highlighting the code or logic block you want to test). And then you always have GitHub Copilot Chat—both in your IDE and across your online GitHub experience—which you can prompt to find existing tests or to generate new ones.

When should you avoid using GitHub Copilot to generate unit tests?

I tend to write tests manually in the same scenarios where I write code manually, because I know what I want, so I just do it and get it done. But sometimes I need to formulate my thoughts, and the process of manually writing code can help me determine what I’m trying to do and how to do it. From there, I ask GitHub Copilot to expand what I’ve already built.

Key benefits of using GitHub Copilot to generate unit tests

Even if I don’t always use GitHub Copilot to write my tests, I rely on it heavily when it comes to unit tests, and I’ve found it offers some significant benefits.

Best practices for using GitHub Copilot to generate unit tests

During my time using GitHub Copilot for test generation, I’ve come away with a number of personal best practices: highlight the code you want to test, state your testing goals clearly, provide code context, review every suggestion carefully, and iterate on your prompts. It also helps to ask Copilot whether any tests are missing, and to lean on test coverage tools for a full assessment.

Generating unit tests with GitHub Copilot: Practical examples

Now, let’s look at some practical examples in Python and TypeScript—two of the biggest languages used on GitHub today.

Example 1: Creating unit tests in Python with GitHub Copilot

Suppose we have a Python function that validates that a price is greater than 0 and less than or equal to 1000. Let’s use Copilot to generate test cases for it.

def validate_price(price: float):
    if price <= 0:
        raise ValueError("Price must be greater than 0")
    if price > 1000:
        raise ValueError("Price must be less than or equal to 1000")

With Copilot enabled, you’d want to enter the following slash command and prompt in Copilot Chat: /tests create tests for validate_price. Check the edge cases as well.

From there, you would see an output like this (note: since LLMs are nondeterministic, your output may look a bit different):

import unittest
from generate_unit_tests import validate_price

class TestValidatePrice(unittest.TestCase):
    def test_price_zero(self):
        with self.assertRaises(ValueError) as context:
            validate_price(0)
        self.assertEqual(str(context.exception), "Price must be greater than 0")

    def test_price_negative(self):
        with self.assertRaises(ValueError) as context:
            validate_price(-10)
        self.assertEqual(str(context.exception), "Price must be greater than 0")

    def test_price_above_limit(self):
        with self.assertRaises(ValueError) as context:
            validate_price(1500)
        self.assertEqual(str(context.exception), "Price must be less than or equal to 1000")

    def test_price_edge_case_zero(self):
        with self.assertRaises(ValueError) as context:
            validate_price(0)
        self.assertEqual(str(context.exception), "Price must be greater than 0")

    def test_price_edge_case_max(self):
        try:
            validate_price(1000)
        except ValueError:
            self.fail("validate_price() raised ValueError unexpectedly!")

if __name__ == '__main__':
    unittest.main()

Here, GitHub Copilot has generated unit tests to validate the behavior of the validate_price function. Let’s break down what each test does:

test_price_zero: Confirms that calling validate_price(0) raises a ValueError with the message “Price must be greater than 0”. This tests the lower bound condition for an invalid price.

test_price_negative: Confirms that calling validate_price(-10) raises a ValueError with the same message, ensuring negative values are handled correctly.

test_price_above_limit: Validates that calling validate_price(1500) raises a ValueError with the message “Price must be less than or equal to 1000”, testing the upper bound for invalid prices.

test_price_edge_case_zero: A duplicate of test_price_zero (it checks the same condition). It is redundant and can be removed.

test_price_edge_case_max: Verifies that calling validate_price(1000) does not raise an error. If an exception is raised, the test fails.

While these tests work, they can also be improved with a follow-up prompt to remove redundancies (such as test_price_edge_case_zero, which duplicates test_price_zero) and to add a test validating prices within the acceptable range (for example, validate_price(500)). Also remember: GitHub Copilot is powered by nondeterministic LLMs that don’t always produce the same results, so you should always review code suggested by Copilot—and re-prompt Copilot to improve that code.
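For illustration, a deduplicated suite after such a follow-up prompt might add an in-range case like this to the TestValidatePrice class shown above (a sketch of plausible output, not verbatim from Copilot):

def test_price_valid_range(self):
    # A price inside the accepted range (0, 1000] should not raise.
    try:
        validate_price(500)
    except ValueError:
        self.fail("validate_price() raised ValueError unexpectedly!")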

Example 2: Creating unit tests in Jest for TypeScript with GitHub Copilot

Now, let’s create the same function in TypeScript to check whether a given numeric price falls within a specific range. It should look something like this:

function validatePrice(price: number) {
  if (price < 0) {
    throw new Error('Price must be greater than 0');
  }
  if (price > 1000) {
    throw new Error('Price must be less than 1000');
  }
}

You’d then enter the same slash command and prompt in Copilot Chat that we used above: /tests create tests for validate_price. Check the edge cases as well.

The generated unit tests would look something like this (since LLMs are nondeterministic, your output may look a bit different):

describe('validatePrice', () => {
  // Valid cases
  it('should accept valid price in middle range', () => {
    expect(() => validatePrice(500)).not.toThrow();
  });

  it('should accept price of 0', () => {
    expect(() => validatePrice(0)).not.toThrow();
  });

  it('should accept price of 1000', () => {
    expect(() => validatePrice(1000)).not.toThrow();
  });

  // Error cases
  it('should throw error for negative price', () => {
    expect(() => validatePrice(-1)).toThrow('Price must be greater than 0');
  });

  it('should throw error for price over 1000', () => {
    expect(() => validatePrice(1001)).toThrow('Price must be less than 1000');
  });

  // Edge cases
  it('should accept small decimal price', () => {
    expect(() => validatePrice(0.01)).not.toThrow();
  });

  it('should accept price close to maximum', () => {
    expect(() => validatePrice(999.99)).not.toThrow();
  });

  it('should throw error for NaN', () => {
    expect(() => validatePrice(NaN)).toThrow('Price must be greater than 0');
  });

  it('should throw error for Infinity', () => {
    expect(() => validatePrice(Infinity)).toThrow('Price must be less than 1000');
  });
});

You can see Copilot generated unit tests in Jest that mirror the Python unit tests above—they validate expected cases, error cases, and edge cases for the validatePrice function. One caveat worth noting: the NaN test will actually fail against the implementation above, because comparisons with NaN always evaluate to false in JavaScript, so validatePrice(NaN) never throws. That’s exactly the kind of issue you should catch when reviewing generated tests.

Take this with you

Unit testing is a vital part of software development, but it can be tedious and time-consuming. GitHub Copilot automates much of this process, making it easier to generate meaningful, comprehensive tests without the grunt work. Whether you’re validating complex business logic, working in a TDD workflow, or expanding an existing test suite, Copilot can be a powerful ally.

The key to getting the most out of Copilot lies in clear communication and iteration. Be specific in your prompts, highlight the code you want tested, and don’t hesitate to refine your prompt (or Copilot’s output). Use tools like slash commands or Copilot Chat to provide broader context or request additional test cases. And while Copilot can speed up the process, always make sure to review and validate any generated tests to ensure accuracy. In the meantime, happy testing!

Learn more about generating unit tests with GitHub Copilot >

Explore everything about GitHub Copilot >
