Beyond the Prompt with GitHub Copilot Agent Mode
Randy Pagels

Introduction
For many developers, GitHub Copilot started as a way to quickly generate code from short prompts. Then came Chat and Edit modes, expanding its usefulness to answering questions, modifying files, and assisting with focused changes. Now, GitHub Copilot Agent Mode takes things a step further. Instead of only suggesting code, it can carry out multi-step development tasks across your repository, all from natural language instructions.
Think of Agent Mode as a capable teammate that understands your codebase and can perform actions, run commands, and make coordinated changes without you manually switching tools. Whether you need to add new tests, refactor large sections of code, generate reports, or fix bugs, Agent Mode can automate much of the work.
This article focuses on how to use GitHub Copilot Agent Mode to automate common and advanced developer workflows. We will use an aviation-themed example project based on the Wright Brothers era, written in C#, to keep the scenarios realistic and engaging. You will see how Agent Mode can:
- Generate unit tests that match your existing patterns
- Refactor code while keeping dependencies intact
- Produce code coverage reports
- Troubleshoot and fix issues with minimal manual intervention
- Improve frontend user experiences linked to backend logic
The goal is not to show full code implementations, but to walk through the practical steps for using Agent Mode effectively. By the end, you should have a clear picture of when and how to use Agent Mode to speed up your work and keep your focus on solving problems, not on repetitive setup and maintenance tasks.
What Agent Mode Is and How It Works
GitHub Copilot Agent Mode is designed to go beyond simple code suggestions. While Chat mode answers questions and Edit mode applies targeted file changes, Agent Mode can execute multi-step workflows across your project without requiring you to manually break the task into smaller parts.
In practical terms, Agent Mode can:
- Interpret natural language requests in the context of your repository.
- Perform actions across multiple files or directories.
- Use your existing tools and configurations to execute commands or run scripts.
- Chain related tasks together so the outcome matches your overall intent.
Where Chat and Edit often work in isolation, Agent Mode can take a broader view. For example, if you ask it to “add unit tests for the FlightsController and update the CI pipeline to include them,” it can create the tests, adjust the pipeline configuration, and validate the changes from a single instruction.
How It Works
Agent Mode operates by combining several capabilities:
- Repository Context Awareness: It understands your project’s structure, code, and dependencies.
- Task Planning: It breaks down your request into smaller steps that can be executed in sequence.
- Action Execution: It applies code changes, runs commands, or updates configuration files as needed.
- Verification: It can confirm whether the change had the intended effect, for example by running tests or linting code.
When to Use Each Mode
Each Copilot mode has its own sweet spot. Choosing the right one depends on how big the change is and how much context you need to give.
- Chat is best when you need a quick answer or explanation. Use it to explore ideas, clarify code, or get examples without changing files.
- Edit is for targeted changes. Highlight a block of code or a file and ask for a refactor, rewrite, or improvement. It works well when you already know the scope.
- Agent is for larger, multi-step workflows. Use it when you want to add new features, generate tests, fix bugs, or coordinate changes across multiple files and tools. It understands repository context and can run checks until the job is complete.
Quick Decision Guide
- Exploring or learning the code → Chat Mode
- Clear on what to change and where → Edit Mode
- Automating multi-step work → Agent Mode
In the rest of this article, we will focus on Agent Mode as the primary driver of automation. The key difference is that instead of issuing many small, separate commands, you can express your intent in one natural language request and let the Agent handle the orchestration.
Automation in Action: Wright Brothers Examples
Agent Mode is easiest to understand when you see it in practice. The following examples use the Wright Brothers API and frontend as a backdrop to illustrate how automation supports everyday development work. Each scenario shows a different kind of task, such as adding tests, refactoring services, fixing bugs, or improving the user interface. In each case the agent not only makes the change but also verifies the result before handing the work back to you.
Example 1: Expanding Test Coverage
The Wright Brothers API included a FlightsController with methods that had little or no automated testing. Normally this would mean writing boilerplate test files by hand, copying patterns from other parts of the project, and slowly building up coverage. Instead, Agent Mode stepped in to handle the process end to end.
Example Prompt
Goal: Add focused unit tests for WrightBrothers.API FlightsController and ensure they pass.
Context and constraints:
- Match the patterns and assertions used in PilotsControllerTests.
- Target methods: GetFlights, ScheduleFlight, CancelFlight.
- Place new tests in the WrightBrothers.Tests project.
- If tests fail, update either the tests or the FlightsController as appropriate, not both at the same time unless necessary.
- Re-run tests after each change.
- Stop when tests for these targets are green and provide a short summary of what changed.
With one request, the agent scanned the existing PilotsControllerTests to learn the test conventions, created a new FlightsControllerTests file, and filled it with cases for the targeted methods. It then ran the test suite, identified failures, applied minimal fixes, and re-ran until everything passed.
The final output was a set of consistent, working tests that matched the project’s style and expanded the safety net for future changes. Agent Mode not only wrote the tests but also verified that they worked, giving the team confidence in the result without slowing down development.
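As an illustration, one of the generated tests might look like the following sketch. The Moq-based repository stub, the IFlightRepository interface, and the controller’s constructor signature are assumptions for illustration, not details taken from the project:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Moq;
using Xunit;

public class FlightsControllerTests
{
    private readonly Mock<IFlightRepository> _repository = new();

    [Fact]
    public void GetFlights_ReturnsOkWithAllFlights()
    {
        // Arrange: stub the repository with two known flights (hypothetical model)
        _repository.Setup(r => r.GetFlights())
                   .Returns(new List<Flight> { new() { Id = 1 }, new() { Id = 2 } });
        var controller = new FlightsController(_repository.Object);

        // Act
        var result = controller.GetFlights();

        // Assert: the response wraps the full flight list
        var ok = Assert.IsType<OkObjectResult>(result);
        var flights = Assert.IsAssignableFrom<IEnumerable<Flight>>(ok.Value);
        Assert.Equal(2, flights.Count());
    }
}
```

Anchoring the prompt on PilotsControllerTests is what keeps a sketch like this consistent with the rest of the suite, down to the assertion helpers and naming.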
Example 2: Introducing a Service Layer
The PlanesController in the Wright Brothers API was doing too much. It contained both controller responsibilities and business logic, which made the code harder to test, maintain, and extend. A common best practice is to introduce a service layer, so the controller stays lean while the service handles the core operations. This is the kind of multi-step refactor that would normally require careful planning and multiple edits across files.
Example Prompt
Goal: Refactor WrightBrothers.API PlanesController to use a dedicated PlaneService with DI, then run tests and fix any failures until green.
Context and constraints:
- Create IPlaneService and PlaneService in WrightBrothers.Core or the appropriate domain layer.
- Move business logic out of PlanesController into PlaneService.
- Keep API contracts and routes unchanged.
- Register PlaneService in Program.cs.
- Preserve logging behavior.
- If tests fail, apply minimal fixes and re-run until passing.
- Summarize changes in markdown with file paths and rationale.
With this request, Agent Mode created an IPlaneService interface and a PlaneService implementation, migrated business logic from the controller into the service, and updated the controller to rely on constructor injection. It then modified Program.cs to register the new service. After refactoring, the agent ran the test suite, fixed small mismatches in the tests, and re-ran until everything passed.
The end result was a cleaner architecture with clear separation of concerns. The public API remained stable, so clients of the system never saw a difference, but the code behind the scenes became easier to work with and more maintainable.
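The resulting shape might resemble this sketch. IPlaneService, PlaneService, and the Program.cs registration come from the prompt; the IPlaneRepository dependency and the method signatures are assumptions for illustration:

```csharp
// Hypothetical shape of the extracted service layer.
public interface IPlaneService
{
    IEnumerable<Plane> GetPlanes();
    Plane? GetPlaneById(int id);
}

public class PlaneService : IPlaneService
{
    private readonly IPlaneRepository _repository;

    public PlaneService(IPlaneRepository repository) => _repository = repository;

    // Business logic moved out of PlanesController lives here.
    public IEnumerable<Plane> GetPlanes() => _repository.GetPlanes();

    public Plane? GetPlaneById(int id) => _repository.GetPlaneById(id);
}

// In Program.cs, the agent registers the service so the controller
// can receive it through constructor injection:
builder.Services.AddScoped<IPlaneService, PlaneService>();
```

Because the controller now depends only on IPlaneService, tests can substitute a mock service, which is part of why the refactor makes the code easier to test.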
Example 3: Measuring Code Coverage
As the Wright Brothers API grew, the test suite started to cover many scenarios, but nobody knew exactly how much of the codebase was truly tested. Without a clear measurement, it was easy to assume the system was safe when gaps still existed. Code coverage reporting solves this problem by showing which methods and classes are exercised by tests, but setting it up and chasing down missing coverage can take a lot of time.
Agent Mode can both configure coverage reporting and automate the process of raising coverage to a specific target.
Example Prompt
Goal: Generate code coverage for WrightBrothers.API and increase coverage to at least 85%, then run all tests until passing.
Context and constraints:
- Use coverlet or the project’s preferred coverage tool integrated with xUnit.
- Output an HTML coverage report in a 'coverage' folder at the solution root.
- Identify untested public methods in all controllers under the /Controllers/ folder, including FlightsController, PlanesController, and PilotsController.
- For uncovered code, create new unit tests in WrightBrothers.Tests matching the existing style.
- If tests fail, fix them or adjust code with minimal changes and re-run until passing.
- Provide a final markdown summary with coverage percentage, files updated, and key tests added.
With this instruction, Agent Mode configured coverage collection, ran the suite, and produced an HTML report. It parsed the results to locate methods that were not covered by tests, then generated new unit tests for those cases. When the newly added tests revealed issues, the agent made small fixes in either the tests or the code and re-ran until everything passed.
By the end of the loop, the project had consistent, working tests and a coverage report showing more than 88% coverage. The team could open the coverage/index.html file to review the details and know exactly which areas were well tested and which still needed attention.
The process showed how Agent Mode can do more than add tests. It can measure, improve, and confirm quality in a structured way that saves time and reduces manual effort.
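Behind the scenes, the coverage workflow the agent sets up typically boils down to a couple of commands. This sketch assumes the test project references the coverlet.collector NuGet package and that the reportgenerator dotnet tool is installed; your project’s preferred tools may differ:

```shell
# Collect coverage while running the xUnit suite (coverlet collector
# writes a Cobertura XML file into the results directory)
dotnet test --collect:"XPlat Code Coverage" --results-directory ./coverage

# Render an HTML report at coverage/index.html
# (install once with: dotnet tool install -g dotnet-reportgenerator-globaltool)
reportgenerator -reports:"coverage/**/coverage.cobertura.xml" \
                -targetdir:"coverage" -reporttypes:Html
```

Once this is wired into CI, the same report the agent parsed locally becomes a shared artifact the whole team can review.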

Example 4: Debugging a Backend Bug
One of the most valuable uses of Agent Mode is troubleshooting bugs that are reported by users. In the Wright Brothers API, a strange issue surfaced: requesting a flight by ID returned the wrong record. For example, navigating to:
http://localhost:1903/flights/2
should have returned the flight with Id = 2. Instead, it showed the flight with Id = 3.
Reviewing the repository revealed the cause. The FlightRepository.GetFlightById method was written incorrectly:
public Flight GetFlightById(int id)
{
    return Flights.ElementAt(id); // ❌ Wrong — ElementAt is zero-based, IDs are not
}
By using ElementAt(id), the method treated the flight ID as a zero-based index. So when you asked for ID 2, it returned the third item in the list.
Example Prompt
There's a bug in the FlightsController. When I go to /flights/2, the app returns the wrong flight — it shows the flight with ID 3 instead of 2.
Please investigate and fix the bug so that the correct flight is returned when requesting by ID.
- Make any updates necessary to resolve this.
- Run the backend project to verify the fix.
- Hint: After starting the backend, navigate to http://localhost:1903/flights/2 to test the fix.
Fix, rebuild, rerun, and verify the endpoint works.
With this request, Agent Mode inspected the FlightsController and FlightRepository, pinpointed the issue, and applied a minimal fix by searching for a matching ID instead of treating it as an index:
public Flight GetFlightById(int id)
{
    return Flights.FirstOrDefault(f => f.Id == id);
}
The agent then rebuilt the project, restarted the backend, and re-ran the test by calling http://localhost:1903/flights/2. This time, the correct flight record was returned. It finished by leaving behind a short summary and, ideally, a regression test so the problem could not reappear unnoticed.
The key takeaway is that Agent Mode did more than just edit code. It reproduced the bug, applied the smallest fix possible, verified the behavior, and documented the result.
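A regression test for this fix could be as small as the following sketch. The parameterless FlightRepository constructor with seed data is an assumption for illustration:

```csharp
using Xunit;

public class FlightRepositoryTests
{
    // Regression guard: flight IDs must be matched by value,
    // never treated as zero-based list indexes.
    [Fact]
    public void GetFlightById_ReturnsFlightWithMatchingId()
    {
        var repository = new FlightRepository(); // assumes seeded demo data

        var flight = repository.GetFlightById(2);

        Assert.NotNull(flight);
        Assert.Equal(2, flight.Id);
    }
}
```

With this in place, any future change that reintroduces index-based lookup fails the suite immediately instead of surfacing as a user report.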
Example 5: Improving the Flight Schedule UX
The Wright Brothers project included a React client that displayed a flight schedule. The grid worked, but it was difficult to use. Rows could not be sorted, paging was not supported, and editing a flight required navigating away from the grid to a separate page. The development team wanted a better experience with sortable columns, paging for long lists, and an inline modal to edit flights directly.
This type of update usually requires changes in both the frontend and backend. The client needs new UI components, while the backend must confirm that the update endpoint exists and is working correctly. Coordinating these pieces by hand can be tedious, but Agent Mode is well suited for this kind of end-to-end workflow.
Example Prompt
Goal: Improve the flight schedule UX in the React client with sorting, paging, and an Edit Flight modal, then run all checks until green.
Context and constraints:
- The frontend lives in /client.
- Use the existing GET /api/flights endpoint for the grid.
- For updates, prefer the existing PUT /api/flights/{id}. If it does not exist, add a minimal UpdateFlight endpoint that matches project conventions.
- Preserve public API contracts if possible.
- Implement client-side sorting and paging using accessible table semantics.
- Add an Edit Flight modal that allows editing Departure, Arrival, PlaneId, and PilotId with validation.
- After changes, run type checks, lint, unit tests, and smoke tests if available. Fix issues with minimal changes and re-run.
- Summarize changes with file paths, validation rules, and any new API surface.
With this prompt, Agent Mode inspected the React grid component, added sorting and paging controls, and introduced a new modal component for editing flights inline. On the backend, it checked for the existence of PUT /api/flights/{id}. If the endpoint was missing, it created a minimal version consistent with the rest of the API, using the same DTO patterns.
The agent then ran type checks, linting, and unit tests for both the backend and frontend. Where errors surfaced, for example mismatched types between the client and server, it corrected them with minimal changes, re-ran the tests, and confirmed everything passed. Finally, it provided a summary of the files touched and the validations enforced in the modal.
The result was a smoother user experience. The grid became easier to navigate with sorting and paging, and editing flights no longer required leaving the page. By coordinating both client and server updates, Agent Mode demonstrated its ability to deliver practical improvements across the stack while keeping the project stable.
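If the update endpoint had to be added, the minimal version might look like this sketch. The UpdateFlightRequest DTO, the _repository field, and its Update method are assumptions for illustration rather than details from the project:

```csharp
[HttpPut("{id}")]
public IActionResult UpdateFlight(int id, [FromBody] UpdateFlightRequest request)
{
    // Hypothetical flow: look up the flight, apply the edits, persist, return it
    var flight = _repository.GetFlightById(id);
    if (flight is null)
        return NotFound();

    flight.Departure = request.Departure;
    flight.Arrival = request.Arrival;
    flight.PlaneId = request.PlaneId;
    flight.PilotId = request.PilotId;
    _repository.Update(flight);

    return Ok(flight);
}
```

Keeping the route shape and DTO patterns aligned with the existing controllers is exactly the kind of convention-matching the prompt’s “matches project conventions” constraint asks the agent to enforce.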
Best Practices and Advanced Uses
Agent Mode can do a lot, but it works best when you give it the right framing. Clear goals, smart constraints, and repeatable patterns help the agent stay reliable and keep your project stable. Below are habits, advanced ideas, and prompt templates you can apply directly to your own work.
Best Practices for Day-to-Day Use
- State intent, scope, and stop conditions clearly. This ensures the agent plans properly and knows when to stop.
- Use a verify–fix–verify loop. Always have the agent run tests, lint, or type checks after each change, apply minimal fixes, then re-run until green.
- Anchor on existing conventions. Point to a file or example that shows your preferred style and structure.
- Prefer minimal changes. Ask for the smallest safe edit so the agent avoids broad rewrites that create churn.
- Request a change summary. A short markdown note listing files and rationale makes review much easier.
- Keep CI/CD protections in place. Branch rules, required reviews, and checks remain essential.
Advanced Uses – The “Art of the Possible”
- Scale repetitive work. Pair Agent Mode with the Copilot coding agent to roll out changes across multiple repositories.
- Tackle security issues. Combine with security campaigns and Copilot Autofix to remediate vulnerabilities at scale.
- Raise and enforce coverage. Set coverage gates in CI and have the agent grow tests until the threshold is met.
- Guard against API drift. Use contract checks to confirm DTOs and endpoints remain aligned between backend and client.
- Keep docs current. Ask the agent to update README sections, diagrams, or usage snippets as part of a change.
- Seed data for testing. Generate aviation-themed fixtures for Airfields, Planes, Pilots, and Flights to make demos more realistic.
Practical Prompt Patterns
- Minimal change policy: “Apply the smallest safe change, do not alter public routes or DTO shapes, summarize edits in markdown.”
- Self-healing loop: “After any change, run tests, lint, and type checks. If something fails, fix only what is needed, re-run until green, then stop.”
- Style anchor: “Match the conventions in PilotsControllerTests and Program.cs for DI, keep naming and assertion helpers consistent.”
- Traceability: “Output a summary with file paths, the reason for each change, and any follow-ups.”
Conclusion and Next Steps
GitHub Copilot Agent Mode shifts AI assistance from suggestion to execution. In the Wright Brothers examples, we saw it create and repair tests, refactor services, raise coverage, reproduce and fix bugs, and enhance the frontend while continuously verifying its work.
Next steps for your own projects:
- Try Agent Mode in a safe branch first so you can review results freely.
- Anchor prompts on your project conventions to keep changes consistent.
- Review and test every output as you would with a junior developer’s pull request.
- Experiment with scaling up repetitive work using the Copilot coding agent.
Agent Mode works best when it becomes part of the regular development flow rather than a one-off tool. Use it for your next refactor, bug fix, or UI improvement and see how much manual effort it can remove. For the latest guidance, visit GitHub Copilot documentation at https://docs.github.com/copilot.

This article is part of XPRT#19
Step into the era of intelligent transformation with the XPRT. Magazine Gold Edition, a collection of cutting-edge insights from Xebia’s Microsoft experts. This issue dives deep into AI innovation, cloud modernization, and data-driven growth, showcasing how technology and people come together to drive progress across industries.


