We’ve all been there — you prompt GitHub Copilot with a few vague sentences, it spits out a working prototype, and you feel like a 10x developer… until you open the code a week later.
Note: in this article we’ll use Visual Studio Code 1.104 for our examples. Other IDEs might not support all features Visual Studio Code has.
Introduction – What is vibe coding?
Vibe coding is a term that was popularized in early 2025. Wikipedia describes it as follows:
“Vibe coding describes a chatbot-based approach to creating software where the developer describes a project or task to a large language model (LLM), which generates code based on the prompt. The developer does not review or edit the code, but solely uses tools and execution results to evaluate it and asks the LLM for improvements. Unlike traditional AI-assisted coding or pair programming, the human developer avoids examination of the code, accepts AI-suggested completions without human review, and focuses more on iterative experimentation than code correctness or structure.” (Source: https://en.wikipedia.org/wiki/Vibe_coding).
GitHub Copilot in Agent Mode supports vibe coding scenarios well. Many people use it to turn awesome ideas into prototypes that actually work: it fosters creativity, removes barriers and accelerates development. They deploy the prototype to production, make a few more modifications and they’re happy, or so they think. A month later, they find themselves tracing mysterious issues. What happened?
Technical debt – The old problem in a new disguise
Here’s what happened: technical debt, something we’ve known about for years! Some examples of what I’ve seen in vibe-coded projects include:
- Too much code in one file, e.g. classes not separated into their own files
- Absence of unit tests
- Excessive code complexity
- SOLID principles not followed
- Low maintainability due to lengthy inline scripts in YAML files
LLMs are trained on code written by humans. Although LLM vendors try to include as much high-quality content as possible in their models, technical debt is simply part of what we create, so the LLMs that GitHub Copilot uses introduce technical debt as well. It’s not a new type of technical debt, but with GitHub Copilot it accumulates much faster. Technical debt arises from rushed decisions and shortcuts, overconfidence in generated code and other causes, and these same factors apply when using GitHub Copilot. Before the AI era, we had several patterns, practices and technologies in place to prevent technical debt from occurring. It is important to keep them in place when using tools like GitHub Copilot.
Clear context — Setting the scene for better AI output
There are several ways to steer this behavior and keep technical debt from creeping in. The first one I’d like to start with is context engineering. Prompt engineering (the process of structuring or crafting an instruction to produce better output from a generative AI model, see https://en.wikipedia.org/wiki/Prompt_engineering) has been an important practice to learn when using tools like GitHub Copilot, but nowadays you see more and more people use the term context engineering. Why is that?
Context is everything GitHub Copilot needs to know, apart from the question you ask it. It could be an additional code file, a readme, a screenshot of an error, or a pointer to a specific section in your code. Including the right context is paramount to getting expected results. GitHub Copilot within Visual Studio Code supports different ways of including context. Many of these options are implemented as “#” variables. Amongst others, there are:
- Use “#” to mention:
- Files
- Folders
- Symbols (variables, methods, classes, etc.)
- Use #codebase to point GitHub Copilot to the (local) index it made from your entire repository
- Use #selection to mention the active selection in your editor
- Use #terminalLastCommand to include the last terminal command and its output
- Use #terminalSelection to include the active selection in the terminal
- Use #changes to include the (uncommitted) changes you or GitHub Copilot made
- Use #usages to find all references to a method or variable
- Use #fetch to let GitHub Copilot retrieve a webpage you reference in your prompt
In the example below, we point GitHub Copilot specifically to the UpdateFlightStatus method:
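As a rough textual version of such a prompt (how the # reference is rendered on screen may differ per VS Code version):
Explain what #UpdateFlightStatus does, then refactor it so invalid status transitions are rejected. Use #usages to verify that all callers still work.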

There are many more # variables that can be used. Just type “#” in the chat and you’ll see what is available:

Apart from what’s built into your IDE, you can install additional extensions that expose extra context tools (e.g. the Python extension in VS Code, which exposes tools like configurePythonEnvironment, getPythonEnvironmentInfo and installPythonPackage), or you can install and run MCP servers. Although this is very important functionality, I’ll leave it out to keep this article from getting too lengthy. If you want to read more on this topic, check out the website set up by Anthropic, the developer of MCP: https://modelcontextprotocol.io/.
Custom instructions — Teaching AI your standards
By providing better context, you help GitHub Copilot make better decisions, reducing the risk of vibe coding issues. The next step after improving context is setting consistent standards for how GitHub Copilot should behave. That’s where custom instructions come in, and this is where most value can be found.
The context options from the previous section help when you actually look into the code, but real vibe coders don’t review the code. With custom instructions, you can set up guardrails upfront, before you even start building. There are different types of custom instructions, and I’ll go over all of them. All of them are markdown files, which lets you use simple formatting (lists, headings, etc.) that helps GitHub Copilot as well. Great examples of all the files we discuss below can be found in the Awesome Copilot repository on GitHub:
https://github.com/github/awesome-copilot (the examples below are taken from this repository as well). Creating these files is very simple via the settings menu of GitHub Copilot chat:

AGENTS.md
AGENTS.md (see https://agents.md) is a very common format that is supported by many different AI tools. The format is markdown and the file has to reside in the root of the repository; you can also create nested AGENTS.md files that apply to specific parts of the project. The file will typically include information on how to run the application, how to test it, what the code style is and what a PR should look like.
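As a minimal sketch (the commands and conventions below are placeholders, adjust them to your own project), an AGENTS.md could look like this:
# AGENTS.md
## How to run
Run npm install and npm run dev to start the application locally.
## How to test
Run npm test; all tests must pass before a change is considered done.
## Code style
TypeScript in strict mode, one class per file, no lengthy inline scripts in YAML.
## Pull requests
Keep PRs small, describe the change and link the related issue.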
Copilot-instructions.md
The copilot-instructions.md file needs to be in the .github folder in the root of your repository. You don’t need GitHub for this to work (it also works with other version control systems), but your IDE needs to support it. The file is automatically included in every chat request, so it is advisable to keep it as short as possible to prevent a negative impact on GitHub Copilot’s performance.
The file will typically contain the same information as AGENTS.md, so you might want to consider using only AGENTS.md and skipping copilot-instructions.md altogether. Keep in mind, though, that AGENTS.md is only supported from VS Code 1.104 onwards, and support in other IDEs may vary.
Other good examples of information to put here could be (see the sketch after this list):
- This code will run in a high-traffic environment
- Security and data privacy are critical
- Use TDD (Test Driven Development) for all implementation. When touching code, first check if it’s covered by a unit test before making any changes
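A minimal sketch of how these guardrails could be phrased in copilot-instructions.md (the exact wording is up to you):
# Copilot instructions
This code runs in a high-traffic environment: prefer simple, efficient solutions.
Security and data privacy are critical: never log personal data and never hardcode secrets.
Use TDD for all implementation: before touching code, check whether it is covered by a unit test and write a failing test first.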
I’ve had some great experiences introducing TDD to GitHub Copilot, even more so when you ask it to actually show the red and green test runs.
There are several ways to implement this, but what I did was to include this in my copilot-instructions.md:
Development for this application should be done in a TDD way. TDD means writing tests before writing the actual game code. This approach helps ensure that the game logic is correct and that we can catch any bugs early in the development process. It is VERY IMPORTANT TO FIRST WRITE THE TESTS, THEN THE CODE. When running tests, run them in the terminal so I can see you're running them and their results. Unit test files should only cover one single class or module. If you have multiple classes or modules, create separate test files for each. Put tests in a /tests subdirectory in the /src directory.
The Awesome Copilot repository also has some information on doing TDD with GitHub Copilot: https://github.com/github/awesome-copilot/blob/main/collections/testing-automation.md.
Instruction files
Instruction files need to be put in the .github/instructions folder in the root of your repository, with the filename format {name}.instructions.md. They contain a frontmatter header where you can add a filter indicating which file types the instructions apply to. A good example is a githubactions.instructions.md that could contain something like this:
---
applyTo: '.github/workflows/*.yml'
description: 'Comprehensive guide for building robust, secure, and efficient CI/CD pipelines using GitHub Actions. Covers workflow structure, jobs, steps, environment variables, secret management, caching, matrix strategies, testing, and deployment strategies.'
---
## Security Best Practices in GitHub Actions
### **1. Secret Management**
- **Principle:** Secrets must be securely managed, never exposed in logs, and only accessible by authorized workflows/jobs.
[…]
These instruction files are picked up based on the applyTo filter in the header. If GitHub Copilot generates a new file that matches this filter, it will also start applying the instructions to this file.
Chat modes
By default, GitHub Copilot offers three chat modes:
- Ask – ask questions, get answers and suggestions; Copilot doesn’t change code.
- Edit – select files and ask for a change; Copilot applies the requested change in those files.
- Agent – ask for a change; Copilot determines which files need changes, applies them and, if required, verifies them by running a compiler, unit tests or other terminal commands, iterating on the outcome until the request is satisfied.
In VS Code it’s possible to define your own chat mode by putting a markdown file in .github/chatmodes with the filename format {chatmode name}.chatmode.md. With your own chat mode, you can define exactly how GitHub Copilot should behave. A custom chat mode is always based on Agent mode, meaning it will interactively implement the request. An example of a custom chat mode is the Janitor, which goes over your code and cleans up any unnecessary code. Check the example below:
---
description: 'Perform janitorial tasks on any codebase including cleanup, simplification, and tech debt remediation.'
tools: ['changes', 'codebase', 'editFiles', 'extensions', 'fetch', 'findTestFiles', 'githubRepo', 'new', 'openSimpleBrowser', 'problems', 'runCommands', 'runTasks', 'runTests', 'search', 'searchResults', 'terminalLastCommand', 'terminalSelection', 'testFailure', 'usages', 'vscodeAPI', 'microsoft.docs.mcp', 'github']
---
# Universal Janitor
Clean any codebase by eliminating tech debt. Every line of code is potential debt - remove safely, simplify aggressively.
## Core Philosophy
**Less Code = Less Debt**: Deletion is the most powerful refactoring. Simplicity beats complexity.
When you add it to the .github/chatmodes folder, it appears in the chat modes selection box.

In the header of the file, you can select the tools the chat mode has access to. Agent mode uses tools to accomplish specialized tasks while processing a user request. Examples of such tasks are listing the files in a directory, editing a file in your workspace, running a terminal command, getting the output from the terminal, and more. Again, choose wisely here as more tools might negatively affect performance.
I also like chat modes that start by asking questions back to me, prompting me for information that is missing from my prompt. GitHub Copilot makes assumptions, and this technique reveals (part of) those assumptions.
Prompt files
The last option you have to control the outcome is a prompt file. Essentially, it’s a stored prompt that can easily be reused. You can invoke it from the chat by typing “/{promptfilename}”. Let’s assume we create the file .github/prompts/editorconfig.prompt.md. After creation, it becomes available as a slash command, as can be seen in this screenshot:

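As a minimal sketch, the contents of .github/prompts/editorconfig.prompt.md could look something like this (purely illustrative, the wording is mine):
---
mode: 'agent'
description: 'Create or update the .editorconfig file for this repository'
---
# EditorConfig
Create or update the .editorconfig in the repository root. Derive indentation, line endings and charset settings from the existing code, and add language-specific sections where needed.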
When executed, the contents of the prompt file are injected at the start of the chat conversation, which makes it a great way to simplify recurring tasks. The file contains a frontmatter header, just like the other custom instruction files. In this case, you can specify the mode (ask, edit, agent) and, optionally, the model it should use; GitHub Copilot will automatically switch to that mode and model when the prompt is used. This is an example:
---
title: 'Refactoring Java Methods with Extract Method'
mode: 'agent'
model: GPT-4.1 (copilot)
description: 'Refactoring using Extract Methods in Java Language'
---
# Refactoring Java Methods with Extract Method
## Role
You are an expert in refactoring Java methods.
Below are **2 examples** (titled code before and code after refactoring) that represent **Extract Method**.
[...]
In this case, the prompt file ensures that refactoring is done in a structured and repeatable way, enabling multiple team members to execute the same refactoring.
Because these files are stored in the repository, it becomes very easy to share powerful prompts with fellow team members, which I love.
LLM fine-tuning — When to (and not to) go deeper
If you want to go beyond configuration and make GitHub Copilot adapt to your style, fine-tuning might come to mind. Fine-tuning a model on your own codebase can make it more consistent with how your team writes code. However, it also tends to include old practices that are better left out of new work. In many cases, you’ll get most of the same benefit by combining good custom instructions with context retrieval — for example, letting GitHub Copilot look up relevant code or documentation while generating new code. That approach usually gives you about 80% of the results of fine-tuning, without the extra effort to train and maintain a custom model.
You can find more information here: https://docs.github.com/en/enterprise-cloud@latest/copilot/how-tos/use-ai-models/create-a-custom-model
Deterministic tools — Bringing engineering discipline back in
Although non-deterministic methods like custom instructions can certainly improve the results of GitHub Copilot, nothing replaces the solid tools we’ve been using since before the AI era. Think about unit tests, static code analysis, linting tools, and compilers — they’re still the guardrails that keep generated code from drifting too far. Security checks such as SAST, automated test runs, and CI/CD quality gates should continue to play their part. We should also keep applying good practices like MVP (Minimum Viable Product) thinking, pair programming (not only with GitHub Copilot, but with another developer), and the DevOps mindset we already know works. In the end, we still do software development, and our name is linked to the commit we make — even when part of the code comes from Copilot.
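For example, a small GitHub Actions workflow can act as such a quality gate. This is a generic sketch (it assumes a Node.js project; substitute your own build, lint and test commands):
name: quality-gate
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint
      - run: npm test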
Review overload — When the AI outpaces the team
Although code review generally doesn’t happen in vibe coding, I do want to touch on it, because many developers run into this issue: GitHub Copilot can generate a lot of code, and reviewing it can quickly become a cumbersome task. How can we reduce this overload and make sure we keep up with GitHub Copilot?
- Smaller, faster cycles. Focus on an MVP. GitHub Copilot is generally pretty bad at MVPs: it tries to implement as many features as possible to match a sometimes ambiguous request. Be very specific in your prompt about what the MVP should look like, and ask GitHub Copilot to outline a plan first, so you can review it and leave functionality out where applicable.
- Use review automation. As discussed before, use static code analysis, unit tests and other tools to do part of the review work. Then you can focus on what matters: the functionality.
- Determine “Copilot-safe” areas. Some parts of the code, like boilerplate, are less risky to create and change. Focus your review on the code that matters most.
- GitHub Copilot as the first reviewer. Have GitHub Copilot critique its own output before a human sees it, using prompt files and chat modes built for this purpose, for example a security review or a performance review (a sketch follows below). This shifts part of the review burden back to the machine. Of course, this doesn’t replace the human review.
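As an example of that last point, a reusable security-review prompt could be stored as .github/prompts/security-review.prompt.md (a rough sketch; the checklist is mine, not taken from an existing repository):
---
mode: 'ask'
description: 'Security review of the pending changes'
---
# Security review
Review #changes as a critical security reviewer. Check for injection risks, missing input validation, secrets in code or configuration, and overly broad permissions. List findings by severity and do not modify any code.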
GitHub Copilot can write faster than we can review. Without the right guardrails, that speed only creates problems faster.
Conclusion — Vibe coding is fun, but give it guardrails
Vibe coding enables people to achieve things they could previously only dream of. It’s amazing how quickly we can turn creativity into working products with GitHub Copilot. The downside of lower-quality code is just around the corner, though, so put guardrails in place to get both a great working product and a maintainable codebase. Be sure to check out the Awesome Copilot repository for inspiration: https://github.com/github/awesome-copilot.
Happy coding!
Written by
Fokko Veegens
Hi, I’m Fokko from Voorburg, where I live with my wonderful girlfriend and our two lovely daughters. I’m passionate about outdoor activities like cycling, sailing, skiing, and snowboarding, as well as working on classic cars. My favorite moments, though, are those spent with my family. As a DevOps consultant, I specialize in blending People, Process, and Products to streamline software delivery and ensure top quality. I love tackling bottlenecks, working closely with teams, and focusing on delivering exceptional value to end-users through DevOps best practices.