
Photo by the author
# Introduction
Most programmers don’t need help writing code faster. What slows projects down are the endless loops of configuration, review, and rework. This is where artificial intelligence starts to make a real difference.
Over the past year, tools like GitHub Copilot, Claude, and Google’s Jules have evolved from autocomplete assistants to coding agents that can asynchronously schedule, build, test, and even review code. Instead of waiting for you to complete each step, they can now follow instructions, explain their reasoning, and push working code back into the repository.
The change is subtle but crucial: AI no longer just helps write code; it’s learning to cooperate with you. With the right approach, these systems can save many hours by handling the repetitive, mechanical aspects of programming, allowing you to focus on the architecture, logic, and decisions that truly require human judgment.
In this article we’ll walk through five AI-powered coding techniques that save serious time without sacrificing quality, from feeding design documents directly into models to pairing two AIs as programmer and reviewer. Each is simple enough to adopt today, and together they create a smarter, faster development workflow.
# Technique 1: Let the AI read the design documentation before writing the code
One of the easiest ways to get better results from coding models is to stop giving them isolated prompts and start giving them context. When you provide a design document, architecture overview, or feature specification before requesting code, you give the model a complete picture of what you’re trying to build.
For example, instead of:

```
# weak prompt
"Write a FastAPI endpoint for creating new users."
```
try something like this:
```
# context-rich prompt
"""
You're helping implement the 'User Management' module described below.
The system uses JWT for auth, and a PostgreSQL database via SQLAlchemy.
Create a FastAPI endpoint for creating new users, validating input, and returning a token.
"""
```
When the model “reads” design context first, its responses become more aligned with the architecture, naming conventions, and data flow.
You spend less time rewriting or debugging mismatched code and more time integrating.
Tools like Google’s Jules and Anthropic’s Claude handle this naturally; they can ingest Markdown files, system documentation, or AGENTS.md files and apply that knowledge across tasks.
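As a minimal sketch (the helper function and prompt layout here are illustrative, not any tool’s actual API), context injection can be as simple as prepending the design document to every request:

```python
def build_context_prompt(design_doc: str, task: str) -> str:
    """Prepend the design document so the model sees the
    architecture before it sees the task."""
    return (
        "You're helping implement the module described below.\n\n"
        "--- DESIGN DOCUMENT ---\n"
        f"{design_doc}\n"
        "--- END DESIGN DOCUMENT ---\n\n"
        f"Task: {task}"
    )

prompt = build_context_prompt(
    design_doc="Auth: JWT. Database: PostgreSQL via SQLAlchemy.",
    task="Create a FastAPI endpoint that registers a user and returns a token.",
)
```

The resulting string can be sent to any model client; the payoff is that naming, auth handling, and data flow in the response follow your spec instead of the model’s defaults.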
# Technique 2: Use one model for coding, another for review
Every experienced team has two primary roles: builder and reviewer. You can now recreate this pattern using two AI models working together.
One model (e.g. Claude 3.5 Sonnet) can act as the code generator, creating an initial implementation from your specifications. A second model (e.g. Gemini 2.5 Pro or GPT-4o) then reviews the diff, adds inline comments, and suggests fixes or tests.
Sample Python pseudocode workflow:
```python
code = coder_model.generate("Implement a caching layer with Redis.")
review = reviewer_model.generate(
    f"Review the following code for performance, clarity, and edge cases:\n{code}"
)
print(review)
```
This pattern has become common in multi-agent frameworks such as AutoGen or CrewAI, and is built directly into Jules, which lets one agent write code and another validate it before creating a pull request.
Why does it save time?
- The model finds its own logical errors
- Review feedback appears instantly, so you can merge with greater confidence
- This reduces the burden of human review, especially for routine or formulaic updates
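The coder/reviewer loop can be sketched end to end with stub models; in practice you would replace `generate` with real API calls, and the `"LGTM"` approval signal is just an illustrative convention:

```python
class StubModel:
    """Stand-in for an LLM client (e.g. Claude as coder, Gemini as
    reviewer); replace `generate` with a real API call."""
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn

    def generate(self, prompt: str) -> str:
        return self.reply_fn(prompt)


def coder_reviewer_loop(coder, reviewer, spec: str, max_rounds: int = 3) -> str:
    """Alternate between generating code and reviewing it until the
    reviewer approves or the round budget runs out."""
    code = coder.generate(spec)
    for _ in range(max_rounds):
        feedback = reviewer.generate(f"Review this code:\n{code}")
        if "LGTM" in feedback:  # reviewer's approval signal
            break
        code = coder.generate(f"{spec}\nAddress this feedback:\n{feedback}")
    return code


# Deterministic stubs that demonstrate one revision round
coder = StubModel("coder", lambda p: "def cache_get(key): ..." if "feedback" not in p
                  else "def cache_get(key): ...  # handles misses")
reviewer = StubModel("reviewer", lambda p: "LGTM" if "misses" in p
                     else "Missing cache-miss handling")
final = coder_reviewer_loop(coder, reviewer, "Implement a caching layer with Redis.")
```

Capping the rounds matters: without a budget, two models can trade revisions indefinitely on subjective style points.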
# Technique 3: Test automation and validation with AI agents
Writing tests isn’t hard; it’s just dull. That makes it one of the best areas to delegate to AI. State-of-the-art coding agents can now read an existing test suite, infer missing coverage, and automatically generate new tests.
For example, Google’s Jules, after completing a feature implementation, runs the setup script in a secure cloud virtual machine, detects test frameworks such as pytest or Jest, and then adds or fixes failing tests before creating the pull request.
Here’s what this workflow might look like conceptually:
```shell
# Step 1: Run tests in Jules or your local AI agent
jules run "Add tests for parseQueryString in utils.js"

# Step 2: Review the plan
# Jules will show the files to be updated, the test structure, and reasoning

# Step 3: Approve and wait for test validation
# The agent runs the test suite, validates changes, and commits working code
```
Other tools can also analyze the structure of the repository, identify edge cases, and generate high-quality unit or integration tests in a single pass.
The biggest time savings come not from writing entirely new tests, but from letting the model fix the ones that failed during an update or refactoring. This is the kind of slow, repetitive debugging task that AI agents do consistently well.
As a result:
- Your CI pipeline stays green with minimal human intervention
- Tests stay current as the code evolves
- You catch regressions early, without having to manually rewrite tests
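Inferring missing coverage is something you can sanity-check yourself before handing it to an agent. Here is a deliberately naive sketch (name matching only, my own helper, not any tool’s API) that finds functions never mentioned in a test file:

```python
import ast

def untested_functions(module_source: str, test_source: str) -> list[str]:
    """Naive coverage-gap check: report functions that are never
    mentioned anywhere in the test source. An AI agent can then be
    prompted to generate tests for exactly these names."""
    tree = ast.parse(module_source)
    funcs = [node.name for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
    return [name for name in funcs if name not in test_source]


gaps = untested_functions(
    "def parse_qs(s): ...\ndef encode_qs(d): ...",
    "def test_parse_qs():\n    assert parse_qs('a=1')",
)
```

A list like this makes the prompt to the agent concrete (“write tests for `encode_qs`”) instead of the vaguer “add missing tests”.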
# Technique 4: Using AI to refactor and modernize legacy code
Legacy codebases slow everyone down, not because they are bad, but because no one remembers why everything was written the way it was. AI-powered refactoring can bridge this gap by reading, understanding, and modernizing code safely and incrementally.
Tools like Google Jules and GitHub Copilot really come into their own here. You can ask them to update dependencies, rewrite modules against a newer framework, or convert classes to functions without breaking the original logic.
For example, Jules might accept a request like this:
```
"Upgrade this project from React 17 to React 19, adopt the new app directory structure, and ensure tests still pass."
```
Here’s what it does behind the scenes:
- Clones your repository into a secure cloud virtual machine
- Runs the setup script (to install dependencies)
- Generates a plan and a diff showing all changes
- Runs the test suite to confirm the update worked
- Opens a pull request with the verified changes
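Part of modernization is purely mechanical and worth automating before (or alongside) an agent. As an illustrative sketch, a small scanner can locate deprecated calls so each one becomes a precise, reviewable task; the mapping below covers two of `unittest`’s real deprecated aliases:

```python
import ast

# unittest's deprecated aliases and their modern replacements
DEPRECATED = {"assertEquals": "assertEqual", "failUnless": "assertTrue"}

def find_deprecated_calls(source: str) -> list[tuple[int, str, str]]:
    """Scan Python source for deprecated method names, returning
    (line, old_name, new_name) so each hit can be fixed or fed to
    an agent one at a time."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Attribute) and node.attr in DEPRECATED:
            hits.append((node.lineno, node.attr, DEPRECATED[node.attr]))
    return hits


hits = find_deprecated_calls(
    "class T(unittest.TestCase):\n"
    "    def test_x(self):\n"
    "        self.assertEquals(1, 1)\n"
)
```

Feeding the agent a list of exact line numbers keeps each change small, which is what makes the resulting diffs safe to review.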
# Technique 5: Generate and explain code in parallel (asynchronous workflows)
When you’re deep in a coding sprint, waiting for model responses can interrupt your flow. State-of-the-art agent tools now support asynchronous workflows, allowing you to offload multiple coding or documentation tasks at once to focus on your core work.
Imagine this with Google Jules:
```shell
# Create multiple AI coding sessions in parallel
jules remote new --repo . --session "Write TypeScript types for API responses"
jules remote new --repo . --session "Add input validation to /signup route"
jules remote new --repo . --session "Document auth middleware with docstrings"
```
You can then keep working locally while Jules runs these tasks on secure cloud virtual machines, reviewing results and reporting on completion. Each session gets its own branch and plan for approval, meaning you can manage your “AI teammates” like real colleagues.
This asynchronous, multi-session approach saves enormous time in distributed teams:
- You can queue 3-15 tasks (depending on Jules’ plan)
- Results appear gradually, so nothing blocks your work
- You can review diffs, accept pull requests, or independently restart failed tasks
Gemini 2.5 Pro, the model Jules relies on, is optimized for multi-step reasoning over long contexts, so it doesn’t just generate code; it tracks previous steps, understands dependencies, and synchronizes progress between tasks.
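The fan-out pattern itself is plain async programming. This sketch uses a stub `run_session` coroutine (a real one would call the agent’s API and poll for its branch) to show how several sessions run concurrently while your own work continues:

```python
import asyncio

async def run_session(task: str) -> str:
    """Stand-in for one remote agent session; a real version would
    call the agent's API and poll until its branch is ready."""
    await asyncio.sleep(0)  # simulate waiting on remote work
    return f"branch ready: {task}"

async def dispatch(tasks: list[str]) -> list[str]:
    # Fan out all sessions at once; gather collects results in order
    return await asyncio.gather(*(run_session(t) for t in tasks))

results = asyncio.run(dispatch([
    "Write TypeScript types for API responses",
    "Add input validation to /signup route",
    "Document auth middleware with docstrings",
]))
```

Because each task returns independently, a failed session can be restarted on its own without blocking the others, mirroring how the per-branch model works.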
# Putting it all together
Each of these five techniques works well on its own, but the real advantage is combining them into a continuous feedback-driven workflow. Here’s what it might look like in practice:
- Project-based prompting: Start with a well-organized specification or design document. Pass it to your coding agent as context so it learns your architecture, patterns, and constraints.
- Dual-agent coding loop: Run two models in tandem, one acting as the coder, the other as the reviewer. The coder generates diffs or pull requests, while the reviewer verifies them, suggests improvements, or flags inconsistencies.
- Automated testing and validation: Let your AI agent create or fix tests whenever new code lands. This ensures every change stays verifiable and ready for CI/CD integration.
- AI-powered refactoring and maintenance: Use asynchronous agents like Jules to handle repetitive updates (dependency upgrades, configuration migrations, rewriting deprecated APIs) in the background.
- Prompt evolution: Feed the results of previous tasks, both successes and failures, back into your prompts to improve them over time. This is how AI workflows mature into semi-autonomous systems.
Here’s a simple, high-level flow:

Photo by the author
Each agent (or model) handles one layer of abstraction, keeping humans focused on why the code matters.
# Summary
AI-powered development isn’t about having the code written for you. It’s about freeing you to focus on architecture, creativity, and problem formulation, the parts no AI or machine can replace.
Used thoughtfully, these tools turn hours of boilerplate and refactoring into a solid foundation of code while giving you the space to think deeply and build intentionally. Whether it’s Jules handling your GitHub PRs, Copilot suggesting contextual completions, or a custom Gemini agent reviewing your code, the pattern is the same.
Shittu Olumide is a software engineer and technical writer with a passion for using cutting-edge technology to craft compelling narratives, with an eye for detail and a knack for simplifying complex concepts. You can also find Shittu on Twitter.
