Photo by the author
# Introduction
Running a capable AI model locally no longer requires a high-end workstation or a pricey cloud setup. With lightweight tools and smaller open-source models, you can turn even an older laptop into a practical local AI environment for agent-style coding, experimentation, and everyday workflows.
In this tutorial you will learn how to run Qwen3.5 locally using Ollama and connect it to OpenCode to create a simple local agent setup. The goal is to keep everything simple, accessible, and beginner-friendly so you can get a working local AI assistant without dealing with a complicated stack.
# Installing Ollama
The first step is to install Ollama, which makes it easy to run large language models locally on your computer.
If you use Windows, you can download Ollama directly from the official Download Ollama for Windows page and install it like any other application, or run the following command in PowerShell:
irm https://ollama.com/install.ps1 | iex

The Ollama download page also includes installation instructions for Linux and macOS, so you can follow the steps there if you are using a different operating system.
Once the installation is complete, you will be ready to run Ollama and download your first local model.
# Running Ollama
In most cases, Ollama starts automatically after installation, so you may not need to do anything else before running your model locally.
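If you want to confirm the server is up before going further, you can check it programmatically. Here is a minimal Python sketch that pings Ollama's default local address (port 11434 is the documented default; the function name is just an illustration):

```python
# Minimal check that the local Ollama server is reachable.
# Assumes Ollama's default address http://localhost:11434.
import urllib.request
import urllib.error

def ollama_is_running(url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if the Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # A running server responds with HTTP 200 ("Ollama is running").
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama running:", ollama_is_running())
```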
If the Ollama server is not already running, you can start it manually with the following command:
ollama serve
# Running Qwen3.5 locally
Once you have Ollama running, the next step is to download and run Qwen3.5 on your computer.
If you visit the Qwen3.5 model page on Ollama, you’ll see a variety of model sizes, from larger variants to smaller, lighter options.
We will use the 4B version for this tutorial because it offers a good balance between performance and hardware requirements. It is a practical choice for older laptops and typically needs about 3.5 GB of RAM.
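The ~3.5 GB figure matches a quick back-of-the-envelope estimate: a 4-billion-parameter model quantized to 4 bits stores about half a byte per weight, plus runtime overhead for the KV cache and activations. The overhead constant below is a loose assumption for this sketch, not an Ollama specification:

```python
def estimate_ram_gb(params_billion: float, bits_per_weight: int = 4,
                    overhead_gb: float = 1.5) -> float:
    """Rough RAM estimate: quantized weights plus fixed runtime overhead.

    overhead_gb is an assumed allowance for KV cache, activations, and
    runtime buffers; real usage varies with context length.
    """
    weights_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1e9
    return weights_gb + overhead_gb

print(f"~{estimate_ram_gb(4):.1f} GB")  # 4B weights at 4-bit (2 GB) + overhead = ~3.5 GB
```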

To download and run the model from the terminal, use the following command:
ollama run qwen3.5:4b
The first time you run this command, Ollama will download the model files to your computer. Depending on the speed of your internet, this may take a few minutes.

Once the download is complete, Ollama may need a moment to load the model and prepare everything it needs to run locally. When everything is ready, you will see an interactive terminal chat interface where you can start prompting the model directly.

At this point you can already use Qwen3.5 in the terminal for simple local chats, quick tests, and light coding assistance before connecting it to OpenCode for a more agentic workflow.
# Installing OpenCode
Once you have Ollama and Qwen3.5 set up, the next step is to install OpenCode, a local coding agent that can work with models running on your own computer.
You can visit the OpenCode website to see the available installation options and learn more about how it works. For this tutorial, we’ll use the quick install method because it’s the easiest way to get started.

Run the following command in your terminal:
curl -fsSL https://opencode.ai/install | bash
The installer handles the setup for you and pulls in the required dependencies, including Node.js if needed, so you don’t have to configure everything manually.

# Running OpenCode with Qwen3.5
Now that both Ollama and OpenCode are installed, you can plug OpenCode into your local Qwen3.5 model and start using it as a lightweight coding agent.
If you look at the Qwen3.5 page on Ollama, you’ll notice that Ollama now supports direct integrations with external AI tools and coding agents. This makes it much easier to use local models in a practical workflow, rather than just talking to them in a terminal.

To run OpenCode with the Qwen3.5 4B model, run the following command:
ollama launch opencode --model qwen3.5:4b
This command tells Ollama to run OpenCode using the locally available Qwen3.5 model. Once launched, you will be taken to the OpenCode interface with Qwen3.5 4B already connected and ready to use.

# Building a simple Python project with Qwen3.5
Once OpenCode is running with Qwen3.5, you can start giving it simple prompts to build software directly from the terminal.
For this tutorial, we asked it to create a small Python project and build the game from scratch using the following prompt:
Create a new Python project and build a simple Guess the Word game with clean code, straightforward gameplay, score tracking, and an easy-to-use terminal interface.

After a few minutes, OpenCode generated the project structure, wrote the code, and performed the configuration necessary to run the game.
We also asked it to install any required dependencies and test the project, which made the workflow feel much closer to working with a lightweight local coding agent than with a simple chatbot.

The final result was a fully working Python game that ran smoothly in the terminal. The gameplay was simple, the code structure was clean, and the score tracking worked as expected.

For example, when you guess a correct letter, the game immediately reveals it in the hidden word, showing that the logic works correctly out of the box.
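That letter-reveal behavior boils down to a small piece of logic. The sketch below is a hand-written illustration of it, not the model's actual generated code; `reveal` and `play_round` are hypothetical helper names:

```python
def reveal(word: str, guessed: set[str]) -> str:
    """Show guessed letters in place and underscores elsewhere."""
    return "".join(ch if ch.lower() in guessed else "_" for ch in word)

def play_round(word: str, guess: str, guessed: set[str]) -> str:
    """Record one guess and return the updated masked word."""
    guessed.add(guess.lower())
    return reveal(word, guessed)

guessed: set[str] = set()
print(play_round("python", "t", guessed))  # __t___
print(play_round("python", "p", guessed))  # p_t___
```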

# Final thoughts
I was really impressed with how easy it was to set up a local agent on an older laptop using Ollama, Qwen3.5, and OpenCode. For a lightweight, inexpensive setup, this works surprisingly well and makes local AI feel much more practical than many people expect.
That said, it is not all smooth sailing.
Since this setup relies on a smaller, quantized model, the results are not always good enough for more complex coding tasks. In my experience, it does quite well with simple projects, basic scripting, research assistance, and general-purpose tasks, but starts to struggle when software engineering work becomes more demanding or multi-step.
One problem I ran into repeatedly was that the model would sometimes stop mid-task. When this happened, I had to manually type "continue" to resume the work and finish the job. This is fine for experiments, but it makes the workflow less reliable if you want consistent results on larger coding tasks.
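If you drive a model through the API yourself, one way to paper over these mid-task stalls is a small wrapper that automatically re-prompts with "continue" until the reply looks finished. This is a generic sketch, not an OpenCode feature: `ask` stands for any function that sends a prompt and returns the model's text, and the `done_marker` / `max_rounds` stop conditions are assumptions for illustration:

```python
from typing import Callable

def run_with_continue(ask: Callable[[str], str], prompt: str,
                      done_marker: str = "DONE", max_rounds: int = 5) -> str:
    """Re-prompt with 'continue' until the reply contains done_marker.

    ask: any callable that sends one prompt and returns the model's reply.
    done_marker / max_rounds are simplistic stop conditions for this
    sketch; real agents use richer completion signals.
    """
    parts = [ask(prompt)]
    while done_marker not in parts[-1] and len(parts) < max_rounds:
        parts.append(ask("continue"))  # nudge the model to finish the task
    return "\n".join(parts)

# Demo with a fake model that stalls once before finishing.
replies = iter(["started the task...", "finished. DONE"])
result = run_with_continue(lambda p: next(replies), "build the game")
print(result)
```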
Abid Ali Awan (@1abidaliawan) is a certified data science professional who loves building machine learning models. Currently, he focuses on creating content and writing technical blogs about machine learning and data science technologies. Abid holds a Master’s degree in Technology Management and a Bachelor’s degree in Telecommunications Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.
