# Introduction
Artificial intelligence no longer just talks. Large language models (LLMs) can now be given arms and legs that let them act in the digital world. Programs built this way are often called Python AI agents – autonomous LLM-based programs that can perceive their environment, make decisions, use external tools (such as APIs or code execution), and take actions to achieve specific goals without constant human intervention.
If you’ve been wanting to experiment with building your own AI agent but felt overwhelmed by complicated frameworks, you’ve come to the right place. Today we will take a look at smolagents, a powerful yet remarkably simple library developed by Hugging Face.
By the end of this article, you will understand what makes smolagents unique and, more importantly, you will have a working code agent that can pull live data from the Internet. Let’s take a look at the implementation.
# Understanding Code Agents
Before we start coding, let’s review the concept. An agent is essentially an LLM equipped with tools. You give the model a goal (e.g. “find out the current weather in London”) and it decides which tools to use to achieve that goal.
What makes the agents in Hugging Face’s smolagents library stand out is their approach to reasoning. Unlike many frameworks that generate JSON or plain text to decide which tool to call, smolagents uses code agents. This means the agents write snippets of Python code to combine their tools and logic.
This is powerful because code is precise. It is the most natural way to express complex instructions such as loops, conditionals, and data manipulation. Instead of guessing how to chain tools together, the LLM simply writes a Python script that does it. As an open-source agent framework, smolagents is clean, lightweight, and perfect for learning the basics.
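To make the contrast concrete, here is a rough illustration of the two styles. These are illustrative sketches, not actual output from any framework, and `get_weather` is a hypothetical tool name:

```python
# JSON-style tool calling: the model emits one structured request per call,
# and the framework must orchestrate any looping or branching itself.
json_style = {"tool": "get_weather", "arguments": {"city": "London"}}

# Code-agent style: the model writes Python that composes tools directly,
# so loops, conditionals, and data manipulation come for free.
code_style = """
for city in ["London", "Paris", "Tokyo"]:
    report = get_weather(city)
    if "rain" in report.lower():
        print(f"Bring an umbrella in {city}!")
"""

# The code-agent snippet is ordinary Python, so it can be parsed directly.
compile(code_style, "<agent_snippet>", "exec")
```

With the JSON style, checking three cities means three separate round trips through the framework; with the code style, the model expresses the whole plan in one snippet.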
// Prerequisites
To follow along you will need:
- Knowledge of Python. You should be comfortable with variables, functions, and installing packages with pip.
- A Hugging Face token. Since we build on the Hugging Face ecosystem, we will use its free Inference API. You can get a token by registering at huggingface.co and visiting your account settings.
- A Google account (optional). If you don’t want to install anything locally, you can run this code in a Google Colab notebook.
# Configuring the environment
Let’s prepare our workspace. Open a terminal (or a fresh Colab notebook) and create a project directory:
mkdir demo-project
cd demo-project
Next, let’s configure our security token. It’s best to store it as an environment variable. If you are using Google Colab, you can add a secret named HF_TOKEN in the Secrets panel on the left and then access it via userdata.get('HF_TOKEN').
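If you are working in a local terminal instead, a quick way to make the token available for the current session is to export it (the value here is just a placeholder):

```shell
# Make the token available to the current shell session only.
# Replace the placeholder with your real Hugging Face token.
export HF_TOKEN=your_huggingface_token_here

# Verify it is set (prints the value).
echo "$HF_TOKEN"
```

Note that an exported variable lasts only for the current session; a .env file is the more durable option.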
# Building your first agent: weather fetcher
In our first project, we will build an agent that fetches weather data for a given city. To do this, the agent needs a tool. A tool is simply a function that the LLM can call. We will use a free, public API called wttr.in, which can return weather data as plain text or JSON.
// Installation and configuration
Create a virtual environment:
python -m venv env
A virtual environment isolates the project’s dependencies from the rest of your system. Now let’s activate it.
Windows:
env\Scripts\activate
macOS/Linux:
source env/bin/activate
You’ll see (env) in the terminal prompt when the environment is active.
Install required packages:
pip install smolagents requests python-dotenv
We install smolagents, Hugging Face’s lightweight agent framework for building tool-using AI agents; requests, an HTTP library for making API calls; and python-dotenv, which loads environment variables from a .env file.
That’s it – all with just one command. This simplicity is a fundamental part of the smolagents philosophy.

Figure 1: Smolagent installation
// Configuring the API token
Create a .env file in the root of your project and add the following line, replacing the placeholder with your actual token:
HF_TOKEN=your_huggingface_token_here
Get your token from huggingface.co/settings/tokens. Your project structure should look like this:

Figure 2: Project structure
// Importing libraries
Open your demo.py file and paste the following code:
import requests
import os
from smolagents import tool, CodeAgent, InferenceClientModel
- requests: makes the HTTP calls to the weather API.
- os: reads environment variables safely.
- smolagents: Hugging Face’s lightweight agent framework, which provides:
  - @tool: a decorator that turns a function into a tool the agent can call.
  - CodeAgent: an agent that writes and executes Python code.
  - InferenceClientModel: connects to LLMs hosted on Hugging Face.
In smolagents, defining a tool is simple. We will create a function that takes a city name as input and returns the weather conditions. Add the following code to your demo.py file:
@tool
def get_weather(city: str) -> str:
    """
    Returns the current weather forecast for a specified city.

    Args:
        city: The name of the city to get the weather for.
    """
    # Using wttr.in, a lovely free weather service
    response = requests.get(f"https://wttr.in/{city}?format=%C+%t")
    if response.status_code == 200:
        # The response is plain text like "Partly cloudy +15°C"
        return f"The weather in {city} is: {response.text.strip()}"
    else:
        return "Sorry, I couldn't fetch the weather data."
Let’s break it down:
- We import the tool decorator from smolagents. This decorator transforms an ordinary Python function into a tool that the agent can understand and use.
- The docstring (""" ... """) in the get_weather function is critical. The agent reads this description to understand what the tool does and how to call it.
- Inside the function we make a simple HTTP request to wttr.in, a free weather service that returns forecasts as plain text.
- The type hints (city: str) tell the agent what kind of input to provide.
This is tool calling in action: we have given the agent a new capability.
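Because the docstring and type hints are the agent’s only “manual” for a tool, it is worth seeing exactly what they expose. The standard library’s inspect module shows the metadata that a decorator like @tool can read; the get_weather below is a pared-down copy used purely for illustration:

```python
import inspect

# Pared-down copy of the tool function, just to inspect its metadata.
def get_weather(city: str) -> str:
    """Returns the current weather forecast for a specified city."""
    ...

sig = inspect.signature(get_weather)
print(sig.parameters["city"].annotation)  # <class 'str'>
print(sig.return_annotation)              # <class 'str'>
print(inspect.getdoc(get_weather))        # the description the agent reads
```

If the docstring is vague or a parameter lacks a type hint, the model has to guess, so it pays to keep both accurate.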
// LLM Setup
from dotenv import load_dotenv

load_dotenv()  # loads HF_TOKEN from the .env file

hf_token = os.getenv("HF_TOKEN")
if hf_token is None:
    raise ValueError("Please set the HF_TOKEN environment variable")

model = InferenceClientModel(
    model_id="Qwen/Qwen2.5-Coder-32B-Instruct",
    token=hf_token
)
An agent needs a brain – a large language model (LLM) that can reason about tasks. Here we use:
- Qwen2.5-Coder-32B-Instruct: a powerful code-centric model hosted on Hugging Face.
- HF_TOKEN: your Hugging Face API token, stored in the .env file for security.
Now we need to create the agent itself.
agent = CodeAgent(
    tools=[get_weather],
    model=model,
    add_base_tools=False
)
A CodeAgent is a special type of agent that:
- Writes Python code to solve problems
- Executes that code in a sandboxed environment
- Chains multiple tool calls together
Here we create an instance of CodeAgent. We give it a list containing our get_weather tool, plus the model object. The add_base_tools=False argument tells it not to include any default tools, which keeps our agent simple for now.
// Starting the agent
This is the exciting part. Let’s give our agent a task by running it with a specific prompt:
response = agent.run(
    "Can you tell me the weather in Paris and also in Tokyo?"
)
print(response)
When you call agent.run(), the agent:
- Reads your prompt.
- Reasons about which tools it needs.
- Generates code that calls get_weather("Paris") and get_weather("Tokyo").
- Executes the code and returns the results.

Figure 3: Response from smolagents
When you run this code, you will witness the magic of a Hugging Face agent. The agent receives your request and sees that it has a tool called get_weather. It then writes a small Python script (generated by the LLM) that looks something like this.
This is code the agent generates, not code you write yourself:
weather_paris = get_weather(city="Paris")
weather_tokyo = get_weather(city="Tokyo")
final_answer(f"Here is the weather: {weather_paris} and {weather_tokyo}")

Figure 4: Final response from the agent
It executes this code, retrieves the data, and returns a friendly response. You’ve just built a code agent that can pull live data from the Internet via APIs.
// How it works behind the scenes

Figure 5: The inner workings of an AI code agent
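In plain terms: the model drafts a code snippet, the framework executes it with the tools in scope, and the cycle repeats until the snippet calls final_answer. The toy sketch below mimics that cycle. It is a simplification for building intuition, not the actual smolagents implementation, and toy_llm is a stand-in for a real model:

```python
def run_code_agent(llm, tools, task, max_steps=5):
    """Toy generate-execute loop. A real agent also feeds each step's
    execution output back to the model so it can revise its plan."""
    result = {}

    def final_answer(answer):
        # The generated code calls this to end the loop.
        result["answer"] = answer

    namespace = {**tools, "final_answer": final_answer}
    for _ in range(max_steps):
        code = llm(task)        # 1. the "model" writes a Python snippet
        exec(code, namespace)   # 2. execute it with the tools in scope
        if "answer" in result:  # 3. stop once final_answer was called
            return result["answer"]
    return "Max steps reached"

def toy_llm(task):
    # A stand-in "model" that writes the final script on its first try.
    return 'final_answer(get_weather("Paris"))'

tools = {"get_weather": lambda city: f"Sunny in {city}"}
print(run_code_agent(toy_llm, tools, "weather in Paris"))  # Sunny in Paris
```

The real framework adds safeguards this sketch omits, such as sandboxed execution and step-by-step logging, but the generate-execute-observe shape is the same.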
// Next steps: Adding more tools
An agent’s power grows with its toolkit. What if we wanted to save the weather report to a file? We can create another tool.
@tool
def save_to_file(content: str, filename: str = "weather_report.txt") -> str:
    """
    Saves the provided text content to a file.

    Args:
        content: The text content to save.
        filename: The name of the file to save to (default: weather_report.txt).
    """
    with open(filename, "w") as f:
        f.write(content)
    return f"Content successfully saved to {filename}"
# Re-initialize the agent with both tools
agent = CodeAgent(
    tools=[get_weather, save_to_file],
    model=model,
)
agent.run("Get the weather for London and save the report to a file called london_weather.txt")
Now your agent can both retrieve data and interact with the local file system. This combination of skills is what makes Python AI agents so versatile.
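A nice property of tools defined this way is that the underlying functions are still plain Python, so you can unit-test them without an agent, a model, or an API token. Here is the same save logic, shown without the @tool decorator so it runs standalone:

```python
def save_to_file(content: str, filename: str = "weather_report.txt") -> str:
    # Same logic as the tool above, minus the @tool decorator.
    with open(filename, "w") as f:
        f.write(content)
    return f"Content successfully saved to {filename}"

message = save_to_file("Partly cloudy +15°C", "london_weather.txt")
print(message)  # Content successfully saved to london_weather.txt
```

Testing each tool in isolation like this makes it much easier to tell whether a misbehaving agent has a broken tool or a reasoning problem.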
# Conclusion
In just a few minutes and with fewer than 20 lines of core logic, you’ve built a functional AI agent. We’ve seen how smolagents simplifies the process of creating code agents that write and execute Python to solve problems.
The beauty of this open-source agent framework is that it strips away the boilerplate, letting you focus on the fun part: building tools and defining tasks. You’re no longer just talking to an AI; you’re working with something that can act. And this is just the beginning. Consider giving the agent access to the Internet via search APIs, connecting it to a database, or letting it control a web browser.
Shittu Olumide is a software engineer and technical writer with a passion for using cutting-edge technology to craft compelling narratives, an eye for detail, and a knack for simplifying complex concepts. You can also find Shittu on Twitter.
