
# Introduction
You just pushed your Python application to production and suddenly everything breaks. The application worked perfectly on your laptop and passed all of your tests, but now it throws mysterious import errors in production. Sound familiar? Or maybe you are onboarding a new programmer who spends three days trying to get the project running locally. They are on Windows, you developed on a Mac, the production server runs Ubuntu, and somehow everyone has different Python versions and conflicting package installations.
We have all been there, feverishly debugging environment problems instead of building features. Docker solves this mess by packing your entire application environment into a container that runs identically everywhere. No more "works on my machine" excuses. No more weekends spent debugging deployment problems. This article introduces Docker and shows how you can use it to simplify application development. You will also learn how to containerize a simple Python application with Docker.
# How Docker works and why you need it
Think of Docker as shipping containers, but for code. When you containerize a Python application, you don't just pack your code. You pack the whole environment: a specific Python version, all dependencies, system libraries, environment variables, and even the operating system the application expects.
The result? Your application runs the same way on your laptop, your friend's Windows computer, the staging server, and the production server. Every time. But how does it do this?
When you containerize a Python application with Docker, you do the following. You pack your application into a portable artifact called an "image". Then you start "containers" (running instances of that image) and run the application inside the container's environment.
# Building a Python web API
Instead of starting with a toy example, let's containerize a realistic Python application. We will build a simple FastAPI-based TODO API (with uvicorn as the ASGI server) that shows the patterns you will use in real projects, and use Pydantic to validate the data.
In your project directory, create a requirements.txt file:
```
fastapi==0.116.1
uvicorn[standard]==0.35.0
pydantic==2.11.7
```
Now let’s create the basic structure of the application:
```python
# app.py
from fastapi import FastAPI
from pydantic import BaseModel
from typing import List
import os

app = FastAPI(title="Todo API")

todos = []
next_id = 1
```
Add data models:
```python
class TodoCreate(BaseModel):
    title: str
    completed: bool = False

class Todo(BaseModel):
    id: int
    title: str
    completed: bool
```
Create a health check endpoint:
```python
@app.get("/")
def health_check():
    return {
        "status": "healthy",
        "environment": os.getenv("ENVIRONMENT", "development"),
        "python_version": os.getenv("PYTHON_VERSION", "unknown")
    }
```
Add the basic TODO functionality:
```python
@app.get("/todos", response_model=List[Todo])
def list_todos():
    return todos

@app.post("/todos", response_model=Todo)
def create_todo(todo_data: TodoCreate):
    global next_id
    new_todo = Todo(
        id=next_id,
        title=todo_data.title,
        completed=todo_data.completed
    )
    todos.append(new_todo)
    next_id += 1
    return new_todo

@app.delete("/todos/{todo_id}")
def delete_todo(todo_id: int):
    global todos
    todos = [t for t in todos if t.id != todo_id]
    return {"message": "Todo deleted"}
```
Finally, add the server startup code:
```python
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
If you run it locally with `pip install -r requirements.txt && python app.py`, you will have the API running on your machine. Now let's containerize the application.
# Writing your first Dockerfile
You have your application, a requirements list, and a specific environment needed to run it. So how do you get from these various pieces to a single Docker image that contains both the code and the dependencies? You specify this by writing a Dockerfile for your application.
Think of it as a recipe for building an image from the various parts of your project. Create a file named Dockerfile (no extension) in the project directory.
```dockerfile
# Start with a base Python image:
FROM python:3.11-slim

# Set environment variables:
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    ENVIRONMENT=production \
    PYTHON_VERSION=3.11

# Set up the working directory:
WORKDIR /app

# Install dependencies (this order is critical for caching):
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy your application code:
COPY . .

# Expose the port and set the startup command:
EXPOSE 8000
CMD ["python", "app.py"]
```
This Dockerfile builds a container image for the Python application. It uses the Python 3.11 slim image as the base, configures a working directory, installs the dependencies from requirements.txt, copies in the application code, exposes port 8000, and launches the application with `python app.py`. The structure follows best practice by installing dependencies before copying the code, which takes advantage of Docker's layer caching.
# Building and running the first container
Now let's build and run our containerized application:
```sh
# Build the Docker image
docker build -t my-todo-app .

# Run the container
docker run -p 8000:8000 my-todo-app
```
When you run `docker build`, you will see each line in the Dockerfile being built as a layer. The first build may take a while as Docker downloads the base Python image and installs your dependencies.
⚠️ You can also use `docker buildx build` to build the image from the Dockerfile instructions using BuildKit.
The `-t my-todo-app` flag tags your image with a readable name instead of a random hash. The `-p 8000:8000` flag maps port 8000 inside the container to port 8000 on the host machine.
You can visit http://localhost:8000 to check that your API is working inside the container. The same container will run identically on any computer with Docker installed.
# Essential Docker commands for everyday use
Here are the Docker commands you will use most often:
```sh
# Build an image
docker build -t myapp .

# Run a container in the background
docker run -d -p 8000:8000 --name myapp-container myapp

# View running containers
docker ps

# View container logs
docker logs myapp-container

# Get a shell inside a running container
docker exec -it myapp-container /bin/sh

# Stop and remove containers
docker stop myapp-container
docker rm myapp-container

# Clean up unused containers, networks, and images
docker system prune
```
# Some Docker best practices that matter
After working with Docker in production, here are the practices that actually make a difference.
Always use specific version tags for base images:
```dockerfile
# Instead of this
FROM python:3.11

# Use this
FROM python:3.11.7-slim
```
Create a .dockerignore file to exclude unnecessary files:
```
__pycache__
*.pyc
.git
.pytest_cache
node_modules
.venv
.env
README.md
```
Keep your images small by cleaning up package manager caches:
```dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
```
Always start containers as non-root users in production.
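As a sketch of what that looks like (assuming a Debian-based image such as python:3.11-slim, where `addgroup` and `adduser` are available), you could add the following to the Dockerfile above before the `CMD` line:

```dockerfile
# Create an unprivileged system user and switch to it, so the
# application process does not run as root inside the container.
RUN addgroup --system app && adduser --system --ingroup app app
USER app
```

Any `RUN` steps that need root (such as installing system packages) must come before the `USER` instruction.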
# Wrapping up
This tutorial covered the basics, but the Docker ecosystem is huge. Here are some areas to explore next. For production deployments, learn about container orchestration platforms such as Kubernetes, or cloud-specific services such as AWS Elastic Container Service (ECS), Google Cloud Run, or Azure Container Instances.
Explore Docker's security features, including secrets management, image scanning, and rootless Docker. Learn how to optimize Docker images for faster builds and smaller sizes. Set up automated build and deployment pipelines using continuous integration/continuous delivery (CI/CD) systems such as GitHub Actions and GitLab CI/CD.
Happy learning!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she is learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.
