Scaling Out with Nginx, Docker and FastAPI apps
Docker adoption for deploying microservices has grown rapidly in recent years, and Nginx remains one of the most popular choices for reverse proxying and load balancing in front of those services.
In today’s interconnected digital world, smoothly deploying and managing multiple applications is key for keeping them fast and flexible. This blog explores how Docker, a popular tool for packaging software, can make it easier to deploy two different FastAPI applications. We’ll also look at using Nginx as a reliable server that directs and balances incoming requests. This setup ensures your applications handle traffic well and stay dependable.
Let’s start the journey 😉
1. Prerequisites
- Docker or OrbStack (a lightweight alternative to Docker Desktop)
- Basic knowledge of Python (syntax, package management, running the interpreter)
2. Setup some simple FastAPI applications
First, let’s set up our development environment using Poetry as our package manager and FastAPI as our web framework. You can also go with other package managers such as pip, Hatch, or PDM.
Poetry simplifies dependency management and virtual environment creation, ensuring clean and isolated setups. We’ll start by initializing a new Poetry project and adding FastAPI as a dependency. Next, we’ll define our FastAPI applications within separate directories or modules, each encapsulating its endpoints and business logic. This modular approach helps in organizing and scaling our applications effectively.
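If you prefer to build the project from scratch rather than cloning the sample below, the Poetry setup comes down to a couple of commands. Uvicorn is assumed here as the ASGI server that will run the FastAPI apps:

```bash
# Initialize a Poetry project in the current directory and add the web dependencies
poetry init --no-interaction --name py-micros
poetry add fastapi "uvicorn[standard]"
```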
a. Clone the sample resource from Github
To get started right away, I have also prepared a sample repository for a quick demonstration:
https://github.com/mavisphung/py-micros
b. Explore the folder structure
└── 📁your-root-folder
└── .dockerignore
└── .gitignore
└── 📁.jenkins
└── Jenkinsfile
└── README.md
└── book.Dockerfile
└── 📁book_service
└── __init__.py
└── main.py
└── docker-compose.yaml
└── location.Dockerfile
└── 📁location_service
└── __init__.py
└── main.py
└── 📁nginx
└── Dockerfile
└── server.conf
└── poetry.lock
└── pyproject.toml
└── 📁tests
└── __init__.py
c. Explanation:
1. .dockerignore and .gitignore:
.dockerignore: Specifies files and directories that Docker should ignore when building images.
.gitignore: Specifies files and directories that Git should ignore, typically build artifacts and sensitive data.
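For example, a typical .dockerignore for a Python project might contain entries like these; the repository’s actual file may differ:

```
# Keep the build context small and avoid leaking local artifacts into the image
__pycache__/
*.pyc
.venv/
.git/
tests/
```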
2. .jenkins/Jenkinsfile:
Jenkinsfile: Configuration file for Jenkins pipelines, defining the steps to build, test, and deploy the project.
I will cover the Jenkinsfile in the next blog post, so we can skip it for now.
3. README.md:
Markdown file providing an overview of the project, including setup instructions, usage guidelines, and other relevant information.
4. book.Dockerfile and location.Dockerfile:
Dockerfiles for building the Docker images for the ‘book’ and ‘location’ services, respectively. They define the environment and dependencies needed to run each service.
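As an illustration, book.Dockerfile might look roughly like the sketch below (location.Dockerfile would mirror it for the other service). The base image, the Poetry-based install, and the exposed port 8000 are assumptions drawn from the rest of this post, so treat the repository’s file as the source of truth.

```dockerfile
# Minimal sketch of book.Dockerfile (assumed; see the repository for the real file)
FROM python:3.11-slim

WORKDIR /app

# Install Poetry and the project dependencies
RUN pip install --no-cache-dir poetry
COPY pyproject.toml poetry.lock ./
RUN poetry config virtualenvs.create false && poetry install --no-interaction --no-ansi --no-root

# Copy only the book service code
COPY book_service ./book_service

# Run the FastAPI app with uvicorn on port 8000 (the port the Nginx upstream expects)
EXPOSE 8000
CMD ["uvicorn", "book_service.main:app", "--host", "0.0.0.0", "--port", "8000"]
```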
5. book_service/ and location_service/:
Directories containing the code for the ‘book’ and ‘location’ services. Each directory contains an __init__.py file, making it a Python package, and a main.py file, which contains the main logic for each service.
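To give an idea of what lives in these files, a minimal book_service/main.py could look like the sketch below; the endpoints and payloads are hypothetical, and location_service/main.py would follow the same shape.

```python
# book_service/main.py - minimal illustrative sketch (the real service may differ)
from fastapi import FastAPI

app = FastAPI(title="Book Service")


@app.get("/")
async def root() -> dict:
    # Simple identity/health endpoint, handy for checking the Nginx routing
    return {"service": "book", "status": "ok"}


@app.get("/books")
async def list_books() -> list[dict]:
    # Hypothetical endpoint returning a static list of books
    return [{"id": 1, "title": "Clean Architecture"}]
```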
6. docker-compose.yaml:
YAML file defining a multi-container Docker application setup using Docker Compose.
It specifies services, networks, volumes, and other configurations needed to orchestrate the containers.
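As a rough sketch, docker-compose.yaml for this layout could look like the following. The service names python-book-api and python-location-api match the upstream targets described in the Nginx section below, and the published port 9000 matches the localhost:9000 URLs used later; the build contexts are assumptions.

```yaml
# Sketch of docker-compose.yaml (assumed; check the repository for the real file)
services:
  python-book-api:
    build:
      context: .
      dockerfile: book.Dockerfile
  python-location-api:
    build:
      context: .
      dockerfile: location.Dockerfile
  nginx:
    build:
      context: ./nginx
    ports:
      - "9000:80"   # Nginx listens on 80 in the container, exposed as 9000 on the host
    depends_on:
      - python-book-api
      - python-location-api
```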
7. nginx/:
Directory containing the configuration for an Nginx server.
Dockerfile: Specifies how to build the Nginx container image.
server.conf: An Nginx configuration file that defines how Nginx should handle incoming HTTP requests for different paths.
Upstream Definitions:
Two upstreams are defined, location and book, each pointing to a different backend service running on python-location-api:8000 and python-book-api:8000, respectively. These upstreams are used to proxy requests to the corresponding backend services.
Server Block:
The server listens on port 80 (HTTP) and is configured for the hostname localhost. You can change this once you move to dev/staging/production, for example to example.com. The root directory for serving static files is set to /usr/share/nginx/html, with index.html and index.htm as the default files to serve if a directory is requested.
Error Page Handling:
Custom handling for error codes 500, 502, 503, and 504 is defined, redirecting users to /50x.html, where a static error page is served from the root directory.
Location Blocks:
/book and /book/...:
- Two location blocks handle requests starting with /book. The exact path /book is proxied to the root (/) of the book upstream service. This means that accessing http://localhost:9000/book will internally forward the request to http://python-book-api:8000/.
- The regular expression location block ~ ^/book/(.*)$ captures any path that follows /book/ and proxies it to the book upstream, preserving the path. For example, http://localhost:9000/book/page would be proxied to http://python-book-api:8000/page.
/location and /location/...:
- Similar to the /book configuration, there’s an exact-match location block for /location that proxies requests to the root of the location upstream service.
- The regular expression location block for /location/... works the same way as the /book configuration, proxying additional path segments to the location upstream service.
Proxy Settings:
For all proxied requests, several headers are set to ensure the backend services receive the necessary information about the original request: the original host (Host), the client’s IP address (X-Real-IP), the chain of proxies the request has traveled through (X-Forwarded-For), and the original protocol (X-Forwarded-Proto).
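Putting the pieces above together, server.conf could look roughly like this condensed sketch. It illustrates the behavior just described rather than reproducing the repository’s file, and it sets the proxy headers once at the server level so every location block inherits them.

```nginx
# Condensed sketch of nginx/server.conf based on the description above.
upstream book {
    server python-book-api:8000;
}

upstream location {
    server python-location-api:8000;
}

server {
    listen 80;
    server_name localhost;   # change for dev/staging/production, e.g. example.com

    root  /usr/share/nginx/html;
    index index.html index.htm;

    # Forward information about the original request to the backends
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Serve a static page for upstream errors
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    # Exact match: /book is forwarded to the root of the book upstream
    location = /book {
        proxy_pass http://book/;
    }

    # Regex match: /book/<path> keeps the trailing path when proxying
    location ~ ^/book/(.*)$ {
        proxy_pass http://book/$1;
    }

    # /location and /location/<path> are configured the same way,
    # pointing at the location upstream instead
    location = /location {
        proxy_pass http://location/;
    }

    location ~ ^/location/(.*)$ {
        proxy_pass http://location/$1;
    }
}
```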
8. poetry.lock and pyproject.toml:
poetry.lock: Lock file generated by Poetry, locking dependencies to specific versions for reproducibility.
pyproject.toml: Configuration file for Poetry, defining project metadata and dependencies.
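For reference, a Poetry-managed pyproject.toml for this kind of project would look roughly like the snippet below; the version constraints and metadata are assumptions.

```toml
# Rough sketch of pyproject.toml (names and versions are assumptions)
[tool.poetry]
name = "py-micros"
version = "0.1.0"
description = "Two FastAPI services behind an Nginx reverse proxy"
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.11"
fastapi = "^0.111"
uvicorn = { extras = ["standard"], version = "^0.30" }

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```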
9. tests/:
Directory containing unit tests for the project. We can skip this directory for now.
3. Conclusion
The project sets up Nginx as a gateway that directs web traffic to the right parts of our applications. This setup makes it easy to manage traffic for different services, like one for location information and another for book details. Nginx also quickly serves up static files, making our website faster for those requests.
By grouping backend services under names (upstreams), we can change or scale these services without adjusting the Nginx setup. The detailed instructions we give Nginx ensure that our backend services get all the information they need about who’s visiting our site and how they got there.
In simple terms, this project shows how to use Nginx to handle web traffic smartly and efficiently. It’s a great setup for managing different services and content types, making our web application more reliable and easier to maintain.
Thank you for reading 😉.