How to deploy Kafka on Docker
Apache Kafka is an open-source distributed event-streaming platform that can process, store, and move large volumes of data with low latency. Companies use it to publish vast numbers of messages to ordered logs called topics, which applications can consume in real time. It powers use cases ranging from inventory management to patient monitoring.
There are many ways to set up and run Kafka, but using Docker has several benefits, such as easier setup, better reproducibility, and improved scalability. That’s because Docker containers allow you to package Kafka into isolated environments so that it doesn’t interfere with the other applications on your server.
This guide will show you how to deploy an Apache Kafka cluster on an Ubuntu VPS using Docker. We’ll cover each step, from setting up Docker to configuring and running Kafka containers.
Prerequisites
Before you begin deploying Kafka on Docker, you’ll need a hosting provider that meets the basic hardware requirements. We recommend a VPS with:
- At least 4GB of RAM (16GB recommended for production)
- 2 CPU cores (4+ CPU cores needed for production)
- 500GB of disk space (SSDs are preferred for better speed)
Note that these requirements may vary depending on your specific use case and expected load. For development and testing purposes, you can start with the lower end of hardware specifications and scale up as needed.
A few other general prerequisites include:
- Ubuntu 24.04 pre-installed on your VPS server
- SSH access with root or sudo privileges
- A reliable network connection (1 GbE+ recommended)
- A basic understanding of Docker concepts, such as containers, images, and volumes
How to deploy Apache Kafka on Docker
Let’s go over how to deploy Apache Kafka using Docker on an Ubuntu VPS. We’ll walk you through each step, from setting up Docker Compose to testing your Kafka deployment.
1. Set up your environment
First, ensure your VPS environment is properly configured and ready for Kafka deployment.
The easiest way to get started is using Hostinger’s Ubuntu 24.04 with Docker template, which comes with Docker Engine and Docker Compose pre-installed. But if you’d rather do it manually, you can also install Docker on a regular Ubuntu VPS by following our comprehensive Docker setup guide.
To use the template on your VPS, follow these steps:
- Log in to hPanel and navigate to VPS → Manage.
- Go to Settings → OS & Panel → Operating System.
- Select Application → Ubuntu 24.04 with Docker.
After this, you’ll need to access your VPS using a terminal to proceed with the rest of these steps. You can either use Hostinger’s built-in Browser Terminal or connect to your VPS via SSH using a terminal application on Windows, macOS, or Linux.
Once done, check that Docker and Docker Compose are installed on your system by running the following commands in the terminal:
docker --version
docker compose --version
If, for some reason, Docker Compose is not already installed, you can install it now by running the following command:
sudo apt install docker-compose-plugin
Since the template pre-allocates resources depending on your VPS plan, you don’t typically need to configure anything else.
![Hostinger offers a pre-configured Ubuntu 24.04 Docker template.](https://www.hostinger.com/tutorials/wp-content/uploads/sites/2/2025/02/8-1024x523.png)
2. Dockerize Apache Kafka
Apache Kafka is a distributed system where different components work together smoothly. Kafka uses brokers to store and process messages, while ZooKeeper traditionally manages the cluster’s metadata and coordinates the brokers. Each broker can handle thousands of reads and writes per second, making Kafka highly scalable for real-time data streaming.
The easiest way to deploy Kafka with Docker is by using the official Docker images from Confluent Inc. The confluentinc/cp-kafka image provides the community version of Kafka, and confluentinc/cp-zookeeper provides the ZooKeeper service.
These Docker images make deployment easier by ensuring consistency across environments, simplifying configuration, and enabling easy scaling.
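Optionally, you can pull both images ahead of time so the first startup doesn’t wait on downloads. The 7.8 tag matches the Compose file used later in this guide:

docker pull confluentinc/cp-zookeeper:7.8
docker pull confluentinc/cp-kafka:7.8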
To get started, first create the directories for data persistence and set the proper permissions:
mkdir -p ./kafka/data ./zookeeper/data ./zookeeper/log
chmod -R 777 ./kafka ./zookeeper
To set up Kafka with Docker, you need to create a docker-compose.yml file. This file should be located in the root directory of your project inside the Ubuntu VPS (Example: /kafka). More on that in the next step.
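For example, assuming /kafka as the project root, you could create the directory and open the file in an editor:

mkdir -p /kafka
cd /kafka
nano docker-compose.yml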
![To get started, we need to first create the necessary directories for data persistence and set proper permissions.](https://www.hostinger.com/tutorials/wp-content/uploads/sites/2/2025/02/3-1024x523.png)
3. Create a Docker Compose file
The Docker Compose file serves as the blueprint for your Kafka deployment, defining all necessary services and their configurations. Create a new file named docker-compose.yml in your project directory and configure it with the following essential components:
version: '3.8'

networks:
  kafka-net:
    driver: bridge

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.8
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
    volumes:
      - zookeeper_data:/var/lib/zookeeper/data
      - zookeeper_log:/var/lib/zookeeper/log
    networks:
      - kafka-net
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "2181"]
      interval: 30s
      timeout: 10s
      retries: 3

  kafka:
    image: confluentinc/cp-kafka:7.8
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://your-vps-ip:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: "/var/lib/kafka/data"
      KAFKA_LOG_RETENTION_HOURS: 168
      KAFKA_MESSAGE_MAX_BYTES: 1000000
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
    volumes:
      - kafka_data:/var/lib/kafka/data
      - kafka_logs:/var/log/kafka
    networks:
      - kafka-net
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "9092"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  kafka_data:
    driver: local
  kafka_logs:
    driver: local
  zookeeper_data:
    driver: local
  zookeeper_log:
    driver: local
This setup includes the following key components:
- Kafka container settings, such as the broker ID, listener configurations, and connection to ZooKeeper.
- ZooKeeper service settings, including the client port and tick time, for managing cluster coordination.
- Network configurations that allow Kafka and ZooKeeper containers to communicate using Docker’s internal networking.
- Volume mappings for Kafka and ZooKeeper to ensure data is saved even if containers restart.
The result is a single-broker Kafka cluster with one ZooKeeper instance, which is ideal for development and testing. It uses Confluent Platform 7.8 community images, a reliable and well-maintained distribution of Apache Kafka.
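Before moving on, replace the your-vps-ip placeholder in KAFKA_ADVERTISED_LISTENERS with your server’s public IP address. A quick sketch, assuming the external service ifconfig.me returns your public address (you can also just edit the file by hand):

# substitute the placeholder with the detected public IP
sed -i "s/your-vps-ip/$(curl -s ifconfig.me)/" docker-compose.yml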
For detailed instructions on creating and managing additional Docker containers, including step-by-step commands and best practices, check out our comprehensive guide on creating Docker containers.
![The Docker Compose file serves as the blueprint for your Kafka deployment, defining all necessary services and their configurations.](https://www.hostinger.com/tutorials/wp-content/uploads/sites/2/2025/02/4-1024x523.png)
4. Start Kafka
Launch your Kafka cluster using the Docker Compose command:
docker compose up -d
Verify that both services are running properly by checking the container status and logs:
docker compose ps
docker compose logs kafka
docker compose logs zookeeper
You should see both containers in the Up state with no error messages in the logs. The Kafka broker typically takes a few seconds to start up completely after ZooKeeper is ready.
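For a stronger readiness check than container status alone, you can ask the broker for its supported API versions; a successful response confirms Kafka is accepting client connections. A quick sketch using the kafka-broker-api-versions tool bundled in the Confluent image:

docker compose exec kafka kafka-broker-api-versions --bootstrap-server localhost:9092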
![You should see both containers in the “Up” state with no error messages in the logs.](https://www.hostinger.com/tutorials/wp-content/uploads/sites/2/2025/02/5-1024x609.png)
5. Test the deployment
Let’s verify the deployment by creating a test topic and exchanging messages. First, create a new topic named test-topic:
docker compose exec kafka kafka-topics --create --topic test-topic --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1
Now, open two terminal windows to test message production and consumption. In the first terminal, start the console producer:
docker compose exec kafka kafka-console-producer --topic test-topic --bootstrap-server localhost:9092
In the second terminal, start the console consumer:
docker compose exec kafka kafka-console-consumer --topic test-topic --bootstrap-server localhost:9092 --from-beginning
Type some messages in the producer terminal and press Enter after each message. You should see these messages appearing in real-time in the consumer terminal, confirming that your Kafka deployment is working correctly. To exit either the producer or consumer, press Ctrl+C.
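You can also confirm the topic’s configuration directly. These commands list all topics on the broker and show test-topic’s partition and replica assignments:

docker compose exec kafka kafka-topics --list --bootstrap-server localhost:9092
docker compose exec kafka kafka-topics --describe --topic test-topic --bootstrap-server localhost:9092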
![You should see these messages appearing in real-time in the consumer terminal.](https://www.hostinger.com/tutorials/wp-content/uploads/sites/2/2025/02/6-1024x523.png)
6. Optimize your Apache Kafka deployment
Optimizing your Kafka deployment is key to ensuring strong performance and reliability. Here are some approaches you can use:
Environment Variables Configuration
Customize your Kafka setup using environment variables: take a Kafka property name, convert it to uppercase, replace dots with underscores, and prefix it with KAFKA_.
For example, use KAFKA_BROKER_ID and KAFKA_ZOOKEEPER_CONNECT to set basic broker settings. This method makes it easier to manage different configurations without editing core files.
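As an illustration, here’s how a few entries from a broker’s server.properties translate into the environment section of the Compose file (the values match the configuration used earlier in this guide):

environment:
  # log.retention.hours=168 becomes:
  KAFKA_LOG_RETENTION_HOURS: 168
  # auto.create.topics.enable=true becomes:
  KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
  # offsets.topic.replication.factor=1 becomes:
  KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1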
Scaling Kafka Brokers
To scale your Kafka cluster with Docker Compose, you can adjust the number of broker instances. The docker compose up --scale kafka=N command can start multiple replicas (you’d need to remove the fixed container_name for it to work), but defining separate broker services in your docker-compose.yml file is a better option. This avoids broker ID and port conflicts and ensures smooth cluster coordination while giving you more control over configurations and resource allocation.
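As a rough sketch of the separate-services approach, a second broker could be added under services: in the same file. The names and ports here are hypothetical; each broker needs its own ID, listeners, and data volume:

  kafka-2:
    image: confluentinc/cp-kafka:7.8
    container_name: kafka-2
    depends_on:
      - zookeeper
    ports:
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 2  # must be unique across brokers
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-2:29093,PLAINTEXT_HOST://your-vps-ip:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    volumes:
      - kafka2_data:/var/lib/kafka/data
    networks:
      - kafka-net

You would also declare kafka2_data under the top-level volumes: key so the second broker’s data persists across restarts.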
External Access Configuration
If you need to expose Kafka to external applications, configure the KAFKA_ADVERTISED_LISTENERS property correctly to maintain connectivity: after the initial connection, clients use the advertised addresses to reach the broker. For production, use node ports or load balancers for secure and reliable external access.
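On a single VPS, exposing Kafka externally also usually means opening the advertised port in the firewall. A minimal sketch, assuming ufw is your firewall and 9092 is the externally advertised port from the Compose file above:

# allow external clients to reach the PLAINTEXT_HOST listener (9092)
sudo ufw allow 9092/tcp
# confirm the rule is active
sudo ufw status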
To maintain optimal performance, pay attention to hardware and configurations. Use high-speed SSDs for storage, manage partitions effectively, and set appropriate replication factors. Regularly monitor system metrics and tweak configurations based on your workload and use case.
For production environments, prioritize security with TLS encryption and SASL authentication. Tools like Prometheus and Grafana can help monitor cluster health and performance, enabling you to identify and resolve bottlenecks before they become issues.
Conclusion
Deploying Apache Kafka with Docker offers an easy and efficient way to set up a powerful event-streaming platform. This guide walks you through the key steps, from preparing your Hostinger VPS to setting up a Docker Compose file that manages both Kafka and ZooKeeper.
But setting up Kafka is just the beginning. You can see its real potential only if it’s part of a larger data ecosystem.
Keep in mind that while this guide is great for development and testing, production setups require extra steps for security, monitoring, and high availability. As your needs grow, you can scale your Kafka cluster by adding brokers, managing partitions effectively, and optimizing for specific workloads.
How to deploy Kafka FAQ
What are the system requirements to deploy Kafka?
A Kafka deployment requires a minimum of 4GB of RAM (16GB recommended for production), 2 CPU cores (4+ for production), and 500GB of storage, preferably on SSDs. Network requirements include 1-10GbE connectivity.
Can I run Kafka without ZooKeeper in Docker?
Yes, since Kafka 3.3, you can run Kafka without ZooKeeper using KRaft mode. KRaft is production-ready for new clusters and simplifies deployment by eliminating the need for separate ZooKeeper management.
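A minimal sketch of the KRaft-specific environment variables for a single combined broker/controller node in Compose (the values are illustrative; CLUSTER_ID is a base64 UUID you can generate with the kafka-storage random-uuid tool inside the image):

environment:
  KAFKA_NODE_ID: 1
  KAFKA_PROCESS_ROLES: broker,controller
  KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:29093
  KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
  CLUSTER_ID: MkU3OEVBNTcwNTJENDM2Qk  # illustrative; generate your own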
How do I connect Kafka to external clients?
External client access requires proper listener configuration. Use node ports, load balancers, or ingress controllers to expose Kafka outside the cluster. Clients must be able to reach every broker individually; a load balancer typically serves only as the bootstrap endpoint for the initial connection.