Docker, as a lightweight container technology, has become one of the most commonly used deployment methods in cloud computing and DevOps environments. Whether you are building microservices, testing across multiple environments, rapidly deploying projects, or running databases, caching systems, and web applications, almost everything can be accomplished with Docker. Deploying a Docker environment on a cloud server is therefore a fundamental skill for cloud users. As long as the server runs Linux, is reachable over SSH, and has root or sudo privileges, you can quickly set up a Docker runtime and deploy business applications as images and containers.
Installing Docker on a cloud server typically involves four key steps: system environment preparation, Docker installation and startup, image management, and container execution and persistence. Whether you are using Alibaba Cloud, Tencent Cloud, Huawei Cloud, AWS, or an overseas VPS, as long as the system version meets the requirements, you can use the same method for deployment. Most users choose CentOS, Ubuntu, or Debian as their cloud server system; therefore, the following explanation uses the most common Ubuntu and CentOS as examples.
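A quick way to confirm which distribution and version the server is actually running, using standard Linux commands:
cat /etc/os-release   # distribution name and version
uname -r              # kernel version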
Before deployment, make sure the server's system packages are up to date and that it has basic network access, since pulling images from Docker registries requires an internet connection. If access is restricted or image pulls are slow (common on servers in mainland China), you can configure a domestic registry mirror, such as those provided by Alibaba Cloud or Tencent Cloud, or the Docker China mirror. This adjustment can also be made after installation.
For example, the system update command for Ubuntu is as follows:
sudo apt update && sudo apt upgrade -y
CentOS, on the other hand, uses:
sudo yum update -y
After the system update is complete, you can begin installing Docker. To ensure version stability, it is recommended to use the installation method provided by the official Docker website rather than the older packages shipped in your distribution's default repositories (for example, Ubuntu's `docker.io` package). The following example uses Ubuntu:
sudo apt install ca-certificates curl gnupg -y
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io -y
After installation, verify the installed version, then start the Docker service and enable it to launch at boot:
docker --version
sudo systemctl enable docker
sudo systemctl start docker
To allow the current user to execute docker without using sudo, it is recommended to add the user to the docker group:
sudo usermod -aG docker $USER
After making changes, you need to log in via SSH again or execute `newgrp docker` for the changes to take effect.
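To confirm that Docker is working and that the group change has taken effect, a quick smoke test is to run the small hello-world image from Docker Hub (an optional check, not required for the steps below):
docker run --rm hello-world   # prints a confirmation message if the daemon, image pull, and container run all succeed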
The installation method for CentOS is similar: add the official Docker repository for CentOS and install the same packages with `yum`. A minimal sketch, assuming CentOS 7 or later with internet access to the official repository:
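sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable docker
sudo systemctl start docker
If the server cannot access the official Docker registry, or image pulls are slow, you can configure a registry mirror (image accelerator), for example: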
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
sudo systemctl restart docker
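To confirm the mirror is in effect, you can check the daemon information; the configured mirrors appear in the output:
docker info | grep -A 1 "Registry Mirrors"   # lists the configured registry mirrors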
Once Docker is installed, you can start using images and containers. A Docker image is like a packaged runtime environment, while a container is an actual running instance of the application. You can use `docker pull` to pull images, for example:
docker pull nginx
After the pull completes, run an Nginx container for testing:
docker run -d --name web-test -p 80:80 nginx
Accessing the server's public IP address will display the Nginx default welcome page, indicating that the container is running correctly. You can use `docker ps` to view running containers, while `docker stop`, `docker restart`, and `docker rm` control stopping, restarting, and deleting containers, respectively.
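For example, a typical check-and-clean-up sequence for the test container above (docker logs, shown here as well, prints the container's output and is handy for troubleshooting):
docker ps              # list running containers
docker logs web-test   # view the Nginx output of the test container
docker stop web-test   # stop the container
docker rm web-test     # delete it once it is no longer needed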
To deploy real projects, you typically need to mount host directories into the container so files can be shared between the host machine and the container, for example when deploying a website:
docker run -d --name web \
-p 80:80 \
-v /var/www/html:/usr/share/nginx/html \
nginx
This maps the server's `/var/www/html` directory to `/usr/share/nginx/html` inside the container, so web page files can be modified directly on the host without rebuilding the image.
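As a quick check of the bind mount, you can write a page on the host and request it through the container (assuming the web container above is running):
echo "<h1>Hello from the host</h1>" | sudo tee /var/www/html/index.html
curl http://localhost   # the page is served by Nginx inside the container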
For database containers, such as MySQL, persistent mounting can be used to store data:
docker run -d --name mysql-db \
-e MYSQL_ROOT_PASSWORD=123456 \
-v /data/mysql:/var/lib/mysql \
-p 3306:3306 \
mysql:8.0
This way, even if the container is deleted, the data will not be lost because the files are stored in the host machine's directory.
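To see the persistence in practice, you can inspect the host directory and connect to the database inside the container (a sketch, assuming the container above has finished initializing):
ls /data/mysql                             # the MySQL data files live on the host
docker exec -it mysql-db mysql -uroot -p   # connect inside the container to verify the server is up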
For more complex deployments, Docker Compose can be used for multi-container orchestration. For example, an Nginx web server and a MySQL database can be started together from a single docker-compose.yml file:
version: '3'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./web:/usr/share/nginx/html
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: 123456
    volumes:
      - ./db:/var/lib/mysql
Then run:
docker compose up -d
This allows you to start all services at once, which is more convenient than running containers individually and is a common practice in enterprise DevOps.
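Day-to-day management of the stack goes through the same tool; for example, with Docker Compose v2:
docker compose ps        # list the services and their status
docker compose logs -f   # follow the logs of all services
docker compose down      # stop and remove the containers (bind-mounted data on the host remains)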
After completing the Docker deployment, you can also set up routine maintenance such as automatic container startup, log cleanup, and image optimization, including regularly cleaning up unused resources:
docker system prune -a
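Two related points, offered as a sketch rather than a required step: a restart policy covers the automatic startup of containers after a reboot, and docker system prune -a deletes all unused images, so only add -f when you are sure:
docker update --restart unless-stopped web mysql-db   # assumes the example containers from above are still present
docker system prune -af                               # -f skips the interactive confirmation prompt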
As containerized applications continue to gain popularity, most applications, databases, middleware, and development environments can now be deployed via Docker, reducing the operation and maintenance burden while supporting rollback within seconds, rapid migration, and elastic scaling. This is an important reason why more and more enterprises are migrating their applications to Docker environments.