
Docker quick guide
What is a Docker container and what is a K8s Pod?
- Docker container : a runtime instance of an image; it is created and managed by the Docker engine.
- K8s Pod : an abstraction in a Kubernetes cluster which can contain one or more containers.
What are the ways we can troubleshoot issues related to Docker containers?
- See the logs –
docker logs container_name
- See the logs in real time –
docker logs -f container_name
- inspect the container –
docker inspect container-id
- Get the stats of all containers –
docker stats
- Docker system information –
docker info
- To see the running processes in a container –
docker top container-id
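The commands above can be wrapped in one small helper that dumps the most useful diagnostics for a container in a single call; a sketch (the `diagnose` function name is mine, not a Docker command):

```shell
#!/bin/bash
# Sketch: collect common diagnostics for one container in a single call.
# Pass any container name or ID, e.g.: diagnose partho-container
diagnose() {
  local c="$1"
  echo "--- last 50 log lines ---"
  docker logs --tail 50 "$c"
  echo "--- processes ---"
  docker top "$c"
  echo "--- resource usage (one snapshot, no live stream) ---"
  docker stats --no-stream "$c"
}
```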
How to delete unused Docker resources
- Image :
- – list all images –
docker images
- – delete all unused images –
docker image prune
- Container :
- – list all the containers –
docker ps
- – delete all stopped (unused) containers –
docker container prune
- Volume :
- Delete unused volumes –
docker volume prune
- Network : If a custom network exists but no containers are attached to it, that network can be deleted [Be very careful with this activity] –
docker network prune
- – Remove all at once (this removes stopped containers, unused networks and unused images, but not volumes unless --volumes is added) –
docker system prune -a
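The individual prune steps above can also be combined into one cleanup helper; a sketch (the `cleanup` name is mine, and `-f` skips the confirmation prompt, so use it with care):

```shell
#!/bin/bash
# Sketch: run the prune commands above in one pass.
# -f answers the "Are you sure?" prompt automatically - be careful.
cleanup() {
  docker container prune -f   # stopped containers
  docker image prune -f       # dangling images
  docker volume prune -f      # unused volumes
  docker network prune -f     # unused custom networks
}
```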
How to identify the containers in a network
The easiest way is to list all networks and then inspect each one individually –
docker network ls -q
docker network inspect <network-id>
But if there are many networks, it would be tedious to list them all and then inspect each one, so the most practical approach is a shell script that loops through them
- identify the containers on each network through a shell script and delete any network which does not have containers in it
#!/bin/bash
# List all networks
for network in $(docker network ls -q); do
  # {{json .Containers}} prints {} when no containers are attached
  if [ "$(docker network inspect -f '{{json .Containers}}' "$network")" = "{}" ]; then
    echo "Pruning network $network which has no containers attached."
    docker network rm "$network"
  fi
done
Note: in the script above, -f is shorthand for --format. The built-in networks (bridge, host, none) cannot be removed; docker network rm simply reports an error for them.
How to restrict a container to 1 CPU & 1 GB RAM
docker run -d --cpus 1 --memory 1g --name partho-container nginx
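To confirm the limits took effect, you can read them back from docker inspect; a sketch (the `show_limits` name is mine — NanoCpus is CPUs × 10^9, so 1 CPU shows as 1000000000, and Memory is in bytes, so 1 GB shows as 1073741824):

```shell
#!/bin/bash
# Sketch: print the CPU and memory limits of a container.
# NanoCpus is CPUs * 10^9; Memory is in bytes.
show_limits() {
  docker inspect -f 'cpus={{.HostConfig.NanoCpus}} memory={{.HostConfig.Memory}}' "$1"
}
```

Usage: `show_limits partho-container`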
In the Dockerfile, what is the difference between RUN & CMD?
- RUN : Executes at image build time and bakes its result into a new image layer, e.g. installing packages (similar to installing software on a laptop before it is handed over)
- CMD : Defines the default command that runs when the container starts (similar to a command such as ifconfig running automatically once the laptop is powered on)
Basic Dockerfile (Single Stage)
# Use the official Nginx base image
FROM nginx:latest
# Set the working directory in the container
WORKDIR /usr/share/nginx/html
# Copy the content of the current directory to the container
COPY . .
# Expose port 80 to the host
EXPOSE 80
# The default command to run when the container starts
CMD ["nginx", "-g", "daemon off;"]
Multistage Dockerfile
- This is used when the image is built in multiple stages.
- It reduces the final image size, and with fewer files and dependencies it is also better for security
# Stage 1: Build
FROM node:16-alpine AS build
# Set the working directory
WORKDIR /app
# Copy the package.json and package-lock.json
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Build the React application
RUN npm run build
# Stage 2: Run
FROM nginx:alpine
# Copy the built React application from the build stage
COPY --from=build /app/build /usr/share/nginx/html
# Expose port 80
EXPOSE 80
# Start nginx
CMD ["nginx", "-g", "daemon off;"]
In the above Dockerfile, we see the nginx CMD ["nginx", "-g", "daemon off;"]
These commands and volume details for public images are documented on the image itself, which we can easily find by reading the image description on the public Docker Hub repository.
I have discussed Docker images on my blog, where we also covered Docker volumes
What to do with the above Dockerfile?
A Dockerfile is used to create an image containing the working files referenced in its COPY instructions
- Create an image using that multistage Dockerfile
docker build -t some_image_name .
- Using that image, build a container
docker run -d -p 80:80 --name some_container_name some_image_name
- Here, -p (publish, i.e. port mapping) is needed because by default Docker uses the bridge network, and the two port numbers mean
-p host_port:container_port
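The build-and-run steps above can be sketched as one script that also verifies the container answers on port 80 (the `deploy` name is mine; the image and container names are the placeholders from the text):

```shell
#!/bin/bash
# Sketch: build the image, start a container, then check it responds.
deploy() {
  docker build -t some_image_name .
  docker run -d -p 80:80 --name some_container_name some_image_name
  sleep 2                                  # give nginx a moment to start
  curl -fsS http://localhost:80 >/dev/null && echo "container is serving"
}
```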
Update the Docker container to a new image without data loss
- A volume must be mounted on the container so that the data is persistent on the Docker volume
docker run -d --name partho-container -v my-laptop-path:/path/to/container/data partho-image
- Inspect the container and check the Mounts section to find the directory that is mounted into the container
How to take a backup of the data from docker container
docker run --rm -v my-volume:/volume -v $(pwd):/backup busybox tar cvf /backup/backup.tar /volume
or
docker run --rm --volumes-from <container> -v $(pwd):/backup busybox tar cvf /backup/backup.tar <container-path>
- stop the container `docker stop container-id`
- Remove the old container to avoid any conflict – `docker rm current-container`
- Pull the latest image – `docker pull new-updated-image:latest`
- Start the new container with the new image and old data – `docker run -d --name new-container -v my-volume:/path/to/data new-updated-image:latest`
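The stop / remove / pull / run sequence can be sketched as one script (the `update_container` name is mine; the container, image and volume names are the placeholders used above):

```shell
#!/bin/bash
# Sketch of the update flow: data survives because it lives on my-volume,
# not inside the container's writable layer.
update_container() {
  docker stop current-container
  docker rm current-container                       # avoid a name conflict
  docker pull new-updated-image:latest
  docker run -d --name new-container \
    -v my-volume:/path/to/data new-updated-image:latest
}
```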
How to move a container from Host-1 to Host-2
- Perform all the above steps (stop, back up the data, etc.); it can also be done manually:
- SSH to Host-1, inspect the container to find the path of the volume (e.g. `cd /user/data`), then stop the container
- Now, zip the backup folder –
tar cvf backup.tar .
- move the tar file to the `2nd Host`, where the container needs to come up –
scp backup.tar user@Host2:/user/data
- cd /user/data
- extract the backup
tar xvf backup.tar
- Create the new container on Host2 with the previous data –
docker run -d --name container2 -v /user/data:/path/to/container/data partho-image:tag
- To do this automatically, we can write a script
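A sketch of such a migration script, run from Host-1 (the `migrate` name is mine; the host names, paths and image tag are the examples from the steps above, and passwordless SSH access to Host2 is assumed):

```shell
#!/bin/bash
# Sketch: migrate a container's volume data from this host (Host-1) to Host2,
# then start a replacement container there on top of the copied data.
migrate() {
  docker stop container1                         # stop before copying data
  tar cvf backup.tar -C /user/data .             # archive the volume data
  scp backup.tar user@Host2:/user/data           # copy it to Host2
  ssh user@Host2 "cd /user/data && tar xvf backup.tar && \
    docker run -d --name container2 -v /user/data:/path/to/container/data partho-image:tag"
}
```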
How to restore a container from a backup
docker run --rm --volumes-from <container> -v $(pwd):/backup busybox sh -c "cd <container-path> && tar xvf /backup/backup.tar --strip 1"
How to ensure Docker containers are secure enough to host a secure application
- Use images from trusted sources
- Maintain the underlying host with the latest patches and regular updates
- Regularly scan the images for any vulnerabilities –
- Tools like `trivy`
- install on your local system – `brew install trivy` # For macOS
- Test – `trivy image nginx:latest`
- Review the list of identified vulnerabilities, including severity levels (e.g., CRITICAL, HIGH, MEDIUM, LOW).
- Update the Dockerfile to use a more recent base image if vulnerabilities are found in the base image.
- Enable observability to check the logs and alerts
- enable Prometheus to collect metrics and Grafana to visualise them, or a stack like ELK [`Elasticsearch Logstash Kibana`]
- Or use Docker's own commands like docker stats, docker top, etc.
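In automation, trivy can be made to fail the build when serious issues are found; a sketch using trivy's --severity and --exit-code flags (the `scan_image` wrapper name is mine):

```shell
#!/bin/bash
# Sketch: scan an image and return non-zero if CRITICAL or HIGH issues exist,
# so a CI pipeline can block the deployment.
scan_image() {
  trivy image --severity CRITICAL,HIGH --exit-code 1 "$1"
}
```

Usage: `scan_image nginx:latest`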
What are the best practices for Docker containers?
- Use lightweight images to reduce the attack surface and optimise the build and deployment
- Patch the host and use the latest images for containers
- Use an orchestration tool like K8s to manage containers at scale and enable features like load balancing and auto-healing
- Enforce resource limits using --cpus, --memory, etc.
- Monitor container health and resource usage –
docker stats
- Enable a custom bridge network and keep the containers isolated to prevent any security compromise
- Regularly back up the data – use shell scripts to do that daily
- Run vulnerability scans using tools like
trivy
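The daily backup mentioned above can be the volume-backup command from earlier wrapped in a script and scheduled with cron; a sketch (the `backup_volume` name, volume name and script path are examples, not from the original):

```shell
#!/bin/bash
# Sketch: archive a named volume into a dated tar file in the current directory.
# Schedule with cron, e.g.:  0 2 * * * /usr/local/bin/backup.sh
backup_volume() {
  docker run --rm -v my-volume:/volume -v "$(pwd)":/backup \
    busybox tar cvf "/backup/backup-$(date +%F).tar" /volume
}
```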
If you enjoyed reading up to this point, then you will definitely like my other article on setting up a local development environment using Docker Compose

Partho Das, founder of Lia Infraservices, has 15+ years of expertise in cloud solutions, DevOps, and infrastructure security. He provides consultation on architecture planning, DevOps setup, Kubernetes, and cloud migrations. Partho holds multiple AWS and Azure certifications, along with CISCO CCNA & CCNP.
Connect on LinkedIn