Docker

From Bitpost wiki

Latest revision as of 20:46, 27 December 2023

Thanks Keith for the intro!

Keith: Alpine is a stripped-down linux distro. Need to learn how to handle persistent volumes and container secrets (don't bake secrets into the container; it can prompt for them). Use -v (volume) on docker run, or VOLUME in a Dockerfile. Containers should log to stdout/stderr so the host can manage logging. Terraform can build your arch (can use a proxmox template); ansible is great for actual tasks. GCP has managed kubernetes (wait until you understand why you need it). Check out the hashicorp vault FOSS version for awesome secret storage that is docker-compatible.

Maintenance

Restart on reboot

If you are using docker compose, you should add this to your containers in compose.yml:

restart: always
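For example, a minimal compose.yml entry (the service and image names here are just placeholders):

```yaml
services:
  myapp:
    image: myapp:latest
    restart: always
```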

To restart a single container on reboot, once it is running, update its config:

docker update --restart unless-stopped container_id

Prune regularly

ALWAYS prune your host's containers and images! Or docker will eat your drive alive. Do this in crontab:

0 3 * * * docker container prune -f && docker image prune -f

On occasion you may also need to clean up strays, with this super-prune:

docker system prune --all

It will remove all unused images, not just dangling ones. Make sure the ones you want to keep are running! But DO THIS whenever you've been dicking around for a while, you're sure to have splorched some schtumm!

Also don't forget to prune your system log.

Commands

docker build -t name .            # builds an image from curr dir Dockerfile
docker images                     # lists images
docker run --name cont-name image # to create and start a container from an image, which you can then stop and start
                                  # -it to run in a terminal, then Ctrl-C to stop it; use -d to run detached
docker logs --follow cont-name    # tail container logging
docker logs -f cont --tail 100    # tail but clip log - REQUIRED on long-running containers!
docker ps                         # to see what containers are running
docker ps -a                      # to see what containers are running (including recently stopped containers)
docker start|stop name            # to start/stop a container
docker exec -it cont-name cmd     # to run cmd on running container
docker exec -it cont-name bash    # get bash prompt on running container
docker exec -u root -it cont-name /bin/bash # to run bash as root (so eg you can `apt install ...`)
docker exec -u 0 -it cont-name bash # similar to ^
docker cp ./myfile mycont:/dest   # copy file into container
docker cp mycont:/src /home/      # copy file out of container
docker cp $(docker create --rm ${imageBaseUrl}):/image/path/files /local/path # copy image files
docker rm name                    # to remove a stopped container
docker container prune            # to remove all stopped containers
docker images                     # lists images
docker rmi REPOSITORY:TAG         # to remove an image (or pass the image ID)
docker image prune                # remove all dangling images

docker push|pull                  # push to / pull from hub.docker.com (for subsequent pull elsewhere!)

See files in an image

You should REALLY peek at images before blindly running containers. Stupid docker doesn't make this OOTB-easy, but there's always a way (https://stackoverflow.com/a/53481010/717274).

docker create --name="tmp_$$" image:tag ls
docker export tmp_$$ | tar t
docker rm tmp_$$

Or to just fucking untar them all:

docker save nginx > nginx.tar
tar -xvf nginx.tar

Pretty ps

Use this to show containers in a nice format (you can also add this as default, in ~/.docker/config.json):

docker ps -a --format 'table {{.ID}}\t{{.Status}}\t{{.Names}}\t{{.Command}}'
docker ps -a --format 'table {{.ID}}\t{{.Status}}\t{{.Names}}\t{{.Command}}' | grep #mycontainer#
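To make this the default, ~/.docker/config.json supports a psFormat key; a sketch (note the tabs must be JSON-escaped as \\t):

```json
{
  "psFormat": "table {{.ID}}\\t{{.Status}}\\t{{.Names}}\\t{{.Command}}"
}
```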

Restart container with new image

This is best practice, especially for large containers that are hosted at another location. It removes the image retrieval time from the overall container downtime:

docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always ...

Containers

Find nirvana here.

Debian slim

Debian slim containers are much smaller than standard installs. They are stripped of things like documentation, while still providing the standard glibc userland and C/C++ stack (the kernel always comes from the host, as with any container).

You can use apt to bake in what you need from there. Nice!
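A minimal sketch of baking packages into a slim base (the base tag and package names here are just examples):

```dockerfile
FROM debian:bookworm-slim

# install only what you need, skip recommended extras, and clean the apt cache
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && rm -rf /var/lib/apt/lists/*

CMD ["bash"]
```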

Node

The official node container is huge (1GB), the alpine one is relatively tiny. See the list here.

alpine

Alpine is the best TINY base linux container. But it runs BusyBox and musl so many things (nvm, meteor) won't work (at least without a TON of effort).

Node on alpine

Here's a good starting point for a node app, but remember meteor won't work:

FROM alpine/git
RUN apk --update add curl bash tar sudo npm 
SHELL ["/bin/bash", "-c"]

ENV NEWUSER='m'
RUN adduser -g "$NEWUSER" -D -s /bin/bash $NEWUSER \
&& echo "$NEWUSER ALL=(ALL) ALL" > /etc/sudoers.d/$NEWUSER && chmod 0440 /etc/sudoers.d/$NEWUSER

USER m
WORKDIR /home/m

COPY --chown=m my-code /home/m/my-code

RUN npm install -g whatevah

EXPOSE 3000
CMD [ "my_app", "param1" ]

More examples

LetsEncrypt SSL certificate generator

docker pull zerossl/client
# well that experiment went to shit... we tried to add a TXT domain record but it wasn't found
# Tom thought we needed a fully resolving A record before TXT would work
# either way, we can use a self-signed cert with gitlab and forgo the constant need to renew

Networking

Bridge networking (the default) allows connections between containers running on the same docker host.

docker network create my-nw # defaults to --driver bridge
docker run (...) --network my-nw (...) # to create and start a container on the network
docker network connect my-nw container-name # to attach a container to the network after it is started
docker network inspect my-nw
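The same wiring can be done declaratively in compose.yml; services on a shared user-defined network resolve each other by service name (the names below are hypothetical):

```yaml
networks:
  my-nw:
    driver: bridge        # bridge is the default driver anyway

services:
  web:
    image: nginx
    networks:
      - my-nw
  app:
    image: myapp:latest
    networks:
      - my-nw             # app can reach web at hostname "web"
```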

Install

Install docker with apt

This now includes docker compose. It should be all you need. I had to do this shit that is fucking always a problem and never documented - FU.

  • add yourself to the docker group (then log out and back in, or run newgrp docker, for it to take effect)
sudo usermod -aG docker ${USER}
  • fix the GOD DAMNED socket permission. assholes... (note this makes the socket world-writable; the group fix above is the cleaner one)
sudo chmod 666 /var/run/docker.sock

Proxmox CPU config

Some images (like Mongo 5.0, which Meteor apps depend on) require more-advanced CPU capabilities than Proxmox grants by default. Specifically, Mongo 5.0 requires AVX cpu instructions. To enable them:

Proxmox > VM > Edit > Processor > Type: "host"

Note that my Proxmox docker VM is called matryoshka.