How to execute a command in one Docker container from another

You have a few options, but the first 2 that come to mind are:

  1. In container 1, install the Docker CLI and bind mount /var/run/docker.sock from the host (you need to specify the bind mount when you start the container). Then, inside the container, you can run docker commands against the bind-mounted socket as if you were executing them from the host (you might also need to chmod the socket inside the container to allow a non-root user to do this).
  2. You could install SSHD on container 2, and then ssh in from container 1 and run your script. The advantage here is that you don’t need to make any changes inside the containers to account for the fact that they are running in Docker and not on bare metal. The downside is that you will need to add the SSHD setup to your Dockerfile or startup scripts.
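As a sketch of option (1) — the container names are made up, and this should only ever be run in a controlled dev environment (see the warning at the end of this answer):

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null 2>&1 || exit 0

# On the host: start container 1 with the Docker socket bind mounted in.
# "controller" is a hypothetical name; docker:cli is the official CLI image.
docker run -d --name controller \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli sleep infinity

# The Docker CLI inside "controller" now talks to the host's daemon,
# so it can exec a command in another container just as the host could.
docker exec controller docker exec some-other-container echo "hello from container 2"
```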

Most of the other ideas I can think of are just variants of option (2), with SSHD replaced by some other tool.
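A minimal sketch of option (2), assuming an Ubuntu base image for container 2 (the base image, the root user, and the key path are all assumptions, not requirements):

```dockerfile
# Sketch of option (2): bake an SSH server into container 2's image.
FROM ubuntu:22.04
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && mkdir -p /var/run/sshd
# Authorize container 1's public key (you would generate and provide this file).
COPY authorized_keys /root/.ssh/authorized_keys
EXPOSE 22
# Run sshd in the foreground as the container's main process.
CMD ["/usr/sbin/sshd", "-D"]
```

Container 1 could then run something like `ssh root@container2 /path/to/script.sh`, provided the two containers can reach each other over a shared Docker network.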

Also be aware that Docker networking has its quirks (especially on Mac hosts, where Docker runs the containers inside a VM), so you need to make sure that both containers are attached to the same Docker network and can communicate over it.
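For example, attaching both containers to a user-defined bridge network gives them DNS-based name resolution (the container and network names here are hypothetical):

```shell
# Skip gracefully on machines without Docker installed.
command -v docker >/dev/null 2>&1 || exit 0

# Create a user-defined bridge network and attach both containers to it.
docker network create shared-net
docker run -d --name app1 --network shared-net alpine sleep infinity
docker run -d --name app2 --network shared-net alpine sleep infinity

# On a user-defined network, containers can resolve each other by name.
docker exec app1 ping -c 1 app2
```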

Warning:

To be completely clear, do not use option 1 outside of a lab or a very controlled dev environment. It takes a protected socket that has full authority over the Docker daemon on the host and grants a container unchecked access to it. Doing that makes it trivially easy to break out of the Docker sandbox and compromise the host system. About the only place I would consider it acceptable is as part of a full-stack integration test setup that will only be run ad hoc by a developer. It’s a hack that can be a useful shortcut in some very specific situations, but the drawbacks cannot be overstated.
