#!/bin/bash
# Grab the container name and use fzf to let us pick it
result=$(docker container ls --format "{{.Names}}" | fzf)
# Nothing selected (fzf cancelled) - bail out
if [ -z "$result" ]
then
exit 1
fi
# No command - try bash
if [ $# -eq 0 ]
then
echo "docker exec -it $result /bin/bash"
docker exec -it "$result" /bin/bash
else
# Command was given so use that
echo "docker exec -it $result $*"
docker exec -it "$result" "$@"
fi
Requires fzf, but otherwise just run `ec`, select a container, and it'll try bash or whatever shell/command you supplied.
[[ and ]] are not portable. [ and ] are portable and specified in POSIX.
Like, if you always use [[ and ]], you are going to be surprised when the default shell used for running unattended scripts on Debian and Ubuntu (dash) refuses to run your script:
dash: 1: [[: not found
I know your parent commenter mentioned "#!/bin/bash", so your point is still valid. But I recommend always using "#!/bin/sh" along with [ and ] so that the scripts are portable.
I say this as a former package maintainer who has spent a great deal of time converting Bash scripts to POSIX sh scripts just so that I can package them as .deb.
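A minimal illustration of the portable form (the variable and strings are just for the example):

```shell
#!/bin/sh
# [ is a regular utility specified by POSIX, so this runs unchanged in
# dash, ash, busybox sh, and bash
greeting="hello"
if [ "$greeting" = "hello" ] && [ -n "$greeting" ]; then
    echo "portable: $greeting"
fi
# The bash-only equivalent below is exactly what dash chokes on:
#   if [[ $greeting == hello && -n $greeting ]]; then ...
```

Note that with [ you must quote the variable expansions yourself; [[ is a bash keyword that suppresses word splitting, which is part of why people reach for it.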
IMHO this is also a good remark in the context of container images, most of which only ship /bin/sh. Technically not all, but many distro-based images do. That matters if someone plans to extend the script so that it injects itself into the container or the like.
This is why I usually stop myself from trying anything very complex with shell scripts: unlike scripting languages, half the power of shell is hiding inside little piles of punctuation marks that I don’t think I can even Google!
I think something that would make me consider using this is auto-completion support. I've added it to a lot of my own commands and it is awesome. Compared to this tool, raw docker commands with tab completion would still be faster for me.
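For example, a rough sketch of bash completion for a wrapper like `ec` (the function name and registration are my own illustration, not from the tool):

```shell
# Complete the ec wrapper with the names of running containers (bash only)
_ec_complete() {
    local names
    # 2>/dev/null so completion stays quiet when the docker daemon is down
    names=$(docker container ls --format '{{.Names}}' 2>/dev/null)
    COMPREPLY=($(compgen -W "$names" -- "${COMP_WORDS[COMP_CWORD]}"))
}
complete -F _ec_complete ec
```

Dropping this into ~/.bashrc gives you tab completion of container names for the wrapper itself.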
Whenever you use docker it seems you'll want to write custom wrappers for it.
In particular what I've found useful:
- automatically create a user with the same uid as yourself and run the command as that user. Pretty much required whenever you mount some directories for writing.
- automatically login and pull if necessary
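The uid trick above, roughly (the image name and mount paths are assumptions; some images also need a matching passwd entry to work with an arbitrary uid):

```shell
# Sketch: run a one-off command in a container as the host user, so files
# written to the bind-mounted working directory end up owned by you
# instead of root
run_as_me() {
    docker run --rm \
        --user "$(id -u):$(id -g)" \
        -v "$PWD:/work" -w /work \
        "$@"
}
# e.g.: run_as_me alpine sh -c 'touch out.txt'
```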
I usually end up with project specific "run" scripts which are just shell scripts so I can do things like `./run shell` to drop into the shell of a container, or `./run rails db:migrate` to run a command in a container.
Here's a few project specific examples. They all have similar run scripts:
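A stripped-down sketch of such a run script (the compose service name "app" is an assumption; adjust per project):

```shell
#!/bin/sh
# Project-local task dispatcher: `./run shell` drops into the container,
# `./run rails db:migrate` runs an arbitrary command in it
run_task() {
    task="$1"
    if [ "$#" -gt 0 ]; then shift; fi
    case "$task" in
        shell|"") docker compose exec app /bin/sh ;;
        *)        docker compose exec app "$task" "$@" ;;
    esac
}
# run_task "$@"   # last line of the real script
```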
Indeed, wrapping basic docker commands in a script feels like wrapping git commands in a script. You’re probably better off learning the commands for this ubiquitous tool.
Or learn the commands and then wrap them: best of both worlds. Developer affordance is a useful characteristic as long as it doesn't reduce understanding.
alias dps='docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}"'
alias dcup='docker compose up -d'
alias dcupf='docker compose up -d && docker compose logs --follow'
alias dcdown='docker compose down'
alias docker-compose='docker compose'
Am I missing anything here? This is just a shell wrapper for running a couple of basic docker commands in sequence. It's not even zsh, so you can't even get some semi-decent argument parsing with zparseopts.
For similar docker automation I just use a Makefile with pattern rule (e.g. `docker-sh-%`). This is much more flexible than shell scripts because recipes can be easily remixed to provide higher-level functionality.
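A minimal sketch of what such a Makefile might look like (the container names "web" and "db" are assumptions):

```make
# Pattern rule: `make docker-sh-web` opens a shell in the "web" container.
# The pattern stem (%) is available as $* inside the recipe.
docker-sh-%:
	docker exec -it $* /bin/sh

docker-restart-%:
	docker restart $*

# Pattern targets remix into higher-level ones
reset: docker-restart-web docker-restart-db
```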
This is just a shell wrapper, yes. Like you, I don't find these shell wrappers appealing, and I just use a Makefile too. But HN is a large place with all kinds of people with their own preferences. It looks like simple shell wrappers do appeal to a large number of them. Not complaining about it. Just observing.
That looks like it's using docker-compose? My problem is that since we switched to Docker Swarm, docker compose commands are not available anymore. You need to use `docker service`, or run docker on the hosts, and again deal with the complexity of some commands.
I saw your comment and am wondering what in particular you are struggling with.
I recently fixed one of my biggest pet peeves with docker swarm - the inability to directly exec into a service without first SSHing to the host the task is running on.
Maybe your issue is in this ballpark? Happy to exchange notes on this. If you are looking for a community of Swarm users, check out https://devops.fan (that's a discord hosted by Bret Fisher)
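One way that fix can be sketched (not necessarily how the commenter did it; assumes you're on a manager node, have key-based SSH to the workers as root, and docker >= 18.09 for the ssh:// transport):

```shell
# Exec into the first running task of a swarm service from a manager,
# without manually SSHing to the worker first
swarm_exec() {
    service="$1"; shift
    if [ "$#" -eq 0 ]; then set -- /bin/sh; fi
    # Resolve service -> task -> node address -> container id
    task=$(docker service ps "$service" \
        --filter desired-state=running --format '{{.ID}}' | head -n 1)
    node=$(docker inspect --format '{{.NodeID}}' "$task")
    addr=$(docker node inspect --format '{{.Status.Addr}}' "$node")
    ctr=$(docker inspect --format '{{.Status.ContainerStatus.ContainerID}}' "$task")
    # Point the local CLI at the remote daemon over SSH
    DOCKER_HOST="ssh://root@$addr" docker exec -it "$ctr" "$@"
}
# e.g.: swarm_exec web /bin/bash
```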
I do this in all projects I work on too; there is always a lot of arbitrary knowledge in long-running projects.
I formalized it into a .notes folder inside every project, and I have a command with useful note-related tasks that works against the current directory's .notes folder.
I have the same for project tasks: common bash tasks go into a script inside .tasks, and I have a command runner that works similarly to npm run.