Sometimes, when working on big projects, running all the tests locally takes too much time.

This post is a quick shell script tip I use rather often: testing only the packages that have changes.

Without further ado, here it is:

git status --porcelain |     # list changed files, one per line
  awk '{print $2}' |         # keep only the file path
  while read -r file; do
    echo "./$(dirname "$file")/..."   # turn each file into a package pattern
  done |
  sort |
  uniq |                     # remove duplicate packages
  tr '\n' ' ' |              # join everything in a single line
  xargs go test --failfast   # and hand it all to go test

This gets all the changed file names, converts each one into a ./{package}/... pattern, removes duplicates, joins everything into a single line, and finally passes it all to go test.
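To make that concrete, here's a hypothetical run (the file names below are made up for illustration):

$ git status --porcelain
 M internal/pipe/git/git.go
 M internal/pipe/git/git_test.go
?? pkg/context/context_test.go

With those changes, the pipeline ends up invoking:

$ go test --failfast ./internal/pipe/git/... ./pkg/context/...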

You can also put it somewhere in your $PATH, so you can always just call it (instead of searching through your history). For instance, here’s mine.
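For example, assuming ~/.local/bin is already in your $PATH (the name and location here are just a suggestion), installing it could look like this:

mkdir -p ~/.local/bin
$EDITOR ~/.local/bin/gotestchanged   # paste the script above, with a #!/bin/bash shebang
chmod +x ~/.local/bin/gotestchanged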

Caching

At this point you might feel inclined to ask "doesn't the go test cache handle this?", and the answer is: yes, it does, to some extent.
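For reference, this is what the cache does: a second run of the same tests, with nothing changed, is served from the test cache (the timing here is illustrative):

$ go test ./pkg/context/...
ok      github.com/goreleaser/goreleaser/pkg/context    0.319s
$ go test ./pkg/context/...
ok      github.com/goreleaser/goreleaser/pkg/context    (cached)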

In reality, I use this mostly within GoReleaser. Its test suite takes ~20 minutes to run. I never run all the tests locally: I run the tests of the packages I changed, and let the CI do the rest. Since I never run all tests locally, they are never cached. Also, changing something like internal/artifact or pkg/context would still trigger a re-test of everything, which is the right thing to do more often than not.

TLDR: the shell script way is less correct, but might be faster in some cases. Always run all tests in your CI, though.

Extra: watching files

Another thing I do often is watching files and re-running the tests whenever they change.

For instance, you can use find, pipe its output into entr, and have it run gotestchanged:

find | entr gotestchanged

For the sake of clarity, gotestchanged is the script from earlier. You can see mine here.

The only downside of this is that it won’t watch newly created files, and entr might error if a file is deleted. Works well enough for the 80% scenario though, so I’m keeping it. :)
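If the new-file limitation bothers you, entr's -d flag gives a partial workaround: it makes entr track the watched files' directories and exit whenever a new file shows up, so wrapping the call in a loop picks new files up on the next iteration. A sketch (gotestchanged is still the script from earlier):

while true; do
  find . -name '*.go' | entr -d gotestchanged
done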


Thanks for reading! See you in the next one!