End-to-End Automated Testing in a Microservices Architecture - Emily Bache
When you move from a monolithic architecture towards a distributed microservices setup, it makes some things easier, but other things become harder. Testing has a bit of both. Compared with a monolith, your tests probably have a lot more APIs they can access - each individual microservice can be tested in isolation, with the rest of the system mocked or stubbed. This can be really useful - you can have greater confidence that the parts of your system work by themselves, even quite large parts.

In my experience, you don’t get away from the need for end-to-end tests entirely, though. There can still be integration errors between the services. If you can write tests that exercise your whole system, with none of your own services replaced by a test double, you can find some pretty important issues before you deploy to production. I’ve heard that many teams working with large-scale microservices architectures are using techniques like incremental roll-out and testing in production, and don’t do a lot of testing in a staging or pre-production environment. I don’t think there’s necessarily an either-or decision to make there, and I think it’s worth doing some end-to-end tests before your code reaches the production infrastructure.

In this talk I’ll share my experiences handling end-to-end automated tests in a pre-production environment, and some techniques I’ve found particularly useful.

The first technique is to make the tests talk the same protocols as the microservices. The tests submit requests via REST, then listen to all the relevant traffic being sent between the services while the request is processed. The second technique I use is Approval Testing, which allows me to verify that the correct messages are being passed, without test maintenance costs getting out of control. There are sketches of both of these techniques below. The third technique is not so much about the actual tests as about how you configure your deployment pipeline so that the tests give the most useful feedback to development teams. This is all about resolving the conflict that arises when you want to keep services and teams working independently, while having tests that check they work together correctly.
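To make the first technique a little more concrete, here is a minimal sketch in Python. It assumes the services publish their inter-service traffic to a Kafka topic and that one of them exposes a REST endpoint; the endpoint URL, topic name, broker address, payload and event names (ORDER_SERVICE_URL, INTER_SERVICE_TOPIC, "order-accepted" and so on) are all illustrative assumptions, not details from the talk.

```python
# Sketch: drive the system over REST and listen to the messages the services
# exchange while the request is processed. All names here are hypothetical.
import json
import uuid

import requests
from kafka import KafkaConsumer

ORDER_SERVICE_URL = "http://staging.example.com/orders"   # hypothetical endpoint
INTER_SERVICE_TOPIC = "order-events"                       # hypothetical topic


def test_order_request_produces_expected_traffic():
    # Subscribe before sending the request so the inter-service traffic is not missed.
    consumer = KafkaConsumer(
        INTER_SERVICE_TOPIC,
        bootstrap_servers="staging-broker:9092",           # hypothetical broker
        auto_offset_reset="latest",
        consumer_timeout_ms=5000,                          # stop iterating after 5s of silence
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    consumer.poll(timeout_ms=0)  # force partition assignment before the request goes out

    # Talk the same protocol as a real client: submit the request via REST.
    order_id = str(uuid.uuid4())
    response = requests.post(ORDER_SERVICE_URL, json={"order_id": order_id, "item": "book"})
    assert response.status_code == 201

    # Collect the traffic sent between the services for this particular order.
    messages = [record.value for record in consumer]
    ours = [m for m in messages if m.get("order_id") == order_id]
    assert any(m.get("event") == "order-accepted" for m in ours)
```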
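And here is a sketch of how Approval Testing can keep the verification of those messages maintainable, using the Python approvaltests package. The scrub() helper, the field names and the example messages are my own illustration; verify() is a real approvaltests call that compares the output against an approved file kept under version control.

```python
# Sketch: Approval Testing applied to captured inter-service messages.
# The messages and the scrubbed fields are illustrative assumptions.
import json

from approvaltests import verify


def scrub(message):
    # Blank out fields that legitimately differ between runs, so the
    # approved file stays stable and maintenance stays cheap.
    stable = dict(message)
    for field in ("order_id", "timestamp"):
        if field in stable:
            stable[field] = "<scrubbed>"
    return stable


def test_order_traffic_is_approved():
    # In the real test these would be the messages collected by the listener in
    # the previous sketch; shown inline here to keep the example self-contained.
    captured = [
        {"event": "order-accepted", "order_id": "abc-123", "timestamp": "2024-05-01T10:00:00Z"},
        {"event": "payment-requested", "order_id": "abc-123", "timestamp": "2024-05-01T10:00:01Z"},
    ]
    printable = json.dumps([scrub(m) for m in captured], indent=2, sort_keys=True)
    # verify() diffs against the approved file; a mismatch fails the test and
    # produces a "received" file to review and, if correct, approve.
    verify(printable)
```

The point of this style is that the expected output lives in the approved file rather than in a long list of hand-written assertions, so when the message format changes intentionally you review and re-approve one file instead of editing many tests.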