Should e2e tests persist data in real databases?
I'm currently working at a large well-known company on our test tools and frameworks team. So while I'm no expert, it is something that's part of my job. I'm going to be talking specifically about web testing. Testing is somewhat different for native apps like iOS and Android and I'm not super familiar with those aspects.
The terminology between e2e (end to end) and integration tests is somewhat interchangeable, while unit tests have a more specific definition.
Generally, e2e/integration tests should be runnable in dev and production environments. Depending on your setup, your dev environment is probably using a semi-frequently updated snapshot of your production db. In other cases, your local environment may be hitting the actual production db. There are pros and cons to both approaches, but the right choice largely depends on the scale of your company or project. For example, at a large company with dedicated teams you can see many changes a day hitting production databases, whereas for a small team a weekly snapshot of the prod db is probably good enough for testing locally.

At the base level, all integration tests should be treated as real. When dealing with web apps there are lots of other factors to take into account, like different web browsers, network activity/availability, etc. Mocking out data for API calls would allow for super fast testing, but it adds another level of complexity: making sure the mocks stay up to date with the real-world database.
Running integration tests locally should be doing more or less the same thing against your dev server that they will be doing against staging and production. With the exception of the app detecting whether it's running in a dev, staging, or production environment in order to switch out URLs and various credentials, the app should be expected to behave exactly the same way.
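To make that concrete, here's a minimal sketch of environment-based switching. Everything here is an assumption for illustration: the `APP_ENV` variable name, the URLs, and the test user names are made up, but the pattern (same test code, config selected by environment) is the point.

```python
# Hypothetical sketch: pick the base URL and test credentials by environment
# so the same e2e test code runs unchanged against dev, staging, or production.
# The env var name, URLs, and user names below are illustrative assumptions.
import os

ENVIRONMENTS = {
    "dev":        {"base_url": "http://localhost:3000",       "test_user": "dev-tester"},
    "staging":    {"base_url": "https://staging.example.com", "test_user": "staging-tester"},
    "production": {"base_url": "https://www.example.com",     "test_user": "prod-tester"},
}

def get_config(env=None):
    """Return the config for the requested environment (default: APP_ENV, else dev)."""
    name = env or os.environ.get("APP_ENV", "dev")
    if name not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {name}")
    return ENVIRONMENTS[name]
```

The test suite then only ever reads `get_config()`; nothing else in the tests knows which environment it is pointed at.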
Regarding your question about authentication, the answer is yes. Let's look at two examples that show different considerations.
Suppose your project is very small. You create some real accounts on production and your db gets snapshotted weekly for use in your local dev environment. You just run your integration tests with one or more of those users as needed. Since the local tests are only hitting your local db, you don't need to worry about the data being generated since it won't affect production. Other engineers on your team can use the same user(s) and not worry about it. If one engineer makes some changes to the db schema, ORM, etc then everyone just gets a new copy of the db snapshot and continues working.
Now for the other extreme. Suppose your project is very big: millions of users and hundreds of employees all collectively making changes to the codebase and db every day. There are all kinds of ways infrastructures are set up to handle various engineering tasks. There's too much data, and the db changes too often, to make using local snapshots feasible. At this scale, you're probably doing continuous integration and running your tests on every commit, so that incoming changes don't make it to production and cause major problems. You're probably running your local dev environments against either a constantly updated staging database, or perhaps even against your production db itself. (Try planning for the staging db, as it avoids a lot of other problems.)
Now, having just a small set of dedicated test users starts to be a problem. Tests are running all the time, both automated and kicked off by dozens of engineers all working on their own bits of work. Since the staging db is probably shared, you easily start getting weird conflicts as the same test user is doing all kinds of things at once and starts causing tests to fail. A good solution I've seen for this is a kind of test account checkout server. You create, say, 100 or 1,000 (or more) test user accounts. When your integration tests run, they literally check out a test user account from the server. When the tests are done, they clean up whatever changes they made on that user and tell the checkout server that the user is free again. Then it gets checked out by someone/something else and the cycle continues.
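The checkout idea is just a reservation protocol. Here's a toy in-memory sketch of it; a real checkout server would be a shared network service with timeouts and leases, so the class name and methods below are assumptions, not a real tool.

```python
# Toy sketch of a test-account "checkout server" (assumption: in production
# this would be a shared service; this in-memory pool just shows the protocol).
import random

class TestAccountPool:
    def __init__(self, usernames):
        self._free = set(usernames)
        self._in_use = set()

    def available(self):
        """Number of accounts currently free to be checked out."""
        return len(self._free)

    def checkout(self):
        """Reserve a free test account; raises if the pool is exhausted."""
        if not self._free:
            raise RuntimeError("no free test accounts")
        user = random.choice(sorted(self._free))
        self._free.remove(user)
        self._in_use.add(user)
        return user

    def release(self, user):
        """Return an account to the pool after the test cleaned up its data."""
        self._in_use.remove(user)
        self._free.add(user)

pool = TestAccountPool([f"test-user-{i}" for i in range(100)])
user = pool.checkout()   # run the integration tests as this user...
pool.release(user)       # ...clean up, then hand the account back
```

Because each test run holds a different account, parallel runs stop stepping on each other's data.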
So the takeaways that relate directly to your question:
- You should always have dedicated test user accounts that are exactly the same as regular user accounts, just dedicated to testing.
- Depending on the scale of the team and project: if small, a few dedicated accounts are fine. If working at a much larger scale, you need many more dedicated test accounts and probably want an automated service that allows individual test runs to check out users as needed.
- Tests should always clean up after themselves. If a test creates a TODO that gets stored in the db, then when the test is done running, that TODO should be deleted from the db. If you aren't consistent about this, you'll eventually run into bugs and issues where data is inconsistent. God forbid this happens in production.
- Only worry about mocking data for unit tests, unless you're working in a very good and dedicated engineering environment where you have people dedicated to keeping the db mocks up to date all the time. If you can do that, your integration tests will be very fast and you don't really have to worry about the db stuff as much. But it's hard to maintain this over time without dedicated support.
I've been reading a lot about e2e testing and one thing I cannot understand is how "real" should e2e tests be.
E2e tests should mimic the production system as closely as possible. What's more, you can use e2e automation to reproduce any production issue with production-like data.
Regardless of the tools I use for the e2e tests, I've seen that most of the time they hit either local, development or alpha environments.
The e2e automation has to work with any resources (database, datastore, message bus, etc.) and in any environment, including local, remote, or cloud platforms.
If my application has authentication, should I create a "test" user with valid credentials in the database? Should I do that for Alpha or even Production environments? How else would this test user login into my application?
As long as the app credentials are part of the app configuration, you have the flexibility to control credentials dedicated to testing. I would strongly recommend running a parallel, fully automated, dedicated e2e infrastructure that does not compromise or share production secrets.
Say I have the infamous TODO app. I have a test that logs the user in. After logging in, I want to test that the user is able to create a TODO. This TODO is saved in a Database.
With e2e testing you are interested in identifying all application inputs (like UI interactions or REST/HTTP requests), configuration files, and outputs with verification rules. That includes UI changes, logs/messages produced, and datastore/database changes.
After running the tests, should I run something to remove the data created during e2e tests? Or should I intercept the request just before saving it and mock the response (would this be an antipattern for e2e testing)?
As part of e2e testing you need to take care of setting the initial application state, as well as per-use-case state where applicable. With e2e testing you want to test all of your application's behaviors, so there is not much room for mocking here. After the tests run you can destroy all app resources and services and clear the database, but I believe this step is optional, since setting the application or use-case state already addresses resource/database preparation.
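The set-up/exercise/verify/restore lifecycle described above can be sketched with a context manager so the initial state always comes back, whatever the test does. This is a minimal illustration under assumptions: the dict stands in for a real datastore, and the seeding keys are invented.

```python
# Sketch of per-use-case state handling: seed the state for one use case,
# run and verify it, then restore the initial application state.
# The in-memory dict "database" and its keys are illustrative assumptions.
from contextlib import contextmanager

@contextmanager
def use_case_state(db, seed_rows):
    """Seed the db for one use case and restore the prior state afterwards."""
    snapshot = dict(db)
    db.update(seed_rows)
    try:
        yield db
    finally:
        db.clear()
        db.update(snapshot)

db = {"user:1": "alice"}  # initial application state
with use_case_state(db, {"todo:1": "write report"}) as state:
    # exercise the use case and verify outputs against the seeded state
    assert state["todo:1"] == "write report"
assert db == {"user:1": "alice"}  # initial state restored
```

With this shape, the optional "destroy everything afterwards" step becomes a no-op, because each use case already leaves the state the way it found it.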
Finally, e2e testing can be challenging if you do not have the right toolset and a good data organization strategy, especially since over time you will end up with hundreds of use-case tests even for a small-to-medium application. Besides that, you want an e2e testing tool that works with multi-stack applications written in any language (Java, JavaScript, Go, you name it) and supports automation on any platform, including a local box, Docker, Kubernetes, or serverless cloud.
Here are some interesting readings:
- Data testing strategy
- ETL practical e2e testing
- Practical e2e testing examples
- Serverless e2e practical examples