Hi, thanks for posting.
This can be quite a complex area, depending on how many tests are involved and what you are trying to achieve.
Building and tearing down a database for each individual test works, but it is very expensive and will greatly increase the length of your test run. There are usually better solutions.
A simple option is to create a single shared test database and mark the tests that use it with ExclusivelyUsesAttribute, so they cannot run concurrently.
The best option is usually to have one database per test process. Concurrency then isn't an issue, and a database is only created when a test process is spawned. This takes some infrastructure to implement. It involves the following:
1. Every test interacting with the database checks for the existence of the database before executing any further code. If the database doesn't exist, the test creates it.
2. Database names are derived programmatically from the process ID of the test process. This ties each database to its owning test process and makes it identifiable from the name alone.
3. You need a mechanism to clean up orphaned databases whose test process is no longer running on the machine. You can do this by pulling the list of databases and reconciling it with the list of running processes. I recommend doing this when tests begin executing rather than as a cleanup step, since there is no guarantee a cleanup step will run in edge cases (e.g. stack overflow, out of memory, power outage).
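The three steps above can be sketched as follows. This is a minimal illustration, not a drop-in implementation: SQLite files stand in for whatever database server you actually use, the directory and naming scheme are invented for the example, and the liveness probe assumes a POSIX system.

```python
import os
import re
import sqlite3
import tempfile

# Hypothetical location for the per-process test databases.
DB_DIR = os.path.join(tempfile.gettempdir(), "per_process_test_dbs")

def db_path_for(pid):
    # Step 2: the database name is derived from the test process's PID.
    return os.path.join(DB_DIR, f"testdb_{pid}.sqlite")

def ensure_database():
    # Step 1: each database-touching test calls this first; the database
    # is created only if it doesn't already exist for this process.
    os.makedirs(DB_DIR, exist_ok=True)
    path = db_path_for(os.getpid())
    if not os.path.exists(path):
        conn = sqlite3.connect(path)
        try:
            # Illustrative schema setup for the fresh database.
            conn.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)")
            conn.commit()
        finally:
            conn.close()
    return path

def _is_process_alive(pid):
    # POSIX: signal 0 probes for existence without sending a signal.
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # PID exists but belongs to another user
    return True

def reclaim_orphaned_databases():
    # Step 3: run this when the test run STARTS, not as teardown --
    # teardown may never execute (crash, power loss, etc.).
    if not os.path.isdir(DB_DIR):
        return
    pattern = re.compile(r"testdb_(\d+)\.sqlite")
    for name in os.listdir(DB_DIR):
        match = pattern.fullmatch(name)
        if match and not _is_process_alive(int(match.group(1))):
            os.remove(os.path.join(DB_DIR, name))
```

With a real database server, `reclaim_orphaned_databases` would instead query the server's catalog for databases matching the naming pattern and drop the ones whose PID is dead; the reconciliation logic stays the same.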
The above approach has the nice advantage that the databases are left in place at the end of the test run, allowing you to examine their state when troubleshooting failed tests.