As a data warehouse developer, I sometimes find myself tasked with making small-scale plugs to our data for various scenarios. Most of these are due to source-side plugs that, for whatever reason, we didn't handle correctly, but not always.
Cause aside, one of the requirements I have to deal with when changing our data is Sarbanes-Oxley (SOX) compliance. We don't technically fall under SOX ourselves, but the company has a policy that we maintain a similar change control process and audit trail. This means testing and documenting my changes before implementing them, which is really good policy anyway.
Now, we maintain a Test environment that is sometimes an appropriate venue for this kind of limited testing. After all, the data in Test is supposed to fairly closely match what Production would have looked like, and it should also fairly closely match the data we expect to see in Production when we implement whatever we tested against it, so it's usually appropriate to plug Test on the way to plugging Prod. Occasionally, though, it can be beneficial to test against only the data that will be affected, for ease of before/after comparison. It's in those situations that we turn to our Development instances of SQL Server, which in our case are installed locally on our development workstations. These are not dedicated servers or well-provisioned virtual machines, so they can't handle a lot of data, but they can be very useful for very small workloads.
The thing about a local install of SQL Server is that it's still a full install of SQL Server, and so it acts like a full install of SQL Server. What this means is that it pulls data pages into memory as they're accessed and holds onto them. If you're sharing that SQL Server install with Visual Studio and a dozen-odd open folders, text editors, browser windows and spreadsheets, this can lead to degraded performance, especially when SQL Server is still hanging onto memory for data you're done with and don't need anymore.
So, how do we make full use of a local development install without hurting ourselves? Here are some techniques I use, as well as some ideas I likely need to explore further:
- Truncate your tables when you're done using them. If you're working with one table, it's easy enough to write a single truncate statement and move on. If you've got more tables than that to clear out, a little dynamic SQL and the system tables can help you out:
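Something like the following does the trick. It's a sketch only: it assumes every user table in the current database is fair game, and remember that TRUNCATE TABLE will fail against tables referenced by foreign keys, so review the generated batch before running it.

```sql
-- Build one TRUNCATE statement per user table in the current database,
-- then inspect and execute the batch. Development databases only!
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'TRUNCATE TABLE '
             + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
             + N';' + CHAR(13) + CHAR(10)
FROM sys.tables AS t
WHERE t.is_ms_shipped = 0;   -- skip anything Microsoft shipped

PRINT @sql;                  -- review what's about to run
EXEC sys.sp_executesql @sql;
```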
- Restart SQL Server after you’ve cleaned up after yourself. This is a quick and dirty way of reducing how much memory you’re using.
- Limit the amount of memory SQL Server is allowed to use (a sketch follows this list).
- Write and maintain a script to drop and then create an empty but otherwise complete copy of your database, and run it after every testing session (also sketched below).
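For the memory cap, sp_configure can set 'max server memory'. The 2048 MB figure here is just an example value; pick whatever leaves your workstation comfortable.

```sql
-- Cap how much memory this instance will take for its buffer pool.
-- 2048 MB is an arbitrary example; size it for your own workstation.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;
```

And for the drop-and-recreate script, the skeleton might look something like this. The database name DevScratch and the schema script path are hypothetical placeholders; the real work is in keeping the schema script current.

```sql
-- Reset the scratch database to an empty copy of its schema.
IF DB_ID(N'DevScratch') IS NOT NULL
BEGIN
    ALTER DATABASE DevScratch SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE DevScratch;
END;
CREATE DATABASE DevScratch;
GO
-- Re-create tables, views, etc. from your checked-in DDL, e.g. in SQLCMD mode:
-- :r C:\scripts\DevScratch_schema.sql
```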
Now, please note that I don’t advocate trying to work with millions of rows of data on your desktop with 8 gigs of RAM. For one, your network people should probably yell at you for transferring so much over the network outside the datacenter. But if you need to plug six rows out of fifteen in a set and you need to easily compare the results before and after the update, a local development server can be exactly what the doctor ordered. Just make sure you clean up after yourself, or you’ll find yourself wondering why you can’t open Excel without the machine freezing up.