Should you enforce constraints at the database level as well as the application level?
In short: yes, the database should enforce constraints.
Why:
- Simplicity. For example, to put a constraint on a particular data column there is only one place to set it: the column itself (see the sketch after this list). The data might come from various sources, but the check sits where the data finally comes to rest.
- Integrity. The database should be responsible for the data it hosts. An inconsistent database is as good as no database.
- Flexibility. New UI development environments come along all too frequently. If the database puts its hand up and takes care of the constraints, front-end development and functional testing become easier.
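A minimal sketch of that first point, with made-up table and column names and roughly ANSI syntax (details vary by engine): the rule is declared once, on the column, and applies to every writer.

```sql
-- Hypothetical table: the rules live on the columns, so they apply
-- no matter which application or batch job writes the row.
CREATE TABLE employee (
    employee_id INTEGER       PRIMARY KEY,
    email       VARCHAR(255)  NOT NULL UNIQUE,
    salary      NUMERIC(10,2) NOT NULL
        CONSTRAINT chk_salary_positive CHECK (salary > 0),
    hire_date   DATE          NOT NULL
        CONSTRAINT chk_hire_date CHECK (hire_date >= DATE '2000-01-01')
);
```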
Yes, if you want to restrict what goes into the database. The layers should be kept as distinct from one another as possible, and your database shouldn't rely on another layer to ensure that it follows the rules.
There's no guarantee that a buggy (or malicious) "business logic" layer will not insert toxic data into your tables. Of course, if you can trust the other layers, you probably won't need it. But I work in a mainframe shop where the DBAs are always having to fix problems caused by the young Java whippersnappers rolling out their buggy code to production without adequate (any?) testing :-).
Database tables that are shared between different development areas (and that's all of them for us) should always protect themselves from errant data. When App A puts dodgy data into the table used by App B, it's not the App A developers that take the heat, it's the DBAs.
If you follow the Jeff Atwood school of thought, in which the database is just a dumb data storage and retrieval system, then you would put all the validation in the application layer.
However, I find that applications are like small children. Unchecked, they will throw everything around the room, and it will be up to the parents to clean up the mess. In this case it will be the DBAs doing the cleaning.
However, I think you need to be careful about using every data-integrity feature the database offers just because it is there. Overloading your database with foreign key constraints and triggers might create more problems than you expect. I tend to use foreign keys only on tables that are very closely related, such as a header/detail pair. If you start adding foreign keys everywhere, you can end up with an unmanageable database.
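Here is a rough sketch of the header/detail case I mean (the order tables are invented for illustration): one foreign key tying the detail rows to their header, and nothing more.

```sql
CREATE TABLE sales_order (
    order_id   INTEGER      PRIMARY KEY,
    customer   VARCHAR(100) NOT NULL,
    order_date DATE         NOT NULL
);

CREATE TABLE sales_order_line (
    order_id   INTEGER      NOT NULL,
    line_no    INTEGER      NOT NULL,
    product    VARCHAR(100) NOT NULL,
    quantity   INTEGER      NOT NULL CHECK (quantity > 0),
    PRIMARY KEY (order_id, line_no),
    -- Detail rows cannot exist without (or outlive) their header.
    CONSTRAINT fk_line_order FOREIGN KEY (order_id)
        REFERENCES sales_order (order_id) ON DELETE CASCADE
);
```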
I rarely use triggers. I think they make a database very opaque. You issue a simple update/insert/delete command and strange things might happen. I guess there are two places where triggers are unavoidable:
- When you don't have the source code of the application writing to the database and you need to modify its behaviour. Triggers are your only option.
- When you are performing insert/update/delete operations through a view that isn't directly updatable. INSTEAD OF triggers are the only way to make those operations work (a rough sketch follows).
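A rough, Oracle-flavoured sketch of the view case (the view, base tables and trigger name are all hypothetical, and other engines use different syntax): the view joins two tables, so a plain UPDATE against it is ambiguous, and the trigger routes the change to the right base tables.

```sql
CREATE VIEW customer_summary AS
    SELECT c.customer_id, c.name, a.city
    FROM customer c
    JOIN address  a ON a.customer_id = c.customer_id;

-- Assumes customer and address already exist as base tables.
CREATE OR REPLACE TRIGGER trg_customer_summary_upd
    INSTEAD OF UPDATE ON customer_summary
    FOR EACH ROW
BEGIN
    UPDATE customer SET name = :NEW.name WHERE customer_id = :OLD.customer_id;
    UPDATE address  SET city = :NEW.city WHERE customer_id = :OLD.customer_id;
END;
```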
I tend to perform basic validation in the app. This way the user is given immediate feedback that something is wrong. Complex validation that requires looking up related tables is probably best done in the database (as well as the simple validation that the app does). I would argue that some forms of validation are almost impossible to guarantee at the application level, without using complicated locking strategies.
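To illustrate that last point with a hypothetical example: checking "is this room free?" in the application and then inserting leaves a window in which another session can book the same room for the same day; a unique constraint closes that window without any application-side locking.

```sql
CREATE TABLE room_booking (
    booking_id INTEGER      PRIMARY KEY,
    room_id    INTEGER      NOT NULL,
    booked_on  DATE         NOT NULL,
    guest      VARCHAR(100) NOT NULL,
    -- Two sessions can both pass an application-level "is it free?" check;
    -- the database rejects the second insert regardless of timing.
    CONSTRAINT uq_room_per_day UNIQUE (room_id, booked_on)
);
```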
If you have multiple applications, possibly written in different languages on different platforms, then the case for putting more of the validation into the database layer is strengthened. The likelihood of two or more applications, written by different programmers, performing identical validation is fairly remote. Best to do it in one place.
The Jeff Atwoods of this world would suggest that you write a web service that all the apps use to talk to the database. The web service performs the data validation. Doing this allows the database to remain a dumb storage container, which in turn lets you switch database engines. In reality you rarely change database engines (unless you started out with Microsoft Access!). If you are writing web services purely to centralise your data validation, then I think you are going overboard.