Is it better to log to a file or to a database?

Looking at the responses, I think the answer may actually be both.

If it's a user error that's likely to happen during expected usage (e.g. the user enters an invalid email address), it should go into a database to take advantage of easy querying.

If it's a code error that shouldn't happen (e.g. the username of a logged-in user can't be retrieved), it should be reserved for a text file.

This also nicely splits the errors into non-critical and critical. Hopefully the critical error list stays small!
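
A minimal sketch of that split, assuming Python's standard logging module and a hypothetical SQLite table named user_errors (all names here are illustrative, not prescriptive):

```python
import logging
import sqlite3

# Unexpected, critical code errors go to a plain text file.
logging.basicConfig(filename="critical_errors.txt",
                    level=logging.ERROR,
                    format="%(asctime)s %(levelname)s %(message)s")

# Expected, non-critical user errors go to a database table.
conn = sqlite3.connect("app.db")
conn.execute("""CREATE TABLE IF NOT EXISTS user_errors (
                    occurred_at TEXT DEFAULT CURRENT_TIMESTAMP,
                    message     TEXT)""")

def log_user_error(message):
    """Expected errors: stored in the database for easy querying."""
    conn.execute("INSERT INTO user_errors (message) VALUES (?)", (message,))
    conn.commit()

def log_code_error(message):
    """Unexpected errors: reserved for the text file."""
    logging.error(message)

log_user_error("User entered an invalid email address")
log_code_error("Could not resolve username for a logged-in session")
```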

I'm creating a quick prototype of a new project right now, so I'll stick with a text file for now.

On another note, email is great for this. Arguably you could just email errors to a "bug" account and not store them locally. However, this shares the database approach's risk of bad connections.
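
If you go the email route, Python's standard library already ships a handler for it. A sketch using logging.handlers.SMTPHandler, with a hypothetical mail host and addresses:

```python
import logging
import logging.handlers

# Hypothetical SMTP host and addresses; substitute your own.
smtp_handler = logging.handlers.SMTPHandler(
    mailhost=("smtp.example.com", 25),
    fromaddr="app@example.com",
    toaddrs=["bugs@example.com"],
    subject="Application error",
)
smtp_handler.setLevel(logging.ERROR)

logger = logging.getLogger("emailed_errors")
logger.addHandler(smtp_handler)

# Each ERROR-level record is emailed to the "bug" account. Note that
# this blocks on the SMTP connection, so it shares the bad-connection
# risk mentioned above.
logger.error("Something went wrong")
```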


Either works. It's up to your preference.

We have one central database where ALL of our apps log their error messages. Every app we write is set up in a table with a unique ID, and the error log table contains a foreign key reference to the AppId.

This has been a HUGE bonus for us in giving us one place to monitor errors. We had previously logged to the file system or sent emails to a monitored inbox, but we were able to create a fairly nice web app for interacting with the error logs. We have different error levels, and we have an "acknowledged" flag field, so we have a page where we can view unacknowledged events by severity, etc.
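
A sketch of that layout, using SQLite for brevity; the table and column names are illustrative guesses, not the exact schema we use:

```python
import sqlite3

conn = sqlite3.connect("central_logs.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS Apps (
    AppId INTEGER PRIMARY KEY,
    Name  TEXT NOT NULL UNIQUE
);
CREATE TABLE IF NOT EXISTS ErrorLog (
    ErrorId      INTEGER PRIMARY KEY,
    AppId        INTEGER NOT NULL REFERENCES Apps(AppId),
    Severity     INTEGER NOT NULL,            -- e.g. 1 = info .. 5 = fatal
    Message      TEXT NOT NULL,
    Acknowledged INTEGER NOT NULL DEFAULT 0,  -- the "acknowledged" flag
    LoggedAt     TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

# The monitoring page: unacknowledged events, most severe first.
rows = conn.execute("""
    SELECT a.Name, e.Severity, e.Message, e.LoggedAt
    FROM ErrorLog e JOIN Apps a ON a.AppId = e.AppId
    WHERE e.Acknowledged = 0
    ORDER BY e.Severity DESC, e.LoggedAt
""").fetchall()
```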


Edit

In hindsight, a better answer is to log to BOTH the file system (first, immediately) and then to a centralized database (even if delayed). Most modern logging frameworks follow a publish-subscribe model (often described in terms of logging sources and sinks), which allows multiple logging sinks (targets) to be defined.
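
Python's standard logging module is one example: each handler is an independent sink, and attaching two handlers to one logger delivers every record to both targets. A minimal sketch, with a hypothetical SQLiteHandler standing in for the database sink:

```python
import logging
import sqlite3

class SQLiteHandler(logging.Handler):
    """Hypothetical database sink: one row per log record."""
    def __init__(self, path):
        super().__init__()
        self.conn = sqlite3.connect(path)
        self.conn.execute("""CREATE TABLE IF NOT EXISTS Log
                             (LoggedAt TEXT DEFAULT CURRENT_TIMESTAMP,
                              Level    TEXT,
                              Message  TEXT)""")

    def emit(self, record):
        self.conn.execute("INSERT INTO Log (Level, Message) VALUES (?, ?)",
                          (record.levelname, self.format(record)))
        self.conn.commit()

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Sink 1: the local file, written first.
logger.addHandler(logging.FileHandler("app.log"))
# Sink 2: the centralized database (SQLite here only for brevity).
logger.addHandler(SQLiteHandler("central_logs.db"))

logger.error("One record, delivered to both sinks")
```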

The rationale behind writing to the file system is that if an external infrastructure dependency such as a network, database, or security issue prevents you from writing remotely, you at least have a fallback and can recover the data from the server's hard disk (something akin to a black box in the airline industry). Log data written to the file system can be deleted as soon as the central database is confirmed to have recorded it, so file system retention sizes or rotation times generally need not be large.
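
Since the local file is only a fallback until the central database confirms receipt, a small rotating buffer is usually enough. A sketch with Python's RotatingFileHandler (the sizes are arbitrary):

```python
import logging
from logging.handlers import RotatingFileHandler

# Keep at most ~5 MB of local fallback data: five 1 MB files,
# with the oldest deleted automatically as the limit is reached.
handler = RotatingFileHandler("fallback.log",
                              maxBytes=1_000_000,
                              backupCount=5)
logger = logging.getLogger("blackbox")
logger.addHandler(handler)

logger.error("Recorded locally even if the central database is down")
```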

Enterprise log managers like Splunk can be configured to scrape your local server log files (e.g. as written by log4net, the EntLib Logging Application Block, et al.) and centralize them in a searchable database, where the logged data can be mined, graphed, shown on dashboards, and so on.

But from an operational perspective, where you will likely have a farm or cluster of servers, and assuming that both the local file system and remote database logging mechanisms are working, the 99% use case for actually finding anything in a log file will still be the central database (ideally with a decent front end that lets you query, aggregate, graph, and build triggers or notifications from log data).
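
As an illustration, a front end over the central database can answer farm-wide questions with one aggregate query, something grep across dozens of per-server files cannot do easily (assuming a hypothetical ErrorLog/Apps schema like the one sketched earlier in this thread):

```python
import sqlite3

conn = sqlite3.connect("central_logs.db")

# Error counts per application and severity across the whole farm.
for name, severity, count in conn.execute("""
        SELECT a.Name, e.Severity, COUNT(*)
        FROM ErrorLog e JOIN Apps a ON a.AppId = e.AppId
        GROUP BY a.Name, e.Severity
        ORDER BY COUNT(*) DESC"""):
    print(name, severity, count)
```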

Original Answer

If you have the database in place, I would recommend using it for audit records instead of the file system.

Rationale:

  • typed and normalized classification of data (severity, action type, user, date, ...)
  • it is easier to find audit data (SELECT ... FROM Audits WHERE ...) than to grep; see the sketch after this list
  • it is easier to clean up (e.g. DELETE FROM Audits WHERE Date < ...)
  • it is easier to back up
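
A sketch of those points against a hypothetical Audits table, using SQLite for brevity:

```python
import sqlite3

conn = sqlite3.connect("app.db")
conn.execute("""CREATE TABLE IF NOT EXISTS Audits (
                    Severity   INTEGER,
                    ActionType TEXT,
                    UserId     INTEGER,
                    Date       TEXT DEFAULT CURRENT_TIMESTAMP,
                    Detail     TEXT)""")

# Finding audit data: a typed query instead of grep.
recent_failures = conn.execute(
    "SELECT * FROM Audits WHERE Severity >= ? AND ActionType = ?",
    (3, "login")).fetchall()

# Cleaning up: delete everything older than a cutoff date.
conn.execute("DELETE FROM Audits WHERE Date < ?", ("2024-01-01",))
conn.commit()
```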

The decision to use an existing database or a new one depends on your situation: if you have multiple applications (each with its own database) and want to log/audit all actions across all apps centrally, then a centralized database might make sense.

Since you say you want to audit user activity, it may make sense to audit in the same database as your users table/definition (if applicable).


I agree with the above, with the perhaps obvious exception of database failures, which would make logging to the database problematic. This has come up for me in the past when dealing with infrequent but regular network failovers.