What's more efficient - storing logs in an SQL database or in files?
Logging to files is more efficient; however, logs stored in a database are easier to read, even remotely (you can write a web frontend if required, for example).
Note, however, that connecting to and inserting rows into the database is error prone (database server down, wrong password, out of resources), so where would you log those errors if you decided to use the database?
You can use a component such as Zend_Log, which natively supports the concept of multiple writers attached to the same log instance. That way you can log the same message to one or more different places without changing your logging code, and you can always replace the logging backend, or add a new one, in a simple way.
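For example, a minimal Zend_Log (Zend Framework 1) sketch with a file writer and a database writer attached to the same instance might look roughly like this; the file path, connection details and the `log` table's columns are assumptions for illustration, not something taken from your setup:

```php
<?php
// Minimal sketch: one Zend_Log instance, two writers.
// Assumes the ZF1 autoloader is registered; paths, credentials and
// the 'log' table schema below are invented for this example.
$logger = new Zend_Log();

// Writer 1: append to a plain text log file.
$logger->addWriter(new Zend_Log_Writer_Stream('/var/log/myapp.log'));

// Writer 2: insert the same events into a database table.
$db = Zend_Db::factory('Pdo_Mysql', array(
    'host'     => 'localhost',
    'username' => 'user',
    'password' => 'secret',
    'dbname'   => 'app',
));
$logger->addWriter(new Zend_Log_Writer_Db($db, 'log', array(
    'message'  => 'message',   // DB column => log event field
    'priority' => 'priority',
)));

// One call goes to both destinations.
$logger->info('User logged in');
```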
For your question, I think logging to files is simpler and more appropriate if you (the developer) are the only one who needs to read the log messages.
Log to the database instead if other people need to read the logs through a web interface, or if you need the ability to search through them. As someone else has pointed out, concurrency also matters: if you have a lot of users, logging to the database could scale better.
Finally, a logging frequency of 5 messages per minute requires almost no CPU for your application, so you don't need to worry about performance. In your case I'd start with log files and then switch (or add more writers) if your requirements change.
Commenting on your findings.
Regarding writing to a file, you are probably right.
Regarding reading, you are dead wrong.
Writing to a database:
- MyISAM locks the whole table on inserts, causing lock contention. Use InnoDB, which has row-level locking.
- Contrary to the previous point: if you want to do full-text searches on the log, use MyISAM, which supports FULLTEXT indexes.
- If you want to be really fast, you can use the MEMORY engine, which keeps the table in RAM. Transfer the data to a disk-based table when CPU load is low. (A sketch of all three setups follows this list.)
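For concreteness, here is a minimal PDO sketch of the three setups, assuming a MySQL server is reachable; the connection details, table and column names are invented for illustration and will differ in your application:

```php
<?php
// Hypothetical log tables illustrating the engine trade-offs above.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');

// 1. InnoDB: row-level locking, good for concurrent inserts.
$pdo->exec("CREATE TABLE log (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    logdate  DATETIME NOT NULL,
    user_id  INT NOT NULL,
    username VARCHAR(64) NOT NULL,
    action   VARCHAR(64) NOT NULL,
    error    INT NOT NULL DEFAULT 0,
    message  VARCHAR(255) NOT NULL
) ENGINE=InnoDB");

// 2. MyISAM: table-level locking, but supports FULLTEXT indexes for searching messages.
$pdo->exec("CREATE TABLE log_search LIKE log");
$pdo->exec("ALTER TABLE log_search ENGINE=MyISAM, ADD FULLTEXT (message)");

// 3. MEMORY: fastest writes, table lives in RAM only (no TEXT/BLOB columns allowed).
$pdo->exec("CREATE TABLE log_buffer LIKE log");
$pdo->exec("ALTER TABLE log_buffer ENGINE=MEMORY");

// Periodically flush the RAM buffer to the disk-based table when load is low,
// letting the disk table assign its own ids.
$pdo->exec("INSERT INTO log (logdate, user_id, username, action, error, message)
            SELECT logdate, user_id, username, action, error, message FROM log_buffer");
$pdo->exec("TRUNCATE TABLE log_buffer");
```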
Reading from the database
This is where the database truly shines.
You can combine all sorts of information from different entries, much, much faster and more easily than you could ever do with a flat file.
SELECT logdate, username, action FROM log WHERE user_id = 1 /* root */ AND error = 10;
If you have indexes on the fields used in the WHERE clause, the result will return almost instantly; try doing that on a flat file.
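If the table does not already have such an index, one way to add it (column names follow the example query above; adjust them to your schema) is:

```php
<?php
// Hypothetical composite index covering the WHERE clause of the query above.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
$pdo->exec('CREATE INDEX idx_log_user_error ON log (user_id, error)');
```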
SELECT username, COUNT(*) AS error_count
FROM log
WHERE error <> 0
GROUP BY username WITH ROLLUP;
Never mind the fact that the table is not normalized; this will be much, much slower and harder to do with a flat file.
It's a no-brainer, really.