DB with best inserts/sec performance?

If you don't need to do queries, then a database is not what you need. Use a log file.


You say it's only stored for legal reasons.

And what about the detailed requirements? You mention the NoSQL solutions, but these can't promise the data is really stored on disk. In PostgreSQL everything is transaction-safe, so you're 100% sure the data is on disk and available (just don't turn off fsync).

Speed has a lot to do with your hardware, your configuration and your application. PostgreSQL can insert thousands of records per second on good hardware with a correct configuration, and it can be painfully slow on the same hardware with a plain stupid configuration and/or the wrong approach in your application. A single INSERT per transaction is slow, many INSERTs in a single transaction are much faster, prepared statements are faster still, and COPY does magic when you need speed. It's up to you.
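For illustration, here is a rough Python/psycopg2 sketch of those approaches; the table, columns and connection string are invented, and the absolute numbers will depend entirely on your hardware and configuration:

```python
# Sketch: comparing insert strategies in PostgreSQL with psycopg2.
# Table name, columns and connection string are hypothetical.
import io
import psycopg2

conn = psycopg2.connect("dbname=logs user=postgres")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS messages (id int, status int, message text)")
conn.commit()

rows = [(i, 1, "some 140-char payload") for i in range(100_000)]

# 1) One INSERT per transaction (slowest: one flush to disk per row).
for r in rows[:1000]:
    cur.execute("INSERT INTO messages VALUES (%s, %s, %s)", r)
    conn.commit()

# 2) Many INSERTs inside a single transaction (much faster).
#    fsync stays on; grouping rows per transaction just amortises the flush.
for r in rows:
    cur.execute("INSERT INTO messages VALUES (%s, %s, %s)", r)
conn.commit()

# 3) COPY from an in-memory buffer (the fastest bulk path).
buf = io.StringIO("".join(f"{i}\t1\tsome 140-char payload\n" for i in range(100_000)))
cur.copy_from(buf, "messages", columns=("id", "status", "message"))
conn.commit()
```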


If you are never going to query the data, then I wouldn't store it in a database at all; you will never beat the performance of simply writing it to a flat file.
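As a minimal sketch (file path and record format are invented for illustration), appending to a flat file is about as simple and cheap as logging gets:

```python
# Sketch: append-only logging to a flat file; path and record format are hypothetical.
import json
import time

def log_message(record, path="/var/log/chat/messages.log"):
    # One JSON document per line; appends to an open file are very cheap.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"ts": time.time(), **record}) + "\n")

log_message({"id": 1, "status": 1, "message": "hello"})
```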

What you might want to consider are the scaling issues: what happens when it becomes too slow to write the data to a flat file? Will you invest in faster disks, or in something else?

Another thing to consider is how to scale the service so that you can add more servers without having to coordinate the logs of each server and consolidate them manually.

Edit: You wrote that you want to have it in a database, so I would also consider the security issues of having the data online. What happens when your service gets compromised? Do you want your attackers to be able to alter the history of what has been said?

It might be smarter to store it temporarily in a file, and then dump it to an off-site location that isn't accessible if your Internet-facing servers get hacked.
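A minimal sketch of that idea, with hypothetical host names and paths: rotate the local file periodically and push it to a write-only, off-site location, e.g. with rsync:

```python
# Sketch: rotate the local log and ship it off-site; host and paths are hypothetical.
import os
import subprocess
import time

LOG = "/var/log/chat/messages.log"
REMOTE = "archive@backup-host:/archive/chat/"

def rotate_and_ship():
    rotated = f"{LOG}.{int(time.time())}"
    os.rename(LOG, rotated)  # atomic on the same filesystem; writers reopen the log
    subprocess.run(["rsync", "-az", rotated, REMOTE], check=True)
    os.remove(rotated)       # the frontends never need access to the archive host
```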


Please ignore the earlier benchmark numbers; we had a bug in that test.

We inserted 1M records with the following columns: id (int), status (int), message (140 chars, random). All tests were done with the C++ driver on a desktop PC with an i5 CPU and a 500 GB SATA disk.

Benchmark with MongoDB:

1M records inserted, without an index:

time: 23s, inserts/s: 43478

1M records inserted, with an index on id:

time: 50s, inserts/s: 20000

Next, another 1M records added to the same collection, which already holds the index and 1M records:

time: 78s, inserts/s: 12820

All of this results in nearly 4 GB of files on the filesystem.
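The test above used the C++ driver; purely to illustrate the workload (1M documents of id, status and a random 140-character message), a rough Python/pymongo equivalent might look like this (connection details, database and collection names are invented):

```python
# Sketch of the benchmark workload via pymongo; the actual test used the C++ driver,
# so treat this as an illustration of the schema, not of the exact numbers.
import random
import string
import time
from pymongo import MongoClient, ASCENDING

client = MongoClient("localhost", 27017)
coll = client.bench.messages
coll.create_index([("id", ASCENDING)])   # omit this line for the "without index" run

def random_message(n=140):
    return "".join(random.choice(string.ascii_letters) for _ in range(n))

N = 1_000_000
start = time.time()
for i in range(N):
    # One document per insert; batching with insert_many would be faster still.
    coll.insert_one({"id": i, "status": i % 4, "message": random_message()})
elapsed = time.time() - start
print(f"time: {elapsed:.0f}s, inserts/s: {N / elapsed:.0f}")
```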

Benchmark with MySQL:

1M records inserted, without an index:

time: 49s, inserts/s: 20408

1M records inserted, with an index:

time: 56s, inserts/s: 17857

Next, another 1M records added to the same table, which already holds the index and 1M records:

time: 56s, inserts/s: 17857

Exactly the same performance; MySQL shows no slowdown as the table grows.

We saw that MongoDB ate around 384 MB of RAM during this test and loaded 3 CPU cores, while MySQL was happy with 14 MB and loaded only 1 core.

Edorian was on the right track with his proposal. I will do some more benchmarking, and I'm sure we can reach 50K inserts/sec on a 2x quad-core server.
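Edorian's proposal isn't quoted here, but a common way to push MySQL insert rates further is to batch many rows per statement and per commit; a rough sketch with mysql-connector-python (credentials, database and table are invented):

```python
# Sketch: batched inserts into MySQL with mysql-connector-python.
# Connection details and table name are hypothetical.
import mysql.connector

conn = mysql.connector.connect(user="bench", password="secret", database="bench")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS messages (
                   id INT, status INT, message VARCHAR(140), INDEX (id))""")

rows = [(i, i % 4, "x" * 140) for i in range(100_000)]

# executemany() lets the driver rewrite this into multi-row INSERTs,
# and committing once per batch avoids one flush to disk per row.
for i in range(0, len(rows), 10_000):
    cur.executemany("INSERT INTO messages (id, status, message) VALUES (%s, %s, %s)",
                    rows[i:i + 10_000])
    conn.commit()
```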

I think MySQL will be the right way to go.