Best data store for billions of rows

Storing ~3.5TB of data and inserting about 1K rows/sec, 24×7, while also querying at an unspecified rate, is possible with SQL Server, but there are more questions:

  • What availability requirement do you have for this? 99.999% uptime, or is 95% enough?
  • What reliability requirement do you have? Does missing an insert cost you $1M?
  • What recoverability requirement do you have? If you lose one day of data, does it matter?
  • What consistency requirement do you have? Does a write need to be guaranteed to be visible on the next read?

If you need all the requirements I highlighted, the load you propose is going to cost millions in hardware and licensing on a relational system, any system, no matter what gimmicks you try (sharding, partitioning, etc.). A NoSQL system would, by its very definition, not meet all these requirements.

So obviously you have already relaxed some of these requirements. There is a nice visual guide comparing the NoSQL offerings based on the ‘pick 2 out of 3’ CAP paradigm at Visual Guide to NoSQL Systems:

NoSQL comparison

After OP comment update

With SQL Server this would be a straightforward implementation:

  • a single table with a clustered (GUID, time) key. Yes, it is going to get fragmented, but fragmentation only affects read-aheads, and read-aheads are needed only for significant range scans. Since you only query for a specific GUID and date range, fragmentation won’t matter much. Yes, it is a wide key, so non-leaf pages will have poor key density. Yes, it will lead to a poor fill factor. And yes, page splits may occur. Despite these problems, given the requirements, it is still the best clustered key choice; see the DDL sketch after this list.
  • partition the table by time so you can implement efficient deletion of the expired records via an automatic sliding window (see the maintenance sketch after this list). Augment this with an online index partition rebuild of the last month to eliminate the poor fill factor and fragmentation introduced by the GUID clustering.
  • enable page compression. Since the clustered key groups records by GUID first, all records of a GUID will be next to each other, giving page compression a good chance to apply dictionary compression.
  • you’ll need a fast IO path for the log file. You’re interested in high throughput, not in low latency, for the log to keep up with 1K inserts/sec, so striping is a must.
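
For concreteness, here is a minimal DDL sketch of the first three points: a partitioned table clustered on (GUID, time) with page compression. The table, column, partition function and partition scheme names (Records, RecordId, RecordTime, pfMonthly, psMonthly), the column types and the monthly boundaries are assumptions for illustration, not anything stated in the question.

```sql
-- Monthly partition boundaries; RANGE RIGHT means each boundary date starts a new month.
CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-02-01', '2010-03-01');

-- All partitions on one filegroup for brevity; spread them over several in production.
CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.Records (
    RecordId   uniqueidentifier NOT NULL,  -- the GUID the queries filter on
    RecordTime datetime         NOT NULL,  -- the time component of the key
    Payload    varbinary(8000)  NULL       -- whatever the record actually carries
) ON psMonthly (RecordTime);

-- Wide, non-unique clustered key, but it matches the (GUID, date range) query pattern
-- and keeps all rows of one GUID physically together, which helps dictionary compression.
CREATE CLUSTERED INDEX CIX_Records
ON dbo.Records (RecordId, RecordTime)
WITH (DATA_COMPRESSION = PAGE)
ON psMonthly (RecordTime);
```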
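
The sliding-window maintenance for the second point would then look roughly like this, run once a month. Partition numbers, boundary dates and the staging table name are again illustrative. One caveat: rebuilding a single partition with ONLINE = ON requires SQL Server 2014 or later; on earlier versions, ALTER INDEX ... REORGANIZE PARTITION = n is the online alternative.

```sql
-- Empty staging table with the same structure, clustered index, compression
-- setting and filegroup as the partition being switched out.
CREATE TABLE dbo.Records_Expired (
    RecordId   uniqueidentifier NOT NULL,
    RecordTime datetime         NOT NULL,
    Payload    varbinary(8000)  NULL
) ON [PRIMARY];

CREATE CLUSTERED INDEX CIX_Records_Expired
ON dbo.Records_Expired (RecordId, RecordTime)
WITH (DATA_COMPRESSION = PAGE);

-- 1. Switch the expired month out of the main table (metadata-only, near instant),
--    archive or truncate the staging table, then remove the now-empty range.
--    Here partition 2 is assumed to hold the oldest month with data (January).
ALTER TABLE dbo.Records SWITCH PARTITION 2 TO dbo.Records_Expired;
ALTER PARTITION FUNCTION pfMonthly() MERGE RANGE ('2010-01-01');
TRUNCATE TABLE dbo.Records_Expired;

-- 2. Pre-create the next empty month so the split never has to move rows.
ALTER PARTITION SCHEME psMonthly NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfMonthly() SPLIT RANGE ('2010-04-01');

-- 3. Rebuild the most recent complete month's partition to fix the fill factor and
--    fragmentation left behind by the random GUID inserts.
--    ($PARTITION.pfMonthly('2010-03-15') tells you which partition holds March.)
ALTER INDEX CIX_Records ON dbo.Records
REBUILD PARTITION = 3
WITH (ONLINE = ON);  -- single-partition ONLINE rebuild needs SQL Server 2014+
```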

Partitioning and page compression each require Enterprise Edition SQL Server; they will not work on Standard Edition, and both are quite important for meeting the requirements.

As a side note, if the records come from a farm of front-end web servers, I would put SQL Server Express on each web server and, instead of INSERTing on the back end, I would SEND the info to the back end, using a local connection/transaction on the Express instance co-located with the web server (see the sketch below). This gives the solution a much, much better availability story.
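
A minimal sketch of what that SEND could look like, assuming it refers to the Service Broker SEND statement. The message type, contract and service names are made up for illustration, and the route/security setup between the Express instances and the central server is omitted.

```sql
-- Runs on the Express instance co-located with the web server. The record is
-- committed locally; Service Broker delivers it to the back end asynchronously,
-- so the web tier keeps working even when the central server is down.
DECLARE @handle uniqueidentifier;

BEGIN TRANSACTION;

BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [//Tracker/CaptureService]         -- local initiator service
    TO SERVICE   '//Tracker/CentralIngestService'   -- service on the back-end server
    ON CONTRACT  [//Tracker/RecordContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @handle
    MESSAGE TYPE [//Tracker/Record]
    (N'<record guid="..." time="..." payload="..." />');

COMMIT;
```

On the back end, an activated procedure would RECEIVE the messages from the target queue and do the actual INSERT into the partitioned table, typically batching many messages per transaction.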

So this is how I would do it in SQL Server. The good news is that the problems you’ll face are well understood and the solutions are known. That doesn’t necessarily mean this is better than what you could achieve with Cassandra, BigTable or Dynamo; I’ll let someone more knowledgeable in things NoSQL-ish argue their case.

Note that I never mentioned the programming model, .NET support and such. I honestly think they’re irrelevant in large deployments. They make a huge difference in the development process, but once deployed it doesn’t matter how fast the development was if the ORM overhead kills performance 🙂
