MongoDB vs. Cassandra

Lots of reads in every query, fewer regular writes

Both databases perform well on reads where the hot data set fits in memory. Both also emphasize join-less data models (and encourage denormalization instead), and both provide indexes on documents or rows, although MongoDB’s indexes are currently more flexible.
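To make the indexing point concrete, here is a toy in-memory sketch of what a field index buys you on reads. This is plain Python over dicts, not MongoDB or Cassandra API code; the collection and field names are made up for illustration.

```python
# Toy "collection" of documents (dicts). An unindexed query must scan
# every document; an index turns the same lookup into a hash-table hit.
docs = [
    {"_id": 1, "user": "alice", "score": 10},
    {"_id": 2, "user": "bob", "score": 7},
    {"_id": 3, "user": "alice", "score": 3},
]

# Without an index: scan all documents on every query.
def find_by_user_scan(collection, user):
    return [d for d in collection if d["user"] == user]

# With an index: build the lookup table once, then reads avoid the scan.
index_by_user = {}
for d in docs:
    index_by_user.setdefault(d["user"], []).append(d)

def find_by_user_indexed(user):
    return index_by_user.get(user, [])

assert find_by_user_scan(docs, "alice") == find_by_user_indexed("alice")
```

Both databases maintain structures like `index_by_user` for you; the flexibility difference is in what kinds of fields and compound keys you can index, not in the basic mechanism.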

Cassandra’s storage engine provides constant-time writes no matter how big your data set grows. Writes are more problematic in MongoDB, partly because of its b-tree based storage engine, but more because of its multi-granularity locking.
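The reason the write costs diverge can be sketched in a few lines. This is a toy model, not either engine’s real code: a Cassandra-style write appends to a commit log and updates an in-memory table, while a b-tree-style write must first locate the right position in an ordered structure, which grows with the data.

```python
# Toy contrast of log-structured vs. b-tree-style writes.
import bisect

# Log-structured (Cassandra-like): sequential append + in-memory update,
# O(1) per write regardless of data set size.
commit_log = []
memtable = {}

def lsm_write(key, value):
    commit_log.append((key, value))
    memtable[key] = value

# B-tree-like: must search the ordered structure before writing,
# O(log n) -- and on disk, page updates and splits on top of that.
sorted_keys = []
btree_values = {}

def btree_write(key, value):
    pos = bisect.bisect_left(sorted_keys, key)
    if pos == len(sorted_keys) or sorted_keys[pos] != key:
        sorted_keys.insert(pos, key)
    btree_values[key] = value

for i in range(1000):
    lsm_write(f"k{i:04d}", i)
    btree_write(f"k{i:04d}", i)
```

The real engines add flushing, compaction, and caching on both sides, but the asymmetry above is the core of the “constant-time writes” claim.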

For analytics, MongoDB provides a custom map/reduce implementation; Cassandra provides native Hadoop support, including for Hive (a SQL data warehouse built on Hadoop map/reduce) and Pig (a Hadoop-specific analysis language that many think is a better fit for map/reduce workloads than SQL). Cassandra also supports Spark.
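The map/reduce model both MongoDB’s mapReduce and Hadoop implement is simple enough to show in plain Python: map each document to key/value pairs, group by key, then reduce each group. The document shape here is invented for illustration.

```python
# Map/reduce by hand: total order value per customer.
from collections import defaultdict

orders = [
    {"customer": "alice", "total": 30},
    {"customer": "bob", "total": 12},
    {"customer": "alice", "total": 5},
]

# Map: emit (key, value) pairs from each document.
def map_fn(doc):
    yield doc["customer"], doc["total"]

# Shuffle: group emitted values by key.
groups = defaultdict(list)
for doc in orders:
    for key, value in map_fn(doc):
        groups[key].append(value)

# Reduce: fold each group of values into a single result.
totals = {key: sum(values) for key, values in groups.items()}
# totals == {"alice": 35, "bob": 12}
```

MongoDB runs the map and reduce functions (written in JavaScript) inside the database; Hadoop distributes the same three phases across a cluster.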

Not worried about “massive” scalability

If you’re looking at a single server, MongoDB is probably a better fit. For those more concerned about scaling, Cassandra’s no-single-point-of-failure architecture will be easier to set up and more reliable. (MongoDB’s global write lock tends to become more painful, too.) Cassandra also gives a lot more control over how your replication works, including support for multiple data centers.

More concerned about simple setup, maintenance and code

Both are trivial to set up, with reasonable out-of-the-box defaults for a single server. Cassandra is simpler to set up in a multi-server configuration since there are no special-role nodes to worry about.

If you’re presently using JSON blobs, MongoDB is an insanely good match for your use case, given that it uses BSON to store the data. You’ll be able to have richer and more queryable data than you would in your present database. This would be the most significant win for Mongo.
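The “richer and more queryable” claim comes down to whether the database sees your JSON as an opaque string or as structure. A toy sketch of the difference, using plain Python rather than real driver calls, with invented field names:

```python
# Opaque blobs vs. first-class documents.
import json

# Blob storage: the database sees only strings, so every query must
# deserialize every row before it can look at a field.
blob_rows = [
    json.dumps({"user": "alice", "tags": ["admin", "ops"]}),
    json.dumps({"user": "bob", "tags": ["dev"]}),
]
admins_via_blobs = [json.loads(r) for r in blob_rows
                    if "admin" in json.loads(r)["tags"]]

# Document storage: fields are first-class, so queries (and indexes)
# address the structure directly -- which is what BSON gives MongoDB.
doc_rows = [json.loads(r) for r in blob_rows]
admins_via_docs = [d for d in doc_rows if "admin" in d["tags"]]

assert admins_via_blobs == admins_via_docs
```

In real MongoDB that second query would be expressed declaratively (matching on the array field) and could be served from an index, instead of your application deserializing blobs client-side.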
