Elasticsearch replication of other system data?

You’ve pretty much listed the two main options available when it comes to searching across multiple data stores: search in one central data store (option #1) or search in all data stores and aggregate the results (option #2).

Both options would work, although option #2 has two main drawbacks:

  1. It requires a substantial amount of logic in your application in order to “branch out” the searches to the multiple data stores and aggregate the results you get back.
  2. The response times might differ for each data store, so you’ll have to wait for the slowest data store to respond before presenting the search results to the user (unless you circumvent this with asynchronous techniques such as Ajax or WebSockets).
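
To make drawback #2 concrete, here is a minimal Python sketch of that fan-out-and-aggregate logic. The store names, latencies, and result format are made up for illustration; the point is that the aggregation step cannot finish before the slowest store has answered:

```python
import asyncio

# Hypothetical per-store search function; in a real application each store
# (MySQL, Elasticsearch, ...) would have its own client and query logic.
async def search_store(store: str, query: str, delay: float) -> list[str]:
    await asyncio.sleep(delay)  # simulate that store's response time
    return [f"{store}:{query}:1", f"{store}:{query}:2"]

async def federated_search(query: str) -> list[str]:
    # Fan the query out to every store concurrently...
    tasks = [
        search_store("mysql", query, 0.05),
        search_store("elasticsearch", query, 0.01),
    ]
    # ...but gather() only returns once the *slowest* store has responded.
    per_store_results = await asyncio.gather(*tasks)
    # Naive aggregation: concatenate; real code would merge and re-rank.
    return [hit for hits in per_store_results for hit in hits]

hits = asyncio.run(federated_search("john"))
print(hits)
```

Even this toy version needs concurrency, error handling, and a merging strategy; a production version quickly grows much bigger.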

If you want to provide a better and more reliable search experience, option #1 clearly gets my vote (it’s the approach I take most of the time, actually). As you’ve correctly stated, the main “drawback” of this option is that you need to keep Elasticsearch in sync with the changes in your other master data stores.

Since your other data stores will be relational databases, you have a few different options to keep them in sync with Elasticsearch, namely the Logstash JDBC input plugin and a standalone JDBC importer, both of which periodically poll your tables for new or changed rows.

These first two options work great but share one main disadvantage: they don’t capture DELETEs on your tables, only INSERTs and UPDATEs. This means that if you ever delete a user, account, etc., you will have no way of knowing that you need to delete the corresponding document in Elasticsearch. Unless, of course, you decide to delete the Elasticsearch index before each import session.
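
For example, a polling-based setup using the Logstash JDBC input plugin could look like the following (the connection settings, the `users` table, and the `updated_at` column are placeholders for your own schema):

```conf
input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "mysql_user"
    jdbc_password => "mysql_password"
    # poll every minute, fetching only rows changed since the last run
    schedule => "* * * * *"
    statement => "SELECT * FROM users WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "users"
    document_id => "%{id}"
  }
}
```

Note that because the SELECT only ever sees rows that still exist, a DELETE in MySQL never shows up in this pipeline, which is exactly the limitation described above.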

To alleviate this, you can use another kind of tool that reads the MySQL binlog and is thus able to capture every event, including deletions. There’s one written in Go, one in Java and one in Python.
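
As a rough sketch of what such a binlog-based pipeline does, each row event gets translated into the corresponding Elasticsearch bulk action, including deletes. The event dicts below are invented for illustration; a real binlog client library would hand you typed event objects instead:

```python
# Translate MySQL binlog row events into Elasticsearch bulk actions.
# The event shape ({"type": ..., "row": ...}) is made up for this sketch.

def binlog_event_to_action(event: dict, index: str) -> dict:
    row = event["row"]
    meta = {"_index": index, "_id": row["id"]}
    if event["type"] in ("insert", "update"):
        # Index (upsert) the full row as a document.
        return {**meta, "_op_type": "index", "_source": row}
    if event["type"] == "delete":
        # Unlike periodic SELECT-based imports, the binlog reports
        # deletions, so the stale document can be removed.
        return {**meta, "_op_type": "delete"}
    raise ValueError(f"unsupported event type: {event['type']}")

actions = [
    binlog_event_to_action({"type": "insert", "row": {"id": 1, "name": "Ann"}}, "users"),
    binlog_event_to_action({"type": "delete", "row": {"id": 1}}, "users"),
]
print(actions)
```

A list of actions like this is what you would then feed to Elasticsearch’s bulk API.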

UPDATE:

Here is another interesting blog article on the subject: How to keep Elasticsearch synchronized with a relational database using Logstash
