Elasticsearch, Failed to obtain node lock, is the following location writable
I had an orphaned Java process related to Elasticsearch. Killing it solved the lock issue:

ps aux | grep 'java'
kill -9 <PID>
To fix this, add the curl option -H 'Content-Type: application/json'. This error is due to the strict content-type checking introduced in Elasticsearch 6.0, as explained in this post: starting from Elasticsearch 6.0, all REST requests that include a body must also provide the correct content-type for that body.
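For example, a complete index request with the header set (the index name and document body here are placeholders):

curl -XPUT 'localhost:9200/my_index/_doc/1' -H 'Content-Type: application/json' -d '{"field": "value"}'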
By default, Elasticsearch will re-assign shards to nodes dynamically. However, if you've disabled shard allocation (perhaps you did a rolling restart and forgot to re-enable it), you can re-enable shard allocation:

# v0.90.x and earlier
curl -XPUT 'localhost:9200/_settings' -d '{ "index.routing.allocation.disable_allocation": false }'

# v1.0+
curl -XPUT 'localhost:9200/_cluster/settings' -d '{ "transient" : { "cluster.routing.allocation.enable" : …
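The v1.0+ command is cut off above; a plausible complete version, assuming the goal is to re-enable allocation for all shards (the setting's documented "all" value), would be:

curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'

(The Content-Type header is only required on Elasticsearch 6.0+, as noted in the answer above.)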
First, you should enable journalctl logging so you can gain some insight into what is going on. Since it seems to be a memory issue (you have 1 GB of RAM on your VPS and Elasticsearch is pre-configured with a 1 GB heap), you have two options: decrease the heap in config/jvm.options, or increase your VPS RAM.
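A minimal sketch of the relevant lines in config/jvm.options, assuming you choose to halve the heap to 512 MB (keep the minimum and maximum equal):

-Xms512m
-Xmx512m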
That's due to a rounding issue for IEEE-754 double-precision floating-point values. Whole values of up to 53 bits can be represented exactly; however, 10160815114820887 is 54 bits long (100100000110010011010100011111100011000001110100010111). The real number you indexed was indeed 10160815114820887, but due to the above-mentioned rounding issue, it was indexed and shows up as 10160815114820888. You can try …
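You can reproduce this rounding in a few lines of Python (the values are taken from the question; nothing else is assumed):

n = 10160815114820887
print(n.bit_length())  # 54: one bit more than a double's 53-bit mantissa
print(int(float(n)))   # 10160815114820888: rounded to the nearest representable double
print(2**53)           # 9007199254740992: integers up to this are represented exactly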
You could utilize the bulk method of the official Python package:

import json
from noaa_sdk import noaa
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

noaa_client = noaa.NOAA()
alerts = noaa_client.alerts()['features']
es = Elasticsearch()

def save_alerts():
    with open('nhc_alerts.json', 'w') as f:
        f.write(json.dumps(alerts))

def bulk_sync():
    actions = [
        {
            "_index": "my_noaa_index",
            "_source": alert
        }
        for alert …
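The snippet is truncated above; a plausible completion of bulk_sync, assuming each alert becomes one document in my_noaa_index, would be:

def bulk_sync():
    # One index action per alert; _source carries the document body
    actions = [
        {
            "_index": "my_noaa_index",
            "_source": alert,
        }
        for alert in alerts
    ]
    # helpers.bulk returns (number of successes, list of errors)
    success, errors = bulk(es, actions)
    print(success, errors)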
For real-time use, the best solution is the search_after query. You only need a date field and another field that uniquely identifies a doc – the _id field or the _uid field is enough. Try something like this; in my example I would like to extract all the documents that …
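A minimal sketch of such a query (index and field names are placeholders; the search_after values are the sort values of the last hit from the previous page):

GET my-index/_search
{
  "size": 100,
  "sort": [
    { "my_date_field": "asc" },
    { "_id": "asc" }
  ],
  "search_after": [1609459200000, "last_doc_id"]
}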
Nested documents are powerful because you retain certain attribute connections, but there's the downside of not being able to iterate over them, as discussed here. With that being said, you could flatten the user's attributes using the copy_to feature like so:

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "user__first_flattened": {
        "type": "keyword"
      },
      "user": {
        …
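The mapping is truncated above; a plausible completion, assuming user is a nested field whose first attribute should be copied into the flattened root-level keyword field (note that some Elasticsearch versions restrict copy_to from inside a nested context to the root, so verify against your version):

PUT my-index-000001
{
  "mappings": {
    "properties": {
      "user__first_flattened": {
        "type": "keyword"
      },
      "user": {
        "type": "nested",
        "properties": {
          "first": {
            "type": "keyword",
            "copy_to": "user__first_flattened"
          }
        }
      }
    }
  }
}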
Embedding Elasticsearch is no longer officially supported, and it's a bit more complicated than in 2.x, but it works. You need to add some dependencies:

<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch</artifactId>
  <version>5.1.1</version>
  <scope>test</scope>
</dependency>
<dependency><!-- required by elasticsearch -->
  <groupId>org.elasticsearch.plugin</groupId>
  <artifactId>transport-netty4-client</artifactId>
  <version>5.1.1</version>
  <scope>test</scope>
</dependency>
<dependency><!-- required by elasticsearch -->
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-api</artifactId>
  <version>2.7</version>
</dependency>

And then launch a node …
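The answer is cut off before the node-launch code; a minimal sketch of the usual 5.x pattern (the subclass is needed because the plugin-aware Node constructor is protected; class and path names are placeholders):

import java.util.Collections;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.InternalSettingsPreparer;
import org.elasticsearch.node.Node;
import org.elasticsearch.transport.Netty4Plugin;

public class EmbeddedNode extends Node {
    public EmbeddedNode(Settings settings) {
        // Expose the protected constructor that accepts classpath plugins
        super(InternalSettingsPreparer.prepareEnvironment(settings, null),
              Collections.singletonList(Netty4Plugin.class));
    }
}

// Usage, e.g. in a test setup method:
Settings settings = Settings.builder()
        .put("path.home", "target/es-home")  // placeholder data directory
        .put("transport.type", "netty4")
        .put("http.type", "netty4")
        .put("http.enabled", "true")
        .build();
Node node = new EmbeddedNode(settings).start();  // start() throws NodeValidationException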
You'd need to use the field that is supposed to be unique as the id for your documents. By default, a new document with an existing id would override the existing document with the same id, but you can switch to op_type=create in order to get back an error if a document with the same id already exists. There's …
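For example (index name, id, and body are placeholders), the first request succeeds and the second fails with a 409 version_conflict_engine_exception because a document with that id already exists:

curl -XPUT 'localhost:9200/my_index/_doc/42?op_type=create' -H 'Content-Type: application/json' -d '{"email": "a@example.com"}'
curl -XPUT 'localhost:9200/my_index/_doc/42?op_type=create' -H 'Content-Type: application/json' -d '{"email": "a@example.com"}'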