
FORBIDDEN/12/index read-only / allow delete (api) #1040

Closed
StarWar opened this issue Dec 12, 2017 · 16 comments
Comments


StarWar commented Dec 12, 2017

I've set up a new development environment on my iMac and moved my Rails app over from a MacBook Air, using the same gem versions. It was working fine there and indexing the data as well. When indexing on the new machine, it gives the following error:

 {"count":969,"exception":["Searchkick::ImportError","{\"type\"=\u003e\"cluster_block_exception\", \"reason\"=\u003e\"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\"} on item with id '75'"]}
Searchkick::ImportError: {"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"} on item with id '75'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/indexer.rb:23:in `perform'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/logging.rb:126:in `perform'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/indexer.rb:11:in `queue'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/index.rb:535:in `bulk_index_helper'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/index.rb:94:in `bulk_index'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/logging.rb:73:in `block in bulk_index'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications.rb:164:in `block in instrument'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications/instrumenter.rb:20:in `instrument'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/activesupport-4.2.8/lib/active_support/notifications.rb:164:in `instrument'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/logging.rb:72:in `bulk_index'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/index.rb:445:in `block in import_or_update'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/index.rb:524:in `with_retries'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/index.rb:444:in `import_or_update'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/index.rb:302:in `block in import_scope'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/activerecord-4.2.8/lib/active_record/relation/batches.rb:124:in `find_in_batches'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/index.rb:301:in `import_scope'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/index.rb:240:in `reindex_scope'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/searchkick-2.4.0/lib/searchkick/model.rb:72:in `searchkick_reindex'
	from (irb):23
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/railties-4.2.8/lib/rails/commands/console.rb:110:in `start'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/railties-4.2.8/lib/rails/commands/console.rb:9:in `start'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/railties-4.2.8/lib/rails/commands/commands_tasks.rb:68:in `console'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/railties-4.2.8/lib/rails/commands/commands_tasks.rb:39:in `run_command!'
	from /Users/Raj/.rbenv/versions/2.3.3/lib/ruby/gems/2.3.0/gems/railties-4.2.8/lib/rails/commands.rb:17:in `<top (required)>'
	from bin/rails:4:in `require'
	from bin/rails:4:in `<main>'

Is there any way I can solve this issue by modifying my elasticsearch.yml?

@oddlyfunctional

+1


salihsagdilek commented Dec 27, 2017

This is the solution:

curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

source: https://benjaminknofe.com/blog/2017/12/23/forbidden-12-index-read-only-allow-delete-api-read-only-elasticsearch-indices/
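If you'd rather do this from the Rails console than the shell, the same settings update can be sketched with the Elasticsearch client that Searchkick exposes (a sketch, assuming the default cluster on localhost:9200; the actual network call is commented out and `Searchkick.client` is available wherever Searchkick is loaded):

```ruby
require "json"

# The settings payload: null removes the read_only_allow_delete block
# that Elasticsearch applied when the disk filled up.
settings_body = { "index.blocks.read_only_allow_delete" => nil }

# In a Rails console with Searchkick loaded (network call, so commented out here):
# Searchkick.client.indices.put_settings(index: "_all", body: settings_body)

puts JSON.generate(settings_body)
```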

ankane (Owner) commented Dec 29, 2017

Thanks @salihsagdilek 👍 Excerpt from the link:

You probably recovered from a full hard drive. Elasticsearch switches an index to read-only when it cannot index more documents because the disk is full; this keeps it available for read-only queries. Elasticsearch will not switch back automatically.

@ankane ankane closed this as completed Dec 29, 2017

silent-vim commented Jan 25, 2018

Thanks @salihsagdilek, I couldn't find the solution on the ES forums either until I landed here 👍

@projekt01

@salihsagdilek, thanks a lot, you saved my day.

At least on Windows (cmd.exe) you need to wrap the payload in double quotes and escape the quotes inside it:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d "{\"index.blocks.read_only_allow_delete\": false}"

Otherwise Elasticsearch tells you: JsonParseException: Unexpected character (''' (code 39)): was expecting double-quote to start field name


cireficc commented May 1, 2018

@ankane & @Others Changing the ES index settings didn't fix the issue for me; the only thing that worked was freeing up disk space. I think Elasticsearch checks how full the disk is and blocks indexing entirely when there isn't enough space left. I was able to index a few individual records, but as soon as I ran MyModel.reindex, I got that error. Once I freed up a few GB of disk space, reindexing went smoothly.
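The behaviour described above can be illustrated with a toy sketch (this illustrates the threshold idea only, not Elasticsearch's actual code; 95% is the default flood-stage watermark):

```ruby
# Toy sketch (not Elasticsearch's actual logic): an index gets the
# read-only block once a node holding one of its shards crosses the
# flood-stage disk-usage watermark.
FLOOD_STAGE = 0.95

def read_only_block?(used_bytes, total_bytes)
  used_bytes.to_f / total_bytes >= FLOOD_STAGE
end

puts read_only_block?(96_000, 100_000)  # disk 96% full: blocked
puts read_only_block?(80_000, 100_000)  # disk 80% full: fine
```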

@ahmedtarek-

@cireficc Agreed.
According to the Disk-based Shard Allocation docs, the default for cluster.routing.allocation.disk.watermark.flood_stage is 95%. This means that once an index has one or more shards allocated on a node whose disk usage exceeds 95%, that index is forced into read-only mode.
If this is the case, then:

  • Free up disk space first
  • Then manually reset the index's read_only_allow_delete to null if required, as mentioned by @salihsagdilek
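For context, these are the disk-watermark settings involved, as they might appear in elasticsearch.yml (the values shown are the documented defaults; a sketch for reference, not a recommendation to change them):

```yaml
# Disk-based shard allocation watermarks (defaults shown).
cluster.routing.allocation.disk.threshold_enabled: true
# Stop allocating new shards to a node above this usage:
cluster.routing.allocation.disk.watermark.low: 85%
# Start relocating shards away from a node above this usage:
cluster.routing.allocation.disk.watermark.high: 90%
# Apply index.blocks.read_only_allow_delete to any index with a shard
# on a node above this usage:
cluster.routing.allocation.disk.watermark.flood_stage: 95%
```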

@cireficc

@ahmedtarek- Thanks for finding the documentation! This is good info to have. Now I know to keep all my movies and TV shows on my external hard drive 👍

@siddharthghedia

Thank you @salihsagdilek 👍


chai2 commented Aug 19, 2018

Thanks @salihsagdilek

@tahina123

Thank you @salihsagdilek


tominugen commented Nov 6, 2018

Thank you! @salihsagdilek

@christouandr7

(quoting @salihsagdilek's solution above)

thank you! worked for me too!


rctneil commented Jan 2, 2019

I'm getting this issue but can't seem to solve it. I'm running Elasticsearch in Docker and start my app by running docker-compose up. I've tried running the command recommended above, but I just get "No such file or directory" back. Any ideas?


namila commented Jan 3, 2019

Do you have a port mapping in your docker-compose file that exposes port 9200 of the Elasticsearch container to the host?
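For reference, such a mapping might look like this in docker-compose.yml (service name and image tag are illustrative):

```yaml
# Hypothetical docker-compose.yml fragment: publishes the container's
# HTTP port 9200 on the host so curl http://localhost:9200 works.
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
```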

ankane (Owner) commented Jan 3, 2019

Sorry to jump in, but a lot of people are subscribed to this and issues are supposed to be locked after being closed for 30 days (not sure what happened with this one). Please continue the conversation on Stack Overflow.

Repository owner locked as resolved and limited conversation to collaborators Jan 3, 2019