Commit Graph

17 Commits

Adam Locke 20d04081ec
[7.x] [DOCS] Add supported ESS settings to ES docs (#57953) (#58981)
* Adding ESS icons to supported ES settings.

* Adding new file for supported ESS settings.

* Adding supported ESS settings for HTTP and disk-based shard allocation.

* Adding more supported settings for ESS.

* Adding descriptions for each Cloud section, plus additional settings.

* Adding new warehouse file for Cloud, plus additional settings.

* Adding node settings for Cloud.

* Adding audit settings for Cloud.

* Resolving merge conflict.

* Adding SAML settings (part 1).

* Adding SAML realm encryption and signing settings.

* Adding SAML SSL settings.

* Adding Kerberos realm settings.

* Adding OpenID Connect Realm settings.

* Adding OpenID Connect SSL settings.

* Resolving leftover Git merge markers.

* Removing Cloud settings page and link to it.

* Add link to mapping source

* Update docs/reference/docs/reindex.asciidoc

* Incorporate edit of HTTP settings

* Remove "cloud" from tag and ID

* Remove "cloud" from tag and update description

* Remove "cloud" from tag and ID

* Change "whitelists" to "specifies"

* Remove "cloud" from end tag

* Removing cloud from IDs and tags.

* Changing link reference to fix build issue.

* Adding index management page for missing settings.

* Removing warehouse file for Cloud and moving settings elsewhere.

* Clarifying true/false usage of http.detailed_errors.enabled.

* Changing underscore to dash in link to fix ci build.
2020-07-02 19:40:45 -04:00
Stuart Tettemer 20abba8433
Scripting: Deprecate general cache settings (#55753) (#58283)
Backport: ef543b0
2020-06-18 11:54:23 -06:00
Stuart Tettemer 01795d1925
Revert "Scripting: Deprecate general cache settings (#55753)" (#58201)
This reverts commit 88e8b34fc2.
2020-06-16 14:58:18 -06:00
Stuart Tettemer 88e8b34fc2
Scripting: Deprecate general cache settings (#55753)
Backport: ef543b0
2020-06-16 13:06:59 -06:00
James Rodewig 43ef469570
[DOCS] Relocate `indices` module content (#54903) (#57413)
Moves `indices` content from the [Modules][0] section to the [Configuring
Elasticsearch][1] section.

Also removes the [Indices][2] landing page and adds a related redirect.

[0]: https://www.elastic.co/guide/en/elasticsearch/reference/master/modules.html
[1]: https://www.elastic.co/guide/en/elasticsearch/reference/master/settings.html
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-indices.html
2020-06-01 09:44:32 -04:00
Daniel Mitterdorfer 5dd0e74e79 Clarify which circuit breaker settings are static (#44992)
Most of the circuit breaker settings are dynamically configurable.
However, `indices.breaker.total.use_real_memory` is not. With this
commit we add a clarifying note that this specific setting is static.

Closes #44974
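
As an illustrative sketch (not part of this commit; the endpoint and value are assumptions about standard usage), a dynamic breaker setting such as `indices.breaker.total.limit` can be changed at runtime by sending the following body to `PUT _cluster/settings`, whereas `indices.breaker.total.use_real_memory` only takes effect from `elasticsearch.yml` at node startup:

```json
{
  "persistent": {
    "indices.breaker.total.limit": "70%"
  }
}
```

Attempting the same API call for the static setting should be rejected by the cluster settings endpoint.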
2019-07-31 13:15:33 +02:00
Yu d01b30acba lower fielddata circuit breaker's default limit (#27162)
* Lower fielddata circuit breaker default limit

Lower the fielddata circuit breaker default limit from 60% to 40%, as we have
moved to doc_values in most cases (see the settings sketch after this list).

* merge master in

* update tests

* update docs
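
A hedged sketch, not from this commit: a cluster that still relies heavily on fielddata could restore the previous ceiling by overriding the dynamic setting through `PUT _cluster/settings` (60% is simply the old default):

```json
{
  "persistent": {
    "indices.breaker.fielddata.limit": "60%"
  }
}
```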
2018-12-11 11:30:58 +01:00
Daniel Mitterdorfer f174f72fee
Circuit-break based on real memory usage
With this commit we introduce a new circuit-breaking strategy to the parent
circuit breaker. Contrary to the current implementation which only accounts for
memory reserved via child circuit breakers, the new strategy measures real heap
memory usage at the time of reservation. This allows us to be much more
aggressive with the circuit breaker limit so we bump it to 95% by default. The
new strategy is turned on by default and can be controlled with the new cluster
setting `indices.breaker.total.use_real_memory`.

Note that we turn it off for all integration tests with an internal test cluster
because it leads to spurious test failures which are of no value (we cannot
fully control heap memory usage in tests). All REST tests, however, will make
use of the real memory circuit breaker.

Relates #31767
2018-07-13 10:08:28 +02:00
Daniel Mitterdorfer 3d53daeb2f
Account for XContent overhead in in-flight breaker
So far the in-flight request circuit breaker has only accounted for the
on-the-wire representation of a request. However, we convert the raw
request into XContent internally which increases the overhead.
Therefore, we increase the value of the corresponding setting
`network.breaker.inflight_requests.overhead` from one to two. While this
value is still rather conservative (we assume that the representation as
structured objects has no overhead compared to the byte[]), it is closer
to reality than the current value.

Relates #31613
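
For illustration only (the multiplier is an assumed example, not a recommendation from this commit): the overhead is a dynamic setting, so a deployment that sees even higher XContent expansion could raise it further via `PUT _cluster/settings`:

```json
{
  "persistent": {
    "network.breaker.inflight_requests.overhead": 2.5
  }
}
```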
2018-07-03 09:17:16 +02:00
Colin Goodheart-Smithe 360b09f148
[DOCS] Fixes accounting setting names (#30863)
The documentation for the accounting circuit breaker listed the settings for its limit and overhead as `network.breaker.accounting.limit` and `network.breaker.accounting.overhead`, but in `HierarchyCircuitBreakerService` the settings are actually `indices.breaker.accounting.limit` and `indices.breaker.accounting.overhead`.
2018-06-04 09:20:54 +01:00
Lee Jones 37f67d9e21 [Docs] Fix typo in circuit breaker docs (#29659)
The previous description included a leftover passage that didn't fit and was
probably copied from the in-flight requests description above.
2018-05-22 16:43:45 +02:00
Lee Hinman 623d3700f0
Add accounting circuit breaker and track segment memory usage (#27116)
* Add accounting circuit breaker and track segment memory usage

This commit adds a new circuit breaker "accounting" that is used for tracking
the memory usage of non-request-tied memory users. It also adds tracking for the
amount of Lucene segment memory used by a shard as a user of the new circuit
breaker.

The Lucene segment memory is updated when the shard refreshes, and removed when
the shard relocates away from a node or is deleted. It should also be noted that
all tracking for segment memory uses `addWithoutBreaking` so as not to fail the
shard if a limit is reached.

The `accounting` breaker has a default limit of 100% and will contribute to the
parent breaker limit.

Resolves #27044
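
A minimal sketch under the assumption of standard cluster-settings usage (the value is illustrative): the `accounting` breaker is tuned like any other breaker, for example by lowering its limit below the 100% default:

```json
{
  "persistent": {
    "indices.breaker.accounting.limit": "90%"
  }
}
```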
2017-12-01 07:59:45 -07:00
Alexander Reelsen 80d0a32f8e ScriptService: Replace max compilation per minute setting with max compilation rate (#26399)
The current script service has a script compilation limit for a one-minute
window, set to a small default value of 15. Instead of increasing that default
value, this commit introduces a new setting that allows configuring a rate per
time unit, so that the script service can deal with bursts better.

The new setting is named `script.max_compilations_rate`,
requires a nonnegative number and a positive time value.

The default is `75/5m`, which is equivalent to the existing 15 per minute.
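
As a sketch (the rate shown is an assumption, not a recommendation), the new setting takes a `count/time` value and can be updated dynamically through `PUT _cluster/settings`:

```json
{
  "persistent": {
    "script.max_compilations_rate": "150/5m"
  }
}
```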
2017-09-01 10:15:27 +02:00
Lee Hinman 2be52eff09 Circuit break the number of inline scripts compiled per minute
When compiling many dynamically changing scripts, parameterized
scripts (<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-scripting-using.html#prefer-params>)
should be preferred. This enforces a limit on the number of scripts that
can be compiled within a minute. A new dynamic setting is added,
`script.max_compilations_per_minute`, which defaults to 15.
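
For illustration (the field name, threshold, and use of a `script` query are assumptions, written in the current `source`/`params` syntax rather than the syntax of this release): a parameterized script keeps the source constant, so it is compiled once no matter how often the parameter changes:

```json
{
  "query": {
    "script": {
      "script": {
        "lang": "painless",
        "source": "doc['price'].value > params.threshold",
        "params": {
          "threshold": 10
        }
      }
    }
  }
}
```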

If more dynamic scripts are sent, a user will get the following
exception:

```json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "circuit_breaking_exception",
        "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
        "bytes_wanted" : 0,
        "bytes_limit" : 0
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "i",
        "node" : "a5V1eXcZRYiIk8lecjZ4Jw",
        "reason" : {
          "type" : "general_script_exception",
          "reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
          "caused_by" : {
            "type" : "circuit_breaking_exception",
            "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
            "bytes_wanted" : 0,
            "bytes_limit" : 0
          }
        }
      }
    ],
    "caused_by" : {
      "type" : "general_script_exception",
      "reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
      "caused_by" : {
        "type" : "circuit_breaking_exception",
        "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
        "bytes_wanted" : 0,
        "bytes_limit" : 0
      }
    }
  },
  "status" : 500
}
```

This also fixes a bug in `ScriptService` where requests being executed
concurrently on a single node could cause a script to be compiled
multiple times (many in the case of a powerful node with many shards)
due to no synchronization between checking the cache and compiling the
script. There is now synchronization so that a script being compiled
will only be compiled once regardless of the number of concurrent
searches on a node.

Relates to #19396
2016-08-09 10:26:27 -06:00
Lee Hinman 124a9fabe3 Circuit break on aggregation bucket numbers with request breaker
This adds new circuit breaking with the "request" breaker, which trips based
on the number of buckets created during aggregations. It works by incrementing
the request breaker during AggregatorBase creation.

This also bumps the REQUEST breaker to 60% of the JVM heap now.
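
An illustrative sketch with an assumed value: the request breaker limit stays dynamically adjustable, so a memory-constrained cluster could tighten it below the new 60% default via `PUT _cluster/settings`:

```json
{
  "persistent": {
    "indices.breaker.request.limit": "40%"
  }
}
```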

The output when circuit breaking an aggregation looks like:

```json
{
  "shard" : 0,
  "index" : "i",
  "node" : "a5AvjUn_TKeTNYl0FyBW2g",
  "reason" : {
    "type" : "exception",
    "reason" : "java.util.concurrent.ExecutionException: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]];",
    "caused_by" : {
      "type" : "execution_exception",
      "reason" : "QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [myagg]>] would be larger than limit of [104857600/100mb]];",
      "caused_by" : {
        "type" : "circuit_breaking_exception",
        "reason" : "[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]",
        "bytes_wanted" : 104860781,
        "bytes_limit" : 104857600
      }
    }
  }
}
```

Relates to #14046
2016-07-25 11:33:37 -06:00
Daniel Mitterdorfer 52b2016447 Limit request size on transport level
With this commit we limit the size of all in-flight requests at the
transport level. The size is guarded by a circuit breaker and is
based on the content size of each request.

By default we use 100% of the available heap, meaning that the parent
circuit breaker will effectively limit the maximum available size. This value
can be changed by adjusting the setting

network.breaker.inflight_requests.limit

Relates #16011
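
A hedged sketch with an assumed value: since the setting is dynamic, it can also be adjusted at runtime through `PUT _cluster/settings`, for example to cap in-flight request content at 80% of the heap:

```json
{
  "persistent": {
    "network.breaker.inflight_requests.limit": "80%"
  }
}
```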
2016-04-13 09:54:59 +02:00
Clinton Gormley f123a53d72 Docs: Refactored modules and index modules sections 2015-06-22 23:49:45 +02:00