Commit Graph

21 Commits

Author SHA1 Message Date
Alexander Reelsen 80d0a32f8e ScriptService: Replace max compilation per minute setting with max compilation rate (#26399)
The current script service enforces a script compilation limit over a
one-minute window, with a small default value of 15. Instead of increasing
that default, this commit introduces a new setting that allows configuring
a rate per time unit, so that the script service can deal with bursts better.

The new setting is named `script.max_compilations_rate` and requires a
non-negative number and a positive time value.

The default is `75/5m`, which is equivalent to the existing 15 per minute.
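
As an illustration only (the value below is hypothetical, not a recommendation), such a rate could be adjusted dynamically through the cluster settings API:

```sh
# Hypothetical example: allow up to 150 script compilations per 5 minutes
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "script.max_compilations_rate": "150/5m"
  }
}'
```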
2017-09-01 10:15:27 +02:00
Robin Clarke 1900d9c447 Docs: Fix typo for request cache (#25444) 2017-06-28 14:31:03 +02:00
Stefan Gorgiovski 798c19dd7f Deprecate request_cache for clear-cache (#23638)
It is called `request` now.
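
A minimal sketch of the rename on the clear-cache API (the index name is a placeholder):

```sh
# Deprecated spelling
curl -XPOST 'localhost:9200/my_index/_cache/clear?request_cache=true'

# Preferred spelling after this change
curl -XPOST 'localhost:9200/my_index/_cache/clear?request=true'
```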
2017-03-22 08:28:04 -04:00
alamzeeshan a1cc683cff Updated document as per code change. (#22878)
Updated the documentation as per this change: https://github.com/elastic/elasticsearch/pull/15235
2017-01-31 13:36:09 +01:00
Nik Everett 3ed3e5e660 Convert more docs to CONSOLE
* plugins/discovery-azure-class.asciidoc
* reference/cluster.asciidoc
* reference/modules/cluster/misc.asciidoc
* reference/modules/indices/request_cache.asciidoc

After this is merged there will be no unconverted snippets outside
of `reference`.

Related to #18160
2016-09-21 09:36:21 -04:00
Colin Goodheart-Smithe 2904562b01 [DOCS] Fix shard request cache docs
Docs have been changed to reflect the fact that the shard request cache is now enabled by default.
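
For illustration, a single index could still opt out of the now default-on cache via its dynamic index setting (the index name is a placeholder):

```sh
# Hypothetical example: disable the shard request cache for one index
curl -XPUT 'localhost:9200/my_index/_settings' -H 'Content-Type: application/json' -d '
{
  "index.requests.cache.enable": false
}'
```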

Closes #19695
2016-08-11 14:25:34 +01:00
Lee Hinman 2be52eff09 Circuit break the number of inline scripts compiled per minute
When compiling many dynamically changing scripts, parameterized
scripts (<https://www.elastic.co/guide/en/elasticsearch/reference/master/modules-scripting-using.html#prefer-params>)
should be preferred. This enforces a limit on the number of scripts that
can be compiled within a minute. A new dynamic setting is added,
`script.max_compilations_per_minute`, which defaults to 15.
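
As a sketch of the parameterized-script recommendation above (index, field, and threshold are placeholders), the script body stays constant across requests and only `params` change, so it is compiled once:

```sh
curl -XGET 'localhost:9200/my_index/_search' -H 'Content-Type: application/json' -d '
{
  "query": {
    "script": {
      "script": {
        "inline": "doc[\"my_field\"].value > params.threshold",
        "lang": "painless",
        "params": { "threshold": 5 }
      }
    }
  }
}'
```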

If more dynamic scripts are sent, a user will get the following
exception:

```json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "circuit_breaking_exception",
        "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
        "bytes_wanted" : 0,
        "bytes_limit" : 0
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "i",
        "node" : "a5V1eXcZRYiIk8lecjZ4Jw",
        "reason" : {
          "type" : "general_script_exception",
          "reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
          "caused_by" : {
            "type" : "circuit_breaking_exception",
            "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
            "bytes_wanted" : 0,
            "bytes_limit" : 0
          }
        }
      }
    ],
    "caused_by" : {
      "type" : "general_script_exception",
      "reason" : "Failed to compile inline script [\"aaaaaaaaaaaaaaaa\"] using lang [painless]",
      "caused_by" : {
        "type" : "circuit_breaking_exception",
        "reason" : "[script] Too many dynamic script compilations within one minute, max: [15/min]; please use on-disk, indexed, or scripts with parameters instead",
        "bytes_wanted" : 0,
        "bytes_limit" : 0
      }
    }
  },
  "status" : 500
}
```

This also fixes a bug in `ScriptService` where requests being executed
concurrently on a single node could cause a script to be compiled
multiple times (many in the case of a powerful node with many shards)
due to no synchronization between checking the cache and compiling the
script. There is now synchronization so that a script being compiled
will only be compiled once regardless of the number of concurrent
searches on a node.

Relates to #19396
2016-08-09 10:26:27 -06:00
Lee Hinman 1623cff6c0 Merge remote-tracking branch 'dakrone/bucket-circuit-breaker' 2016-07-25 13:37:26 -06:00
Lee Hinman 124a9fabe3 Circuit break on aggregation bucket numbers with request breaker
This adds new circuit breaking with the "request" breaker, which now also
breaks based on the number of buckets created during aggregations. It works
by incrementing the request breaker during AggregatorBase creation.

This also bumps the REQUEST breaker limit to 60% of the JVM heap.

The output when circuit breaking an aggregation looks like:

```json
{
  "shard" : 0,
  "index" : "i",
  "node" : "a5AvjUn_TKeTNYl0FyBW2g",
  "reason" : {
    "type" : "exception",
    "reason" : "java.util.concurrent.ExecutionException: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]];",
    "caused_by" : {
      "type" : "execution_exception",
      "reason" : "QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<agg [myagg]>] would be larger than limit of [104857600/100mb]];",
      "caused_by" : {
        "type" : "circuit_breaking_exception",
        "reason" : "[request] Data too large, data for [<agg [otherthings]>] would be larger than limit of [104857600/100mb]",
        "bytes_wanted" : 104860781,
        "bytes_limit" : 104857600
      }
    }
  }
}
```

Relates to #14046
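
The breaker limit itself remains a dynamic cluster setting, so a hypothetical adjustment (the percentage is arbitrary) might look like:

```sh
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "persistent": {
    "indices.breaker.request.limit": "50%"
  }
}'
```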
2016-07-25 11:33:37 -06:00
Colin Goodheart-Smithe b717ad8eb6 Enable option to use request cache for size > 0
Previously, if the size of the search request was greater than zero, we would not cache the request in the request cache.

This change retains the default behaviour of not caching requests with size > 0, but also allows the `request_cache=true` query parameter
to enable the cache for requests with size > 0.
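
A minimal sketch of opting in (index name and query are placeholders):

```sh
# Hypothetical example: explicitly cache a size > 0 search
curl -XGET 'localhost:9200/my_index/_search?request_cache=true' -H 'Content-Type: application/json' -d '
{
  "size": 10,
  "query": { "match_all": {} }
}'
```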
2016-07-18 13:33:59 +01:00
Adrien Grand db9af54ec0 Remove `_timestamp` and `_ttl` on 5.x indices. #18980
This removes the ability to use `_timestamp` and `_ttl` on indices created on
or after 5.0.

Closes #18280
2016-06-22 08:35:54 +02:00
Daniel Mitterdorfer 52b2016447 Limit request size on transport level
With this commit we limit the size of all in-flight requests at the
transport level. The size is guarded by a circuit breaker and is
based on the content size of each request.

By default we use 100% of the available heap, meaning that the parent
circuit breaker will limit the maximum available size. This value
can be changed by adjusting the setting

network.breaker.inflight_requests.limit
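
For illustration, the limit can be tightened dynamically (the value below is arbitrary):

```sh
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "network.breaker.inflight_requests.limit": "75%"
  }
}'
```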

Relates #16011
2016-04-13 09:54:59 +02:00
Adrien Grand 0eb1a816c8 Allow the query cache to be disabled. #16268
This replaces the internal `index.queries.cache.type` setting with
a new `index.queries.cache.enabled` setting, which is documented.
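
A minimal sketch of disabling the query cache for a new index (the index name is a placeholder; the setting is supplied at index creation):

```sh
curl -XPUT 'localhost:9200/my_index' -H 'Content-Type: application/json' -d '
{
  "settings": {
    "index.queries.cache.enabled": false
  }
}'
```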

Closes #15802
2016-04-11 18:06:16 +02:00
Michael McCandless 3744fb9dc0 merge master 2016-01-06 04:03:42 -05:00
Simon Willnauer f5e4cd4616 Remove recovery threadpools and throttle outgoing recoveries on the master
Today we throttle only incoming recoveries. Nodes that have a lot
of primaries can get overloaded due to too many recoveries. To keep that at bay
we limit the number of threads that send files to the target.

The right solution here is to also throttle the outgoing recoveries that are today unbounded on
the master and don't start the recovery until we have enough resources on both source and target nodes.

The concurrency aspects of the recovery source also added a lot of complexity and additional threadpools
that are hard to configure. This commit removes the concurrent streams notion completely and sends files
in the thread that drives the recovery, simplifying the recovery code considerably.
Outgoing recoveries are now throttled on the master via an allocation decider.
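
Assuming the decider is exposed through the usual `cluster.routing.allocation.node_concurrent_outgoing_recoveries` setting (an assumption on my part, not stated in this message), tuning it might look like:

```sh
# Assumed setting name: limit concurrent outgoing recoveries per node
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.node_concurrent_outgoing_recoveries": 4
  }
}'
```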
2015-12-22 14:59:43 +01:00
Michael McCandless 319dc8c8ed remove dead code; get one test working again; fix docs; remove nocommits 2015-12-16 16:19:07 -05:00
xuzha f46e66e7d0 Remove the experimental indices.fielddata.cache.expire
closes #10781
2015-09-01 00:40:04 -07:00
Martijn van Groningen a14913f7b6 Left over from the `query_cache` to `request_cache` rename. 2015-07-27 13:28:15 +02:00
Clinton Gormley 2b512f1f29 Docs: Use "js" instead of "json" and "sh" instead of "shell" for source highlighting 2015-07-14 18:14:09 +02:00
Adrien Grand 38f5cc236a Rename caches.
In order to be more consistent with what they do, the query cache has been
renamed to request cache and the filter cache has been renamed to query
cache.

A known issue is that package/logger names no longer match setting names;
please speak up if you think this is an issue.

Here are the settings for which I kept backward compatibility. Note that they
are a bit different from what was discussed on #11569, but putting `cache` after
the name of what is cached has the benefit of making these settings consistent
with the fielddata cache whose size is configured by
`indices.fielddata.cache.size`:
 * index.cache.query.enable -> index.requests.cache.enable
 * indices.cache.query.size -> indices.requests.cache.size
 * indices.cache.filter.size -> indices.queries.cache.size

Close #11569
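
As a minimal sketch of the per-index rename (the index name is a placeholder; the old spelling is kept only for backward compatibility):

```sh
# Old, backward-compatible spelling
curl -XPUT 'localhost:9200/my_index/_settings' -d '{ "index.cache.query.enable": false }'

# New spelling after this rename
curl -XPUT 'localhost:9200/my_index/_settings' -d '{ "index.requests.cache.enable": false }'
```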
2015-06-29 10:15:27 +02:00
Clinton Gormley f123a53d72 Docs: Refactored modules and index modules sections 2015-06-22 23:49:45 +02:00