This gives better coverage and consistency with the scripting APIs, by
whitelisting the primary search scripting API classes and using them instead
of only Map and List methods.
For example, accessing fields can now be done with `.value` instead of `.0`
because `getValue()` is whitelisted. For now, accesses to a document's fields in
this way (loads) are fast-pathed in the code to avoid dynamic overhead.
Access to geo fields and geo distance functions is now supported.
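As a rough illustration (not taken from this commit; the field name "my_field"
is hypothetical), here is the same doc-values load written with the old
list-style accessor and the new whitelisted getter, held as plain Java strings:

    public class FieldAccessExample {
        public static void main(String[] args) {
            // Old: list/map-style access to the first doc value.
            String before = "doc['my_field'].0";
            // New: getValue() is whitelisted, so .value works.
            String after = "doc['my_field'].value";
            System.out.println(before + " -> " + after);
        }
    }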
TODO: date support (e.g. whitelist ReadableDateTime methods as a start)
TODO: improve docs (like expressions and groovy have for document's fields)
TODO: remove fast-path hack
Closes #18169
Squashed commit of the following:
commit ec9f24b2424891a7429bb4c0a03f9868cba0a213
Author: Robert Muir <rmuir@apache.org>
Date: Thu May 5 17:59:37 2016 -0400
cutover to <Def> instead of <Object> here
commit 9edb1550438acd209733bc36f0d2e0aecf190ecb
Author: Robert Muir <rmuir@apache.org>
Date: Thu May 5 17:03:02 2016 -0400
add fast-path for docvalues field loads
commit f8e38c0932fccc0cfa217516130ad61522e59fe5
Author: Robert Muir <rmuir@apache.org>
Date: Thu May 5 16:47:31 2016 -0400
Painless: add fielddata accessors (.value/.values/.distance()/etc)
o/e/snapshots/Snapshot and o/e/snapshots/SnapshotInfo contain the same
fields and represent the same information. Snapshot was used to
maintain snapshot information in the snapshot repository, while
SnapshotInfo was used to represent the snapshot information as presented
through the REST layer. This removes the Snapshot class and combines
all uses into the SnapshotInfo class.
Closes #18167
Adds infrastructure so `gradle :docs:check` will extract tests from
snippets in the documentation and execute the tests. This is included
in `gradle check` so it should happen on CI and during a normal build.
By default each `// AUTOSENSE` snippet creates a unique REST test. These
tests are executed in a random order and the cluster is wiped between
each one. If multiple snippets chain together into a test, you can annotate
all snippets after the first with `// TEST[continued]` to have their
generated tests joined into one.
Snippets marked as `// TESTRESPONSE` are checked against the response
of the last action.
See docs/README.asciidoc for lots more.
Closes #12583. That issue is about catching bugs in the docs during the build.
This catches *some* bugs in the docs during the build, which is a good start.
Today we softly warn about running with the client VM. However, we
should really refuse to start in production mode if running with the
client VM as the performance of the client VM is too devastating for a
server application. This commit adds an option to jvm.options to ensure
that we are starting with the server VM (on all 32-bit non-Windows
platforms on server-class machines (2+ CPUs, 2+ GB physical RAM) this is
the default and on all 64-bit platforms this is the only option) and
adds a bootstrap check for the client VM.
Relates #18155
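A rough sketch of the shape such a check could take (an assumption, not the
actual Bootstrap code): detect the client VM from the JVM name system property
and refuse to start.

    import java.util.Locale;

    // Sketch only: flags the client VM based on the "java.vm.name" property.
    final class ClientJvmCheckSketch {
        static boolean isClientVm() {
            String vmName = System.getProperty("java.vm.name", "");
            return vmName.toLowerCase(Locale.ROOT).contains("client");
        }

        public static void main(String[] args) {
            if (isClientVm()) {
                throw new IllegalStateException(
                    "JVM is using the client VM but should be using the server VM; "
                    + "configure the server VM in jvm.options");
            }
        }
    }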
This commit introduces a handshake when initiating a light
connection. During this handshake, node information, cluster name, and
version are received from the target node of the connection. This
information can be used to immediately validate that the target node is
a member of the same cluster, and used to set the version on the
stream. This will allow us to extend APIs that are used during initial
cluster recovery without a major version change.
Relates #15971
Adding random shuffling of xContent to InnerHitBuilderTests shows
that the scriptFields are stored in order as a list internally although
they are unordered JSON objects in the query DSL.
This changes the internal representation to a set and updates
serialization accordingly.
Currently we have a lot of methods left in QueryShardContext that
take parsers or BytesReference arguments to do some xContent
parsing on the shard. While this still seems necessary in some cases
(e.g. percolation, phrase suggester), the shard context should only
be concerned with generating lucene queries from QueryBuilders.
This change removes all of the parseX() methods in favour of two
public methods toQuery(QueryBuilder) and toFilter(QueryBuilder) that
either call the query builder's toFilter() or toQuery() method, and moves
all code required for parsing out to the respective callers.
PathTrie has a constructor that allows for an arbitrary separator and
wildcard, but this constructor is unused and internally we always use
'/' as the separator and '*' as the wildcard. There are no tests for the
case where the separator differs from the default separator and
wildcard. This commit removes this constructor and now all instances of
PathTrie have the default separator and wildcard.
This commit removes the method Strings#splitStringToArray and replaces
the call sites with invocations to String#split. There are only two
explanations for the existence of this method. The first is that
String#split is slightly tricky in that it accepts a regular expression
rather than a character to split on. This means that if s is a string,
s.split(".") does not split on the character '.', but rather splits on
the regular expression '.' which splits on every character (of course,
this is easily fixed by invoking s.split("\\.") instead). The second
possible explanation is that (again) String#split accepts a regular
expression. This means that there could be a performance concern
compared to just splitting on a single character. However, it turns out
that String#split has a fast path for the case of splitting on a single
character and microbenchmarks show that String#split has 1.5x--2x the
throughput of Strings#splitStringToArray. There is a slight behavior
difference between Strings#splitStringToArray and String#split: namely,
the former would return an empty array in cases when the input string
was null or empty but String#split will just NPE at the call site on
null and return a one-element array containing the empty string when the
input string is empty. There was only one place relying on this behavior
and the call site has been modified accordingly.
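The regex pitfall and the edge-case behavior called out above can be seen with
plain JDK calls (illustrative only, not code from this commit):

    public class SplitExample {
        public static void main(String[] args) {
            String s = "a.b.c";
            // String#split takes a regex, so '.' must be escaped to split on the literal dot.
            String[] wrong = s.split(".");    // length 0: the regex '.' matches every character
            String[] right = s.split("\\.");  // ["a", "b", "c"]
            System.out.println(wrong.length + " vs " + right.length);

            // Edge-case behavior described above:
            String[] onEmpty = "".split(",");  // one-element array containing the empty string
            System.out.println(onEmpty.length);
            // Calling split on a null String reference would throw a NullPointerException
            // at the call site, unlike the removed Strings#splitStringToArray.
        }
    }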
Folds the helper class for random object generation into the
abstract sort test class. Removes a few references to ESTestCase
that were not needed due to inheriting from it along the way.
The query shard reset() method resets some internal state in the
query shard context, like clearing query names, the filter flag
or named queries. The problem with this method being public is
that it is currently (mis?)used for modifying an existing context
for recursive invocation, but the contexts that have been reset
that way cannot be properly set back to their previous state.
This PR is a step towards removing reset() entirely by first making
it only be used internally in QueryShardContext. In places where
reset() was used we can either create new QueryShardContexts or
modify the existing context because it is discarded afterwards anyway.
Today, the constructor for IngestDocument#FieldPath does a string
concatenation and two object allocations on every field path. This
commit removes these unnecessary operations.
Relates #18108
With this commit we compress HTTP responses provided the client
supports it (as indicated by the HTTP header 'Accept-Encoding').
We're also able to process compressed HTTP requests if needed.
The default compression level is lowered from 6 to 3 as benchmarks
have indicated that this reduces query latency with a negligible
increase in network traffic.
Closes #7309
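For illustration (a sketch using only JDK classes, not part of this commit; the
URL is hypothetical and assumes a node listening on localhost:9200), a client
opts in to compressed responses via the Accept-Encoding header and unwraps the
body when the server answers with gzip:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.zip.GZIPInputStream;

    public class CompressedResponseExample {
        public static void main(String[] args) throws Exception {
            HttpURLConnection conn =
                (HttpURLConnection) new URL("http://localhost:9200/").openConnection();
            // Advertise support for compressed responses.
            conn.setRequestProperty("Accept-Encoding", "gzip");
            InputStream body = "gzip".equals(conn.getContentEncoding())
                ? new GZIPInputStream(conn.getInputStream())
                : conn.getInputStream();
            System.out.println(body.read()); // read a byte to show the stream is usable
            body.close();
        }
    }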
Don't try to compute completion stats on a reader after we already closed it
Conflicts:
core/src/main/java/org/elasticsearch/index/shard/IndexShard.java
This commit removes an unnecessary if statement in Bootstrap#check. The
removed if statement was duplicating the conditionals in the nested if
statements and was merely an artifact of an earlier refactoring.
Today when running in production mode the bootstrap checks are
completely unforgiving. But there are cases where an end-user might not
have the ability to modify some of the system-level settings that cause
the bootstrap checks to trip (e.g., guest settings that are inherited
from a host and can not be modified). This commit adds a setting that
allows system-level bootstrap checks to be ignored for these
end-users. We classify certain bootstrap checks into system-level checks
and only those bootstrap checks will be ignored if this flag is
enabled. All other bootstrap checks are still subject to being enforced
if the user is in production mode. We will still log warnings for these
bootstrap checks because the end-user does still need to be made aware
that they are running in a configuration that is less-than-ideal from a
resiliency perspective.
Relates #18088
This commit removes a racy but unnecessary assertion in the scaling thread
pool idle test. Namely, the main test thread can reach the removed
assertion before the last few threads in the thread pool have completed
their tasks and caused the completed tasks count on the underlying
executor to be updated. But this assertion is unnecessary. The main test
thread already waits on a latch that is only decremented immediately
before a task completes. This ensures that it was in fact the case that
every submitted task was executed.
Closes #18072
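A minimal sketch of the latch hand-off described above (illustrative, not the
test itself; the pool and task counts are made up): the waiting thread cannot
proceed until every task has counted down, so assertions made afterwards cannot
race the last task.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class LatchHandOffExample {
        public static void main(String[] args) throws InterruptedException {
            int tasks = 8; // hypothetical
            CountDownLatch latch = new CountDownLatch(tasks);
            ExecutorService executor = Executors.newFixedThreadPool(2);
            for (int i = 0; i < tasks; i++) {
                // The latch is counted down immediately before each task completes.
                executor.execute(latch::countDown);
            }
            latch.await(); // returns only after every submitted task has run
            executor.shutdown();
        }
    }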
When the terms lookup (mocked in this case) doesn't return any terms, the
query used to rewrite to an empty boolean query. Now it rewrites to a
MatchNoDocsQuery. This changes the test expectation accordingly.
Closes #18071
This commit modifies the EsThreadPoolTestCase#info helper method to
return null when info for the thread pool can not be found. This really
should only happen for the "same" thread pool, and so we also assert
that we only get to a place where there is no info if the thread pool
that info was requested for is in fact the "same" thread pool. Not
returning null here and instead throwing an exception would fail tests
that tried to look up info on the "same" thread pool.
Today we use a sliced lock strategy for acquiring locks to prevent
concurrent updates to the same document. The number of sliced locks is
computed as a linear function of the number of logical
processors. Unfortunately, the probability of a collision against a
sliced lock is prone to the birthday problem and grows faster than
expected. In fact, the mathematics works out such that for a fixed
target probability of collision, the number of lock slices should grow
like the square of the number of logical processors. This is
less-than-ideal, and we can do better anyway. This commit introduces a
strategy for avoiding lock contention within the internal
engine. Ideally, we would only have lock contention if there were
concurrent updates to the same document. We can get close to this ideal
world by associating a lock with the ID of each document. This
association can be held in a concurrent hash map. Now, the JDK
ConcurrentHashMap also uses a sliced lock internally, but it has several
strategies for avoiding taking the locks and these locks are only held
for a very short period of time. This implementation associates a
reference count with the lock that is associated with a document ID and
automatically removes the document ID from the concurrent hash map when
the reference count reaches zero.
Relates #18060
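A compact sketch of the idea (an assumption about the shape of the code, not
the actual engine implementation): a lock per document ID kept in a
ConcurrentHashMap, with a reference count so the map entry is removed as soon
as the last holder releases it.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;

    // Sketch only: per-key locking with reference counting on top of ConcurrentHashMap.
    final class KeyedLockSketch<K> {
        private static final class Entry {
            final ReentrantLock lock = new ReentrantLock();
            int refCount = 1; // only mutated inside compute(), which is atomic per key
        }

        private final ConcurrentHashMap<K, Entry> map = new ConcurrentHashMap<>();

        void acquire(K key) {
            Entry entry = map.compute(key, (k, v) -> {
                if (v == null) {
                    return new Entry();
                }
                v.refCount++; // count holders and waiters so the entry is not dropped early
                return v;
            });
            entry.lock.lock();
        }

        void release(K key) {
            map.get(key).lock.unlock();
            // Drop the entry once nobody holds or waits for this key anymore.
            map.compute(key, (k, v) -> --v.refCount == 0 ? null : v);
        }
    }

Used as acquire(id) / try { ... } finally { release(id) }, only concurrent
updates to the same document ID ever contend on the same lock.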
Fix a limitation that prevented hierarchical inner hits from being defined in the query DSL.
Removed the nested_path, parent_child_type and query options from the inner hits DSL. These options are only set by ES
when parsing the has_child, has_parent and nested queries via their respective query builders.
These options are still used internally: when they are set, a new private copy is created based on the
provided InnerHitBuilder, configured with either nested_path or parent_child_type and the inner query of the query builder
being used.
Closes #11118
Previously, as in other geo-related query parsers, we were using
a combination of two booleans for coerce and ignore_malformed,
which was error prone and not very clear.
Switched to using GeoValidationMethod instead, as we already do
e.g. in GeoBoundingBoxQueryBuilder.
Left support for both coerce and ignore_malformed in the parser
but deprecated the two in favour of the validation method.
Introduced the same deprecation in geo bounding box query builder.
While returning no hits on fields that are not mapped may be fine, it is not
for fields that are mapped but not indexed (`index:false`). We should fail the
query in that case rather than returning no hits.
Switch something from an explicit toString to Strings.toString, which
is the same thing but with more code reuse.
Also renamed a constant to be CONSTANT_CASE.
ObjectParser makes parsing XContent 95% easier. No more nested loops.
No more forgetting to use ParseField. Consistent handling for arrays.
Awesome. But ObjectParser doesn't support building objects whose
constructor arguments are mixed in with the rest of their properties.
Enter ConstructingObjectParser! ConstructingObjectParser queues up
fields until all of the constructor arguments have been parsed and
then sets them on the target object.
Closes #17352
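A hypothetical usage sketch (the Thing class and field names are made up, and
the package locations and the parser's context type parameter have shifted
between versions, so the exact signatures here are an assumption):

    import org.elasticsearch.common.ParseField;
    import org.elasticsearch.common.xcontent.ConstructingObjectParser;

    class Thing {
        final String name;  // constructor argument
        int count;          // ordinary property
        Thing(String name) { this.name = name; }
        void setCount(int count) { this.count = count; }
    }

    class ThingParser {
        static final ConstructingObjectParser<Thing, Void> PARSER =
            new ConstructingObjectParser<>("thing", args -> new Thing((String) args[0]));
        static {
            // "name" is a constructor argument: its value is queued until it has been parsed.
            PARSER.declareString(ConstructingObjectParser.constructorArg(), new ParseField("name"));
            // "count" is a plain property: set on the object after it has been constructed.
            PARSER.declareInt(Thing::setCount, new ParseField("count"));
        }
    }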
* Add isSearchable and isAggregatable (collapsed to true if any of the instances of that field are searchable or aggregatable).
* Accept wildcards in field names.
* Add a section named conflicts for fields with the same name but with incompatible types (instead of throwing an exception).
This commit fixes a test bug in the scaling thread pool idle
test. Namely, a random thread pool is chosen which could have a min pool
size of one or four but the while loop was acting as if the min pool
size was four (this is due to the test having been initially written for
only the generic thread pool).
Additionally, a latch is added between the test thread and the work
tasks to reduce the chance of a race condition between the test thread
and last few tasks.
This commit slightly expands the scaling thread pool configuration test
coverage. In particular, the test testScalingThreadPoolConfiguration is
expanded to include the case when min is equal to size, and the test
testDynamicThreadPoolSize is expanded to include all possible cases when
size is greater than or equal to min.
This commit fixes an index name equality check in RoutingNodes. Namely,
the check was comparing an instance of Index to an instance of
String. Instead, the index name should be obtained from the Index
instance to be compared to the instance of String.
Closes #17982
Previously, we would determine index deletes in the cluster state by
comparing the index metadatas between the current cluster state and the
previous cluster state and decipher which ones were missing (the missing
ones are deleted indices). This led to a situation where a node that
went offline and rejoined the cluster could potentially cause indices
which should have been deleted to be imported as dangling indices, because when a node
rejoins, its previous cluster state does not contain reliable state.
This commit introduces the notion of index tombstones in the cluster
state, where we are explicit about which indices have been deleted.
In the case where the previous cluster state is not useful for index
metadata comparisons, a node now determines which indices are to be
deleted based on these tombstones in the cluster state. There is also
functionality to purge the tombstones once they exceed a certain number.
Closes #17265, closes #16358, closes #17435