The method Boolean#getBoolean is dangerous. It is too easy to mistakenly
invoke this method thinking that it parses a string as a boolean. What
it actually does is fetch the system property with the specified name,
and then apply the usual crappy boolean parsing in the JDK to that
property value with complete leniency (every input parses to either
true or false); that is, this method amounts to invoking
Boolean#parseBoolean(String) on the result of
System#getProperty(String). Boo. This commit bans usage of this method.
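For illustration, a minimal sketch of the trap (the `es.enabled`
property name is made up):

```java
public class GetBooleanPitfall {
    public static void main(String[] args) {
        // Boolean.getBoolean does NOT parse its argument; it reads the
        // system property with that name and leniently parses its value.
        System.setProperty("es.enabled", "true");
        boolean fromProperty = Boolean.getBoolean("es.enabled"); // true

        // The trap: this looks like string parsing, but there is no
        // system property named "true", so it silently yields false.
        boolean surprise = Boolean.getBoolean("true");

        // What callers almost always want instead:
        boolean parsed = Boolean.parseBoolean("true"); // true

        System.out.println(fromProperty + " " + surprise + " " + parsed);
    }
}
```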
Relates #23864
The pattern-analyzer docs contained a snippet that was an expanded
regex that was marked as `[source,js]`. This changes it to
`[source,regex]`.
The htmlstrip-charfilter and pattern-replace-charfilter docs had
examples that were actually a list of tokens but marked `[source,js]`.
This marks them as `[source,text]` so they don't count as unconverted
CONSOLE snippets.
The pattern-replace-charfilter also had a doc whose test was skipped
because of a funny interaction with the test framework. This fixes the
test.
Three more down, eighty-two to go.
Relates to #18160
CONSOLEifies the lang-analyzer docs and replaces the (invalid)
empty `keyword_marker` setups that were on the page with one
that contains the word "example" translated into the appropriate
language.
Relates to #18160
Remote nodes in cross-cluster search can be marked as eligible for
acting as a gateway node via a remote node attribute setting. For
example, if search.remote.node.attr is set to "gateway", only nodes
that have node.attr.gateway set to "true" can be connected to for
cross-cluster search. Unfortunately, there is a bug in the handling of
these attributes due to the use of the dangerous method
Boolean#getBoolean(String), which obtains the system property with the
specified name as a boolean. We are not looking at system properties
here, but node settings. This commit fixes this situation, and adds a
test. A follow-up will ban the use of Boolean#getBoolean.
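The shape of the bug, sketched with hypothetical names (the actual
attribute plumbing lives elsewhere in the remote-cluster code):

```java
import java.util.Map;

public class GatewayAttributeCheck {
    // Buggy: consults a JVM system property named after the attribute
    // value, ignoring the node attribute value entirely.
    static boolean isGatewayBuggy(Map<String, String> nodeAttrs) {
        return Boolean.getBoolean(nodeAttrs.get("gateway"));
    }

    // Fixed: parse the attribute value itself.
    static boolean isGatewayFixed(Map<String, String> nodeAttrs) {
        return Boolean.parseBoolean(nodeAttrs.get("gateway"));
    }

    public static void main(String[] args) {
        Map<String, String> attrs = Map.of("gateway", "true");
        System.out.println(isGatewayBuggy(attrs)); // false (wrong!)
        System.out.println(isGatewayFixed(attrs)); // true
    }
}
```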
Relates #23863
This commit changes the ClusterStatsNodes.NetworkTypes so that it does
not print out empty field names when no Transport or HTTP type is defined:
```
{
  "network_types": {
    ...
    "http_types": {
      "": 2
    }
  }
}
```
is now rendered as:
```
{
  "network_types": {
    ...
    "http_types": {}
  }
}
```
SearchPhaseController is tightly coupled to AtomicArray, which makes
non-dense representations of results very difficult. This commit removes
the coupling and cuts over to Collection rather than List to ensure no
ordering or random-access lookup is implied.
This change introduces a new API called `_field_caps` that allows retrieving the capabilities of specific fields.
Example:
```
GET t,s,v,w/_field_caps?fields=field1,field2
```
... returns:
```
{
  "fields": {
    "field1": {
      "text": {
        "searchable": true,
        "aggregatable": true
      }
    },
    "field2": {
      "keyword": {
        "searchable": false,
        "aggregatable": true,
        "non_searchable_indices": ["t"],
        "indices": ["t", "s"]
      },
      "long": {
        "searchable": true,
        "aggregatable": false,
        "non_aggregatable_indices": ["v"],
        "indices": ["v", "w"]
      }
    }
  }
}
```
In this example `field1` has the same type `text` across the requested indices `t`, `s`, `v`, `w`.
Conversely, `field2` is defined with two conflicting types, `keyword` and `long`.
Note that `_field_caps` does not treat this case as an error but rather returns the list of unique types seen for this field.
Today we have no way to mark an execution as internal. This commit adds
a simple thread context header that allows executing code in a system context.
This allows intercepting code to make better decisions down the road when
it gets to authentication.
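A rough sketch of the idea using a plain ThreadLocal (the real change
rides on the thread context and its headers; the names here are
illustrative):

```java
public final class SystemContext {
    // Stand-in for the thread context header described above.
    private static final ThreadLocal<Boolean> SYSTEM =
        ThreadLocal.withInitial(() -> Boolean.FALSE);

    // Run a block of code marked as internal/system execution.
    public static void executeAsSystem(Runnable task) {
        Boolean previous = SYSTEM.get();
        SYSTEM.set(Boolean.TRUE);
        try {
            task.run();
        } finally {
            SYSTEM.set(previous); // restore the surrounding context
        }
    }

    // Interceptors (e.g., authentication) can branch on this flag.
    public static boolean isSystemContext() {
        return SYSTEM.get();
    }
}
```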
I managed to push the last one without testing it because I'd changed
the way I run tests locally and hadn't picked it up. Ooops. This one
works better.
All the docs for the `exists` query that aren't marked as `CONSOLE`
aren't actually `CONSOLE`-worthy so this marks them as `NOTCONSOLE`.
It also rewrites the text around the `missing` query. Since it was
removed in 5.0 we don't need to talk about it in the 6.0 docs.
Relates to #18160
Turns the top example in each of the geo aggregation docs into a working
example that can be opened in CONSOLE. Subsequent examples can all also
be opened in console and will work after you've run the first example.
All examples are tested as part of the build.
This commit changes the listener passed to sendMessage from a Runnable
to an ActionListener.
This change also removes IOException from the sendMessage signature.
That signature is misleading as it allows implementers to assume an
exception will be thrown in case of failure. That does not happen due
to Netty's async nature.
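Roughly, the signature change looks like this (types simplified; a
byte[] stands in for the real message type to keep the sketch
self-contained):

```java
interface ActionListener<T> {
    void onResponse(T response);
    void onFailure(Exception e);
}

interface Channel {
    // Before (misleading): a checked exception that an async transport
    // like Netty never actually throws from this call.
    // void sendMessage(byte[] message, Runnable listener) throws IOException;

    // After: success and failure are both delivered asynchronously.
    void sendMessage(byte[] message, ActionListener<Void> listener);
}
```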
When executing an update request, the request timeout is not transferred
to the index/delete request executed on behalf of the update
request. This leads to update requests not timing out when they should
(e.g., if not all shards are available when the request specifies
wait_for_active_shards=all with a small timeout). This commit causes the
index/delete requests to honor the update request timeout.
Relates #23825
Today we have the shard target and the target request ID available in SearchPhaseResults.
Yet, the coordinating node maintains a shard index to reference the request/response tuples
internally, and that index is also used in many other classes to reference back from fetch
results to query results etc. Today this shard index is implicitly passed via the index in
AtomicArray, which causes an undesirable dependency on this interface.
This commit moves the shard index into the SearchPhaseResult and removes some dependencies
on AtomicArray. Further removals will follow in the future. The most important refactoring here
is the removal of AtomicArray.Entry which used to be created for every element in the atomic array
to maintain the shard index during result processing. This caused an unnecessary indirection, dependency
and potentially thousands of unnecessary objects in every search phase.
I think this query should not use the hash code provided by
BytesRef#hashCode(). That hash uses StringHelper#GOOD_FAST_HASH_SEED,
which is initialized in a static block to System.currentTimeMillis().
Running this query on different replicas may therefore return
inconsistent results. Using a fixed seed should guarantee that the docs
are sliced consistently across replicas.
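For illustration, slicing on a fixed murmur3 seed instead of the
randomized GOOD_FAST_HASH_SEED might look like this (the seed value is
arbitrary, just not time-dependent):

```java
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.StringHelper;

public class SliceHash {
    // Fixed seed: every replica computes identical slice assignments.
    private static final int SLICE_SEED = 0x3c074a61;

    static int sliceFor(BytesRef term, int numSlices) {
        int hash = StringHelper.murmurhash3_x86_32(term, SLICE_SEED);
        // Floor-mod keeps the result non-negative for negative hashes.
        return Math.floorMod(hash, numSlices);
    }
}
```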
Fixes #23096
Now that we are on gradle 3.3, we can take advantage of a fix that was
made in 2.14 which properly handles disabling transitive dependencies
in pom generation. As it was, we actually ended up generating two
exclusions sections in the generated pom. This is yet another example
of why we need validation on the pom files we generate here, but I
leave that for another day because I still don't know a good way to do
it.
The LoggedExec task does not capture output when info logging is
enabled. This commit changes the upstream check to use Exec directly,
so as not to break when info logging is enabled.
This will use File.toString() for the `git clone` command, which will
automatically be correct for whatever system the build is running on.
closes #23784
We currently have the last minor version of the previous major hardcoded
in tests like rolling upgrade. This change programmatically finds this
version during gradle initialization by parsing versions from Version.java.
This moves `updateReplicaRequest` to `createPrimaryResponse` and separates the
translog updating to be a separate function so that the function purpose is more
easily understood (and testable).
It also separates the logic for `MappingUpdatePerformer` into two functions,
`updateMappingsIfNeeded` and `verifyMappings` so they don't do too much in a
single function. This allows finer-grained error testing for when a mapping
fails to parse or be applied.
Finally, it separates parsing and version validation for
`executeIndexRequestOnReplica` into a separate
method (`prepareIndexOperationOnReplica`) and adds a test for it.
Relates to #23359
This commit clarifies the preference docs regarding the explanation of
how operations are routed by default. In particular, the previous use of
"shard replicas" was confusing as it could imply an operation would only
be routed to replicas by default.
Relates #23794
The translog already occupies 43 bytes on disk when empty. If the
translog generation threshold is below this, the flush thread can get
stuck in an infinite loop repeatedly rolling the generation. This commit
adds a lower bound on the translog generation to avoid this problem,
however we keep the lower bound small for convenience in testing.
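The guard amounts to clamping the configured threshold; a sketch with
hypothetical names:

```java
public class TranslogThreshold {
    // An empty translog already occupies this many bytes on disk, per
    // the numbers above.
    static final long EMPTY_TRANSLOG_SIZE_IN_BYTES = 43;

    // Never let the roll threshold drop below the size of an empty
    // translog, or an empty translog would immediately re-trigger a
    // roll and the flush thread would spin forever.
    static long effectiveThreshold(long configuredBytes) {
        return Math.max(configuredBytes, EMPTY_TRANSLOG_SIZE_IN_BYTES);
    }
}
```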
Relates #23779
This commit adds support for the pattern keyword marker filter from
Lucene. Previously, the keyword marker filter in Elasticsearch
supported specifying a keywords set or a path to a set of keywords.
This commit exposes the regular-expression-based keyword marker filter
available in Lucene, so that any token matching the pattern specified
by the `keywords_pattern` setting is excluded from being stemmed by any
stemming filters.
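At the Lucene level, the exposed behavior is roughly the following
(analysis chain and pattern chosen for illustration):

```java
import java.io.StringReader;
import java.util.regex.Pattern;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.en.PorterStemFilter;
import org.apache.lucene.analysis.miscellaneous.PatternKeywordMarkerFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class PatternKeywordMarkerDemo {
    public static void main(String[] args) throws Exception {
        WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader("running examples"));

        // Tokens matching the pattern are marked as keywords, so the
        // stemmer below leaves them untouched.
        TokenStream stream = new PorterStemFilter(
            new PatternKeywordMarkerFilter(tokenizer,
                Pattern.compile("example.*")));

        CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
        stream.reset();
        while (stream.incrementToken()) {
            System.out.println(term); // "run", then "examples" unstemmed
        }
        stream.end();
        stream.close();
    }
}
```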
Closes #4877
As the query of a search request defaults to match_all,
calling _delete_by_query without an explicit query may
result in deleting all data.
In order to protect users against falling into that
pitfall, this commit adds a check to require the explicit
setting of a query.
Closes #23629
This commit adds the boolean similarity scoring from Lucene to
Elasticsearch. The boolean similarity provides a means to specify that
a field should not be scored with typical full-text ranking algorithms,
but rather just whether the query terms match the document or not.
Boolean similarity scores a query term equal to its query boost only.
Boolean similarity is available as a default similarity option and thus
a field can be specified to have boolean similarity by declaring in its
mapping:
"similarity": "boolean"
Closes #6731