Folds the helper class for random object generation into the
abstract sort test class. Along the way, removes a few references to
ESTestCase that were unnecessary because the test class already inherits from it.
The query shard reset() method resets some internal state in the
query shard context, like clearing query names, the filter flag
or named queries. The problem with this method being public is
that it is currently (mis?)used to modify an existing context
for recursive invocation, but contexts that have been reset
that way cannot be properly restored to their previous state.
This PR is a step towards removing reset() entirely by first making
it only be used internally in QueryShardContext. In places where
reset() was used we can either create a new QueryShardContext or
modify the existing context because it is discarded afterwards anyway.
Today, the constructor for IngestDocument#FieldPath does a string
concatenation and two object allocations on every field path. This
commit removes these unnecessary operations.
Relates #18108
With this commit we compress HTTP responses provided the client
supports it (as indicated by the HTTP header 'Accept-Encoding').
We're also able to process compressed HTTP requests if needed.
The default compression level is lowered from 6 to 3 as benchmarks
have indicated that this reduces query latency with a negligible
increase in network traffic.
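For illustration, a minimal client-side sketch in plain JDK (the endpoint here is hypothetical) of how a caller opts in via the Accept-Encoding header and transparently decodes a compressed response:
```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class CompressedGet {
    public static void main(String[] args) throws Exception {
        // hypothetical local endpoint, for illustration only
        URL url = new URL("http://localhost:9200/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept-Encoding", "gzip"); // advertise support
        InputStream body = conn.getInputStream();
        if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
            body = new GZIPInputStream(body); // the server chose to compress
        }
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(body, StandardCharsets.UTF_8))) {
            reader.lines().forEach(System.out::println);
        }
    }
}
```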
Closes #7309
Don't try to compute completion stats on a reader after we already closed it
This commit removes an unnecessary if statement in Bootstrap#check. The
removed if statement was duplicating the conditionals in the nested if
statements and was merely an artifact of an earlier refactoring.
Today when running in production mode the bootstrap checks are
completely unforgiving. But there are cases where an end-user might not
have the ability to modify some of the system-level settings that cause
the bootstrap checks to trip (e.g., guest settings that are inherited
from a host and can not be modified). This commit adds a setting that
allows system-level bootstrap checks to be ignored for these
end-users. We classify certain bootstrap checks into system-level checks
and only those bootstrap checks will be ignored if this flag is
enabled. All other bootstrap checks are still subject to being enforced
if the user is in production mode. We will still log warnings for these
bootstrap checks because the end-user does still need to be made aware
that they are running in a configuration that is less-than-ideal from a
resiliency perspective.
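A minimal sketch of the classification idea, with illustrative names rather than the exact Elasticsearch API (the real flag and check interfaces may differ):
```java
import java.util.List;
import java.util.logging.Logger;

interface BootstrapCheck {
    boolean check();          // true if the check trips
    String errorMessage();
    boolean isSystemCheck();  // true if the check covers a system-level setting
}

final class BootstrapChecker {
    // ignoreSystemChecks is the hypothetical escape-hatch flag described above
    static void check(List<BootstrapCheck> checks, boolean ignoreSystemChecks, Logger logger) {
        for (BootstrapCheck check : checks) {
            if (check.check()) {
                if (check.isSystemCheck() && ignoreSystemChecks) {
                    logger.warning(check.errorMessage()); // warn, but keep starting
                } else {
                    throw new RuntimeException(check.errorMessage()); // enforce
                }
            }
        }
    }
}
```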
Relates #18088
This commit removes a racy but unnecessary assertion in the scaling thread
pool idle test. Namely, the main test thread can reach the removed
assertion before the last few threads in the thread pool have completed
their tasks and caused the completed tasks count on the underlying
executor to be updated. But this assertion is unnecessary. The main test
thread already waits on a latch that is only decremented immediately
before a task completes. This ensures that it was in fact the case that
every submitted task was executed.
Closes #18072
When the terms lookup (mocked in this case) doesn't return any terms, the
query used to rewrite to an empty boolean query. Now it rewrites to a
MatchNoDocsQuery. This changes the test expectation accordingly.
Closes #18071
This commit modifies the EsThreadPoolTestCase#info helper method to
return null when info for the thread pool can not be found. This really
should only happen for the "same" thread pool, and so we also assert
that we only get to a place where there is no info if the thread pool
that info was requested for is in fact the "same" thread pool. Not
returning null here and instead throwing an exception would fail tests
that tried to look up info on the "same" thread pool.
Today we use a sliced lock strategy for acquiring locks to prevent
concurrent updates to the same document. The number of sliced locks is
computed as a linear function of the number of logical
processors. Unfortunately, the probability of a collision against a
sliced lock is prone to the birthday problem and grows faster than
expected. In fact, the mathematics works out such that for a fixed
target probability of collision, the number of lock slices should grow
like the square of the number of logical processors. This is
less-than-ideal, and we can do better anyway. This commit introduces a
strategy for avoiding lock contention within the internal
engine. Ideally, we would only have lock contention if there were
concurrent updates to the same document. We can get close to this ideal
world by associating a lock with the ID of each document. This
association can be held in a concurrent hash map. Now, the JDK
ConcurrentHashMap also uses a sliced lock internally, but it has several
strategies for avoiding taking the locks and these locks are only held
for a very short period of time. This implementation associates a
reference count with the lock that is associated with a document ID and
automatically removes the document ID from the concurrent hash map when
the reference count reaches zero.
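A hedged sketch of that idea in isolation (simplified relative to the actual internal engine code): one lock per live document ID, held in a ConcurrentHashMap and reference-counted so entries vanish when unused.
```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

final class KeyedLock<K> {
    private final ConcurrentHashMap<K, RefCountedLock> map = new ConcurrentHashMap<>();

    /** Run the action while holding the lock associated with this key. */
    void runUnder(K key, Runnable action) {
        RefCountedLock entry = acquire(key);
        entry.lock.lock();
        try {
            action.run();
        } finally {
            entry.lock.unlock();
            release(key, entry);
        }
    }

    private RefCountedLock acquire(K key) {
        while (true) {
            boolean[] created = new boolean[1];
            RefCountedLock entry = map.computeIfAbsent(key, k -> {
                created[0] = true;
                return new RefCountedLock(); // creator starts with one reference
            });
            if (created[0]) {
                return entry;
            }
            int refs = entry.refs.get();
            // refs == 0 means the last holder is removing this entry; retry
            if (refs > 0 && entry.refs.compareAndSet(refs, refs + 1)) {
                return entry;
            }
        }
    }

    private void release(K key, RefCountedLock entry) {
        if (entry.refs.decrementAndGet() == 0) {
            map.remove(key, entry); // last holder removes the map entry
        }
    }

    private static final class RefCountedLock {
        final ReentrantLock lock = new ReentrantLock();
        final AtomicInteger refs = new AtomicInteger(1);
    }
}
```
Acquirers only join an entry while its count is positive, so a dying entry is never resurrected; the last holder removes the map entry, keeping the map sized to the set of documents under concurrent update.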
Relates #18060
Fixes a limitation that prevented hierarchical inner hits from being defined in the query dsl.
Removed the nested_path, parent_child_type and query options from the inner hits dsl. These options are only set by ES
upon parsing the has_child, has_parent and nested queries, using their respective query builders.
These options are still used internally; when they are set, a new private copy is created based on the
provided InnerHitBuilder, configuring either nested_path or parent_child_type and the inner query of the query builder
being used.
Closes #11118
Previously, like in other geo-related query parsers, we were using
a combination of two booleans for coerce and ignore_malformed,
which was error prone and not very clear.
Switched to using GeoValidationMethod instead, as we already do
e.g. in GeoBoundingBoxQueryBuilder.
Support for both coerce and ignore_malformed is kept in the parser,
but both are deprecated in favour of the validation method.
Introduced the same deprecation in the geo bounding box query builder.
While returning no hits on fields that are not mapped may be fine, it is not
for fields that are mapped but not indexed (`index:false`). We should fail the
query in that case rather than returning no hits.
Switch something from an explicit toString to Strings.toString which
is the same thing but with more code reuse.
Also renamed a constant to be CONSTANT_CASE.
ObjectParser makes parsing XContent 95% easier. No more nested loops.
No more forgetting to use ParseField. Consistent handling for arrays.
Awesome. But ObjectParser doesn't support building objects whose
constructor arguments are mixed in with the rest of their properties.
Enter ConstructingObjectParser! ConstructingObjectParser queues up
fields until all of the constructor arguments have been parsed and
then sets them on the target object.
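Usage ends up looking roughly like this (a sketch against the parser's later, stabilized signatures; the target type here is a made-up POJO):
```java
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ConstructingObjectParser;

import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg;

public class Thing {
    private final String name;
    private int count;

    public Thing(String name) {
        this.name = name;
    }

    public void setCount(int count) {
        this.count = count;
    }

    // builds the Thing once all constructor arguments have been seen
    public static final ConstructingObjectParser<Thing, Void> PARSER =
            new ConstructingObjectParser<>("thing", args -> new Thing((String) args[0]));

    static {
        // constructor arguments may appear anywhere in the JSON object ...
        PARSER.declareString(constructorArg(), new ParseField("name"));
        // ... other fields are queued until the Thing can be constructed
        PARSER.declareInt(Thing::setCount, new ParseField("count"));
    }
}
```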
Closes #17352
* Add isSearchable and isAggregatable (collapsed to true if any of the instances of that field are searchable or aggregatable).
* Accept wildcards in field names.
* Add a section named conflicts for fields with the same name but with incompatible types (instead of throwing an exception).
This commit fixes a test bug in the scaling thread pool idle
test. Namely, a random thread pool is chosen which could have a min pool
size of one or four but the while loop was acting as if the min pool
size was four (this is due to the test having been initially written for
only the generic thread pool).
Additionally, a latch is added between the test thread and the work
tasks to reduce the chance of a race condition between the test thread
and last few tasks.
This commit slightly expands the scaling thread pool configuration test
coverage. In particular, the test testScalingThreadPoolConfiguration is
expanded to include the case when min is equal to size, and the test
testDynamicThreadPoolSize is expanded to include all possible cases when
size is greater than or equal to min.
This commit fixes an index name equality check in RoutingNodes. Namely,
the check was comparing an instance of Index to an instance of
String. Instead, the index name should be obtained from the Index
instance to be compared to the instance of String.
Closes #17982
Previously, we would determine index deletes in the cluster state by
comparing the index metadatas between the current cluster state and the
previous cluster state and decipher which ones were missing (the missing
ones are deleted indices). This led to a situation where a node that
went offline and rejoined the cluster could potentially cause dangling
indices to be imported which should have been deleted, because when a node
rejoins, its previous cluster state does not contain reliable state.
This commit introduces the notion of index tombstones in the cluster
state, where we are explicit about which indices have been deleted.
In the case where the previous cluster state is not useful for index
metadata comparisons, a node now determines which indices are to be
deleted based on these tombstones in the cluster state. There is also
functionality to purge the tombstones after exceeding a certain amount.
Closes #17265
Closes #16358
Closes #17435
This adds information similar to what is from the [shard stores
API](https://www.elastic.co/guide/en/elasticsearch/reference/2.3/indices-shards-stores.html)
to the cluster allocation explanation API (in fact, internally it uses
that API).
This means when you have a decision that otherwise could indicate that a
shard can go somewhere, you now have more information:
```json
{
  "shard" : {
    "index" : "i",
    "index_uuid" : "QzoKda9aQCG_hCaZQ18GEg",
    "id" : 0,
    "primary" : true
  },
  "assigned" : false,
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2016-04-11T20:58:04.088Z"
  },
  "allocation_delay" : "0s",
  "allocation_delay_ms" : 0,
  "remaining_delay" : "0s",
  "remaining_delay_ms" : 0,
  "nodes" : {
    "24Qmw4tdRTuVOtjAdtmr5Q" : {
      "node_name" : "Vampire by Night",
      "node_attributes" : { },
      "final_decision" : "YES",
      "weight" : 7.0,
      "decisions" : [ ],
      "store" : {
        "allocation_id" : "aC6qVWA7TT2pgsalYxxUJQ",
        "store_exception" : "IndexFormatTooOldException[Format version is not supported (resource BufferedChecksumIndexInput(SimpleFSIndexInput(path=\"/home/hinmanm/scratch/elasticsearch-5.0.0-alpha1-SNAPSHOT/data/elasticsearch/nodes/0/indices/QzoKda9aQCG_hCaZQ18GEg/0/index/segments_1\"))): -1906795950 (needs to be between 1071082519 and 1071082519). This version of Lucene only supports indexes created with release 5.0 and later.]",
        "allocation" : "UNUSED"
      }
    }
  }
}
```
The "store" section is the new section, and will include allocation, id,
and the exception if there is one.
Relates to #17372
This commit converts the settings for the ResourceWatcherService to use the new infrastructure and
registers the settings so that they do not cause errors when used.
This commit clarifies an error message that is produced when an attempt
is made to resize the backing queue for a scaling executor. As this
queue is unbounded, resizing the backing queue does not make sense. The
clarification here is to specify that this restriction is because the
executor is a scaling executor.
This commit actually bounds the size of the generic thread pool. The
generic thread pool was of type cached, a thread pool with an unbounded
number of workers and an unbounded work queue. With this commit, the
generic thread pool is now of type scaling. As such, the cached thread
pool type has been removed. By default, the generic thread pool is
constructed with a core pool size of four, a max pool size of 128 and
idle workers can be reaped after a keep-alive time of thirty seconds
expires. The work queue for this thread pool remains unbounded.
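A hedged sketch of the mechanics using plain JDK primitives (the real EsExecutors code differs in details): a stock ThreadPoolExecutor with an unbounded queue never grows past its core size, so a scaling pool needs a queue that declines offers while workers can still be added, plus a rejection handler that falls back to queueing.
```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class ScalingPool {
    // Declines offers while the pool can still add workers; the JDK pool
    // only grows past the core size when the queue rejects work.
    static final class ScalingQueue<E> extends LinkedTransferQueue<E> {
        ThreadPoolExecutor executor; // set after pool construction

        @Override
        public boolean offer(E e) {
            if (tryTransfer(e)) {
                return true;   // handed straight to an idle worker
            }
            if (executor.getPoolSize() < executor.getMaximumPoolSize()) {
                return false;  // forces the pool to spawn a new worker
            }
            return super.offer(e); // at max size: park in the unbounded queue
        }
    }

    public static ThreadPoolExecutor newGeneric() {
        ScalingQueue<Runnable> queue = new ScalingQueue<>();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 128,                 // core 4, max 128
                30, TimeUnit.SECONDS,   // reap idle workers after 30s
                queue,
                // offer() returning false can race with the pool hitting max;
                // fall back to queueing instead of rejecting outright
                (r, executor) -> queue.add(r));
        queue.executor = pool;
        return pool;
    }
}
```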
The list settings parser supports retrieving lists defined in settings that use a key followed by a `.` and a
number (for example `foo.bar.0`). However, the exists method would indicate that the provided settings
do not contain a value for this setting. This change makes it so that the exists method now handles this
format.
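A minimal sketch of the idea over a plain map (helper names are illustrative, not the actual Settings API):
```java
import java.util.Map;

final class SettingsExists {
    // A list setting like foo.bar = [a, b] is flattened to the keys
    // foo.bar.0 and foo.bar.1, so a plain key lookup misses it.
    static boolean exists(Map<String, String> flatSettings, String key) {
        if (flatSettings.containsKey(key)) {
            return true;              // simple value, e.g. "foo.bar"
        }
        return flatSettings.containsKey(key + ".0"); // first list element
    }
}
```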
When a cli throws a USAGE error, it is implied that the user did
something wrong, and probably needs help in understanding the cli
arguments. This change adds help output before the usage error is
printed.
Do this by creating a Client subclass that automatically assigns the
parentTask to all requests that come through it. Code that doesn't want
to set the parentTask can call `unwrap` on the Client to get the inner
client instance that doesn't set the parentTask. Reindex uses this for
its ClearScrollRequest so that the request will run properly after the
reindex request has been canceled.
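A self-contained sketch of the wrapper idea with simplified stand-in types (the real Client and request interfaces are much larger):
```java
// Simplified stand-in types; illustrative only.
interface TaskRequest {
    void setParentTask(String parentTaskId);
}

interface SimpleClient {
    <R extends TaskRequest> void execute(R request);
}

final class ParentTaskAssigningClient implements SimpleClient {
    private final SimpleClient in;      // the wrapped client
    private final String parentTaskId;  // stamped onto every request

    ParentTaskAssigningClient(SimpleClient in, String parentTaskId) {
        this.in = in;
        this.parentTaskId = parentTaskId;
    }

    /** The inner client, for requests (like ClearScroll) that must not inherit the parent. */
    SimpleClient unwrap() {
        return in;
    }

    @Override
    public <R extends TaskRequest> void execute(R request) {
        request.setParentTask(parentTaskId); // assign the parent before delegating
        in.execute(request);
    }
}
```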
Now that the current uses of magical camelCase support have been
deprecated, we can remove these in master (sans remaining issues like
BulkRequest). This change removes camel case support from ParseField,
query types, analysis, and settings lookup.
see #8988
Callers should explicitly handle parents - either using EMPTY_TASK_ID when
a parent isn't possible or piping parents from the TransportRequest when
possible.
This commit modifies InjectorImpl to prevent a thread local leak in
certain web containers. This leak can arise when starting a node client
inside such a web container. The underlying issue is that the
ThreadLocal instance was created via an anonymous class. Such an
anonymous class has an implicit reference back to the InjectorImpl in
which it was created. The solution here is to not use an anonymous class
but instead just create the reference locally and set it on the thread
local.
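A minimal before/after sketch of the leak, assuming a simplified InjectorImpl:
```java
class InjectorImpl {
    // BEFORE (leaks): an anonymous subclass captures InjectorImpl.this,
    // pinning the injector for as long as the container thread lives:
    //
    //     private final ThreadLocal<Object[]> localContext =
    //             new ThreadLocal<Object[]>() {
    //                 @Override
    //                 protected Object[] initialValue() {
    //                     return new Object[1];
    //                 }
    //             };

    // AFTER: a plain ThreadLocal with no hidden enclosing reference
    private final ThreadLocal<Object[]> localContext = new ThreadLocal<>();

    Object[] localContext() {
        Object[] context = localContext.get();
        if (context == null) {
            context = new Object[1];   // create the reference locally ...
            localContext.set(context); // ... and set it on the thread local
        }
        return context;
    }
}
```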
Relates #17921
For some seeds the number of concurrent requests previously defined
in NettyHttpRequestSizeLimitIT was too low to trigger the intended
breaker limit.
With this commit we increase the number of concurrent requests that
are sent to the test cluster thus triggering the limit.
`ip` fields currently fail range queries when either bound is exclusive. This
commit makes ranges also work in the exclusive case to be consistent with other
data types.
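A hedged sketch of how exclusive bounds reduce to the inclusive case the points API supports, assuming a recent Lucene (InetAddressPoint.nextUp/nextDown nudge an address by one):
```java
import java.net.InetAddress;

import org.apache.lucene.document.InetAddressPoint;
import org.apache.lucene.search.Query;

final class IpRanges {
    static Query range(String field,
                       InetAddress lower, boolean includeLower,
                       InetAddress upper, boolean includeUpper) {
        if (!includeLower) {
            lower = InetAddressPoint.nextUp(lower);   // (a, ...] -> [a+1, ...]
        }
        if (!includeUpper) {
            upper = InetAddressPoint.nextDown(upper); // [..., b) -> [..., b-1]
        }
        return InetAddressPoint.newRangeQuery(field, lower, upper);
    }
}
```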
This commit addresses an issue in the calculation of the time to execute
a term vectors request. The underlying issue was due to measuring the
took time by passing the starting wall clock time along with the request
and calculating the total time using the ending wall clock time on the
responding node. The fix is to use a relative time source on a single
node.
Relates #17817
This changes our packaging to be explicit about the permissions of files
and directories in the tar.gz, rpm, and deb packages. This is to protect
against a user having an incorrectly set umask when installing.
Additionally, plugins that are installed now have their permissions set
by the plugin installation so that plugins that may have been packaged
with incorrect permissions are secured.
Resolves #17634
Replace with a constructor that takes StreamInput or a static method.
In one case (ValuesSourceType) we no longer need to serialize the data
at all!
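The resulting pattern looks roughly like this (a sketch with a made-up Foo type):
```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;

public class Foo implements Writeable {
    private final String name;

    public Foo(String name) {
        this.name = name;
    }

    /** Deserialize: read the same fields, in the same order, as writeTo writes. */
    public Foo(StreamInput in) throws IOException {
        this.name = in.readString();
    }

    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeString(name);
    }
}
```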
Relates to #17085
* `rename` processor, renamed `to` to `target_field`
* `date` processor, renamed `match_field` to `field` and renamed `match_formats` to `formats`
* `geoip` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
* `attachment` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
Closes #17835
This adds some new tests to DocumentParserTests to make sure the DocumentParser behaves correctly when dynamically mapping fields, especially that the dynamic setting works when dynamically mapping different field types.
Adds a `fingerprint` token filter which uses Lucene's FingerprintFilter,
and a `fingerprint` analyzer that combines the Fingerprint filter with
lowercasing, stop word removal and asciifolding.
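For illustration, the combination can be sketched directly against a recent Lucene (the class name and stop word handling here are illustrative, not the plugin's actual code):
```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.miscellaneous.FingerprintFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public final class FingerprintLikeAnalyzer extends Analyzer {
    private final CharArraySet stopWords;

    public FingerprintLikeAnalyzer(CharArraySet stopWords) {
        this.stopWords = stopWords;
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tokenizer = new StandardTokenizer();
        TokenStream stream = new LowerCaseFilter(tokenizer);
        stream = new StopFilter(stream, stopWords);
        stream = new ASCIIFoldingFilter(stream);
        // sorts, de-duplicates and concatenates the remaining tokens
        stream = new FingerprintFilter(stream);
        return new TokenStreamComponents(tokenizer, stream);
    }
}
```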
Closes #13325
Don't pass XContentParser to ParseFieldRegistry#lookup
Passing the parser in is not good because we are not parsing anything in the lookup methods; we only need it to retrieve the XContent location so that, in case there is an error, we can emit where we were in the parsing. It is better to pass in the XContentLocation, although calling getTokenLocation needs to create a new object on each call. The workaround of passing the parser in is worse than the original problem.
Cluster health responses have not shown validation errors, which are
retrieved from RoutingTable validations, in any production or testing
instances. The code is unit tested well in this area and any issues are
exposed through the testing infrastructure, so this commit removes
reporting of validation errors in the cluster health response.
Closes #17773
Closes #16979
This is what happens when you pull on the "Remove the PROTOTYPEs from
MovAvgModels" string. This removes MovAvgModelStreams in favor of
readNamedWriteable and MovAvgParserMapper in favor of
ParseFieldRegistry<MovAvgModel.AbstractModelParser>.
Relates to #17085
Boolean fields were not handled in `DocumentParser.createBuilderFromFieldType`.
This also improves the logic a bit by falling back to the default mapping of
the given type instead of hard-coding every case, and throws a better exception
than an NPE if no dynamic mappings could be created.
Closes #17879
The Settings class has an enormous amount of methods with variations of
parameters. This change removes all the methods which take multiple
setting names, which were completely unused.
We plan to change every request so that it can support a parentTaskId.
This shrinks EMPTY_TASK_ID, which will be on every request once that change
is made, from 31 bytes to 9 bytes to 1 byte.
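A hedged sketch of the wire-format trick (simplified from the real TaskId): an empty node id serializes as a single zero-length byte, and the 8-byte numeric id is only written when a node id is present.
```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamOutput;

public class TaskIdSketch {
    private final String nodeId; // "" for the empty task id
    private final long id;

    public TaskIdSketch(String nodeId, long id) {
        this.nodeId = nodeId;
        this.id = id;
    }

    public void writeTo(StreamOutput out) throws IOException {
        out.writeString(nodeId);   // "" costs one length byte
        if (nodeId.isEmpty() == false) {
            out.writeLong(id);     // only real ids carry the 8 bytes
        }
    }
}
```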
This change fixes the lookup, during document parsing, of whether an
object field is dynamic so that it handles looking up through parent object
mappers, and also handles the case where a parent object mapper was created during
document parsing.
Closes #17854
Now there aren't any more specialized methods to read or write
NamedWriteables! Now StreamInput and StreamOutput don't need to know
about large chunks of the application!
In Elasticsearch 5.0.0, by default unquoted field names in JSON will be
rejected. This can cause issues, however, for documents that were
already indexed with unquoted field names. To alleviate this, a system
property has been added that can be enabled so migration can occur.
This system property will be removed in Elasticsearch 6.0.0
Resolves #17674
When I pulled on the thread that is "Remove PROTOTYPEs from
SignificanceHeuristics" I ended up removing SignificanceHeuristicStreams
and replacing it with readNamedWriteable. That seems like a lot at once
but it made sense at the time. And it is what we want in the end, I think.
Anyway, this also converts registration of SignificanceHeuristics to
use ParseFieldRegistry to make them consistent with Queries, Aggregations
and lots of other stuff.
Adds a new and wondrous hack to support serialization checking of
NamedWriteables registered by plugins!
Related to #17085
Extracts all the replication logic that is done on the primary to a separate class called ReplicationOperation. The goal
here is to make unit testing of this logic easier and, in the future, to allow setting up tests that work directly on IndexShards
without the need for networking.
Closes #16492
Removes deprecated registration methods from SearchModule and
NamedWriteableRegistry and removes the "shims" used to migrate
aggregations to the new registration methods.
Relates to #17085
* Added an extra `field` parameter to the `percolator` query to indicate what percolator field should be used. This must be an existing field in the mapping of type `percolator`.
* The `.percolator` type is now forbidden. (just like any type that starts with a `.`)
This only applies to new indices created on 5.0 and later. In indices created on previous versions, the .percolator type is still allowed to exist.
The new `percolator` field type isn't active in such indices, and the `PercolatorQueryCache` knows how to load queries from these legacy indices.
The `PercolatorQueryBuilder` will not enforce that the `field` parameter is of type `percolator`.
This commit refactors the UUID-generating methods out of Strings into
their own class. The primary motive for this refactoring is to avoid a
chain of class initializers from loading this class earlier than
necessary. This was discovered when it was noticed that starting
Elasticsearch without any active network interfaces leads to some
logging statements being executed before logging had been
initialized. Thus:
- these UUID methods have no place being on Strings
- removing them reduces spooky action-at-a-distance loading of this class
- removed the troublesome logging statements from MacAddressProvider;
logging using statically-initialized instances of ESLogger is prone
to this problem
Relates #17837
We have this TransportAddressSerializers that works similarly to
NamedWriteables except it uses shorts instead of strings. I don't know
enough to propose removing it in favor of NamedWriteables so I just ported
it to using Writeable.Reader and left it alone.
Relates to #17085
This test should demonstrate that a single (larger)
request is processed but one of multiple large concurrent requests
is rejected. This test broke too early under some circumstances in
network mode as the limit is quite low.
With this commit we reduce the size of the individual large
requests but issue more concurrent ones thus increasing stability
of this test.
We advertise in our documentation that byte units are like `kb`, `mb`... but we actually only support the simple notation `k` or `m`.
This commit adds support for the documented form and keeps the undocumented options to avoid any breaking change.
It also adds support for `micros`, `nanos` and `d` as time units in the `_cat` API.
Removes support for `b` as a SizeValue unit: for raw numbers without a unit, there is no text to add/parse after the number. For example, you don't write `10` as `10b`. We support options like `size=` in the `_cat` API, which means that we want to display raw data without a unit (singles).
Documentation updated accordingly.
Add test for the empty size option.
Fix missing TimeValues options for some cat APIs