Also renamed HeadersCopyClientTests since it tests a class that was renamed, and removed the randomization around the client wrapper: the client now always needs to be wrapped to copy the context and no longer depends on which useful headers have been registered.
Relates to #7610
Today we throw an error if local transport is configured with BWC tests.
Yet, the BWC tests need the network to be enabled, so the test can just set the
required defaults.
Closes #7660
Previously we sent them in the wrong order, which could cause
validators not to be run for dynamic settings that had been added
to match a particular wildcard.
Fixes #7651
Get rid of readSource() entirely; Operations should be able
to provide the source themselves.
No more TranslogStream headers: you are now required to pass a
StreamInput or StreamOutput for all operations, which means no extra
state is needed and there is no need to construct new versions when
detecting the version.
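A minimal sketch, with made-up names rather than the actual Elasticsearch classes, of the shape this change moves towards: each operation serializes and deserializes itself against the stream it is handed, so the translog stream keeps no per-version state.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Hypothetical interface, for illustration only
interface TranslogOperationSketch {
    void writeTo(DataOutput out) throws IOException; // the operation provides its own source
    void readFrom(DataInput in) throws IOException;  // reads directly from the supplied stream
}
```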
Read and write translog op sizes in TranslogStreams
Previously we handled these integers outside of the translog stream
itself, which was unclean because other code had to know about reading
the size, or sometimes about writing the correct header.
There is some additional code in LocalIndexShardGateway to handle the
legacy case for older translogs, because we need to read and discard the
size in order to maintain compatibility for the streaming
operations (they did not read or write the size for 1.3.x and earlier).
Additionally, we need to handle the case where the header is truncated
when recovering from disk.
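A hypothetical sketch of the legacy handling mentioned above: for 1.3.x-era translogs the per-operation size lives outside the operation itself, so it is read and discarded before the operation deserializes its own fields from the same stream.

```java
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative only; not the actual LocalIndexShardGateway code
final class LegacyTranslogCompatSketch {
    static void skipLegacySizePrefix(DataInputStream in, boolean legacyFormat) throws IOException {
        if (legacyFormat) {
            in.readInt(); // old translogs wrote the size outside the operation; discard it
        }
        // ... the operation then reads its own fields from 'in' as usual
    }
}
```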
Use a NoopStreamOutput instead of byte arrays
Instead of writing translog operations to a temporary byte array and
then writing that byte array to the stream, we now write the operation
twice, once to a no-op stream to get the size, then again to the real
stream.
This trades a little more CPU usage for less memory usage.
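A rough sketch, under assumed names, of the counting trick: a no-op stream that only tracks how many bytes were written, so the operation's size can be computed without buffering its bytes in memory. The operation is written once to this stream to learn its size, then a second time to the real output.

```java
import java.io.OutputStream;

// Hypothetical no-op stream used only to count bytes
final class NoopCountingStream extends OutputStream {
    private long count;

    @Override
    public void write(int b) {
        count++; // discard the byte, remember only that it was written
    }

    @Override
    public void write(byte[] b, int off, int len) {
        count += len;
    }

    long bytesWritten() {
        return count;
    }
}
```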
Today we serialize the snapshot metadata to a byte array and then copy
the byte array to a stream. Instead, this commit moves the serialization
directly to the target stream without the intermediate representation.
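Illustrative only (the method names are invented): the change is from buffering the whole serialized form to streaming it straight into the target output.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

final class SnapshotMetaDataWriteSketch {
    // before: build the full byte array, then copy it to the stream
    static void writeBuffered(OutputStream target) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        serializeTo(buffer);               // intermediate representation held in memory
        target.write(buffer.toByteArray());
    }

    // after: serialize directly to the target stream
    static void writeStreaming(OutputStream target) throws IOException {
        serializeTo(target);
    }

    private static void serializeTo(OutputStream out) throws IOException {
        out.write(new byte[0]); // placeholder for the actual metadata serialization
    }
}
```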
Closes #7637
Before, the heapdump was either written to a file named after the
directory if the heapdump path ended without a /, or
not written at all if the path ended with a /.
closes #7645
We keep around a noop stat indicating how many update operations ended up not updating the document (typically because it didn't change). However, TransportUpdateAction updated that counter only after returning the result. This can throw off stats checks that are done immediately afterwards, potentially causing test failures.
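A minimal sketch with made-up names: the no-op counter is bumped before the result is handed back, so stats read immediately after the update already include it.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Hypothetical example, not the actual TransportUpdateAction
final class NoopStatsSketch {
    private final AtomicLong noopUpdates = new AtomicLong();

    <R> void respondNoop(R response, Consumer<R> listener) {
        noopUpdates.incrementAndGet(); // update the stat first ...
        listener.accept(response);     // ... then return the result to the caller
    }

    long noopCount() {
        return noopUpdates.get();
    }
}
```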
Closes #7639
Today there are two different ways to cleanup search contexts which can
potentially lead to double releasing of a context. This commit unifies
the methods and prevents double closing.
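A hedged sketch (hypothetical class) of the unified cleanup: an atomic flag makes the release idempotent, so two competing code paths cannot free the same context twice.

```java
import java.util.concurrent.atomic.AtomicBoolean;

final class SearchContextSketch implements AutoCloseable {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    @Override
    public void close() {
        if (closed.compareAndSet(false, true)) {
            releaseResources(); // runs at most once, no matter how many callers close()
        }
    }

    private void releaseResources() {
        // free searchers, readers, and other per-request resources here
    }
}
```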
Closes #7625
Updates on the _timestamp field were silently ignored.
Now _timestamp undergoes the same merge as regular
fields. This includes exceptions if a property cannot
be changed.
"path" and "default" cannot be changed.
closes #5772, closes #6958, closes #7614
partially fixes #777
The terms aggregation now supports sorting on multiple criteria by replacing the sort object with an array of sort objects whose order signifies the priority of the sort. The existing syntax for sorting on a single criterion still works.
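A hedged example of the array form described above (the field and sub-aggregation names are made up); each entry in the order array is one sort criterion, highest priority first.

```java
// Request-body shape shown as a plain string for illustration
final class TermsOrderExample {
    static final String TERMS_AGG =
        "{ \"terms\": { \"field\": \"genre\", "
      + "\"order\": [ { \"avg_rating\": \"desc\" }, { \"_term\": \"asc\" } ] } }";
}
```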
Contributes to #6917
Reduce Phases can be expensive and some of them like the aggregations
reduce phase might even execute a one-off call via an internal client
that might cause a deadlock due to execution on the network thread
that is needed to handle the one-off call. This commit dispatches
the reduce phase to the search threadpool to ensure we don't wait
for the current thread to be available.
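A rough sketch of the dispatch, with a plain ExecutorService standing in for the search threadpool: the reduce work is handed off rather than executed on the thread that also services network traffic.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical example, not the actual search phase controller
final class ReduceDispatchSketch {
    private final ExecutorService searchExecutor = Executors.newFixedThreadPool(4);

    void onAllShardResults(Runnable reducePhase) {
        // never run the potentially expensive (or blocking) reduce on the network thread
        searchExecutor.execute(reducePhase);
    }
}
```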
Closes #7623
Lucene's experimental codecs (from the codecs module) do not provide
backwards compatibility and are free to change from release to
release. When they do change, they typically cannot read
older indices, and the resulting exceptions look like index corruption.
So, we are removing built-in support for them to prevent applications
from choosing one and then seeing strange exceptions on upgrade.
Closes #7566, Closes #7604
Enables filtering the actions on both sides - request and response. Also added a base class for filter implementations (cleans up filters that only need to filter one side)
Also refactored the filter & filter chain methods to more intuitive names
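A hedged sketch with invented names: a filter can act on the request side, the response side, or both, and a base class lets implementations override only the side they need.

```java
interface ActionFilterSketch {
    void applyRequest(String action, Object request, Runnable proceed);
    void applyResponse(String action, Object response, Runnable proceed);
}

abstract class RequestOnlyFilterSketch implements ActionFilterSketch {
    @Override
    public void applyResponse(String action, Object response, Runnable proceed) {
        proceed.run(); // the response side just continues down the chain
    }
}
```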
BlobContainer used to provide async APIs which are not used
internally. The implementations of these APIs are also not async
by nature, and neither is any of the pluggable BlobContainers. This
commit simplifies the API to a simple input / output stream and
reduces the hierarchy of BlobContainer dramatically.
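A simplified, hypothetical rendition of the stream-based API described above (not the actual BlobContainer interface):

```java
import java.io.IOException;
import java.io.InputStream;

interface BlobContainerSketch {
    InputStream readBlob(String blobName) throws IOException;                          // read a blob as a stream
    void writeBlob(String blobName, InputStream data, long length) throws IOException; // write from a stream
    void deleteBlob(String blobName) throws IOException;
}
```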
NOTE: This is a breaking change!
Closes #7551
Similar to the one in `TransportMessage`. Added the `ContextHolder` base class, from which both `TransportMessage` and `RestRequest` derive.
Now, in addition to the known headers, the context is always copied over from the rest request to the transport request (when the injected client is used).
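A minimal sketch with hypothetical types: whatever context the caller attached to the incoming rest request is copied onto the outgoing transport request before it is sent.

```java
import java.util.HashMap;
import java.util.Map;

// Illustration only, not the actual ContextHolder class
class ContextHolderSketch {
    private final Map<String, Object> context = new HashMap<>();

    void putInContext(String key, Object value) {
        context.put(key, value);
    }

    void copyContextFrom(ContextHolderSketch other) {
        context.putAll(other.context);
    }
}
```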
This adds the ability to the Term Vector API to generate term vectors for
artificial documents, that is, for documents not present in the index. Following
a syntax similar to the Percolator API, a new 'doc' parameter is used, instead
of '_id', to specify the document of interest. The parameters '_index' and
'_type' determine the mapping, and therefore the analyzers to apply to each field
value.
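A hedged illustration of the request body shape (the field names are made up): the document itself is supplied under 'doc' instead of referencing a stored '_id'.

```java
// Request-body shape shown as a plain string for illustration
final class ArtificialDocTermVectorsExample {
    static final String BODY =
        "{ \"doc\": { "
      + "\"title\": \"A document that is not stored in the index\", "
      + "\"body\": \"Term vectors are generated on the fly for these values\" } }";
}
```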
Closes #7530