On macOS, Finder creates `.DS_Store` files when a user browses a directory.
It is an incredible pain for users that these files are created unbeknownst
to them and then get in the way of Elasticsearch starting. This commit adds
leniency, on macOS only, to skip these files.
Relates #27108
This commit adjusts the format of the SHA-512 checksum files supported
by the plugin installer. In particular, we now require that the SHA-512
checksum file be a single line containing the checksum, followed by two
spaces, followed by the filename. We continue to support the legacy
format for SHA-1.
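For illustration, a minimal sketch of parsing that single-line format (the helper name and strictness here are assumptions, not the installer's actual code):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

final class ChecksumFormat {
    // Parse "<checksum>  <filename>" (checksum, two spaces, filename), the
    // format produced by `shasum -a 512 <file>`.
    static String readExpectedChecksum(Path checksumFile, String expectedName) throws Exception {
        String line = new String(Files.readAllBytes(checksumFile), StandardCharsets.UTF_8).trim();
        String[] fields = line.split("  ", 2);  // checksum, then filename
        if (fields.length != 2 || !fields[1].equals(expectedName)) {
            throw new IllegalArgumentException("invalid checksum file: [" + line + "]");
        }
        return fields[0];
    }
}
```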
Relates #27093
Turns out that `ShardSearchTarget` is nullable, hence its fields may not be printed out as part of `ShardSearchFailure#toXContent`, in which case `fromXContent` cannot parse it back. We would previously try to create the object with all of its fields set to null, but `Index` complains about it in the constructor. Also made sure that this code path is covered by our unit tests in `ShardSearchFailureTests`.
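A hedged sketch of the shape of the fix (the holder type, field names, and accessors here are illustrative, not the actual code):

```java
import java.io.IOException;
import org.elasticsearch.common.xcontent.XContentBuilder;

final class TargetXContent {
    // Hypothetical holder mirroring the nullable target described above.
    record Target(String index, int shardId, String nodeId) {}

    // Serialize the target only when present, so that fromXContent can treat
    // its absence as "no target" instead of building one from all-null fields
    // (which the Index constructor rejects).
    static void targetToXContent(XContentBuilder builder, Target target) throws IOException {
        if (target != null) {
            builder.field("index", target.index());
            builder.field("shard", target.shardId());
            builder.field("node", target.nodeId());
        }
    }
}
```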
Closes #27055
Introduce a minimal thread scheduler as a base class for `ThreadPool`. Such a class can be used from the `BulkProcessor` to schedule retries and the flush task. This allows us to remove the `ThreadPool` dependency from `BulkProcessor`, which required providing settings containing `node.name` and also needed log4j for logging. Instead, it now needs a `Scheduler` that is much lighter and is automatically created and shut down on close.
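A hedged sketch of the kind of minimal scheduler this enables (the interface shape is illustrative, not the exact `Scheduler` API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// A lightweight scheduler for retries and flushes: no settings, no node.name,
// no log4j; created on construction and shut down on close.
final class MinimalScheduler implements AutoCloseable {
    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();

    void schedule(Runnable command, long delay, TimeUnit unit) {
        executor.schedule(command, delay, unit);
    }

    @Override
    public void close() {
        executor.shutdown();  // shut down automatically when the owner closes
    }
}
```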
Closes #26028
It's believed that using diffs obsoletes the other mechanism for reusing the
bits of the ClusterState that didn't change between updates, but in fact we
don't know for sure how often the diff mechanism works successfully. The stats
collected here will tell us.
Right now, if the number of shards for a particular index is equal across the
data paths, we tie-break on space. This change tie-breaks first on the total
number of shards for each path and then, if that is the same, on the usable
bytes.
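A hedged sketch of the resulting ordering (illustrative names, not the actual classes):

```java
import java.util.Comparator;

final class PathRanking {
    // Hypothetical view of a data path for ranking purposes.
    record PathCandidate(long indexShards, long totalShards, long usableBytes) {}

    // Fewest shards of this index first; tie-break on total shards for the
    // path; tie-break again on usable bytes, preferring more free space.
    static final Comparator<PathCandidate> RANKING =
        Comparator.comparingLong(PathCandidate::indexShards)
            .thenComparingLong(PathCandidate::totalShards)
            .thenComparing(
                Comparator.comparingLong(PathCandidate::usableBytes).reversed());
}
```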
Relates to #26654 (it's a follow-up)
If a timed runnable wraps an abstract runnable, then it should delegate to
the abstract runnable; otherwise, force execution and rejection handling
are dropped on the floor. Thus, a timed runnable should itself be an
abstract runnable, delegating all methods to the wrapped runnable when it
is an abstract runnable. This commit makes this the case.
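A simplified sketch of the delegation (assuming ES's `org.elasticsearch.common.util.concurrent.AbstractRunnable` hooks; the real class also does timing bookkeeping):

```java
import org.elasticsearch.common.util.concurrent.AbstractRunnable;

class TimedRunnable extends AbstractRunnable {
    private final Runnable original;

    TimedRunnable(Runnable original) {
        this.original = original;
    }

    @Override
    protected void doRun() throws Exception {
        original.run();  // timing bookkeeping elided
    }

    @Override
    public void onFailure(Exception e) {
        if (original instanceof AbstractRunnable) {
            ((AbstractRunnable) original).onFailure(e);
        }
    }

    @Override
    public void onRejection(Exception e) {
        // Forward rejections instead of dropping them on the floor.
        if (original instanceof AbstractRunnable) {
            ((AbstractRunnable) original).onRejection(e);
        }
    }

    @Override
    public boolean isForceExecution() {
        // Preserve the wrapped task's force-execution semantics.
        return original instanceof AbstractRunnable
            && ((AbstractRunnable) original).isForceExecution();
    }
}
```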
Relates #27095
This commit removes the `ByteBufStreamInput` `readBytesReference` and
`readBytesRef` methods. These methods are zero-copy which means that
they retain a reference to the underlying netty buffer. The problem is
that our `TcpTransport` is not designed to handle zero-copy. The netty
implementation sets the read index past the current message once it has
been deserialized, handled, and most likely dispatched to another
thread. This means that netty is free to release this buffer. So it is
unsafe to retain a reference to it without calling `retain`. And we
cannot call `retain` because we are not currently designed to handle
reference counting past the transport level.
This should not currently impact us as we wrap the `ByteBufStreamInput`
in `NamedWriteableAwareStreamInput` in the `TcpTransport`. This stream
essentially delegates to the underlying stream. However, in the case of
`readBytesReference` and `readBytesRef` it leaves the implementations
to the standard `StreamInput` methods. These methods call the read byte
array method which delegates to `ByteBufStreamInput`. The read byte
array method on `ByteBufStreamInput` copies so it is safe. The only
impact of this commit should be removing methods that could be dangerous
if they were eventually called due to some refactoring.
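To illustrate the distinction (a sketch, not the transport code): the copying read below stays safe after netty recycles the buffer, whereas the zero-copy variant would not:

```java
import io.netty.buffer.ByteBuf;

final class ReadExamples {
    // Safe: copies the bytes out, so nothing references the netty buffer afterwards.
    static byte[] copyingRead(ByteBuf buf, int length) {
        byte[] bytes = new byte[length];
        buf.readBytes(bytes);
        return bytes;
    }

    // Unsafe without retain(): the returned slice shares memory with buf, which
    // the transport may release once the message has been dispatched.
    static ByteBuf zeroCopyRead(ByteBuf buf, int length) {
        return buf.readSlice(length);
    }
}
```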
An upstream Gradle change has broken us starting with version 4.2. This
commit blacklists these versions until we either find a workaround or
the upstream issue is addressed.
Relates #27087
Memory usage of queries can't be properly accounted for, which can be an
issue when large queries are cached, since the actual memory usage will be
much higher than what the cache thinks. This problem is very hard, if not
impossible, to fix, so as a workaround I would like to decrease the maximum
number of cached queries so that this problem is less likely to cause
trouble in practice.
For the record, this problem is more likely to occur in environments that
have small shards or don't give much memory to the JVM.
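If you need to tune this yourself, the maximum is governed, to my knowledge, by the `indices.queries.cache.count` setting (the value below is illustrative, not the new default):

```
# elasticsearch.yml -- illustrative value only, not the new default
indices.queries.cache.count: 1000
```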
Closes #26938
Today we internally accumulate elapsed scroll time in nanoseconds. The
problem here is that this can reasonably overflow. For example, on a
system with scrolls that are open for ten minutes on average, after
sixteen million scrolls the largest value that can be represented by a
long will be exceeded. To address this, we switch to internally
representing scrolls using microseconds. With the same number of
scrolls, this accommodates scrolls that are open for seven days on
average; or, with the same average elapsed time, it accommodates sixteen
billion scrolls, which will never happen (executing one scroll a second
until sixteen billion have executed would not occur until more than
five hundred years had elapsed).
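A quick back-of-the-envelope check of those numbers:

```java
public class ScrollOverflowMath {
    public static void main(String[] args) {
        long tenMinutesNanos = 10L * 60 * 1_000_000_000L;       // 6.0e11 ns per scroll
        System.out.println(Long.MAX_VALUE / tenMinutesNanos);   // ~15.4 million scrolls overflow a long
        long tenMinutesMicros = 10L * 60 * 1_000_000L;          // 6.0e8 us per scroll
        System.out.println(Long.MAX_VALUE / tenMinutesMicros);  // ~15.4 billion scrolls fit in microseconds
        long secondsPerYear = 365L * 24 * 3600;
        System.out.println(16_000_000_000L / secondsPerYear);   // one scroll/second: ~507 years to reach 16 billion
    }
}
```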
Relates #27068
Upgrade to Jackson 2.9.2 and also use a boolean `closed` flag to
indicate that a FastStringReader instance is closed, so that length
is still correctly reported after the reader is closed.
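A hedged sketch of the closed-flag idea (simplified; not the actual FastStringReader):

```java
import java.io.Reader;

// Track closure with a boolean instead of nulling out the underlying string,
// so length-style accessors keep working after close().
final class ClosableStringReader extends Reader {
    private final String str;
    private int pos;
    private boolean closed;

    ClosableStringReader(String str) {
        this.str = str;
    }

    int length() {
        return str.length();  // still valid after close()
    }

    @Override
    public int read(char[] cbuf, int off, int len) {
        if (closed) {
            throw new IllegalStateException("reader is closed");
        }
        if (pos >= str.length()) {
            return -1;
        }
        int n = Math.min(len, str.length() - pos);
        str.getChars(pos, pos + n, cbuf, off);
        pos += n;
        return n;
    }

    @Override
    public void close() {
        closed = true;  // do not discard str
    }
}
```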
Due to a change made via #26102 to make the nested source consistent
with or without source filtering, the _source of a nested inner hit was
always wrapped in the parent path. This turned out to be not ideal for
users relying on the nested source, as it would require additional parsing
on the client side. This change fixes this: the _source of nested inner hits
is now no longer wrapped in parent json objects, regardless of whether
the _source is included as-is or source filtering is used.
Internally, source filtering and highlighting rely on the fact that the
_source of nested inner hits is accessible by its full field path, so
in order to not break this, the conversion of the _source into its binary
form is performed in FetchSourceSubPhase, after any potential source filtering
is performed, to make sure the structure of the _source of the nested inner hit
is consistent regardless of whether source filtering is performed.
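For illustration, assuming a hypothetical nested field named `comments`, an inner hit's _source was previously wrapped in the parent path:

```
"_source": { "comments": { "message": "some comment" } }
```

and is now returned as-is:

```
"_source": { "message": "some comment" }
```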
Closes #26944
* Balance shards for an index more evenly across multiple data paths
When a node has multiple data paths configured and is assigned all of the
shards for a particular index, it's currently possible that all shards will be
assigned to the same path (see #16763).
This change keeps the same behavior around determining the "best" path for a
shard based on space; however, it enforces a limit on the number of shards on a
path for an index, from the single-node perspective. For example:
Assume you had a node with 4 data paths, where `/path1` has a tremendously high
amount of disk space available compared to the other paths. If you create an
index with 5 primary shards, the previous behavior would be to assign all 5
shards to `/path1`.
This change would enforce a limit of 2 shards to each data path for that
particular node, so you would end up with the following distribution:
- `/path1` - 2 shards (because it has the most usable space)
- `/path2` - 1 shard
- `/path3` - 1 shard
- `/path4` - 1 shard
Note, however, that this limit is only enforced at the local node level for
simplicity in implementation, so if you had multiple nodes, the "limit" for the
node is still 2; assuming you had enough nodes that there were only 2 shards
for this index assigned to this node, they would still both be assigned to
`/path1`.
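Back-of-the-envelope for the example above (assuming the per-path limit is roughly the ceiling of shards-on-node over the number of paths; the actual computation may differ):

```java
int shardsOnNode = 5;
int dataPaths = 4;
int perPathLimit = (shardsOnNode + dataPaths - 1) / dataPaths;  // ceil(5/4) = 2
```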
* Switch from ObjectLongHashMap to regular HashMap
* Remove unneeded Files.isDirectory check
* Skip iterating directories when not necessary
* Add message to assert
* Implement different (better) ranking for node paths
This is the method we discussed
* Remove unused pathHasEnoughSpace method
* Use findFirst instead of .get(0);
* Update for master merge to fix compilation
Settings.putArray -> Settings.putList
Today when getting ready to enter seccomp, we do some probes to ensure
that we are really talking to seccomp, etc. One of these probes is pure
paranoia. The paranoia was driven by a kernel bug
(https://lkml.org/lkml/2014/7/20/222) that only impacted 32-bit x86
kernels wherein invoking a non-existent syscall was not returning ENOSYS
(as it should). This probe causes problems, though; for example, in
containers with syscall filters, invoking a non-existent syscall will
lead to the process being sent SIGSYS and terminated. We do not need
this paranoia: we do not support 32-bit, and our other probes give us
enough of a defense to ensure that we are talking to seccomp (and we
hardcode the seccomp syscall number for platforms that we
support). Given that this probe offers us little value, but does cause
problems in valid use-cases, this commit removes this paranoia.
Relates #27016
The internal engine constructor declares a checked engine exception yet
this constructor does not actually throw this exception. This commit
removes this declaration from the internal engine constructor.
Relates #27022
Today all these API calls have a side effect of making documents visible
to search requests. While this is sometimes desired, it's an unnecessary
side effect, and now that we have an internal (engine-private) index reader
(#26972) we artificially add a refresh call for bwc. This change removes
this side effect in 7.0.
Right now we are attempting to set SO_LINGER to 0 on server channels
when we are stopping the tcp transport. This is not a supported socket
option and throws an exception. This also prevents the channels from
being closed.
This commit (1) doesn't set SO_LINGER for server channels, (2) checks
that it is a supported option in nio, and (3) changes the log message
to a warn for server channel close exceptions.
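A hedged sketch of the nio-level guard described in (2), using the standard java.nio API (illustrative, not the actual transport code):

```java
import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.channels.NetworkChannel;

final class SocketOptions {
    // Only set SO_LINGER when the channel supports it; server socket channels
    // do not, and setting it there throws.
    static void maybeSetSoLinger(NetworkChannel channel, int seconds) throws IOException {
        if (channel.supportedOptions().contains(StandardSocketOptions.SO_LINGER)) {
            channel.setOption(StandardSocketOptions.SO_LINGER, seconds);
        }
    }
}
```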
This commit adds a note to the docs on the full_id parameter in the cat
nodes API. This is a useful parameter but was not previously documented
anywhere.
Relates #27009
This commit reformats a paragraph in the template docs to fit in 80
columns, matching the rest of the doc and a standard that we loosely
adhere to.
This commit clarifies the interaction between settings specified in a
create index request, and those that would come from any templates that
apply to the create index request.
Relates #26994
Today we only allow decoding byte arrays where the data has a 0 offset
and the same length as the array. Allowing decoding from a slice will
make decoding IDs cheaper if the ID is, for instance, coming from a term
dictionary or a BytesRef.
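A hedged sketch of the shape of such an API (the decoding itself is stubbed out; names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import org.apache.lucene.util.BytesRef;

final class IdDecoding {
    // Accept an (offset, length) slice so a caller holding a BytesRef can
    // decode without first copying into a zero-offset, exact-length array.
    static String decodeId(byte[] bytes, int offset, int length) {
        return new String(bytes, offset, length, StandardCharsets.ISO_8859_1);
    }

    static String decodeId(BytesRef ref) {
        return decodeId(ref.bytes, ref.offset, ref.length);  // no copy needed
    }
}
```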
Relates to #26931
Today, when ES detects it's using too much heap vs the configured indexing
buffer (default 10% of JVM heap), it opens a new searcher to force Lucene to
move the bytes to disk, clear the version map, etc.
But this has the unexpected side effect of making newly indexed/deleted
documents visible to future searches, which is not nice for users who are trying
to prevent that, e.g. #3593.
This is also an indirect spinoff from #26802 where we potentially pay a big
price on rebuilding caches etc. when updates / realtime-get are used. We are
refreshing the internal reader for realtime gets, which causes, for instance,
global ords to be rebuilt. I think we can gain quite a bit if we used a reader
that is only used for GETs and not for searches etc.; that way we can also
solve problems of searchers being refreshed unexpectedly, aside from replica
recovery / relocation.
Closes #15768
Closes #26912
Previous to this change, the weights for the filter and filters aggregations were created in the `Filter(s)AggregatorFactory`, which meant that they were created regardless of whether the aggregator actually collects any documents. This meant that for filters that are expensive to initialise, requests would not be quick even when the query of the request was (or effectively was) a `match_none` query.
This change maintains a single Weight instance for each filter across parent buckets but passes a weight supplier to the aggregator instances, which will create the weight on the first call and then return that instance for subsequent calls.
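A hedged sketch of the supplier pattern described here, in generic form (the real code builds a Lucene `Weight`):

```java
import java.util.function.Supplier;

// Memoizing supplier: the expensive value is built on the first get() and
// reused afterwards, so aggregators that never collect documents never pay
// the construction cost.
final class LazySupplier<T> implements Supplier<T> {
    private final Supplier<T> builder;
    private volatile T value;

    LazySupplier(Supplier<T> builder) {
        this.builder = builder;
    }

    @Override
    public T get() {
        T v = value;
        if (v == null) {
            synchronized (this) {
                v = value;
                if (v == null) {
                    value = v = builder.get();  // first call creates the weight
                }
            }
        }
        return v;
    }
}
```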