Today we read a vint from the stream and allocate an array of that size up-front
before we start reading the values. This can be dangerous if, for instance, we read
from a corrupted stream, or if manipulated bytes are sent by an attacker or a
fuzzer. In most cases we can apply a best-effort check and validate that the array
size is _sane_ by ensuring we can read at least N bytes, where N is the expected
size of the array.
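As a minimal sketch of that best-effort check (using `DataInputStream` as a stand-in for the real stream abstraction, so none of these names are the actual API):

```java
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative only: validate a length read from the wire before allocating,
// assuming every element needs at least one byte on the stream. The real check
// depends on the stream knowing how many bytes remain.
final class SafeArrayRead {
    static long[] readLongArray(DataInputStream in) throws IOException {
        int size = in.readInt(); // stand-in for reading a vint
        if (size < 0 || size > in.available()) {
            throw new IOException("array size [" + size + "] is negative or exceeds remaining bytes [" + in.available() + "]");
        }
        long[] values = new long[size];
        for (int i = 0; i < size; i++) {
            values[i] = in.readLong();
        }
        return values;
    }
}
```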
We kept `netty_3` as a fallback in the 5.x series, but now that master
is 6.0 we no longer need it; in other words, all issues coming up with
Netty 4 will be blockers for 6.0.
At one point in the past, when moving the REST tests out of core into
their own subproject, we had multiple test classes which evenly split up
the tests to run. However, we simplified this and went back to a single
test runner to get better reproducibility in tests. This change
removes the remnants of that multiplexing support.
Previously, Elasticsearch would only use the package name for logging
levels, truncating the package prefix and the class name. This meant
that logger names for Netty were just prefixed by netty3 and netty. We
changed this for Elasticsearch so that it is now the fully-qualified
class name, but we never corrected this for Netty. This commit fixes the
logger names for the Netty modules so that their levels are controlled
by the fully-qualified class name.
Relates #21223
This commit fixes responses to HEAD requests so that the value of the
Content-Length header is correct per the HTTP spec. Namely, the value of
this header should equal the Content-Length the response would have had
if the request were not a HEAD request.
This commit also fixes a memory leak on HEAD requests to the main
action: the bytes on a builder were never released because they were
dropped on the floor so that the response to the main action would not
have a body.
Relates #21123
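A minimal sketch of the idea in plain Netty terms (not the actual Elasticsearch handler): compute the Content-Length from the body the equivalent GET would produce, then release that body instead of dropping it on the floor.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

final class HeadResponses {
    // `body` is the content the corresponding non-HEAD request would have returned.
    static FullHttpResponse headResponse(ByteBuf body) {
        FullHttpResponse response =
            new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK, Unpooled.EMPTY_BUFFER);
        // Content-Length reflects the body a non-HEAD request would carry...
        HttpUtil.setContentLength(response, body.readableBytes());
        // ...but the body itself is never written, so release it rather than leak it.
        body.release();
        return response;
    }
}
```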
This commit upgrades the transport-netty4 module dependency from Netty
version 4.1.5 to version 4.1.6. This is a bug fix release of Netty.
Relates #21051
This commit fixes an issue with the handling of the value "keep-alive"
on the Connection header in the Netty 4 HTTP implementation while
handling an HTTP 1.0 request. The issue was using the wrong equals
method to compare an AsciiString instance and a String instance (they
could never be equal). This commit fixes this to use the correct equals
method to compare for content equality.
This commit fixes an issue with the handling of the value "close" on the
Connection header in the Netty 4 HTTP implementation. The issue was
using the wrong equals method to compare an AsciiString instance and a
String instance (they could never be equal). This commit fixes this to
use the correct equals method to compare for content equality.
Relates #20956
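For illustration, the difference boils down to something like this sketch (not the actual Elasticsearch code; `isKeepAlive` is a hypothetical helper):

```java
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpRequest;

final class ConnectionHeader {
    static boolean isKeepAlive(HttpRequest request) {
        String value = request.headers().get(HttpHeaderNames.CONNECTION);
        // Broken: HttpHeaderValues.KEEP_ALIVE is an AsciiString, so equals(String) is always false.
        // boolean keepAlive = HttpHeaderValues.KEEP_ALIVE.equals(value);
        // Fixed: compare by content instead of by type.
        return value != null && HttpHeaderValues.KEEP_ALIVE.contentEquals(value);
    }
}
```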
Today Elasticsearch limits the number of processors used in computing
thread counts to 32. This was from a time when Elasticsearch created
more threads than it does now and users would run into out of memory
errors. It appears the real cause of these out of memory errors was not
well understood (it's often due to ulimit settings) and so users were
left hitting these out of memory errors on boxes with high core
counts. Today Elasticsearch creates fewer threads (but still a lot) and
we have a bootstrap check in place to ensure that the relevant ulimit is
not too low.
There are still some caveats to having too many concurrent indexing
threads: it can lead to too many little segments, and it's not a
magical go-faster knob if indexing is already bottlenecked by the disk.
But this limitation is artificial and surprising to users, and so it
should be removed.
This commit also increases the lower bound of the max processes ulimit,
to prepare for a world where Elasticsearch instances might be running
with more than the previous cap of 32 processors. With the current settings,
Elasticsearch wants to create roughly 576 + 25 * p / 2 threads, where p
is the number of processors. Add in roughly 7 * p / 8 threads for the GC
threads and a fudge factor, and 4096 should cover us pretty well up to
256 cores.
Relates #20874
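A quick sanity check of those numbers at the upper bound (illustrative arithmetic only, not code from the commit):

```java
public final class ThreadEstimate {
    public static void main(String[] args) {
        final int p = 256;                       // processors
        final int esThreads = 576 + 25 * p / 2;  // 3776 threads for Elasticsearch itself
        final int gcThreads = 7 * p / 8;         // 224 threads for GC plus the fudge factor
        // 4000 in total, which stays under a max-processes ulimit of 4096.
        System.out.println(esThreads + gcThreads);
    }
}
```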
Both the netty3 and netty4 HTTP implementations printed the default
toString representation of PortRange if ports couldn't be bound.
This commit adds a better default toString method to PortRange and
uses that string representation for the error message in the HTTP
implementations.
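A hypothetical sketch of such a toString (the real PortRange is built from a port-range string, and its fields and output format may differ):

```java
// Hypothetical sketch only; not the actual PortRange class.
final class PortRange {
    private final int from;
    private final int to;

    PortRange(int from, int to) {
        this.from = from;
        this.to = to;
    }

    @Override
    public String toString() {
        // e.g. "9300-9400" instead of the default "PortRange@1b6d3586"
        return from == to ? Integer.toString(from) : from + "-" + to;
    }
}
```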
Today we throw an assertion error if we release an AbstractArray more than once.
Yet, it's recommended to implement close methods such that they can be invoked
more than once. Guaranteeing single release calls is hard, and some situations
might not be tested, causing for instance the `CircuitBreaker` to operate on
corrupted memory stats.
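A minimal sketch of an idempotent release, assuming the bytes are given back through a breaker-style callback (not the actual AbstractArray code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

final class ReleasableBytes implements AutoCloseable {
    private final AtomicBoolean released = new AtomicBoolean(false);
    private final Runnable releaseAccounting; // e.g. gives the accounted bytes back to the breaker

    ReleasableBytes(Runnable releaseAccounting) {
        this.releaseAccounting = releaseAccounting;
    }

    @Override
    public void close() {
        // Only the first close releases the accounted bytes; any further close is a
        // no-op, so double-releasing can no longer corrupt the breaker's memory stats.
        if (released.compareAndSet(false, true)) {
            releaseAccounting.run();
        }
    }
}
```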
UpdateHelper, MetaDataIndexUpgradeService, and some recovery
stuff.
Move ClusterSettings to nullable ctor parameter of TransportService
so it isn't forgotten.
This change proposes the removal of all non-TCP transport implementations. The
mock transport can be used by default to run tests instead of the local transport;
it has roughly the same performance as TCP, or at least is not noticeably slower.
This is a master-only change; the deprecation notice in 5.x will be committed as a
separate change.
This commit removes `ByteSizeValue`'s methods that are duplicated (e.g. `mbFrac()` and `getMbFrac()`) in order to only keep the `getN` form.
It also renames `mb()` -> `getMb()` and `kb()` -> `getKB()` in order to be more consistent with the `ByteSizeUnit` method names.
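Assumed usage after the change, with the constructor and getter names taken from the description above (a sketch, not a snippet from the commit):

```java
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;

final class ByteSizeValueUsage {
    public static void main(String[] args) {
        ByteSizeValue size = new ByteSizeValue(2, ByteSizeUnit.GB);
        long mb = size.getMb();           // previously also reachable as mb()
        double mbFrac = size.getMbFrac(); // the duplicated mbFrac() variant is gone
        System.out.println(mb + " " + mbFrac);
    }
}
```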
This change removes all guice interaction from Transport, HttpServerTransport,
HttpServer and TransportService. All these classes, as well as their subclasses
or extended versions configured via plugins, are now created using plain old
bloody java constructors. YAY!
TransportService is such a central part of the core server that replacing
its implementation is risky and can cause serious issues. This change removes the ability to
plug in TransportService but allows registering a TransportInterceptor that enables
plugins to intercept requests on both the sender and the receiver ends. This is commonly used
and overridden functionality, but this approach encapsulates the custom code in a contained manner.
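The shape of the idea, sketched with hypothetical interfaces (these are not the actual TransportInterceptor signatures; all names here are illustrative):

```java
// Hypothetical illustration of intercepting instead of replacing: a plugin wraps
// the sender and the handler, while the transport service implementation stays fixed.
interface RequestSender {
    void send(String action, byte[] request);
}

interface RequestHandler {
    void handle(String action, byte[] request);
}

interface Interceptor {
    // Wrap the outgoing side, e.g. to attach headers or record metrics.
    default RequestSender interceptSender(RequestSender next) {
        return next;
    }

    // Wrap the incoming side, e.g. to authorize or trace a request.
    default RequestHandler interceptHandler(RequestHandler next) {
        return next;
    }
}
```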
This commit modifies the call sites that allocate a parameterized
message to use a supplier so that allocations are avoided unless the log
level is fine enough to emit the corresponding log message.
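The pattern looks roughly like this (a hedged sketch using the Log4j 2 supplier overload; the cast just disambiguates between the supplier overloads):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.ParameterizedMessage;
import org.apache.logging.log4j.util.Supplier;

final class LazyLogging {
    private static final Logger logger = LogManager.getLogger(LazyLogging.class);

    static void onRefreshFailure(String index, Exception e) {
        // Eager form: the ParameterizedMessage is allocated even if debug is disabled.
        // logger.debug(new ParameterizedMessage("failed to refresh [{}]", index), e);
        // Supplier form: nothing is allocated unless debug logging is enabled.
        logger.debug((Supplier<?>) () -> new ParameterizedMessage("failed to refresh [{}]", index), e);
    }
}
```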
This commit upgrades the Netty dependencies from version 4.1.4 to
version 4.1.5. This upgrade brings several bug fixes including the
removal of an obnoxious and scary-looking log message when unsafe is
explicitly disabled.
Relates #20222
The Netty3/4 TcpTransport implementations create thread factories with an "http_server" thread prefix, whereas the prefix should be "transport_server", leaving the "http_server" prefix to the HttpServerTransport implementations.
The network types in use on a cluster can be useful information to have,
so this commit adds aggregate metrics for the network types in use in a
cluster to the cluster stats.
Relates #20144
The Netty 4 HTTP server pipeline tests contain two different test
cases. The general idea behind these tests is to submit some requests to
a Netty 4 HTTP server, one test with pipelining enabled and another test
with pipelining disabled. These requests are submitted to two endpoints,
one with a path like /{id} and another with a path like /slow with a
query string parameter sleep. This parameter tells the request handler
how long to sleep for before replying. The idea is that in the case of
the pipelining-enabled tests, the requests should come back exactly in
the order submitted, even with some of the requests hitting the slow
endpoint with random sleep durations; this is the guarantee that
pipelining provides. And in the case of the pipelining-disabled tests,
requests were randomly submitted to /{id} and /slow with sleep
parameters starting at 600ms and increasing by 100ms for each slow
request constructed. We would expect all the responses to the /{id}
requests to come back first, because these requests execute
instantaneously, and then the responses to the /slow requests. Further,
it was expected that the slow requests would come back ordered by the
length of the sleep, the thinking being that 100ms should be enough of a
difference between each request that we would avoid any race
conditions. Sadly, this is not the case: the threads do sometimes hit
race conditions.
This commit modifies the HTTP server pipelining tests to address this
race condition. The modification is that the query string parameter on
the /slow endpoint is removed in favor of just submitting requests to
the path /slow/{id}, where id is just used as a marker to distinguish each
request. The server chooses a random sleep of at least 500ms for each
request on the slow path. The assertion here then is that the /{id}
responses arrive first, and then the /slow responses. We cannot make an
assertion on the order of the responses, but we can assert that we did
see every expected response.
Relates #19845
Due to a misordering of the HTTP handlers, the Netty 4 HTTP server
mishandles Expect: 100-continue headers from clients. This commit fixes
this issue by ordering the handlers correctly.
Relates #19904
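As a generic illustration of why the ordering matters (this is plain Netty, not the Elasticsearch pipeline): the HttpObjectAggregator is what answers Expect: 100-continue, so it has to sit behind the codec that can encode its interim response.

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;

final class HttpPipelineInit extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // The codec must come first so the aggregator's 100 Continue response can be encoded.
        ch.pipeline().addLast("codec", new HttpServerCodec());
        ch.pipeline().addLast("aggregator", new HttpObjectAggregator(1024 * 1024));
        // application handlers would follow here
    }
}
```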
Today when we load the Netty plugins, we indirectly cause several Netty
classes to initialize. This is because we attempt to load some classes
by name, and loading these classes is done in a way that triggers a long
chain of class initializers within Netty. We should not do this: it
can lead to log messages before the logger is loaded, and it leads to
initialization in cases when the classes would never be needed (for
example, Netty 3 class initialization is never needed if Netty 4 is
used, and vice versa). This commit avoids this early initialization of
these classes by removing the need for the early loading.
Relates #19819
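For reference, the general technique for probing a class by name without running its static initializers looks like this; the commit goes further and drops the early loading entirely, so this is only an illustration:

```java
// Passing initialize=false means the class is located but its static
// initializers are not run until the class is actually used.
final class ClassProbe {
    static boolean isAvailable(String className) {
        try {
            Class.forName(className, false, ClassProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```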
* master:
Fix REST test documentation
[Test] move methods from bwc test to test package for use in plugins (#19738)
package-info.java should be in src/main only.
Split regular histograms from date histograms. #19551
Tighten up concurrent store metadata listing and engine writes (#19684)
Plugins: Make NamedWriteableRegistry immutable and add extension point for named writeables
Add documentation for the 'elasticsearch-translog' tool
[TEST] Increase time waiting for all shards to move off/on to a node
Fixes the active shard count check in the case of (#19760)
Fixes cat tasks operation in detailed mode
ignore some docker craziness in seccomp environment checks
Currently, any code that wants to add NamedWriteables to the
NamedWriteableRegistry can do so via guice injection of the registry
and registration at construction time. However, this makes the registry
complex: it has both get and register methods synchronized, and there is
likely contention on the read side from multiple threads. The
registration has mostly already been contained to guice modules at node
construction time.
This change makes the registry immutable, taking all of the
NamedWriteable readers at construction time. It also allows plugins to
add arbitrary named writeables that they may use in their own transport
actions.
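A generic sketch of that design (not the actual NamedWriteableRegistry code): everything is handed over at construction and the backing map is never mutated, so reads need no synchronization.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

final class ImmutableReaderRegistry<R> {
    private final Map<String, R> readers;

    ImmutableReaderRegistry(Map<String, R> entries) {
        // Defensive copy at construction time; there is no register() method afterwards.
        this.readers = Collections.unmodifiableMap(new HashMap<>(entries));
    }

    R getReader(String name) {
        R reader = readers.get(name);
        if (reader == null) {
            throw new IllegalArgumentException("unknown named writeable [" + name + "]");
        }
        return reader;
    }
}
```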
In an effort to reduce the number of tiny packages we have in the
code base this moves all the files that were in subdirectories of
`org.elasticsearch.rest.action.admin.cluster` into
`org.elasticsearch.rest.action.admin.cluster`.
Also fixes line length in these packages.
This makes it obvious that these tests are for running the client yaml
suites. Now that there are other ways of running tests using the REST
client against a running cluster we can't go on calling the shared
client yaml tests "REST tests". They are rest tests, but they aren't
**the** rest tests.
Recently, we have experienced timeouts on our Windows build slaves for
Netty4RestIT. Until we have figured out what's going on, we are
temporarily increasing this test suite's timeout to ensure this
timeout does not mask other problems.
This adds a header that looks like `Location: /test/test/1` to the
response for the index/create/update API. The requirement for the header
comes from https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html, and
https://tools.ietf.org/html/rfc7231#section-7.1.2 claims that relative
URIs are OK. So we use an absolute path, which should resolve to the
appropriate location.
Closes #19079
This makes large changes to our rest test infrastructure, allowing us
to write junit tests that test a running cluster via the rest client.
It does this by splitting ESRestTestCase into two classes:
* ESRestTestCase is the superclass of all tests that use the rest client
to interact with a running cluster.
* ESClientYamlSuiteTestCase is the superclass of all tests that use the
rest client to run the yaml tests. These tests are shared across all
official clients, thus the `ClientYamlSuite` part of the name.