This wraps the stream (`.streamInput()`) that is passed to many of the
`createParser` calls in an enclosing (or a new) try-with-resources block.
This ensures the `BytesReference.streamInput()` is closed.
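A minimal sketch of the pattern (assuming a `BytesReference source` and an
`XContentType`; the exact `createParser` arguments vary by call site):

```java
// Sketch only: the stream (and parser) are now closed even if parsing throws.
try (InputStream stream = source.streamInput();
     XContentParser parser = xContentType.xContent()
             .createParser(NamedXContentRegistry.EMPTY, stream)) {
    // ... consume the parser as before ...
}
```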
Relates to elastic/x-pack-elasticsearch#28504
Original commit: elastic/x-pack-elasticsearch@7546e3b4d4
This moves the `payload = null;` statement to above the asynchronous
HTTP call. This helps to avoid a race condition with `doClose`, which
asserts that the payload is `null`.
This is only a realistic situation during shutdown, where the response
can fire immediately, before the payload has been nullified and become
freeable; it is not a realistic scenario in any other situation.
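Roughly, the new ordering looks like this (field and method names are
illustrative, not the actual exporter code):

```java
// Illustrative sketch: clear the field before the async call so a response that
// arrives immediately (e.g. during shutdown) cannot race with doClose()'s
// `assert payload == null`.
final byte[] toSend = payload;
payload = null;              // nullify before the asynchronous HTTP call
sendAsync(toSend, listener); // hypothetical async send; may complete on another thread at once
```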
Original commit: elastic/x-pack-elasticsearch@eb3a6ff118
Remove functions without a backing matrix agg
MatrixAgg works across multiple fields and exposing it directly in SQL
does not work. Instead isolated functions are exposed which get folded
and optimized into one matrix agg per field. Thus not all matrix
functions can be exposed in SQL, at least at this time.
Instead of depending on the plugin directly, depend on the plugin client
jar (matrix-agg-client)
Remove outdated test
Original commit: elastic/x-pack-elasticsearch@ec9b31bf59
The type of BUFFER_LENGTH needs to be an integer (not NULL), and unsigned
indicates the opposite of signed.
Change isSigned from Object to primitive
Since all the consumers of isSigned expect a primitive, an Object just causes trouble by being null.
Update description table
Original commit: elastic/x-pack-elasticsearch@8e1960bdb5
Use AggregatorTestCase's `newIndexSearcher()` instead. Lucene's
version can randomly wrap the IndexReader with things we can't handle,
like ParallelCompositeReader.
Original commit: elastic/x-pack-elasticsearch@b4c0e9a601
The test job helper randomizes the index name with 1-10 characters,
which can cause the randomized index names to overlap and show fewer
caps than the test expects.
The solution is to use index names "0"-"24" to ensure none
of the names overlap, and thus the caps don't overlap.
Original commit: elastic/x-pack-elasticsearch@74a6d13213
The arrangement of the final latch meant the latch could count down
and the test could end before the await() triggered, which caused the
thread to be interrupted and fail. The whole arrangement was incorrect
anyhow.
We still need to await the latch before sending the search response,
but the final AtomicBoolean is now flipped the second time the persistent
task status is updated, which is the signal that we are done
and can end the test.
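A rough sketch of the intended ordering (the names here are hypothetical,
not the actual test code):

```java
// Hypothetical sketch of the synchronization described above.
// Search-response side: block until the test releases the latch, then respond.
latch.await();
sendSearchResponse(response);

// Status-update side: only end the test on the second persistent task status
// update, which signals that indexing has truly finished.
if (statusUpdateCount.incrementAndGet() == 2) {
    testFinished.set(true);
}
```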
If these tests continue to be flaky, we should probably just remove them.
The headers are tested elsewhere and not required to be tested in this
context.
Original commit: elastic/x-pack-elasticsearch@0cf5603972
Analysis limits contain settings that affect the resources
used by ML jobs. Those limits always take effect. However,
explicitly setting them is not required as they have reasonable
defaults. For a long time those defaults lived on the c++ side.
The job could simply not have any explicit limits, and that meant
the defaults would be used on the c++ side. This has the disadvantage
that it is not obvious to users what these settings are set to.
Additionally, users might not be aware of the settings' existence.
On top of that, since 6.1, the default model_memory_limit was lowered
from 4GB to 1GB. For BWC, this meant that for jobs where model_memory_limit
is null, the default of 4GB applies. Jobs that were created from 6.1
onwards contain an explicit setting for model_memory_limit, which is
1GB unless the user sets it differently. This adds additional confusion.
This commit makes analysis limits an always explicit setting on the job.
Regardless of whether the user sets custom limits or not, the job object
(and response) will contain the full analysis limits values.
The possibilities for interpretation of missing values are:
- the entire analysis_limits is null: this may only happen for jobs
created prior to 6.1. Thus we set the model_memory_limit to 4GB.
- analysis_limits is non-null but model_memory_limit is null: this also
may only happen for jobs created prior to 6.1. Again, we set the memory
limit to 4GB.
- model_memory_limit is non-null: this either means the user set an
explicit value or the job was created from 6.1 onwards and it has
the explicit default of 1GB. We simply keep the given value.
For categorization_examples_limit the default has always been 4, so
we fill that in when it's missing.
Finally, note that we still need to handle potential null values
for the situation of a mixed cluster.
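The fallback logic above can be sketched roughly as follows (method and
value names are illustrative, not the exact Job-building code):

```java
// Illustrative only: how missing values are interpreted when making limits explicit.
static AnalysisLimits fillDefaults(AnalysisLimits limits) {
    Long modelMemoryLimitMb = limits == null ? null : limits.getModelMemoryLimit();
    if (modelMemoryLimitMb == null) {
        modelMemoryLimitMb = 4096L;   // pre-6.1 job: keep the old 4GB default for BWC
    }
    Long examplesLimit = limits == null ? null : limits.getCategorizationExamplesLimit();
    if (examplesLimit == null) {
        examplesLimit = 4L;           // categorization_examples_limit has always defaulted to 4
    }
    return new AnalysisLimits(modelMemoryLimitMb, examplesLimit);
}
```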
Original commit: elastic/x-pack-elasticsearch@5b6994ef75
I'm really really sad to be removing the cli-fixture, but I've had
trouble with it leaking processes recently and it is pretty slow. Beyond
that, we'd prefer that our test fixtures only fixture things that are
external dependencies.
So, yeah, I'm removing it. So we get faster tests and no chance of
leaking processes. We lose some "realness" in the tests. Instead of
interacting with the CLI like a real user we embed it in the test
process. That means we don't test the forking, we don't test the
executable jar, and we don't test the jLine console detection stuff. On
the other hand we were kind of forcing the jLine console detection stuff
in a funky way with the fixture anyway. And we test the executable jar
in the packaging tests. And that'll have to do.
I haven't renamed `RemoteCli` because it'd bloat this commit with
mechanical changes that'd make it hard to review. I'll rename it in a
followup commit.
This also updates jLine so we can disable blinking to matching
parentheses during testing. I have no clue why, but this wasn't
happening when we used the fixture. The trouble with the blinking is
that it is based on *time* so it slows things down. Worse, it works
inconsistently! Sometimes it spits out sensible ascii codes and
sometimes it, well, spits out weird garbage. When you use it in person
it works fine though. So we keep it on when not testing.
Cleans up some redundancy when testing CLI errors. Less copy and
paste is good.
I was tempted to disable the xterm emulation entirely while working on
this because upgrading jLine changed a few things and it was a real pain
to update. But if we turned that off then we'd have *nothing* testing
the colors and such. That'd be a shame because we use color in the
output to communicate stuff. I like it so I don't want to break it.
While I was there, I replaced the cli connector's `PrintWriter` with a
`BufferedWriter`. The `PrintWriter` was kind of a trap because `println`
would fail to work properly on windows because we force the terminal
into xterm mode and it doesn't know what to do with windows line
endings. Windows.....
Additionally I fixed a race condition between disabling echo when
reading passwords and fast writers. We were disabling the echo shortly
after sending the prompt. A fast enough writer could send us text before
the echo disable kicked in. Now I delegate to `LineReader#readLine`
with a special echo mask that disables echo. This is both easier to test
and doesn't seem to have the race condition. This race condition was
failing the tests because they are so much faster now. Yay!
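With the upgraded jLine, the password read looks roughly like this (a
sketch, not the exact connector code):

```java
// Sketch: a mask character of '\0' tells jLine to suppress echo for the whole read,
// so there is no window in which a fast writer's input could be echoed back.
String password = lineReader.readLine("password: ", '\0');
```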
Original commit: elastic/x-pack-elasticsearch@d0ec027396
Add basic support for catalog parameters in SYS COLUMN
Pass an empty string instead of a null inside the prepared statement
Don't use pattern for catalog in getColumns
Original commit: elastic/x-pack-elasticsearch@17e9e851a0
* foo.bar can mean either: no table, field foo.bar; or table foo, field bar.
The resolution rule has been extended to include the latter case as a
fallback.
* Always check field ambiguity
* Since fields with dots can create confusion (when not qualified), the
analyzer now always checks for both qualified and non-qualified
fields and throws an ambiguity message with the potential candidates.
This forces the use of qualifiers or quotes to better indicate the
desired field.
* Add example of aliasing the table to remove ambiguity
Original commit: elastic/x-pack-elasticsearch@8b70b9c4ee
An incorrect latch caused this test to run slowly (until the await
finished), and could probably cause failures due to incorrect ordering.
Original commit: elastic/x-pack-elasticsearch@ebeb8655da
The latches were not placed correctly, allowing the aborts
to be set before we checked the state for Indexing the first time.
This was due to using the DelayingIndexer's built-in latch, which
isn't placed quite where we needed it.
Original commit: elastic/x-pack-elasticsearch@590cfa07b0
* Pass InputStream when creating XContent parser
Rather than passing the raw `BytesReference` in when creating the xcontent
parser, this passes the StreamInput (which is an InputStream), which allows us to
decouple XContent from BytesReference (sketched below).
This is the x-pack side of https://github.com/elastic/elasticsearch/pull/28754
* Use the streamInput variant, not sourceAsString
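The change amounts to swapping the argument handed to the parser factory,
roughly (a sketch; the exact overloads vary by call site):

```java
// Before (sketch): the parser was built straight from the BytesReference.
// XContentParser parser = xContent.createParser(registry, bytes);

// After (sketch): pass the InputStream so XContent no longer needs to know about BytesReference.
XContentParser parser = xContent.createParser(registry, bytes.streamInput());
```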
Original commit: elastic/x-pack-elasticsearch@dd5d8b1654
This adds a new Rollup module to XPack, which allows users to configure periodic "rollup jobs" to pre-aggregate data. That data is then available later for search through a special RollupSearch API, which mimics the DSL and functionality of regular search.
Rollups are used to drastically reduce the on-disk footprint of metric-based data (e.g. timestamped documents with numeric and keyword fields). They can also be used to speed up aggregations over large datasets, since the rolled data will be considerably smaller, with fewer documents to search.
The PR adds seven new endpoints to interact with Rollups; create/get/delete job, start/stop job, a capabilities API similar to field-caps, and a Rollup-enabled search.
Original commit: elastic/x-pack-elasticsearch@dcde91aacf
The watcher thread pool is scaled by the number of CPUs and has by
default up to 5x the number of cores. This is needed because we assume
I/O-based waiting workloads for watcher. However, if the node is not a
data node, there will not be any execution of watches with the exception
of a user calling the execute watch API on that node.
This means we can get away with just one thread, so there is no
need for the JVM to manage more threads on master/client or ingest-only
nodes.
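The sizing rule described above boils down to something like this sketch
(names are illustrative, not the exact settings code):

```java
// Illustrative sketch: I/O-bound watch execution gets 5x the allocated processors
// on data nodes; any other node only ever runs watches via the execute watch API,
// so a single thread is enough.
int maxThreads = isDataNode ? 5 * allocatedProcessors : 1;
```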
Original commit: elastic/x-pack-elasticsearch@b5899401d3
This commit updates the IndexAuditUpgradeIT test after backporting. The
backport fixes a bug that caused the node name field to be null, which
affected the expected number of nodes in the tests.
Original commit: elastic/x-pack-elasticsearch@9df99d8800
This adds a new setting, `xpack.monitoring.collection.enabled`, and
disables it by default (`false`).
Original commit: elastic/x-pack-elasticsearch@4b3a5a1161
Although not frequently used in production, the FileRolesStore is used
heavily within integration tests. This change adds a little bit
more logging at INFO level when the roles.yml file is (re)loaded.
Original commit: elastic/x-pack-elasticsearch@bbacd46e28
In order to check in the REST tests whether triggering of watches with
security enabled works as expected, we have to add a watch and wait for
its background execution. In the REST tests the only option is to wait for
this with a timeout. If the timeout is reached but the watch has not
been executed yet, the test will fail.
This commit replaces the YAML with a Java-based REST test, so that
helper methods like assertBusy() can be used and waiting for a watch to
be executed now works as expected.
relates elastic/x-pack-elasticsearch#3753
Original commit: elastic/x-pack-elasticsearch@fc39636ef7
The current toXContent serialization of a failed hipchat message writes
the same field, called status, twice and thus cannot be stored in the
watch history.
This commit ensures the field is only written once.
relates elastic/x-pack-elasticsearch#3919
Original commit: elastic/x-pack-elasticsearch@fb499e8055