We currently reuse character arrays and UTF-8 writers via SoftReferences.
SoftReferences have a negative impact on GC and should be avoided in
general. In this case they can simply be replaced with a per-stream
Bytes/CharsRef that is thread-local and has the same lifetime as the
stream.
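A minimal sketch of that reuse pattern, assuming Lucene's scratch types; the class and field names are illustrative, not the actual change:
```java
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.CharsRef;

// Scratch buffers owned by the stream instance (one stream per thread),
// so no SoftReference indirection is needed; they die with the stream.
public class ReusableStreamBuffers {
    private final BytesRef spareBytes = new BytesRef();
    private final CharsRef spareChars = new CharsRef();

    BytesRef spareBytes() { return spareBytes; }
    CharsRef spareChars() { return spareChars; }
}
```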
Today the recovery ID is taken from a static AtomicLong, which
is essentially a per-JVM ID. We run the tests within the same
JVM, which means we don't really simulate what happens in
production environments. Instead we should use a per-node generated
ID.
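A hedged sketch of the per-node approach; the class and method names are illustrative:
```java
import java.util.concurrent.atomic.AtomicLong;

// The counter lives on a node-scoped component instead of a static
// field, so several test nodes in one JVM no longer share a sequence.
public class PerNodeRecoveryIds {
    private final AtomicLong idGenerator = new AtomicLong(); // one per node

    public long nextRecoveryId() {
        return idGenerator.incrementAndGet();
    }
}
```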
* If the plugin does not provide a `lucene` property, we consider the plugin compatible.
* If the plugin provides a `lucene` property, we try to load the corresponding org.apache.lucene.util.Version enum constant. If this fails, it means that the node is too "old" compared to the Lucene version the plugin was built for.
* We then compare the first two digits of the current node's Lucene version against the first two digits of the plugin's Lucene version. If they are not equal, the plugin is too "old" for the current node (see the sketch after this list).
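A rough sketch of the check; only the rule (compare the first two digits, treat an unknown constant as incompatible) comes from the description above, the parsing and method shape are illustrative:
```java
import org.apache.lucene.util.Version;

// Illustrative implementation; not the actual PluginsService code.
// nodeLuceneVersion is the Lucene version the node was built against.
static boolean isLuceneCompatible(String pluginLuceneVersion, Version nodeLuceneVersion) {
    if (pluginLuceneVersion == null) {
        return true; // no `lucene` property: assume the plugin is compatible
    }
    try {
        // "4.6.0" -> "LUCENE_46": keep only the first two digits
        String[] parts = pluginLuceneVersion.split("\\.");
        Version pluginVersion = Version.valueOf("LUCENE_" + parts[0] + parts[1]);
        // compatible only when both resolve to the same enum constant
        return pluginVersion == nodeLuceneVersion;
    } catch (Exception e) {
        // unknown constant: this node's Lucene is older than the plugin's
        return false;
    }
}
```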
Plugin developers who want to enable this check only have to add a `lucene` property to the `es-plugin.properties` file. If you are using Maven to build your plugin, you can do it like this:
In `pom.xml`:
```xml
<properties>
    <lucene.version>4.6.0</lucene.version>
</properties>

<build>
    <resources>
        <resource>
            <directory>src/main/resources</directory>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>
```
In `es-plugin.properties`, add:
```properties
lucene=${lucene.version}
```
BTW, if you don't already have it, you can add the plugin version as well:
```properties
version=${project.version}
```
You can disable that check using `plugins.check_lucene: false`.
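If you would rather set it programmatically, a sketch using the standard 1.x settings builder (the key is the one shown above; the wrapper class is illustrative):
```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

// Build node settings with the Lucene compatibility check disabled.
public class DisableLuceneCheck {
    public static Settings settings() {
        return ImmutableSettings.settingsBuilder()
                .put("plugins.check_lucene", false)
                .build();
    }
}
```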
This commit adds randomization for:
* `index.merge.scheduler.max_thread_count`
* `index.merge.scheduler.max_merge_count`
This commit also moves to using
EsExecutors#boundedNumberOfProcessors(Settings) to configure the
default `max_thread_count`, for better reproducibility (see the sketch below).
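A rough sketch of the kind of randomization this adds; the bounds and method shape are illustrative, not the actual test code:
```java
import java.util.Random;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

// Randomly override the merge scheduler settings on some runs and
// leave the (processor-derived) defaults in place on others.
public static Settings randomMergeSchedulerSettings(Random random) {
    ImmutableSettings.Builder builder = ImmutableSettings.settingsBuilder();
    if (random.nextBoolean()) {
        builder.put("index.merge.scheduler.max_thread_count", 1 + random.nextInt(4));
    }
    if (random.nextBoolean()) {
        builder.put("index.merge.scheduler.max_merge_count", 1 + random.nextInt(4));
    }
    return builder.build();
}
```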
Closes #6194
* Added a new internal flag to IndicesOptions that tells whether aliases can be resolved to multiple indices or not.
* Cut over to the new MetaData#concreteIndices(IndicesOptions, String...) for all the APIs previously using MetaData#concreteIndices(String[], IndicesOptions) and removed the old method; deprecation is not needed as this doesn't break client code (see the sketch after this list).
* Introduced constants for the flags in IndicesOptions for more readability.
* Renamed MetaData#concreteIndex to concreteSingleIndex; the method is left as a shortcut, although it just calls the common concreteIndices that accepts IndicesOptions and multiple indices.
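Hypothetical call sites for the renamed methods; the argument lists are illustrative, only the method names and the IndicesOptions parameter come from this change:
```java
// Resolve several expressions at once with explicit options:
String[] concrete = metaData.concreteIndices(options, "logs-*", "my-alias");
// The single-index shortcut keeps working under its new name:
String single = metaData.concreteSingleIndex("my-alias");
```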
We configure the thread pools according to the number of processors, which
differs from machine to machine. Yet we had some test failures, related to this
and to #6174, that only reproduced on a node with 1 available processor.
This commit does the following (see the sketch after this list):
* sometimes randomize the number of available processors
* if we don't randomize, set the actual number of available processors
in the settings on the test node
* always print out the number of processors when a test fails, to make sure we can
reproduce the thread pool settings with the reproduce info line
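A hedged sketch of what this looks like in a test; "processors" is the settings key consumed by EsExecutors#boundedNumberOfProcessors, while the random helpers and bounds are illustrative:
```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

// Either randomize the processor count or pin it to the real value,
// and always log it so the reproduce line can recreate the pools.
int actual = Runtime.getRuntime().availableProcessors();
int processors = randomBoolean() ? randomIntBetween(1, 4) : actual;
Settings nodeSettings = ImmutableSettings.settingsBuilder()
        .put("processors", processors)
        .build();
logger.info("using processors={}", processors);
```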
Closes #6176
On small hardware, the BENCH thread pool can be set to size 1. This is
problematic as it means that while a benchmark is active, there are no
threads available to service administrative tasks such as listing and
aborting. This change fixes that by executing list and abort operations
on the GENERIC thread pool.
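A sketch of the dispatch, using the real ThreadPool.Names constant; the surrounding method is illustrative:
```java
import org.elasticsearch.threadpool.ThreadPool;

// Run the administrative action on GENERIC so a fully occupied,
// size-1 BENCH pool can never starve it.
void runAdminAction(ThreadPool threadPool, final Runnable adminAction) {
    threadPool.executor(ThreadPool.Names.GENERIC).execute(new Runnable() {
        @Override
        public void run() {
            adminAction.run(); // e.g. list or abort active benchmarks
        }
    });
}
```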
Closes #6174
I think Chuck Norris is required to fix this at this point, until we have an API
that can, for instance, pause a benchmark. We basically wait for a query to be executed,
and that query syncs on a latch with the test in a script :)
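A rough sketch of that handshake, with illustrative names:
```java
import java.util.concurrent.CountDownLatch;

// The script run by the benchmark query blocks until the test releases
// it, keeping the benchmark active for as long as the assertions need.
static final CountDownLatch QUERY_STARTED = new CountDownLatch(1);
static final CountDownLatch RELEASE_QUERY = new CountDownLatch(1);

// in the script executed by the query:
//   QUERY_STARTED.countDown();
//   RELEASE_QUERY.await();
//
// in the test:
//   QUERY_STARTED.await();       // the benchmark is now running
//   /* list/abort assertions */
//   RELEASE_QUERY.countDown();   // let the query, and the benchmark, finish
```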
This commit also adds some more testing for benchmarks that run into errors.
When you have a nested document and want to sort on its fields, it's perfectly doable for regular fields but not for "generated" sub-fields (multi-fields).
Here is a SENSE recreation:
```
DELETE /tmp

PUT /tmp

PUT /tmp/doc/_mapping
{
  "properties": {
    "flat": {
      "type": "string",
      "index": "analyzed",
      "fields": {
        "sub": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    },
    "nested": {
      "type": "nested",
      "properties": {
        "foo": {
          "type": "string",
          "index": "analyzed",
          "fields": {
            "sub": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      }
    }
  }
}

PUT /tmp/doc/1
{
  "flat": "bar",
  "nested": {
    "foo": "bar"
  }
}
```
When sorting on `flat.sub` sub field, everything is fine:
```
GET /tmp/doc/_search
{
  "sort": [
    {
      "flat.sub": {
        "order": "desc"
      }
    }
  ]
}
```
When sorting on the `nested.foo` field, everything is fine:
```
GET /tmp/doc/_search
{
  "sort": [
    {
      "nested.foo": {
        "order": "desc"
      }
    }
  ]
}
```
But when sorting on the `nested.foo.sub` sub-field, sorting is incorrect:
```
GET /tmp/doc/_search
{
  "sort": [
    {
      "nested.foo.sub": {
        "order": "desc"
      }
    }
  ]
}
```
Closes #6150.
Every class using these parameters has its own members where these four
are stored, which clutters the code. Because they are mostly needed together,
it makes sense to group them.
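Purely illustrative, since the commit message doesn't name the four parameters; the idea is one immutable holder instead of four fields repeated across classes:
```java
// Hypothetical holder; field names are stand-ins, not the real parameters.
public final class GroupedParams {
    private final String first;
    private final String second;
    private final boolean third;
    private final long fourth;

    public GroupedParams(String first, String second, boolean third, long fourth) {
        this.first = first;
        this.second = second;
        this.third = third;
        this.fourth = fourth;
    }
}
```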
The recovery API was sometimes misreporting the recovered byte
percentages of index files. This was caused by summing up the total file
length on each file chunk transfer, when it should have been summing the
length of each transfer request.
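The arithmetic, with hypothetical numbers (a 100-byte file sent in 4 chunks of 25 bytes):
```java
long buggy = 0, fixed = 0;
for (int chunk = 0; chunk < 4; chunk++) {
    buggy += 100; // bug: the *total* file length was added on every chunk
    fixed += 25;  // fix: add only the length of this transfer request
}
// buggy == 400 -> the file appears 400% recovered
// fixed == 100 -> correct: 100% recovered
```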
Closes #6113
NettyTransport's ChannelPipelineFactory uses the instance variable
serverOpenChannels in order to create sockets. However, this instance variable
is set to null when stopping the netty transport, so if the transport tries to
stop and to initialize a socket at the same time, you might hit the following
NullPointerException:
```
[2014-05-13 07:33:47,616][WARN ][netty.channel.socket.nio.AbstractNioSelector] Failed to initialize an accepted socket.
java.lang.NullPointerException: handler
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.<init>(DefaultChannelPipeline.java:725)
    at org.jboss.netty.channel.DefaultChannelPipeline.init(DefaultChannelPipeline.java:667)
    at org.jboss.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:96)
    at org.elasticsearch.transport.netty.NettyTransport$2.getPipeline(NettyTransport.java:327)
    at org.jboss.netty.channel.socket.nio.NioServerBoss.registerAcceptedChannel(NioServerBoss.java:134)
    at org.jboss.netty.channel.socket.nio.NioServerBoss.process(NioServerBoss.java:104)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
```
This fix ensures that the ChannelPipelineFactory always uses the open-channels
handler that was set on start, even if a stop request is issued concurrently.
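A sketch of that fix, assuming Netty 3 / ES 1.x class names; the pipeline contents are abbreviated:
```java
// Capture the open-channels handler in a local variable when the
// factory is built, so a concurrent stop (which nulls the instance
// field) can never be observed by getPipeline().
final OpenChannelsHandler openChannels = this.serverOpenChannels;
serverBootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("openChannels", openChannels); // local capture, never null
        // ... remaining handlers ...
        return pipeline;
    }
});
```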
Closes #6144