The `-Xloggc:filename.log` parameter has very strict filename semantics:
```
[A-Z][a-z][0-9]-_.%[p|t]
```
Our script surrounds the filename with escaped quotes (`\"`), which makes Java think we
are sending `-Xloggc:"foo.log"`, and it fails with:
```
Invalid file name for use with -Xloggc: Filename can only contain the characters [A-Z][a-z][0-9]-_.%[p|t] but it has been "foo.log"
Note %p or %t can only be used once
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
```
We can't quote this value, and we don't need to: the set of valid
characters does not include a space, so there is nothing to protect with
quotes.
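To make the restriction concrete, here is a quick, purely illustrative Java check of the allowed character set (this is not the JVM's actual validation code, and it ignores the rule that `%p`/`%t` may appear only once):
```java
// Rough approximation of the characters -Xloggc accepts: letters, digits,
// '-', '_', '.', and '%' (used for the %p/%t tokens).
String allowed = "[A-Za-z0-9._%-]+";
System.out.println("foo.log".matches(allowed));      // true  -> accepted
System.out.println("\"foo.log\"".matches(allowed));  // false -> the quote characters themselves are invalid
```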
Resolves #13277
We currently have a small number of test classes with the suffix "Test",
yet most use the suffix "Tests". This change renames all the "Test"
classes, so that we have a simple rule: "Non-inner classes ending with
Tests".
These are not actually tests, but command line applications that must be
run manually. This change removes the entire stresstest package. We can
add back individual tests that we find necessary, and make them real
tests (whether integ or not).
As we log a lot, we hit a default limit:
```
The test or suite printed 9450 bytes to stdout and stderr, even though the limit was set to 8192 bytes. Increase the limit with @Limit, ignore it completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
```
(cherry picked from commit 0cb325d)
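For reference, a minimal sketch of how a noisy suite's limit can be raised instead of suppressing the check entirely. It assumes the Lucene test framework's `TestRuleLimitSysouts.Limit` annotation and an `ESTestCase` base class; the class name is made up:
```java
import org.apache.lucene.util.TestRuleLimitSysouts;
import org.elasticsearch.test.ESTestCase;

// Raise the stdout/stderr byte budget for this suite rather than disabling the
// check completely with @SuppressSysoutChecks.
@TestRuleLimitSysouts.Limit(bytes = 32 * 1024)
public class NoisyLoggingTests extends ESTestCase {
    // ... tests that intentionally log a lot ...
}
```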
While the list of exclusions is small, it shouldn't be necessary
at all. Base test cases should be suffixed with TestCase so they are not
picked up by the test class name pattern. This same rule works for
abstract classes as well.
This change renames abstract tests to use the TestCase suffix, adds a
check in naming convention tests, and removes the exclusion from our
test runner configuration. It also excludes inner classes (the only
exclusion we should have, IMO), so that there is no need to @Ignore
inner test classes in the naming convention tests.
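To make the rule concrete, here is a simplified sketch of what the check boils down to (illustrative only, not the actual naming convention test code):
```java
import java.lang.reflect.Modifier;

// Abstract base classes must end in "TestCase" so the runner's *Tests pattern
// never picks them up, and only non-inner, concrete classes may end in "Tests".
static void assertNamingConvention(Class<?> clazz) {
    boolean endsWithTests = clazz.getSimpleName().endsWith("Tests");
    if (Modifier.isAbstract(clazz.getModifiers()) && endsWithTests) {
        throw new AssertionError(clazz.getName() + " is abstract and should use the TestCase suffix");
    }
    if (clazz.getEnclosingClass() != null && endsWithTests) {
        throw new AssertionError(clazz.getName() + " is an inner class and should not end in Tests");
    }
}
```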
In this test we assume that after waitForRelocation() has returned, shards
are no longer relocating and optimize will therefore always succeed.
However, because the test does not wait for green status, relocations can
still start after waitForRelocation() has returned successfully.
see #13266 for a detailed explanation
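As a hedged sketch of the intended ordering in such a test, assuming the integration-test helpers `ensureGreen()`, `waitForRelocation()` and `client()` (the index name is made up):
```java
// Wait for green first so that no new relocations can be started, then wait
// for any in-flight relocations to finish before optimizing.
ensureGreen("test");
waitForRelocation();
client().admin().indices().prepareOptimize("test").get();
```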
This commit adds a new smoke test for testing the client as an end Java user.
It starts a cluster in the `pre-integration-test` phase, then executes the client operations defined as JUnit tests within the `integration-test` phase, and finally stops the external cluster in the `post-integration-test` phase.
You can also run test classes from your IDE.
* Start an external node on your machine with `bin/elasticsearch` (note that you can test Java API regressions if you run an older or newer node version)
* Run the JUnit test. By default, it will run tests on `localhost:9300`, but you can change this setting using the system property `tests.cluster`. It also expects the default `cluster.name` (`elasticsearch`).
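For illustration, a minimal sketch of such a JUnit test; the class name and the way `tests.cluster` is parsed here are assumptions, not the actual smoke-test code:
```java
import java.net.InetSocketAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.junit.Test;

import static org.junit.Assert.assertFalse;

public class SmokeTestClientTests {

    @Test
    public void testClientConnects() throws Exception {
        // Defaults to localhost:9300; override with -Dtests.cluster=host:port
        String cluster = System.getProperty("tests.cluster", "localhost:9300");
        String host = cluster.substring(0, cluster.lastIndexOf(':'));
        int port = Integer.parseInt(cluster.substring(cluster.lastIndexOf(':') + 1));

        // The client also uses the default cluster.name ("elasticsearch").
        TransportClient client = TransportClient.builder().build();
        try {
            client.addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress(host, port)));
            assertFalse("expected to be connected to at least one node", client.connectedNodes().isEmpty());
        } finally {
            client.close();
        }
    }
}
```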
This commit also starts adding [snippets as defined by Maven](https://maven.apache.org/guides/mini/guide-snippet-macro.html) to help keep the Java reference guide automatically synchronized with the current code.
Our documentation build tool does not support snippets yet, but it most likely will at some point.
At the moment, if an indexed script is used in a request, the spawned request to get that script from the `.scripts` index does not get the headers and context copied to it from the original request. This change makes the calls to the `ScriptService` pass in a `HasContextAndHeaders` object that can provide the headers and context. For the `search()` method, the context and headers are retrieved from `SearchContext.current()`.
Closes #12891
The shaded version of elasticsearch was built at the very beginning to avoid dependency conflicts in a specific case where:
* People use elasticsearch from Java
* People need to embed the elasticsearch jar within their own application (as it is today the only way to get a `TransportClient`)
* People also embed in their application another (most of the time older) version of a dependency that elasticsearch uses, such as Guava, Joda, Jackson...
This conflict can be solved within the projects themselves, either by upgrading the dependency to the version provided by elasticsearch or by shading the elasticsearch project and relocating the conflicting packages.
Example
-------
As an example, let's say you want to use `Joda 2.1` within your project, but elasticsearch `2.0.0-beta1` provides `Joda 2.8`.
Let's say you also want to run all that with the Shield plugin.
Create a new maven project or module with:
```xml
<groupId>fr.pilato.elasticsearch.test</groupId>
<artifactId>es-shaded</artifactId>
<version>1.0-SNAPSHOT</version>

<properties>
    <elasticsearch.version>2.0.0-beta1</elasticsearch.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.elasticsearch</groupId>
        <artifactId>elasticsearch</artifactId>
        <version>${elasticsearch.version}</version>
    </dependency>
    <dependency>
        <groupId>org.elasticsearch.plugin</groupId>
        <artifactId>shield</artifactId>
        <version>${elasticsearch.version}</version>
    </dependency>
</dependencies>
```
And now shade and relocate all packages which conflict with your own application:
```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>2.4.1</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <relocations>
                            <relocation>
                                <pattern>org.joda</pattern>
                                <shadedPattern>fr.pilato.thirdparty.joda</shadedPattern>
                            </relocation>
                        </relocations>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```
You can now create a shaded version of elasticsearch + Shield by running `mvn clean install`.
In your project, you can now depend on:
```xml
<dependency>
    <groupId>fr.pilato.elasticsearch.test</groupId>
    <artifactId>es-shaded</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
    <version>2.1</version>
</dependency>
```
Then build your TransportClient as usual:
```java
import java.net.InetSocketAddress;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

TransportClient client = TransportClient.builder()
        .settings(Settings.builder()
                .put("path.home", ".")
                .put("shield.user", "username:password")
                .put("plugin.types", "org.elasticsearch.shield.ShieldPlugin")
        )
        .build();
client.addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("localhost", 9300)));

// Index some data
client.prepareIndex("test", "doc", "1").setSource("foo", "bar").setRefresh(true).get();
SearchResponse searchResponse = client.prepareSearch("test").get();
```
If you want to use your own version of Joda, then import, for example, `org.joda.time.DateTime`. If you want to access the shaded version (not recommended, though), import `fr.pilato.thirdparty.joda.time.DateTime`.
You can run a simple test to make sure that both classes can live together within the same JVM:
```java
import java.security.CodeSource;

CodeSource codeSource = new org.joda.time.DateTime().getClass().getProtectionDomain().getCodeSource();
System.out.println("unshaded = " + codeSource);
codeSource = new fr.pilato.thirdparty.joda.time.DateTime().getClass().getProtectionDomain().getCodeSource();
System.out.println("shaded = " + codeSource);
```
It will print:
```
unshaded = (file:/path/to/joda-time-2.1.jar <no signer certificates>)
shaded = (file:/path/to/es-shaded-1.0-SNAPSHOT.jar <no signer certificates>)
```
This PR also removes the fully-loaded module.
By the way, the project can now build with Maven 3.3.3, so we can relax our Maven version policy a bit.
Currently, we do not allow reads on shards which are in POST_RECOVERY, which
unfortunately can cause search failures on shards that have just recovered if there are no replicas (#9421).
The reason we did not allow reads on shards in POST_RECOVERY is
that after relocating, a shard might miss a refresh if the node that executed the
refresh is behind with cluster state processing. If that happens, a user might execute
index/refresh/search but still not find the document that was indexed.
In #13068 we changed how refresh works to make sure that shards cannot miss a refresh this
way, by sending refresh requests the same way that we send write requests.
This commit changes IndexShard to allow reads on POST_RECOVERY now.
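Conceptually, the change amounts to adding POST_RECOVERY to the set of shard states in which reads are allowed. A simplified sketch of that idea (the field and method names are made up; this is not the actual IndexShard code):
```java
import java.util.EnumSet;

import org.elasticsearch.index.shard.IndexShardState;

// Sketch only: reads used to be allowed in STARTED and RELOCATED; after this
// change POST_RECOVERY is allowed as well.
private static final EnumSet<IndexShardState> READ_ALLOWED_STATES =
        EnumSet.of(IndexShardState.STARTED, IndexShardState.RELOCATED, IndexShardState.POST_RECOVERY);

private void ensureReadAllowed(IndexShardState currentState) {
    if (READ_ALLOWED_STATES.contains(currentState) == false) {
        throw new IllegalStateException("read operations not allowed in state [" + currentState + "]");
    }
}
```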
In addition it adds two tests:
- test for issue #9421 (After relocation shards might temporarily not be searchable if still in POST_RECOVERY)
- test for visibility issue with relocation and refresh if reads allowed when shard is in POST_RECOVERY
Closes #9421
This cleans up deb, rpm, systemd, and sysvinit tests:
1. Moved skip_not_rpm, skip_not_dpkg, etc. to the setup() methods for faster
runtime and cleaner code.
2. Removed lots of needless invocations of `run`.
3. Created install_package for use in the systemd and sysvinit tests.
4. Removed lots of needless stderr-to-stdout redirects.
Closes #13075
Related to #13074