Merge remote-tracking branch 'es/master' into ccr

* es/master:
  [Docs] Fix explanation for `from` and `size` example (#28320)
  Adapt bwc version after backport #28358
  Always return the after_key in composite aggregation response (#28358)
  Adds test name to MockPageCacheRecycler exception (#28359)
  Adds a note in the `terms` aggregation docs regarding pagination (#28360)
  [Test] Fix DiscoveryNodesTests.testDeltas() (#28361)
  Update packaging tests to work with meta plugins (#28336)
  Remove Painless Type from MethodWriter in favor of Java Class. (#28346)
  [Doc] Fixs typo in reverse-nested-aggregation.asciidoc (#28348)
  Reindex: Shore up rethrottle test
  Only assert single commit iff index created on 6.2
  isHeldByCurrentThread should return primitive bool
  [Docs] Clarify `html` encoder in highlighting.asciidoc (#27766)
  Fix GeoDistance query example (#28355)
  Settings: Introduce settings updater for a list of settings (#28338)
  Adapt bwc version after backport #28310
Martijn van Groningen 2018-01-25 13:44:08 +01:00
commit dea60fed50
30 changed files with 504 additions and 123 deletions


@ -5,7 +5,7 @@ See {ref}/query-dsl-geo-distance-query.html[Geo Distance Query]
["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{query-dsl-test}[geo_bounding_box]
include-tagged::{query-dsl-test}[geo_distance]
--------------------------------------------------
<1> field
<2> center point
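The corrected tag now pulls the `geo_distance` snippet instead of `geo_bounding_box`. For context, a minimal sketch of what that query looks like through the Java API; the field name and coordinates are illustrative, not taken from the commit:

["source","java"]
--------------------------------------------------
import org.elasticsearch.common.unit.DistanceUnit;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

// hypothetical field name and center point, matching the two callouts above
QueryBuilder qb = QueryBuilders.geoDistanceQuery("pin.location") // <1> field
        .point(40.0, -70.0)                                      // <2> center point
        .distance(200, DistanceUnit.KILOMETERS);
--------------------------------------------------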


@ -394,6 +394,10 @@ GET /_search
...
"aggregations": {
"my_buckets": {
"after_key": { <1>
"date": 1494288000000,
"product": "mad max"
},
"buckets": [
{
"key": {
@ -403,7 +407,7 @@ GET /_search
"doc_count": 1
},
{
"key": { <1>
"key": {
"date": 1494288000000,
"product": "mad max"
},
@ -418,9 +422,14 @@ GET /_search
<1> The last composite bucket returned by the query.
NOTE: The `after_key` is equal to the last bucket returned in the response before
any filtering that could be done by <<search-aggregations-pipeline, Pipeline aggregations>>.
If all buckets are filtered/removed by a pipeline aggregation, the `after_key` will contain
the last bucket before filtering.
The `after` parameter can be used to retrieve the composite buckets that are **after**
the last composite bucket returned in a previous round.
For the example below the last bucket is `"key": [1494288000000, "mad max"]` so the next
For the example below the last bucket can be found in `after_key` and the next
round of results can be retrieved with:
[source,js]
@ -485,6 +494,10 @@ GET /_search
...
"aggregations": {
"my_buckets": {
"after_key": {
"date": 1494201600000,
"product": "rocky"
},
"buckets": [
{
"key": {


@ -93,7 +93,7 @@ GET /issues/_search
// TEST[s/_search/_search\?filter_path=aggregations/]
As you can see above, the `reverse_nested` aggregation is put into a `nested` aggregation as this is the only place
in the dsl where the `reversed_nested` aggregation can be used. Its sole purpose is to join back to a parent doc higher
in the dsl where the `reverse_nested` aggregation can be used. Its sole purpose is to join back to a parent doc higher
up in the nested structure.
<1> A `reverse_nested` aggregation that joins back to the root / main document level, because no `path` has been defined.


@ -114,6 +114,11 @@ This means that if the number of unique terms is greater than `size`, the return
(it could be that the term counts are slightly off and it could even be that a term that should have been in the top
size buckets was not returned).
NOTE: If you want to retrieve **all** terms or all combinations of terms in a nested `terms` aggregation
you should use the <<search-aggregations-bucket-composite-aggregation,Composite>> aggregation which
allows you to paginate over all possible terms rather than setting a size greater than the cardinality of the field in the
`terms` aggregation. The `terms` aggregation is meant to return the `top` terms and does not allow pagination.
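As the note says, paginating over every term goes through the composite aggregation rather than an oversized `size`. A minimal sketch with the Java builders; the field name is illustrative:

["source","java"]
--------------------------------------------------
import java.util.Collections;

// one terms source per field you want to paginate over
CompositeAggregationBuilder allTerms = new CompositeAggregationBuilder("all_terms",
        Collections.singletonList(new TermsValuesSourceBuilder("keyword").field("keyword")));
--------------------------------------------------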
[[search-aggregations-bucket-terms-aggregation-approximate-counts]]
==== Document counts are approximate


@ -858,7 +858,7 @@ GET /bank/_search
Note that if `size` is not specified, it defaults to 10.
This example does a `match_all` and returns documents 11 through 20:
This example does a `match_all` and returns documents 10 through 19:
[source,js]
--------------------------------------------------

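The corrected sentence reflects zero-based paging: a `from` of 10 with a `size` of 10 returns hits 10 through 19. A hedged sketch of the same request through the transport client; client setup is assumed and not part of the commit:

["source","java"]
--------------------------------------------------
SearchResponse resp = client.prepareSearch("bank")
        .setQuery(QueryBuilders.matchAllQuery())
        .setFrom(10)   // skip hits 0..9
        .setSize(10)   // return hits 10..19
        .get();
--------------------------------------------------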

@ -142,8 +142,9 @@ You can specify the locale to use with `boundary_scanner_locale`.
boundary_scanner_locale:: Controls which locale is used to search for sentence
and word boundaries.
encoder:: Indicates if the highlighted text should be HTML encoded:
`default` (no encoding) or `html` (escapes HTML highlighting tags).
encoder:: Indicates if the snippet should be HTML encoded:
`default` (no encoding) or `html` (HTML-escape the snippet text and then
insert the highlighting tags)
fields:: Specifies the fields to retrieve highlights for. You can use wildcards
to specify fields. For example, you could specify `comment_*` to


@ -81,7 +81,7 @@ public final class MethodWriter extends GeneratorAdapter {
private final BitSet statements;
private final CompilerSettings settings;
private final Deque<List<org.objectweb.asm.Type>> stringConcatArgs =
private final Deque<List<Type>> stringConcatArgs =
(INDY_STRING_CONCAT_BOOTSTRAP_HANDLE == null) ? null : new ArrayDeque<>();
public MethodWriter(int access, Method method, ClassVisitor cw, BitSet statements, CompilerSettings settings) {
@ -200,7 +200,7 @@ public final class MethodWriter extends GeneratorAdapter {
* Proxy the box method to use valueOf instead to ensure that the modern boxing methods are used.
*/
@Override
public void box(org.objectweb.asm.Type type) {
public void box(Type type) {
valueOf(type);
}
@ -252,10 +252,10 @@ public final class MethodWriter extends GeneratorAdapter {
}
}
public void writeAppendStrings(final Definition.Type type) {
public void writeAppendStrings(Class<?> clazz) {
if (INDY_STRING_CONCAT_BOOTSTRAP_HANDLE != null) {
// Java 9+: record type information
stringConcatArgs.peek().add(type.type);
stringConcatArgs.peek().add(getType(clazz));
// prevent too many concat args.
// If there are too many, do the actual concat:
if (stringConcatArgs.peek().size() >= MAX_INDY_STRING_CONCAT_ARGS) {
@ -266,24 +266,24 @@ public final class MethodWriter extends GeneratorAdapter {
}
} else {
// Java 8: push a StringBuilder append
if (type.clazz == boolean.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_BOOLEAN);
else if (type.clazz == char.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_CHAR);
else if (type.clazz == byte.class ||
type.clazz == short.class ||
type.clazz == int.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_INT);
else if (type.clazz == long.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_LONG);
else if (type.clazz == float.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_FLOAT);
else if (type.clazz == double.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_DOUBLE);
else if (type.clazz == String.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_STRING);
else invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_OBJECT);
if (clazz == boolean.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_BOOLEAN);
else if (clazz == char.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_CHAR);
else if (clazz == byte.class ||
clazz == short.class ||
clazz == int.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_INT);
else if (clazz == long.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_LONG);
else if (clazz == float.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_FLOAT);
else if (clazz == double.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_DOUBLE);
else if (clazz == String.class) invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_STRING);
else invokeVirtual(STRINGBUILDER_TYPE, STRINGBUILDER_APPEND_OBJECT);
}
}
public void writeToStrings() {
if (INDY_STRING_CONCAT_BOOTSTRAP_HANDLE != null) {
// Java 9+: use type information and push invokeDynamic
final String desc = org.objectweb.asm.Type.getMethodDescriptor(STRING_TYPE,
stringConcatArgs.pop().stream().toArray(org.objectweb.asm.Type[]::new));
final String desc = Type.getMethodDescriptor(STRING_TYPE,
stringConcatArgs.pop().stream().toArray(Type[]::new));
invokeDynamic("concat", desc, INDY_STRING_CONCAT_BOOTSTRAP_HANDLE);
} else {
// Java 8: call toString() on StringBuilder
@ -292,9 +292,9 @@ public final class MethodWriter extends GeneratorAdapter {
}
/** Writes a dynamic binary instruction: returnType, lhs, and rhs can be different */
public void writeDynamicBinaryInstruction(Location location, Definition.Type returnType, Definition.Type lhs, Definition.Type rhs,
public void writeDynamicBinaryInstruction(Location location, Class<?> returnType, Class<?> lhs, Class<?> rhs,
Operation operation, int flags) {
org.objectweb.asm.Type methodType = org.objectweb.asm.Type.getMethodType(returnType.type, lhs.type, rhs.type);
Type methodType = Type.getMethodType(getType(returnType), getType(lhs), getType(rhs));
switch (operation) {
case MUL:
@ -310,7 +310,7 @@ public final class MethodWriter extends GeneratorAdapter {
// if either side is primitive, then the + operator should always throw NPE on null,
// so we don't need a special NPE guard.
// otherwise, we need to allow nulls for possible string concatenation.
boolean hasPrimitiveArg = lhs.clazz.isPrimitive() || rhs.clazz.isPrimitive();
boolean hasPrimitiveArg = lhs.isPrimitive() || rhs.isPrimitive();
if (!hasPrimitiveArg) {
flags |= DefBootstrap.OPERATOR_ALLOWS_NULL;
}
@ -343,8 +343,8 @@ public final class MethodWriter extends GeneratorAdapter {
}
/** Writes a static binary instruction */
public void writeBinaryInstruction(Location location, Definition.Type type, Operation operation) {
if ((type.clazz == float.class || type.clazz == double.class) &&
public void writeBinaryInstruction(Location location, Class<?> clazz, Operation operation) {
if ((clazz == float.class || clazz == double.class) &&
(operation == Operation.LSH || operation == Operation.USH ||
operation == Operation.RSH || operation == Operation.BWAND ||
operation == Operation.XOR || operation == Operation.BWOR)) {
@ -352,17 +352,17 @@ public final class MethodWriter extends GeneratorAdapter {
}
switch (operation) {
case MUL: math(GeneratorAdapter.MUL, type.type); break;
case DIV: math(GeneratorAdapter.DIV, type.type); break;
case REM: math(GeneratorAdapter.REM, type.type); break;
case ADD: math(GeneratorAdapter.ADD, type.type); break;
case SUB: math(GeneratorAdapter.SUB, type.type); break;
case LSH: math(GeneratorAdapter.SHL, type.type); break;
case USH: math(GeneratorAdapter.USHR, type.type); break;
case RSH: math(GeneratorAdapter.SHR, type.type); break;
case BWAND: math(GeneratorAdapter.AND, type.type); break;
case XOR: math(GeneratorAdapter.XOR, type.type); break;
case BWOR: math(GeneratorAdapter.OR, type.type); break;
case MUL: math(GeneratorAdapter.MUL, getType(clazz)); break;
case DIV: math(GeneratorAdapter.DIV, getType(clazz)); break;
case REM: math(GeneratorAdapter.REM, getType(clazz)); break;
case ADD: math(GeneratorAdapter.ADD, getType(clazz)); break;
case SUB: math(GeneratorAdapter.SUB, getType(clazz)); break;
case LSH: math(GeneratorAdapter.SHL, getType(clazz)); break;
case USH: math(GeneratorAdapter.USHR, getType(clazz)); break;
case RSH: math(GeneratorAdapter.SHR, getType(clazz)); break;
case BWAND: math(GeneratorAdapter.AND, getType(clazz)); break;
case XOR: math(GeneratorAdapter.XOR, getType(clazz)); break;
case BWOR: math(GeneratorAdapter.OR, getType(clazz)); break;
default:
throw location.createError(new IllegalStateException("Illegal tree structure."));
}
@ -416,7 +416,7 @@ public final class MethodWriter extends GeneratorAdapter {
* @param flavor type of call
* @param params flavor-specific parameters
*/
public void invokeDefCall(String name, org.objectweb.asm.Type methodType, int flavor, Object... params) {
public void invokeDefCall(String name, Type methodType, int flavor, Object... params) {
Object[] args = new Object[params.length + 2];
args[0] = settings.getInitialCallSiteDepth();
args[1] = flavor;


@ -25,6 +25,7 @@ import org.elasticsearch.painless.DefBootstrap;
import org.elasticsearch.painless.Definition;
import org.elasticsearch.painless.Definition.Cast;
import org.elasticsearch.painless.Definition.Type;
import org.elasticsearch.painless.Definition.def;
import org.elasticsearch.painless.Globals;
import org.elasticsearch.painless.Locals;
import org.elasticsearch.painless.Location;
@ -274,12 +275,12 @@ public final class EAssignment extends AExpression {
writer.writeDup(lhs.accessElementCount(), catElementStackSize); // dup the top element and insert it
// before concat helper on stack
lhs.load(writer, globals); // read the current lhs's value
writer.writeAppendStrings(lhs.actual); // append the lhs's value using the StringBuilder
writer.writeAppendStrings(Definition.TypeToClass(lhs.actual)); // append the lhs's value using the StringBuilder
rhs.write(writer, globals); // write the bytecode for the rhs
if (!(rhs instanceof EBinary) || !((EBinary)rhs).cat) { // check to see if the rhs has already done a concatenation
writer.writeAppendStrings(rhs.actual); // append the rhs's value since it hasn't already
if (!(rhs instanceof EBinary) || !((EBinary)rhs).cat) { // check to see if the rhs has already done a concatenation
writer.writeAppendStrings(Definition.TypeToClass(rhs.actual)); // append the rhs's value since it hasn't already
}
writer.writeToStrings(); // put the value for string concat onto the stack
@ -313,9 +314,9 @@ public final class EAssignment extends AExpression {
// write the operation instruction for compound assignment
if (promote.dynamic) {
writer.writeDynamicBinaryInstruction(
location, promote, DefType, DefType, operation, DefBootstrap.OPERATOR_COMPOUND_ASSIGNMENT);
location, Definition.TypeToClass(promote), def.class, def.class, operation, DefBootstrap.OPERATOR_COMPOUND_ASSIGNMENT);
} else {
writer.writeBinaryInstruction(location, promote, operation);
writer.writeBinaryInstruction(location, Definition.TypeToClass(promote), operation);
}
writer.writeCast(back); // if necessary cast the promotion type value back to the lhs's type


@ -649,13 +649,13 @@ public final class EBinary extends AExpression {
left.write(writer, globals);
if (!(left instanceof EBinary) || !((EBinary)left).cat) {
writer.writeAppendStrings(left.actual);
writer.writeAppendStrings(Definition.TypeToClass(left.actual));
}
right.write(writer, globals);
if (!(right instanceof EBinary) || !((EBinary)right).cat) {
writer.writeAppendStrings(right.actual);
writer.writeAppendStrings(Definition.TypeToClass(right.actual));
}
if (!cat) {
@ -684,9 +684,10 @@ public final class EBinary extends AExpression {
if (originallyExplicit) {
flags |= DefBootstrap.OPERATOR_EXPLICIT_CAST;
}
writer.writeDynamicBinaryInstruction(location, actual, left.actual, right.actual, operation, flags);
writer.writeDynamicBinaryInstruction(location, Definition.TypeToClass(actual),
Definition.TypeToClass(left.actual), Definition.TypeToClass(right.actual), operation, flags);
} else {
writer.writeBinaryInstruction(location, actual, operation);
writer.writeBinaryInstruction(location, Definition.TypeToClass(actual), operation);
}
}
}


@ -78,7 +78,7 @@ public class RethrottleTests extends ReindexTestCase {
private void testCase(AbstractBulkByScrollRequestBuilder<?, ?> request, String actionName) throws Exception {
logger.info("Starting test for [{}] with [{}] slices", actionName, request.request().getSlices());
/* Add ten documents per slice so most slices will have many documents to process, having to go to multiple batches.
* we can't rely on all of them doing so, but
* We can't rely on the slices being evenly sized but 10 means we have some pretty big slices.
*/
createIndex("test");
@ -170,6 +170,8 @@ public class RethrottleTests extends ReindexTestCase {
// Now the response should come back quickly because we've rethrottled the request
BulkByScrollResponse response = responseListener.get();
// It'd be bad if the entire request completed in a single batch. The test wouldn't be testing anything.
assertThat("Entire request completed in a single batch. This may invalidate the test as throttling is done between batches.",
response.getBatches(), greaterThanOrEqualTo(numSlices));
}
@ -189,8 +191,9 @@ public class RethrottleTests extends ReindexTestCase {
assertThat(rethrottleResponse.getTasks(), hasSize(1));
response.set(rethrottleResponse);
} catch (ElasticsearchException e) {
// if it's the error we're expecting, rethrow as AssertionError so awaitBusy doesn't exit early
if (e.getCause() instanceof IllegalArgumentException) {
// We want to retry in this case so we throw an assertion error
logger.info("caught unprepared task, retrying until prepared");
throw new AssertionError("Rethrottle request for task [" + taskToRethrottle.getId() + "] failed", e);
} else {
throw e;
@ -206,14 +209,32 @@ public class RethrottleTests extends ReindexTestCase {
do {
ListTasksResponse tasks = client().admin().cluster().prepareListTasks().setActions(actionName).setDetailed(true).get();
tasks.rethrowFailures("Finding tasks to rethrottle");
assertThat(tasks.getTaskGroups(), hasSize(lessThan(2)));
assertThat("tasks are left over from the last execution of this test",
tasks.getTaskGroups(), hasSize(lessThan(2)));
if (0 == tasks.getTaskGroups().size()) {
// The parent task hasn't started yet
continue;
}
TaskGroup taskGroup = tasks.getTaskGroups().get(0);
if (sliceCount != 1 && taskGroup.getChildTasks().size() == 0) {
// If there are child tasks wait for at least one to start
continue;
if (sliceCount != 1) {
BulkByScrollTask.Status status = (BulkByScrollTask.Status) taskGroup.getTaskInfo().getStatus();
/*
* If there are child tasks wait for all of them to start. It
* is possible that we'll end up with some very small slices
* (maybe even empty!) that complete super fast so we have to
* count them too.
*/
long finishedChildStatuses = status.getSliceStatuses().stream()
.filter(n -> n != null)
.count();
logger.info("Expected [{}] total children, [{}] are running and [{}] are finished\n{}",
sliceCount, taskGroup.getChildTasks().size(), finishedChildStatuses, status.getSliceStatuses());
if (sliceCount == finishedChildStatuses) {
fail("all slices finished:\n" + status);
}
if (sliceCount != taskGroup.getChildTasks().size() + finishedChildStatuses) {
continue;
}
}
return taskGroup;
} while (System.nanoTime() - start < TimeUnit.SECONDS.toNanos(10));


@ -30,7 +30,7 @@
# specific language governing permissions and limitations
# under the License.
# Install a plugin and run all the common post installation tests.
# Install a plugin
install_plugin() {
local name=$1
local path="$2"
@ -52,8 +52,6 @@ install_plugin() {
sudo -E -u $ESPLUGIN_COMMAND_USER bash -c "umask $umask && \"$ESHOME/bin/elasticsearch-plugin\" install -batch \"file://$path\""
fi
assert_file_exist "$ESPLUGINS/$name"
assert_file_exist "$ESPLUGINS/$name/plugin-descriptor.properties"
# check we did not accidentally create a log file as root in /usr/share/elasticsearch
assert_file_not_exist "/usr/share/elasticsearch/logs"
@ -66,13 +64,6 @@ install_plugin() {
fi
}
install_jvm_plugin() {
local name=$1
local path="$2"
install_plugin $name "$path" $3
assert_file_exist "$ESPLUGINS/$name/$name"*".jar"
}
# Remove a plugin and make sure its plugin directory is removed.
remove_plugin() {
local name=$1
@ -95,7 +86,7 @@ remove_plugin() {
# placements for non-site plugins.
install_jvm_example() {
local relativePath=${1:-$(readlink -m jvm-example-*.zip)}
install_jvm_plugin jvm-example "$relativePath" $2
install_plugin jvm-example "$relativePath" $2
bin_user=$(find "$ESHOME/bin" -maxdepth 0 -printf "%u")
bin_owner=$(find "$ESHOME/bin" -maxdepth 0 -printf "%g")
@ -156,9 +147,11 @@ install_and_check_plugin() {
local full_name="$prefix-$name"
fi
install_jvm_plugin $full_name "$(readlink -m $full_name-*.zip)"
install_plugin $full_name "$(readlink -m $full_name-*.zip)"
assert_module_or_plugin_directory "$ESPLUGINS/$full_name"
assert_file_exist "$ESPLUGINS/$full_name/plugin-descriptor.properties"
assert_file_exist "$ESPLUGINS/$full_name/$full_name"*".jar"
# analysis plugins have a corresponding analyzers jar
if [ $prefix == 'analysis' ]; then
@ -176,6 +169,17 @@ install_and_check_plugin() {
done
}
# Install a meta plugin
# $1 - the plugin name
# $@ - all remaining arguments are jars that must exist in the plugin's
# installation directory
install_meta_plugin() {
local name=$1
install_plugin $name "$(readlink -m $name-*.zip)"
assert_module_or_plugin_directory "$ESPLUGINS/$name"
}
# Compare a list of plugin names to the plugins in the plugins pom and see if they are the same
# $1 the file containing the list of plugins we want to compare to
# $2 description of the source of the plugin list


@ -182,8 +182,8 @@ setup:
---
"Aggregate After Missing":
- skip:
version: " - 6.99.99"
reason: bug fixed in 7.0.0
version: " - 6.1.99"
reason: bug fixed in 6.2.0
- do:
@ -240,8 +240,8 @@ setup:
---
"Composite aggregation with format":
- skip:
version: " - 6.99.99"
reason: this uses a new option (format) added in 7.0.0
version: " - 6.2.99"
reason: this uses a new option (format) added in 6.3.0
- do:
search:
@ -295,3 +295,31 @@ setup:
- length: { aggregations.test.buckets: 1 }
- match: { aggregations.test.buckets.0.key.date: "2017-10-21" }
- match: { aggregations.test.buckets.0.doc_count: 1 }
---
"Composite aggregation with after_key in the response":
- skip:
version: " - 6.2.99"
reason: starting in 6.3.0 after_key is returned in the response
- do:
search:
index: test
body:
aggregations:
test:
composite:
sources: [
{
"keyword": {
"terms": {
"field": "keyword",
}
}
}
]
- match: {hits.total: 6}
- length: { aggregations.test.buckets: 2 }
- length: { aggregations.test.after_key: 1 }
- match: { aggregations.test.after_key.keyword: "foo" }


@ -194,6 +194,16 @@ public abstract class AbstractScopedSettings extends AbstractComponent {
addSettingsUpdater(setting.newUpdater(consumer, logger, validator));
}
/**
* Adds a settings consumer that is only executed if any setting in the supplied list of settings is changed. In that case, all the
* settings specified in the argument are returned.
*
* Also automatically adds empty consumers for all settings in order to activate logging
*/
public synchronized void addSettingsUpdateConsumer(Consumer<Settings> consumer, List<? extends Setting<?>> settings) {
addSettingsUpdater(Setting.groupedSettingsUpdater(consumer, logger, settings));
}
/**
* Adds a settings consumer for affix settings. Affix settings have a namespace associated to it that needs to be available to the
* consumer in order to be processed correctly.
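
A usage sketch for the new grouped consumer: one callback fires when any setting in the list changes, and the consumer receives all of the listed settings. The setting constants and the `reloadLimits` helper here are hypothetical:

["source","java"]
--------------------------------------------------
import java.util.Arrays;

// hypothetical settings; any change to either one triggers the single consumer
clusterSettings.addSettingsUpdateConsumer(settings -> reloadLimits(settings),
        Arrays.asList(MAX_BYTES_PER_SEC_SETTING, MAX_DOCS_PER_BATCH_SETTING));
--------------------------------------------------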


@ -509,10 +509,10 @@ public class Setting<T> implements ToXContentObject {
@Override
public void apply(Tuple<A, B> value, Settings current, Settings previous) {
if (aSettingUpdater.hasChanged(current, previous)) {
logger.info("updating [{}] from [{}] to [{}]", aSetting.key, aSetting.getRaw(previous), aSetting.getRaw(current));
logSettingUpdate(aSetting, current, previous, logger);
}
if (bSettingUpdater.hasChanged(current, previous)) {
logger.info("updating [{}] from [{}] to [{}]", bSetting.key, bSetting.getRaw(previous), bSetting.getRaw(current));
logSettingUpdate(bSetting, current, previous, logger);
}
consumer.accept(value.v1(), value.v2());
}
@ -524,6 +524,46 @@ public class Setting<T> implements ToXContentObject {
};
}
static AbstractScopedSettings.SettingUpdater<Settings> groupedSettingsUpdater(Consumer<Settings> consumer, Logger logger,
final List<? extends Setting<?>> configuredSettings) {
return new AbstractScopedSettings.SettingUpdater<Settings>() {
private Settings get(Settings settings) {
return settings.filter(s -> {
for (Setting<?> setting : configuredSettings) {
if (setting.key.match(s)) {
return true;
}
}
return false;
});
}
@Override
public boolean hasChanged(Settings current, Settings previous) {
Settings currentSettings = get(current);
Settings previousSettings = get(previous);
return currentSettings.equals(previousSettings) == false;
}
@Override
public Settings getValue(Settings current, Settings previous) {
return get(current);
}
@Override
public void apply(Settings value, Settings current, Settings previous) {
consumer.accept(value);
}
@Override
public String toString() {
return "Updater grouped: " + configuredSettings.stream().map(Setting::getKey).collect(Collectors.joining(", "));
}
};
}
public static class AffixSetting<T> extends Setting<T> {
private final AffixKey key;
private final Function<String, Setting<T>> delegateFactory;
@ -541,7 +581,7 @@ public class Setting<T> implements ToXContentObject {
}
private Stream<String> matchStream(Settings settings) {
return settings.keySet().stream().filter((key) -> match(key)).map(settingKey -> key.getConcreteString(settingKey));
return settings.keySet().stream().filter(this::match).map(key::getConcreteString);
}
public Set<String> getSettingsDependencies(String settingsKey) {
@ -812,9 +852,7 @@ public class Setting<T> implements ToXContentObject {
@Override
public void apply(Settings value, Settings current, Settings previous) {
if (logger.isInfoEnabled()) { // getRaw can create quite some objects
logger.info("updating [{}] from [{}] to [{}]", key, getRaw(previous), getRaw(current));
}
Setting.logSettingUpdate(GroupSetting.this, current, previous, logger);
consumer.accept(value);
}
@ -902,7 +940,7 @@ public class Setting<T> implements ToXContentObject {
@Override
public void apply(T value, Settings current, Settings previous) {
logger.info("updating [{}] from [{}] to [{}]", key, getRaw(previous), getRaw(current));
logSettingUpdate(Setting.this, current, previous, logger);
consumer.accept(value);
}
}
@ -1138,6 +1176,16 @@ public class Setting<T> implements ToXContentObject {
}
}
static void logSettingUpdate(Setting setting, Settings current, Settings previous, Logger logger) {
if (logger.isInfoEnabled()) {
if (setting.isFiltered()) {
logger.info("updating [{}]", setting.key);
} else {
logger.info("updating [{}] from [{}] to [{}]", setting.key, setting.getRaw(previous), setting.getRaw(current));
}
}
}
public static Setting<Settings> groupSetting(String key, Property... properties) {
return groupSetting(key, (s) -> {}, properties);
}
@ -1308,8 +1356,8 @@ public class Setting<T> implements ToXContentObject {
if (suffix == null) {
pattern = Pattern.compile("(" + Pattern.quote(prefix) + "((?:[-\\w]+[.])*[-\\w]+$))");
} else {
// the last part of this regexp is for lists since they are represented as x.${namespace}.y.1, x.${namespace}.y.2
pattern = Pattern.compile("(" + Pattern.quote(prefix) + "([-\\w]+)\\." + Pattern.quote(suffix) + ")(?:\\.\\d+)?");
// the last part of this regexp is to support both list and group keys
pattern = Pattern.compile("(" + Pattern.quote(prefix) + "([-\\w]+)\\." + Pattern.quote(suffix) + ")(?:\\..*)?");
}
}
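
The widened trailing group `(?:\..*)?` accepts both list elements and nested group keys, which the old `(?:\.\d+)?` did not. A quick check mirroring the new test added below; the namespace and keys are illustrative:

["source","java"]
--------------------------------------------------
import java.util.regex.Pattern;

Pattern p = Pattern.compile("(" + Pattern.quote("prefix.") + "([-\\w]+)\\."
        + Pattern.quote("suffix") + ")(?:\\..*)?");
assert p.matcher("prefix.infix.suffix.1").matches();        // list element, matched before too
assert p.matcher("prefix.infix.suffix.anything").matches(); // group key, newly accepted
--------------------------------------------------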


@ -74,7 +74,7 @@ public class ReleasableLock implements Releasable {
return true;
}
public Boolean isHeldByCurrentThread() {
public boolean isHeldByCurrentThread() {
if (holdingThreads == null) {
throw new UnsupportedOperationException("asserts must be enabled");
}
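
The method is assert-only (it throws when assertions are disabled), so the primitive return avoids pointless autoboxing at call sites. A sketch of a typical caller; `readLock` is illustrative:

["source","java"]
--------------------------------------------------
assert readLock.isHeldByCurrentThread() : "caller must hold the read lock";
--------------------------------------------------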


@ -1297,8 +1297,11 @@ public class IndexShard extends AbstractIndexShardComponent implements IndicesCl
assert commitInfo.localCheckpoint >= globalCheckpoint :
"trying to create a shard whose local checkpoint [" + commitInfo.localCheckpoint + "] is < global checkpoint ["
+ globalCheckpoint + "]";
final List<IndexCommit> existingCommits = DirectoryReader.listCommits(store.directory());
assert existingCommits.size() == 1 : "Open index create translog should have one commit, commits[" + existingCommits + "]";
// This assertion is only guaranteed if all nodes are on 6.2+.
if (indexSettings.getIndexVersionCreated().onOrAfter(Version.V_6_2_0)) {
final List<IndexCommit> existingCommits = DirectoryReader.listCommits(store.directory());
assert existingCommits.size() == 1 : "Open index create translog should have one commit, commits[" + existingCommits + "]";
}
}
globalCheckpointTracker.updateGlobalCheckpointOnReplica(globalCheckpoint, "opening index with a new translog");
innerOpenEngineAndTranslog(EngineConfig.OpenMode.OPEN_INDEX_CREATE_TRANSLOG, forceNewHistoryUUID);


@ -52,6 +52,9 @@ public interface CompositeAggregation extends MultiBucketsAggregation {
}
static XContentBuilder toXContentFragment(CompositeAggregation aggregation, XContentBuilder builder, Params params) throws IOException {
if (aggregation.afterKey() != null) {
buildCompositeMap("after_key", aggregation.afterKey(), builder);
}
builder.startArray(CommonFields.BUCKETS.getPreferredName());
for (CompositeAggregation.Bucket bucket : aggregation.getBuckets()) {
bucketToXContent(bucket, builder, params);


@ -136,14 +136,15 @@ final class CompositeAggregator extends BucketsAggregator {
int docCount = bucketDocCount(slot);
buckets[pos++] = new InternalComposite.InternalBucket(sourceNames, formats, key, reverseMuls, docCount, aggs);
}
return new InternalComposite(name, size, sourceNames, formats, Arrays.asList(buckets), reverseMuls,
CompositeKey lastBucket = num > 0 ? buckets[num-1].getRawKey() : null;
return new InternalComposite(name, size, sourceNames, formats, Arrays.asList(buckets), lastBucket, reverseMuls,
pipelineAggregators(), metaData());
}
@Override
public InternalAggregation buildEmptyAggregation() {
final int[] reverseMuls = getReverseMuls();
return new InternalComposite(name, size, sourceNames, formats, Collections.emptyList(), reverseMuls,
return new InternalComposite(name, size, sourceNames, formats, Collections.emptyList(), null, reverseMuls,
pipelineAggregators(), metaData());
}


@ -19,18 +19,38 @@
package org.elasticsearch.search.aggregations.bucket.composite;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
import java.io.IOException;
import java.util.Arrays;
/**
* A key that is composed of multiple {@link Comparable} values.
*/
class CompositeKey {
class CompositeKey implements Writeable {
private final Comparable<?>[] values;
CompositeKey(Comparable<?>... values) {
this.values = values;
}
CompositeKey(StreamInput in) throws IOException {
values = new Comparable<?>[in.readVInt()];
for (int i = 0; i < values.length; i++) {
values[i] = (Comparable<?>) in.readGenericValue();
}
}
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(values.length);
for (int i = 0; i < values.length; i++) {
out.writeGenericValue(values[i]);
}
}
Comparable<?>[] values() {
return values;
}
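
With `CompositeKey` now `Writeable`, it round-trips through the standard stream abstractions. A hedged sketch in the style of the existing stream tests; the key values are illustrative:

["source","java"]
--------------------------------------------------
import java.util.Arrays;
import org.elasticsearch.common.io.stream.BytesStreamOutput;

BytesStreamOutput out = new BytesStreamOutput();
new CompositeKey(1494288000000L, "mad max").writeTo(out);        // serialize
CompositeKey copy = new CompositeKey(out.bytes().streamInput()); // deserialize
assert Arrays.equals(copy.values(), new Comparable<?>[] {1494288000000L, "mad max"});
--------------------------------------------------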


@ -74,7 +74,7 @@ public abstract class CompositeValuesSourceBuilder<AB extends CompositeValuesSou
}
this.missing = in.readGenericValue();
this.order = SortOrder.readFromStream(in);
if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
if (in.getVersion().onOrAfter(Version.V_6_3_0)) {
this.format = in.readOptionalString();
} else {
this.format = null;
@ -97,7 +97,7 @@ public abstract class CompositeValuesSourceBuilder<AB extends CompositeValuesSou
}
out.writeGenericValue(missing);
order.writeTo(out);
if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
if (out.getVersion().onOrAfter(Version.V_6_3_0)) {
out.writeOptionalString(format);
}
innerWriteTo(out);


@ -37,7 +37,6 @@ import java.util.AbstractMap;
import java.util.AbstractSet;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
@ -50,17 +49,19 @@ public class InternalComposite
private final int size;
private final List<InternalBucket> buckets;
private final CompositeKey afterKey;
private final int[] reverseMuls;
private final List<String> sourceNames;
private final List<DocValueFormat> formats;
InternalComposite(String name, int size, List<String> sourceNames, List<DocValueFormat> formats,
List<InternalBucket> buckets, int[] reverseMuls,
List<InternalBucket> buckets, CompositeKey afterKey, int[] reverseMuls,
List<PipelineAggregator> pipelineAggregators, Map<String, Object> metaData) {
super(name, pipelineAggregators, metaData);
this.sourceNames = sourceNames;
this.formats = formats;
this.buckets = buckets;
this.afterKey = afterKey;
this.size = size;
this.reverseMuls = reverseMuls;
}
@ -71,7 +72,7 @@ public class InternalComposite
this.sourceNames = in.readList(StreamInput::readString);
this.formats = new ArrayList<>(sourceNames.size());
for (int i = 0; i < sourceNames.size(); i++) {
if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
if (in.getVersion().onOrAfter(Version.V_6_3_0)) {
formats.add(in.readNamedWriteable(DocValueFormat.class));
} else {
formats.add(DocValueFormat.RAW);
@ -79,19 +80,30 @@ public class InternalComposite
}
this.reverseMuls = in.readIntArray();
this.buckets = in.readList((input) -> new InternalBucket(input, sourceNames, formats, reverseMuls));
if (in.getVersion().onOrAfter(Version.V_6_3_0)) {
this.afterKey = in.readBoolean() ? new CompositeKey(in) : null;
} else {
this.afterKey = buckets.size() > 0 ? buckets.get(buckets.size()-1).key : null;
}
}
@Override
protected void doWriteTo(StreamOutput out) throws IOException {
out.writeVInt(size);
out.writeStringList(sourceNames);
if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
if (out.getVersion().onOrAfter(Version.V_6_3_0)) {
for (DocValueFormat format : formats) {
out.writeNamedWriteable(format);
}
}
out.writeIntArray(reverseMuls);
out.writeList(buckets);
if (out.getVersion().onOrAfter(Version.V_6_3_0)) {
out.writeBoolean(afterKey != null);
if (afterKey != null) {
afterKey.writeTo(out);
}
}
}
@Override
@ -105,8 +117,14 @@ public class InternalComposite
}
@Override
public InternalComposite create(List<InternalBucket> buckets) {
return new InternalComposite(name, size, sourceNames, formats, buckets, reverseMuls, pipelineAggregators(), getMetaData());
public InternalComposite create(List<InternalBucket> newBuckets) {
/**
* This is used by pipeline aggregations to filter/remove buckets so we
* keep the <code>afterKey</code> of the original aggregation in order
* to be able to retrieve the next page even if all buckets have been filtered.
*/
return new InternalComposite(name, size, sourceNames, formats, newBuckets, afterKey,
reverseMuls, pipelineAggregators(), getMetaData());
}
@Override
@ -126,7 +144,10 @@ public class InternalComposite
@Override
public Map<String, Object> afterKey() {
return buckets.size() > 0 ? buckets.get(buckets.size()-1).getKey() : null;
if (afterKey != null) {
return new ArrayMap(sourceNames, formats, afterKey.values());
}
return null;
}
// Visible for tests
@ -169,7 +190,8 @@ public class InternalComposite
reduceContext.consumeBucketsAndMaybeBreak(1);
result.add(reduceBucket);
}
return new InternalComposite(name, size, sourceNames, formats, result, reverseMuls, pipelineAggregators(), metaData);
final CompositeKey lastKey = result.size() > 0 ? result.get(result.size()-1).getRawKey() : null;
return new InternalComposite(name, size, sourceNames, formats, result, lastKey, reverseMuls, pipelineAggregators(), metaData);
}
@Override
@ -177,12 +199,13 @@ public class InternalComposite
InternalComposite that = (InternalComposite) obj;
return Objects.equals(size, that.size) &&
Objects.equals(buckets, that.buckets) &&
Objects.equals(afterKey, that.afterKey) &&
Arrays.equals(reverseMuls, that.reverseMuls);
}
@Override
protected int doHashCode() {
return Objects.hash(size, buckets, Arrays.hashCode(reverseMuls));
return Objects.hash(size, buckets, afterKey, Arrays.hashCode(reverseMuls));
}
private static class BucketIterator implements Comparable<BucketIterator> {
@ -226,11 +249,7 @@ public class InternalComposite
@SuppressWarnings("unchecked")
InternalBucket(StreamInput in, List<String> sourceNames, List<DocValueFormat> formats, int[] reverseMuls) throws IOException {
final Comparable<?>[] values = new Comparable<?>[in.readVInt()];
for (int i = 0; i < values.length; i++) {
values[i] = (Comparable<?>) in.readGenericValue();
}
this.key = new CompositeKey(values);
this.key = new CompositeKey(in);
this.docCount = in.readVLong();
this.aggregations = InternalAggregations.readAggregations(in);
this.reverseMuls = reverseMuls;
@ -240,10 +259,7 @@ public class InternalComposite
@Override
public void writeTo(StreamOutput out) throws IOException {
out.writeVInt(key.size());
for (int i = 0; i < key.size(); i++) {
out.writeGenericValue(key.get(i));
}
key.writeTo(out);
out.writeVLong(docCount);
aggregations.writeTo(out);
}


@ -19,6 +19,7 @@
package org.elasticsearch.search.aggregations.bucket.composite;
import org.elasticsearch.common.ParseField;
import org.elasticsearch.common.xcontent.ObjectParser;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
@ -33,15 +34,26 @@ public class ParsedComposite extends ParsedMultiBucketAggregation<ParsedComposit
new ObjectParser<>(ParsedComposite.class.getSimpleName(), true, ParsedComposite::new);
static {
PARSER.declareField(ParsedComposite::setAfterKey, (p, c) -> p.mapOrdered(), new ParseField("after_key"),
ObjectParser.ValueType.OBJECT);
declareMultiBucketAggregationFields(PARSER,
parser -> ParsedComposite.ParsedBucket.fromXContent(parser),
parser -> null
);
}
private Map<String, Object> afterKey;
public static ParsedComposite fromXContent(XContentParser parser, String name) throws IOException {
ParsedComposite aggregation = PARSER.parse(parser, null);
aggregation.setName(name);
if (aggregation.afterKey == null && aggregation.getBuckets().size() > 0) {
/**
* Previous versions (< 6.3) don't send <code>afterKey</code>
* in the response so we set it to the last returned bucket.
*/
aggregation.setAfterKey(aggregation.getBuckets().get(aggregation.getBuckets().size()-1).key);
}
return aggregation;
}
@ -57,9 +69,16 @@ public class ParsedComposite extends ParsedMultiBucketAggregation<ParsedComposit
@Override
public Map<String, Object> afterKey() {
if (afterKey != null) {
return afterKey;
}
return buckets.size() > 0 ? buckets.get(buckets.size()-1).getKey() : null;
}
private void setAfterKey(Map<String, Object> afterKey) {
this.afterKey = afterKey;
}
@Override
protected XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {
return CompositeAggregation.toXContentFragment(this, builder, params);


@ -21,6 +21,7 @@ package org.elasticsearch.cluster.node;
import com.carrotsearch.randomizedtesting.generators.RandomPicks;
import org.elasticsearch.Version;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.test.ESTestCase;
import java.util.ArrayList;
@ -113,7 +114,9 @@ public class DiscoveryNodesTests extends ESTestCase {
// change an attribute
Map<String, String> attrs = new HashMap<>(node.getAttributes());
attrs.put("new", "new");
node = new DiscoveryNode(node.getName(), node.getId(), node.getAddress(), attrs, node.getRoles(), node.getVersion());
final TransportAddress nodeAddress = node.getAddress();
node = new DiscoveryNode(node.getName(), node.getId(), node.getEphemeralId(), nodeAddress.address().getHostString(),
nodeAddress.getAddress(), nodeAddress, attrs, node.getRoles(), node.getVersion());
}
nodesB.add(node);
}
@ -140,14 +143,21 @@ public class DiscoveryNodesTests extends ESTestCase {
DiscoveryNodes.Delta delta = discoNodesB.delta(discoNodesA);
if (Objects.equals(masterAId, masterBId)) {
assertFalse(delta.masterNodeChanged());
if (masterA == null) {
assertThat(delta.previousMasterNode(), nullValue());
} else {
assertThat(delta.previousMasterNode().getId(), equalTo(masterAId));
}
if (masterB == null) {
assertThat(delta.newMasterNode(), nullValue());
} else {
assertThat(delta.newMasterNode().getId(), equalTo(masterBId));
}
if (Objects.equals(masterAId, masterBId)) {
assertFalse(delta.masterNodeChanged());
} else {
assertTrue(delta.masterNodeChanged());
assertThat(delta.newMasterNode() != null ? delta.newMasterNode().getId() : null, equalTo(masterBId));
assertThat(delta.previousMasterNode() != null ? delta.previousMasterNode().getId() : null, equalTo(masterAId));
}
Set<DiscoveryNode> newNodes = new HashSet<>(nodesB);


@ -38,6 +38,7 @@ import java.util.stream.Stream;
import static org.hamcrest.Matchers.containsString;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.hasSize;
import static org.hamcrest.Matchers.hasToString;
import static org.hamcrest.Matchers.instanceOf;
import static org.hamcrest.Matchers.is;
@ -712,4 +713,79 @@ public class SettingTests extends ESTestCase {
assertThat(setting.get(Settings.EMPTY).getMillis(), equalTo(random.getMillis() * factor));
}
public void testSettingsGroupUpdater() {
Setting<Integer> intSetting = Setting.intSetting("prefix.foo", 1, Property.NodeScope, Property.Dynamic);
Setting<Integer> intSetting2 = Setting.intSetting("prefix.same", 1, Property.NodeScope, Property.Dynamic);
AbstractScopedSettings.SettingUpdater<Settings> updater = Setting.groupedSettingsUpdater(s -> {}, logger,
Arrays.asList(intSetting, intSetting2));
Settings current = Settings.builder().put("prefix.foo", 123).put("prefix.same", 5555).build();
Settings previous = Settings.builder().put("prefix.foo", 321).put("prefix.same", 5555).build();
assertTrue(updater.apply(current, previous));
}
public void testSettingsGroupUpdaterRemoval() {
Setting<Integer> intSetting = Setting.intSetting("prefix.foo", 1, Property.NodeScope, Property.Dynamic);
Setting<Integer> intSetting2 = Setting.intSetting("prefix.same", 1, Property.NodeScope, Property.Dynamic);
AbstractScopedSettings.SettingUpdater<Settings> updater = Setting.groupedSettingsUpdater(s -> {}, logger,
Arrays.asList(intSetting, intSetting2));
Settings current = Settings.builder().put("prefix.same", 5555).build();
Settings previous = Settings.builder().put("prefix.foo", 321).put("prefix.same", 5555).build();
assertTrue(updater.apply(current, previous));
}
public void testSettingsGroupUpdaterWithAffixSetting() {
Setting<Integer> intSetting = Setting.intSetting("prefix.foo", 1, Property.NodeScope, Property.Dynamic);
Setting.AffixSetting<String> prefixKeySetting =
Setting.prefixKeySetting("prefix.foo.bar.", key -> Setting.simpleString(key, Property.NodeScope, Property.Dynamic));
Setting.AffixSetting<String> affixSetting =
Setting.affixKeySetting("prefix.foo.", "suffix", key -> Setting.simpleString(key,Property.NodeScope, Property.Dynamic));
AbstractScopedSettings.SettingUpdater<Settings> updater = Setting.groupedSettingsUpdater(s -> {}, logger,
Arrays.asList(intSetting, prefixKeySetting, affixSetting));
Settings.Builder currentSettingsBuilder = Settings.builder()
.put("prefix.foo.bar.baz", "foo")
.put("prefix.foo.infix.suffix", "foo");
Settings.Builder previousSettingsBuilder = Settings.builder()
.put("prefix.foo.bar.baz", "foo")
.put("prefix.foo.infix.suffix", "foo");
boolean removePrefixKeySetting = randomBoolean();
boolean changePrefixKeySetting = randomBoolean();
boolean removeAffixKeySetting = randomBoolean();
boolean changeAffixKeySetting = randomBoolean();
boolean removeAffixNamespace = randomBoolean();
if (removePrefixKeySetting) {
previousSettingsBuilder.remove("prefix.foo.bar.baz");
}
if (changePrefixKeySetting) {
currentSettingsBuilder.put("prefix.foo.bar.baz", "bar");
}
if (removeAffixKeySetting) {
previousSettingsBuilder.remove("prefix.foo.infix.suffix");
}
if (changeAffixKeySetting) {
currentSettingsBuilder.put("prefix.foo.infix.suffix", "bar");
}
if (removeAffixKeySetting == false && changeAffixKeySetting == false && removeAffixNamespace) {
currentSettingsBuilder.remove("prefix.foo.infix.suffix");
currentSettingsBuilder.put("prefix.foo.infix2.suffix", "bar");
previousSettingsBuilder.put("prefix.foo.infix2.suffix", "bar");
}
boolean expectedChange = removeAffixKeySetting || removePrefixKeySetting || changeAffixKeySetting || changePrefixKeySetting
|| removeAffixNamespace;
assertThat(updater.apply(currentSettingsBuilder.build(), previousSettingsBuilder.build()), is(expectedChange));
}
public void testAffixNamespacesWithGroupSetting() {
final Setting.AffixSetting<Settings> affixSetting =
Setting.affixKeySetting("prefix.","suffix",
(key) -> Setting.groupSetting(key + ".", Setting.Property.Dynamic, Setting.Property.NodeScope));
assertThat(affixSetting.getNamespaces(Settings.builder().put("prefix.infix.suffix", "anything").build()), hasSize(1));
assertThat(affixSetting.getNamespaces(Settings.builder().put("prefix.infix.suffix.anything", "anything").build()), hasSize(1));
}
}


@ -18,16 +18,22 @@
*/
package org.elasticsearch.common.settings;
import org.elasticsearch.common.Strings;
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.logging.ServerLoggers;
import org.elasticsearch.common.settings.Setting.Property;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.MockLogAppender;
import org.elasticsearch.test.rest.FakeRestRequest;
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.function.Consumer;
import static org.hamcrest.CoreMatchers.equalTo;
@ -100,7 +106,43 @@ public class SettingsFilterTests extends ESTestCase {
.build(),
"a.b.*.d"
);
}
public void testFilteredSettingIsNotLogged() throws Exception {
Settings oldSettings = Settings.builder().put("key", "old").build();
Settings newSettings = Settings.builder().put("key", "new").build();
Setting<String> filteredSetting = Setting.simpleString("key", Property.Filtered);
assertExpectedLogMessages((testLogger) -> Setting.logSettingUpdate(filteredSetting, newSettings, oldSettings, testLogger),
new MockLogAppender.SeenEventExpectation("secure logging", "org.elasticsearch.test", Level.INFO, "updating [key]"),
new MockLogAppender.UnseenEventExpectation("unwanted old setting name", "org.elasticsearch.test", Level.INFO, "*old*"),
new MockLogAppender.UnseenEventExpectation("unwanted new setting name", "org.elasticsearch.test", Level.INFO, "*new*")
);
}
public void testRegularSettingUpdateIsFullyLogged() throws Exception {
Settings oldSettings = Settings.builder().put("key", "old").build();
Settings newSettings = Settings.builder().put("key", "new").build();
Setting<String> regularSetting = Setting.simpleString("key");
assertExpectedLogMessages((testLogger) -> Setting.logSettingUpdate(regularSetting, newSettings, oldSettings, testLogger),
new MockLogAppender.SeenEventExpectation("regular logging", "org.elasticsearch.test", Level.INFO,
"updating [key] from [old] to [new]"));
}
private void assertExpectedLogMessages(Consumer<Logger> consumer,
MockLogAppender.LoggingExpectation ... expectations) throws IllegalAccessException {
Logger testLogger = Loggers.getLogger("org.elasticsearch.test");
MockLogAppender appender = new MockLogAppender();
ServerLoggers.addAppender(testLogger, appender);
try {
appender.start();
Arrays.stream(expectations).forEach(appender::addExpectation);
consumer.accept(testLogger);
appender.assertAllExpectationsMatched();
} finally {
ServerLoggers.removeAppender(testLogger, appender);
}
}
private void testFiltering(Settings source, Settings filtered, String... patterns) throws IOException {


@ -129,6 +129,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
return new CompositeAggregationBuilder("name", Collections.singletonList(terms));
}, (result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{keyword=d}", result.afterKey().toString());
assertEquals("{keyword=a}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=c}", result.getBuckets().get(1).getKeyAsString());
@ -146,6 +147,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
.aggregateAfter(Collections.singletonMap("keyword", "a"));
}, (result) -> {
assertEquals(2, result.getBuckets().size());
assertEquals("{keyword=d}", result.afterKey().toString());
assertEquals("{keyword=c}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=d}", result.getBuckets().get(1).getKeyAsString());
@ -174,6 +176,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
return new CompositeAggregationBuilder("name", Collections.singletonList(terms));
}, (result) -> {
assertEquals(4, result.getBuckets().size());
assertEquals("{keyword=zoo}", result.afterKey().toString());
assertEquals("{keyword=bar}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=delta}", result.getBuckets().get(1).getKeyAsString());
@ -193,6 +196,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
.aggregateAfter(Collections.singletonMap("keyword", "car"));
}, (result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{keyword=zoo}", result.afterKey().toString());
assertEquals("{keyword=delta}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=foo}", result.getBuckets().get(1).getKeyAsString());
@ -210,6 +214,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
.aggregateAfter(Collections.singletonMap("keyword", "mar"));
}, (result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{keyword=bar}", result.afterKey().toString());
assertEquals("{keyword=foo}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=delta}", result.getBuckets().get(1).getKeyAsString());
@ -240,6 +245,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
return new CompositeAggregationBuilder("name", Collections.singletonList(terms));
}, (result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{keyword=a}", result.afterKey().toString());
assertEquals("{keyword=a}", result.getBuckets().get(2).getKeyAsString());
assertEquals(2L, result.getBuckets().get(2).getDocCount());
assertEquals("{keyword=c}", result.getBuckets().get(1).getKeyAsString());
@ -258,6 +264,8 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
.aggregateAfter(Collections.singletonMap("keyword", "c"));
}, (result) -> {
assertEquals(result.afterKey().toString(), "{keyword=a}");
assertEquals("{keyword=a}", result.afterKey().toString());
assertEquals(1, result.getBuckets().size());
assertEquals("{keyword=a}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
@ -285,6 +293,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
}, (result) -> {
assertEquals(5, result.getBuckets().size());
assertEquals("{keyword=z}", result.afterKey().toString());
assertEquals("{keyword=a}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=b}", result.getBuckets().get(1).getKeyAsString());
@ -307,6 +316,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
}, (result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{keyword=z}", result.afterKey().toString());
assertEquals("{keyword=c}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=d}", result.getBuckets().get(1).getKeyAsString());
@ -338,6 +348,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
}, (result) -> {
assertEquals(5, result.getBuckets().size());
assertEquals("{keyword=a}", result.afterKey().toString());
assertEquals("{keyword=a}", result.getBuckets().get(4).getKeyAsString());
assertEquals(2L, result.getBuckets().get(4).getDocCount());
assertEquals("{keyword=b}", result.getBuckets().get(3).getKeyAsString());
@ -361,6 +372,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
}, (result) -> {
assertEquals(2, result.getBuckets().size());
assertEquals("{keyword=a}", result.afterKey().toString());
assertEquals("{keyword=a}", result.getBuckets().get(1).getKeyAsString());
assertEquals(2L, result.getBuckets().get(1).getDocCount());
assertEquals("{keyword=b}", result.getBuckets().get(0).getKeyAsString());
@ -395,6 +407,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
),
(result) -> {
assertEquals(4, result.getBuckets().size());
assertEquals("{keyword=d, long=10}", result.afterKey().toString());
assertEquals("{keyword=a, long=0}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=a, long=100}", result.getBuckets().get(1).getKeyAsString());
@ -416,6 +429,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
),
(result) -> {
assertEquals(2, result.getBuckets().size());
assertEquals("{keyword=d, long=10}", result.afterKey().toString());
assertEquals("{keyword=c, long=100}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=d, long=10}", result.getBuckets().get(1).getKeyAsString());
@ -451,6 +465,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
),
(result) -> {
assertEquals(4, result.getBuckets().size());
assertEquals("{keyword=a, long=0}", result.afterKey().toString());
assertEquals("{keyword=a, long=0}", result.getBuckets().get(3).getKeyAsString());
assertEquals(1L, result.getBuckets().get(3).getDocCount());
assertEquals("{keyword=a, long=100}", result.getBuckets().get(2).getKeyAsString());
@ -471,6 +486,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
)).aggregateAfter(createAfterKey("keyword", "d", "long", 10L)
), (result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{keyword=a, long=0}", result.afterKey().toString());
assertEquals("{keyword=a, long=0}", result.getBuckets().get(2).getKeyAsString());
assertEquals(1L, result.getBuckets().get(2).getDocCount());
assertEquals("{keyword=a, long=100}", result.getBuckets().get(1).getKeyAsString());
@ -503,6 +519,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
))
, (result) -> {
assertEquals(10, result.getBuckets().size());
assertEquals("{keyword=z, long=0}", result.afterKey().toString());
assertEquals("{keyword=a, long=0}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=a, long=100}", result.getBuckets().get(1).getKeyAsString());
@ -536,6 +553,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
).aggregateAfter(createAfterKey("keyword", "c", "long", 10L))
, (result) -> {
assertEquals(6, result.getBuckets().size());
assertEquals("{keyword=z, long=100}", result.afterKey().toString());
assertEquals("{keyword=c, long=100}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=d, long=10}", result.getBuckets().get(1).getKeyAsString());
@ -577,6 +595,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
),
(result) -> {
assertEquals(10, result.getBuckets().size());
assertEquals("{keyword=a, long=0}", result.afterKey().toString());
assertEquals("{keyword=a, long=0}", result.getBuckets().get(9).getKeyAsString());
assertEquals(1L, result.getBuckets().get(9).getDocCount());
assertEquals("{keyword=a, long=100}", result.getBuckets().get(8).getKeyAsString());
@ -611,6 +630,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
),
(result) -> {
assertEquals(2, result.getBuckets().size());
assertEquals("{keyword=a, long=0}", result.afterKey().toString());
assertEquals("{keyword=a, long=0}", result.getBuckets().get(1).getKeyAsString());
assertEquals(1L, result.getBuckets().get(1).getDocCount());
assertEquals("{keyword=a, long=100}", result.getBuckets().get(0).getKeyAsString());
@ -644,6 +664,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
)
, (result) -> {
assertEquals(10, result.getBuckets().size());
assertEquals("{keyword=c, long=100, double=0.4}", result.afterKey().toString());
assertEquals("{keyword=a, long=0, double=0.09}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(1).getDocCount());
assertEquals("{keyword=a, long=0, double=0.4}", result.getBuckets().get(1).getKeyAsString());
@ -678,6 +699,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
).aggregateAfter(createAfterKey("keyword", "a", "long", 100L, "double", 0.4d))
,(result) -> {
assertEquals(10, result.getBuckets().size());
assertEquals("{keyword=z, long=0, double=0.09}", result.afterKey().toString());
assertEquals("{keyword=b, long=100, double=0.4}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=c, long=0, double=0.09}", result.getBuckets().get(1).getKeyAsString());
@ -712,6 +734,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
).aggregateAfter(createAfterKey("keyword", "z", "long", 100L, "double", 0.4d))
, (result) -> {
assertEquals(0, result.getBuckets().size());
assertNull(result.afterKey());
}
);
}
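The empty-result hunk above also pins the termination contract: once `after` points past the last composite key, the response carries no buckets and `afterKey()` is null. A hedged client-side sketch of paging on that contract (the `Page` and `Source` interfaces are illustrative stand-ins, not Elasticsearch APIs):

[source,java]
--------------------------------------------------
import java.util.Map;

final class CompositePagingSketch {
    interface Page {
        Map<String, Object> afterKey(); // null once no buckets remain
    }

    interface Source {
        Page fetch(Map<String, Object> after); // null fetches the first page
    }

    // Feeds each response's after key back into the next request and stops
    // when the source reports a null after key.
    static int countPages(Source source) {
        int pages = 0;
        Map<String, Object> after = null;
        while (true) {
            Page page = source.fetch(after);
            pages++;
            after = page.afterKey();
            if (after == null) {
                return pages;
            }
        }
    }
}
--------------------------------------------------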
@ -738,6 +761,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
},
(result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{date=1508457600000}", result.afterKey().toString());
assertEquals("{date=1474329600000}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{date=1508371200000}", result.getBuckets().get(1).getKeyAsString());
@ -757,6 +781,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
}, (result) -> {
assertEquals(2, result.getBuckets().size());
assertEquals("{date=1508457600000}", result.afterKey().toString());
assertEquals("{date=1508371200000}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{date=1508457600000}", result.getBuckets().get(1).getKeyAsString());
@ -788,6 +813,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
},
(result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{date=2017-10-20}", result.afterKey().toString());
assertEquals("{date=2016-09-20}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{date=2017-10-19}", result.getBuckets().get(1).getKeyAsString());
@ -808,6 +834,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
}, (result) -> {
assertEquals(2, result.getBuckets().size());
assertEquals("{date=2017-10-20}", result.afterKey().toString());
assertEquals("{date=2017-10-19}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{date=2017-10-20}", result.getBuckets().get(1).getKeyAsString());
@ -871,6 +898,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
},
(result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{date=1508454000000}", result.afterKey().toString());
assertEquals("{date=1474326000000}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{date=1508367600000}", result.getBuckets().get(1).getKeyAsString());
@ -891,6 +919,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
}, (result) -> {
assertEquals(2, result.getBuckets().size());
assertEquals("{date=1508454000000}", result.afterKey().toString());
assertEquals("{date=1508367600000}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{date=1508454000000}", result.getBuckets().get(1).getKeyAsString());
@ -924,6 +953,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
),
(result) -> {
assertEquals(7, result.getBuckets().size());
assertEquals("{date=1508457600000, keyword=d}", result.afterKey().toString());
assertEquals("{date=1474329600000, keyword=b}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{date=1474329600000, keyword=c}", result.getBuckets().get(1).getKeyAsString());
@ -954,6 +984,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
).aggregateAfter(createAfterKey("date", 1508371200000L, "keyword", "g"))
, (result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{date=1508457600000, keyword=d}", result.afterKey().toString());
assertEquals("{date=1508457600000, keyword=a}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{date=1508457600000, keyword=c}", result.getBuckets().get(1).getKeyAsString());
@ -986,6 +1017,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
)
, (result) -> {
assertEquals(7, result.getBuckets().size());
assertEquals("{keyword=z, price=50.0}", result.afterKey().toString());
assertEquals("{keyword=a, price=100.0}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=b, price=50.0}", result.getBuckets().get(1).getKeyAsString());
@ -1013,6 +1045,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
).aggregateAfter(createAfterKey("keyword", "c", "price", 50.0))
, (result) -> {
assertEquals(4, result.getBuckets().size());
assertEquals("{keyword=z, price=50.0}", result.afterKey().toString());
assertEquals("{keyword=c, price=100.0}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=d, price=100.0}", result.getBuckets().get(1).getKeyAsString());
@ -1052,6 +1085,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
)
, (result) -> {
assertEquals(8, result.getBuckets().size());
assertEquals("{histo=0.9, keyword=d}", result.afterKey().toString());
assertEquals("{histo=0.4, keyword=a}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{histo=0.4, keyword=b}", result.getBuckets().get(1).getKeyAsString());
@ -1081,6 +1115,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
).aggregateAfter(createAfterKey("histo", 0.8d, "keyword", "b"))
, (result) -> {
assertEquals(3, result.getBuckets().size());
assertEquals("{histo=0.9, keyword=d}", result.afterKey().toString());
assertEquals("{histo=0.8, keyword=z}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{histo=0.9, keyword=a}", result.getBuckets().get(1).getKeyAsString());
@ -1114,6 +1149,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
)
, (result) -> {
assertEquals(7, result.getBuckets().size());
assertEquals("{keyword=z, date_histo=1474329600000}", result.afterKey().toString());
assertEquals("{keyword=a, date_histo=1508457600000}", result.getBuckets().get(0).getKeyAsString());
assertEquals(2L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=b, date_histo=1474329600000}", result.getBuckets().get(1).getKeyAsString());
@ -1142,6 +1178,7 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
).aggregateAfter(createAfterKey("keyword","c", "date_histo", 1474329600000L))
, (result) -> {
assertEquals(4, result.getBuckets().size());
assertEquals("{keyword=z, date_histo=1474329600000}", result.afterKey().toString());
assertEquals("{keyword=c, date_histo=1508457600000}", result.getBuckets().get(0).getKeyAsString());
assertEquals(1L, result.getBuckets().get(0).getDocCount());
assertEquals("{keyword=d, date_histo=1508457600000}", result.getBuckets().get(1).getKeyAsString());
@ -1307,7 +1344,6 @@ public class CompositeAggregatorTests extends AggregatorTestCase {
}
}
@SuppressWarnings("unchecked")
private static Map<String, Object> createAfterKey(Object... fields) {
assert fields.length % 2 == 0;
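The body of `createAfterKey` is cut off by the hunk boundary above; judging by its call sites, it presumably packs alternating name/value varargs into a map, along these lines (a sketch under that assumption, not the commit's exact implementation):

[source,java]
--------------------------------------------------
import java.util.HashMap;
import java.util.Map;

final class CreateAfterKeySketch {
    // createAfterKey("keyword", "d", "long", 10L) -> {keyword=d, long=10}
    static Map<String, Object> createAfterKey(Object... fields) {
        assert fields.length % 2 == 0 : "expected alternating name/value pairs";
        Map<String, Object> map = new HashMap<>();
        for (int i = 0; i < fields.length; i += 2) {
            map.put((String) fields[i], fields[i + 1]);
        }
        return map;
    }
}
--------------------------------------------------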


@ -161,7 +161,9 @@ public class InternalCompositeTests extends InternalMultiBucketAggregationTestCa
buckets.add(bucket);
}
Collections.sort(buckets, (o1, o2) -> o1.compareKey(o2));
return new InternalComposite(name, size, sourceNames, formats, buckets, reverseMuls, Collections.emptyList(), metaData);
CompositeKey lastBucket = buckets.size() > 0 ? buckets.get(buckets.size()-1).getRawKey() : null;
return new InternalComposite(name, size, sourceNames, formats, buckets, lastBucket, reverseMuls,
Collections.emptyList(), metaData);
}
@Override
@ -195,7 +197,8 @@ public class InternalCompositeTests extends InternalMultiBucketAggregationTestCa
default:
throw new AssertionError("illegal branch");
}
return new InternalComposite(instance.getName(), instance.getSize(), sourceNames, formats, buckets, reverseMuls,
CompositeKey lastBucket = buckets.size() > 0 ? buckets.get(buckets.size()-1).getRawKey() : null;
return new InternalComposite(instance.getName(), instance.getSize(), sourceNames, formats, buckets, lastBucket, reverseMuls,
instance.pipelineAggregators(), metaData);
}
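Both `InternalCompositeTests` hunks derive the constructor's new after-key argument the same way: the raw key of the last bucket in sorted order, or null when there are no buckets. The rule in isolation (a sketch; the bucket type and key extractor are stand-ins):

[source,java]
--------------------------------------------------
import java.util.List;
import java.util.function.Function;

final class LastBucketKeySketch {
    // The after key handed to the InternalComposite constructor is simply the
    // raw key of the final bucket in sorted order, or null when none exist.
    static <B, K> K afterKeyOf(List<B> sortedBuckets, Function<B, K> rawKey) {
        return sortedBuckets.isEmpty()
                ? null
                : rawKey.apply(sortedBuckets.get(sortedBuckets.size() - 1));
    }
}
--------------------------------------------------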


@ -21,10 +21,11 @@ package org.elasticsearch.common.util;
import com.carrotsearch.randomizedtesting.RandomizedContext;
import com.carrotsearch.randomizedtesting.SeedUtils;
import org.apache.lucene.util.Accountable;
import org.apache.lucene.util.Accountables;
import org.apache.lucene.util.BytesRef;
import org.elasticsearch.common.settings.Settings;
import org.apache.lucene.util.LuceneTestCase;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.indices.breaker.CircuitBreakerService;
import org.elasticsearch.test.ESTestCase;
@ -32,6 +33,7 @@ import org.elasticsearch.test.ESTestCase;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
@ -59,8 +61,17 @@ public class MockBigArrays extends BigArrays {
masterCopy.keySet().retainAll(ACQUIRED_ARRAYS.keySet());
ACQUIRED_ARRAYS.keySet().removeAll(masterCopy.keySet()); // remove all existing master copy we will report on
if (!masterCopy.isEmpty()) {
final Object cause = masterCopy.entrySet().iterator().next().getValue();
throw new RuntimeException(masterCopy.size() + " arrays have not been released", cause instanceof Throwable ? (Throwable) cause : null);
Iterator<Object> causes = masterCopy.values().iterator();
Object firstCause = causes.next();
RuntimeException exception = new RuntimeException(masterCopy.size() + " arrays have not been released",
firstCause instanceof Throwable ? (Throwable) firstCause : null);
while (causes.hasNext()) {
Object cause = causes.next();
if (cause instanceof Throwable) {
exception.addSuppressed((Throwable) cause);
}
}
throw exception;
}
}
}
@ -249,7 +260,9 @@ public class MockBigArrays extends BigArrays {
AbstractArrayWrapper(boolean clearOnResize) {
this.clearOnResize = clearOnResize;
this.originalRelease = new AtomicReference<>();
ACQUIRED_ARRAYS.put(this, TRACK_ALLOCATIONS ? new RuntimeException() : Boolean.TRUE);
ACQUIRED_ARRAYS.put(this,
TRACK_ALLOCATIONS ? new RuntimeException("Unreleased array from test: " + LuceneTestCase.getTestClass().getName())
: Boolean.TRUE);
}
protected abstract BigArray getDelegate();
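The `MockBigArrays` change reports every leaked array in one failure instead of only the first: the first recorded cause becomes the exception's cause and the rest are attached via `Throwable#addSuppressed`. The pattern, extracted as a standalone sketch (names are illustrative):

[source,java]
--------------------------------------------------
import java.util.Iterator;
import java.util.List;

final class LeakReportSketch {
    // Folds all recorded allocation sites into a single exception so the test
    // failure shows every leak, not just an arbitrary first one.
    // Assumes at least one cause, mirroring the !masterCopy.isEmpty() guard.
    static RuntimeException describeLeaks(String resource, List<Object> causes) {
        Iterator<Object> it = causes.iterator();
        Object first = it.next();
        RuntimeException e = new RuntimeException(
                causes.size() + " " + resource + " have not been released",
                first instanceof Throwable ? (Throwable) first : null);
        while (it.hasNext()) {
            Object cause = it.next();
            if (cause instanceof Throwable) {
                e.addSuppressed((Throwable) cause);
            }
        }
        return e;
    }
}
--------------------------------------------------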


@ -19,6 +19,7 @@
package org.elasticsearch.common.util;
import org.apache.lucene.util.LuceneTestCase;
import org.elasticsearch.common.recycler.Recycler.V;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.set.Sets;
@ -27,6 +28,7 @@ import org.elasticsearch.test.ESTestCase;
import java.lang.reflect.Array;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;
@ -48,8 +50,13 @@ public class MockPageCacheRecycler extends PageCacheRecycler {
masterCopy.keySet().retainAll(ACQUIRED_PAGES.keySet());
ACQUIRED_PAGES.keySet().removeAll(masterCopy.keySet()); // remove all existing master copy we will report on
if (!masterCopy.isEmpty()) {
final Throwable t = masterCopy.entrySet().iterator().next().getValue();
throw new RuntimeException(masterCopy.size() + " pages have not been released", t);
Iterator<Throwable> causes = masterCopy.values().iterator();
Throwable firstCause = causes.next();
RuntimeException exception = new RuntimeException(masterCopy.size() + " pages have not been released", firstCause);
while (causes.hasNext()) {
exception.addSuppressed(causes.next());
}
throw exception;
}
}
}
@ -66,7 +73,7 @@ public class MockPageCacheRecycler extends PageCacheRecycler {
}
private <T> V<T> wrap(final V<T> v) {
ACQUIRED_PAGES.put(v, new Throwable());
ACQUIRED_PAGES.put(v, new Throwable("Unreleased Page from test: " + LuceneTestCase.getTestClass().getName()));
return new V<T>() {
@Override
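`MockPageCacheRecycler` gets the same suppressed-cause aggregation, plus a more useful tracking `Throwable`: its message now names the test class that acquired the page, while its stack trace still records the acquisition site. The tracking half of that idea in miniature (illustrative names):

[source,java]
--------------------------------------------------
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class AllocationTrackerSketch<T> {
    private final Map<T, Throwable> acquired = new ConcurrentHashMap<>();

    // The Throwable is created at acquisition time, so its stack trace points
    // at the allocation site; the message names the owning test.
    void onAcquire(T resource, String testClassName) {
        acquired.put(resource,
                new Throwable("Unreleased resource from test: " + testClassName));
    }

    void onRelease(T resource) {
        acquired.remove(resource);
    }

    Map<T, Throwable> liveResources() {
        return acquired;
    }
}
--------------------------------------------------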


@ -92,7 +92,7 @@ public class MockLogAppender extends AbstractAppender {
saw = true;
}
} else {
if (event.getMessage().toString().contains(message)) {
if (event.getMessage().getFormattedMessage().contains(message)) {
saw = true;
}
}
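The `MockLogAppender` fix swaps `Message#toString()` for `Message#getFormattedMessage()`. In Log4j 2 the former is not guaranteed to render the parameters (for example, `ParameterizedMessage#toString()` can return a debug representation), while the latter always substitutes them. A small demonstration, assuming Log4j 2 on the classpath:

[source,java]
--------------------------------------------------
import org.apache.logging.log4j.message.ParameterizedMessage;

final class FormattedMessageDemo {
    public static void main(String[] args) {
        ParameterizedMessage msg =
                new ParameterizedMessage("failed to load [{}]", "index-1");
        // Substitutes the parameters: failed to load [index-1]
        System.out.println(msg.getFormattedMessage());
        // May print a debug form instead of the rendered text, which is why
        // substring matching against toString() was unreliable in the mock.
        System.out.println(msg.toString());
    }
}
--------------------------------------------------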