Mirror of https://github.com/honeymoose/OpenSearch.git (synced 2025-03-25 09:28:27 +00:00)
[ML] Merge the Jindex master feature branch (#36702)
* [ML] Job and datafeed mappings with index template (#32719). Index mappings for the configuration documents.
* [ML] Job config document CRUD operations (#32738)
* [ML] Datafeed config CRUD operations (#32854)
* [ML] Change JobManager to work with Job config in index (#33064)
* [ML] Change Datafeed actions to read config from the config index (#33273)
* [ML] Allocate jobs based on JobParams rather than cluster state config (#33994)
* [ML] Return missing job error when .ml-config does not exist (#34177)
* [ML] Close job in index (#34217)
* [ML] Adjust finalize job action to work with documents (#34226)
* [ML] Job in index: Datafeed node selector (#34218)
* [ML] Job in Index: Stop and preview datafeed (#34605)
* [ML] Delete job document (#34595)
* [ML] Convert job data remover to work with index configs (#34532)
* [ML] Job in index: Get datafeed and job stats from index (#34645)
* [ML] Job in Index: Convert get calendar events to index docs (#34710)
* [ML] Job in index: delete filter action (#34642). This changes the delete filter action to search the index, rather than the cluster state, for jobs that use the filter to be deleted.
* [ML] Job in Index: Enable integ tests (#34851). Enables the ML integration tests, excluding the rolling upgrade tests, and includes a lot of fixes to make the tests pass again.
* [ML] Reimplement established model memory (#35500). This is the 7.0 implementation of a master node service to keep track of the native process memory requirement of each ML job with an associated native process. The new ML memory tracker service works when the whole cluster is upgraded to at least version 6.6. For mixed-version clusters the old mechanism of established model memory stored on the job in cluster state is used. This means that the old (and complex) code to keep established model memory up to date on the job object has been removed in 7.0. Forward port of #35263.
* [ML] Need to wait for shards to replicate in distributed test (#35541). Because the cluster was expanded from 1 node to 3, indices would initially start off with 0 replicas. If the original node was killed before auto-expansion to 1 replica was complete, the test would fail because the indices would be unavailable.
* [ML] DelayedDataCheckConfig index mappings (#35646)
* [ML] JIndex: Restore finalize job action (#35939)
* [ML] Replace Version.CURRENT in streaming functions (#36118)
* [ML] Use 'anomaly-detector' in job config doc name (#36254)
* [ML] Job In Index: Migrate config from the clusterstate (#35834). Migrate ML configuration from cluster state to the index for closed jobs, only once all nodes are v6.6.0 or higher.
* [ML] Check groups against job Ids on update (#36317)
* [ML] Adapt to periodic persistent task refresh (#36633). If https://github.com/elastic/elasticsearch/pull/36069/files is merged, then the approach for reallocating ML persistent tasks after refreshing job memory requirements can be simplified. This change begins the simplification process.
  * Remove AwaitsFix and implement TODO
  * [ML] Default search size for configs
  * Fix TooManyJobsIT.testMultipleNodes. Two problems: 1. Stack overflow during async iteration when there are lots of jobs on the same machine; 2. Not effectively setting the search size in all cases.
  * Use execute() instead of submit() in MlMemoryTracker. We don't need a Future to wait for completion.
  * [ML][TEST] Fix NPE in JobManagerTests
* [ML] JIndex: Limit the size of bulk migrations (#36481)
* [ML] Prevent updates and upgrade tests (#36649)
* [FEATURE][ML] Add cluster setting that enables/disables config migration (#36700). This commit adds a cluster setting called `xpack.ml.enable_config_migration`. The setting is `true` by default. When set to `false`, no config migration will be attempted and non-migrated resources (e.g. jobs, datafeeds) will be able to be updated normally. Relates #32905. (A minimal sketch of toggling this setting follows this list.)
* [ML] Snapshot ml configs before migrating (#36645)
* [FEATURE][ML] Split in batches and migrate all jobs and datafeeds (#36716). Relates #32905.
* SQL: Fix translation of LIKE/RLIKE keywords (#36672). Refactor Like/RLike functions to simplify internals and improve query translation when chained or within a script context. Fix #36039. Fix #36584.
* Fixing line length for EnvironmentTests and RecoveryTests (#36657). Relates #34884.
* Add back one line removed by mistake regarding the Java version check and COMPAT jvm parameter existence
* Do not resolve addresses in remote connection info (#36671). The remote connection info API leads to resolving addresses of seed nodes when invoked. This is problematic because if a hostname fails to resolve, we would not display any remote connection info. Yet a hostname not resolving can happen across remote clusters, especially in the modern world of cloud services with dynamically changing IPs. Instead, the remote connection info API should be providing the configured seed nodes. This commit changes the remote connection info to display the configured seed nodes, avoiding hostname resolution. Note that care was taken to preserve backwards compatibility with previous versions that expect the remote connection info to serialize a transport address instead of a string representing the hostname.
* [Painless] Add boxed type to boxed type casts for method/return (#36571). This adds implicit boxed type to boxed type casts for non-def types to create asymmetric casting relative to the def type when calling methods or returning values. This means that a user calling a method taking an Integer can legally call it with a Byte, Short, etc., which matches the way def works. This creates consistency in the casting model that did not previously exist.
* SNAPSHOTS: Adjust BwC Versions in Restore Logic (#36718). Re-enables BwC tests with adjusted version conditions now that #36397 enables concurrent snapshots in 6.6+.
* ingest: fix on_failure with Drop processor (#36686). This commit allows a document to be dropped when a Drop processor is used in the on_failure fork of the processor chain. Fixes #36151.
* Initialize startup `CcrRepositories` (#36730). Currently, the CcrRepositoryManager only listens for settings updates and installs new repositories. It does not install the repositories that are in the initial settings. This commit modifies the manager to install the initial repositories. Additionally, it modifies the CCR integration test to configure the remote leader node at startup, instead of using a settings update.
* [TEST] fix float comparison in RandomObjects#getExpectedParsedValue. This commit fixes a test bug introduced with #36597, which caused some test failures because stored field value comparisons would not work when the CBOR xcontent type was used. Closes #29080.
* [Geo] Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach (#35320). This commit exposes Lucene's LatLonShape field as the default type in GeoShapeFieldMapper. To use the new indexing approach, simply set "type" : "geo_shape" in the mappings without setting any of the strategy, precision, tree_levels, or distance_error_pct parameters. Note the following when using the new indexing approach:
  * geo_shape query does not support querying by MULTIPOINT.
  * LINESTRING and MULTILINESTRING queries do not yet support WITHIN relation.
  * CONTAINS relation is not yet supported.
  The tree, precision, tree_levels, distance_error_pct, and points_only parameters are deprecated.
* TESTS: Debug Log. IndexStatsIT#testFilterCacheStats
* ingest: support default pipelines + bulk upserts (#36618). This commit adds support to enable bulk upserts to use an index's default pipeline. Bulk upsert, doc_as_upsert, and script_as_upsert are all supported. However, bulk script_as_upsert has slightly surprising behavior since the pipeline is executed _before_ the script is evaluated. This means that the pipeline only has access to the data found in the upsert field of the script_as_upsert. The non-bulk script_as_upsert (existing behavior) runs the pipeline _after_ the script is executed. This commit does _not_ attempt to consolidate the bulk and non-bulk behavior for script_as_upsert. This commit also adds additional testing for the non-bulk behavior, which remains unchanged with this commit. Fixes #36219.
* Fix duplicate phrase in shrink/split error message (#36734). This commit removes a duplicate "must be a" from the shrink/split error messages.
* Deprecate types in get_source and exist_source (#36426). This change adds a new untyped endpoint `{index}/_source/{id}` for both the GET and the HEAD methods to get the source of a document or check for its existence. It also adds deprecation warnings to RestGetSourceAction that emit a warning when the old deprecated "type" parameter is still used. Also updates documentation and tests where appropriate. Relates to #35190.
* Revert "[Geo] Integrate Lucene's LatLonShape (BKD Backed GeoShapes) as default `geo_shape` indexing approach (#35320)". This reverts commit 5bc7822562a6eefa4a64743233160cdc9f431adf.
* Enhance Invalidate Token API (#35388). This change:
  - Adds functionality to invalidate all (refresh+access) tokens for all users of a realm
  - Adds functionality to invalidate all (refresh+access) tokens for a user in all realms
  - Adds functionality to invalidate all (refresh+access) tokens for a user in a specific realm
  - Changes the response format for the invalidate token API to contain information about the number of invalidated tokens and possible errors that were encountered.
  - Updates the API documentation
  After back-porting to 6.x, the `created` field will be removed from master as a field in the response. Resolves: #35115. Relates: #34556.
* Add raw sort values to SearchSortValues transport serialization (#36617). In order for the CCS alternate execution mode (see #32125) to be able to do the final reduction step on the CCS coordinating node, we need to serialize additional info in the transport layer as part of each `SearchHit`. Sort values are already present, but they are formatted according to the provided `DocValueFormat`. The CCS node needs to be able to reconstruct the Lucene `FieldDoc` to include in the `TopFieldDocs` and `CollapseTopFieldDocs` which will feed the `mergeTopDocs` method used to reduce multiple search responses (one per cluster) into one. This commit adds such information to the `SearchSortValues` and exposes it through a new getter method added to `SearchHit` for retrieval. This info is only serialized at transport and never printed out at REST.
* Watcher: Ensure all internal search requests count hits (#36697). In previous commits only the stored toXContent version of a search request was using the old format. However, an executed search request was already disabling hit counts. In 7.0 hit counts will stay enabled by default to allow for proper migration. Closes #36177.
* [TEST] Ensure shard follow tasks have really stopped. Relates to #36696.
* Ensure MapperService#getAllMetaFields elements order is deterministic (#36739). MapperService#getAllMetaFields returns an array, which is created out of an `ObjectHashSet`. Such a set does not guarantee deterministic hash ordering, so the array returned by its toArray may be sorted differently at each run. This caused some repeatability issues in our tests (see #29080), as we pick random fields from the array of possible metadata fields, but that won't be repeatable if the input array is sorted differently at every run. Once the test seed is set, hppc picks it up and the sorting is deterministic, but failures don't repeat with the seed that gets printed out originally (as a seed was not originally set). See also https://issues.carrot2.org/projects/HPPC/issues/HPPC-173. With this commit, we simply create a static sorted array that is used for `getAllMetaFields`. The change is in production code but really affects only testing, as the only production usage of this method was to iterate through all values when parsing fields in the high-level REST client code. Anyway, this seems like a good change, as returning an array would imply that it's deterministically sorted.
* Expose Sequence Number based Optimistic Concurrency Control in the rest layer (#36721). Relates #36148. Relates #10708.
* [ML] Mute MlDistributedFailureIT
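The `xpack.ml.enable_config_migration` setting described in #36700 above is a dynamic cluster setting. The following is a minimal sketch of toggling it from Java; it assumes a connected `Client` handle and uses the standard cluster-update-settings API of that era. Only the setting name comes from the commit message; the rest is illustrative.

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;

public class DisableMlConfigMigration {
    // Sketch only: flips the cluster setting added in #36700 so that ML config
    // migration from cluster state to the config index is not attempted.
    public static void disableMigration(Client client) {
        Settings persistent = Settings.builder()
                .put("xpack.ml.enable_config_migration", false) // default is true
                .build();
        client.admin().cluster()
                .prepareUpdateSettings()
                .setPersistentSettings(persistent)
                .get(); // blocks until the master acknowledges the update
    }
}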
parent: ea9b08dee1
commit: e294056bbf
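The diff below adds fixed task-id prefixes and id-extraction helpers to MlTasks so that job and datafeed persistent tasks live in separate namespaces. Here is a stand-alone sketch of that convention in plain Java; it mirrors the constants and methods visible in the diff but is not the actual Elasticsearch class, and the id "farequote" is just an example.

import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;

public class MlTaskIdSketch {
    // Mirrors the prefixes introduced in MlTasks in the diff below.
    static final String JOB_TASK_ID_PREFIX = "job-";
    static final String DATAFEED_TASK_ID_PREFIX = "datafeed-";

    // A job id and a datafeed id may be equal, but their task ids never collide.
    static String jobTaskId(String jobId) {
        return JOB_TASK_ID_PREFIX + jobId;
    }

    static String datafeedTaskId(String datafeedId) {
        return DATAFEED_TASK_ID_PREFIX + datafeedId;
    }

    // Inverse mapping, as done by MlTasks.openJobIds(): strip the prefix to recover the ids.
    static Set<String> jobIdsFromTaskIds(Set<String> taskIds) {
        Set<String> jobIds = new TreeSet<>();
        for (String taskId : taskIds) {
            if (taskId.startsWith(JOB_TASK_ID_PREFIX)) {
                jobIds.add(taskId.substring(JOB_TASK_ID_PREFIX.length()));
            }
        }
        return jobIds;
    }

    public static void main(String[] args) {
        System.out.println(jobTaskId("farequote"));      // prints: job-farequote
        System.out.println(datafeedTaskId("farequote")); // prints: datafeed-farequote
        System.out.println(jobIdsFromTaskIds(Collections.singleton(jobTaskId("farequote")))); // prints: [farequote]
    }
}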
@ -59,7 +59,6 @@ public class Job implements ToXContentObject {
|
||||
public static final ParseField DATA_DESCRIPTION = new ParseField("data_description");
|
||||
public static final ParseField DESCRIPTION = new ParseField("description");
|
||||
public static final ParseField FINISHED_TIME = new ParseField("finished_time");
|
||||
public static final ParseField ESTABLISHED_MODEL_MEMORY = new ParseField("established_model_memory");
|
||||
public static final ParseField MODEL_PLOT_CONFIG = new ParseField("model_plot_config");
|
||||
public static final ParseField RENORMALIZATION_WINDOW_DAYS = new ParseField("renormalization_window_days");
|
||||
public static final ParseField BACKGROUND_PERSIST_INTERVAL = new ParseField("background_persist_interval");
|
||||
@ -84,7 +83,6 @@ public class Job implements ToXContentObject {
|
||||
(p) -> TimeUtil.parseTimeField(p, FINISHED_TIME.getPreferredName()),
|
||||
FINISHED_TIME,
|
||||
ValueType.VALUE);
|
||||
PARSER.declareLong(Builder::setEstablishedModelMemory, ESTABLISHED_MODEL_MEMORY);
|
||||
PARSER.declareObject(Builder::setAnalysisConfig, AnalysisConfig.PARSER, ANALYSIS_CONFIG);
|
||||
PARSER.declareObject(Builder::setAnalysisLimits, AnalysisLimits.PARSER, ANALYSIS_LIMITS);
|
||||
PARSER.declareObject(Builder::setDataDescription, DataDescription.PARSER, DATA_DESCRIPTION);
|
||||
@ -107,7 +105,6 @@ public class Job implements ToXContentObject {
|
||||
private final String description;
|
||||
private final Date createTime;
|
||||
private final Date finishedTime;
|
||||
private final Long establishedModelMemory;
|
||||
private final AnalysisConfig analysisConfig;
|
||||
private final AnalysisLimits analysisLimits;
|
||||
private final DataDescription dataDescription;
|
||||
@ -122,7 +119,7 @@ public class Job implements ToXContentObject {
|
||||
private final Boolean deleting;
|
||||
|
||||
private Job(String jobId, String jobType, List<String> groups, String description,
|
||||
Date createTime, Date finishedTime, Long establishedModelMemory,
|
||||
Date createTime, Date finishedTime,
|
||||
AnalysisConfig analysisConfig, AnalysisLimits analysisLimits, DataDescription dataDescription,
|
||||
ModelPlotConfig modelPlotConfig, Long renormalizationWindowDays, TimeValue backgroundPersistInterval,
|
||||
Long modelSnapshotRetentionDays, Long resultsRetentionDays, Map<String, Object> customSettings,
|
||||
@ -134,7 +131,6 @@ public class Job implements ToXContentObject {
|
||||
this.description = description;
|
||||
this.createTime = createTime;
|
||||
this.finishedTime = finishedTime;
|
||||
this.establishedModelMemory = establishedModelMemory;
|
||||
this.analysisConfig = analysisConfig;
|
||||
this.analysisLimits = analysisLimits;
|
||||
this.dataDescription = dataDescription;
|
||||
@ -204,16 +200,6 @@ public class Job implements ToXContentObject {
|
||||
return finishedTime;
|
||||
}
|
||||
|
||||
/**
|
||||
* The established model memory of the job, or <code>null</code> if model
|
||||
* memory has not reached equilibrium yet.
|
||||
*
|
||||
* @return The established model memory of the job
|
||||
*/
|
||||
public Long getEstablishedModelMemory() {
|
||||
return establishedModelMemory;
|
||||
}
|
||||
|
||||
/**
|
||||
* The analysis configuration object
|
||||
*
|
||||
@ -306,9 +292,6 @@ public class Job implements ToXContentObject {
|
||||
builder.timeField(FINISHED_TIME.getPreferredName(), FINISHED_TIME.getPreferredName() + humanReadableSuffix,
|
||||
finishedTime.getTime());
|
||||
}
|
||||
if (establishedModelMemory != null) {
|
||||
builder.field(ESTABLISHED_MODEL_MEMORY.getPreferredName(), establishedModelMemory);
|
||||
}
|
||||
builder.field(ANALYSIS_CONFIG.getPreferredName(), analysisConfig, params);
|
||||
if (analysisLimits != null) {
|
||||
builder.field(ANALYSIS_LIMITS.getPreferredName(), analysisLimits, params);
|
||||
@ -364,7 +347,6 @@ public class Job implements ToXContentObject {
|
||||
&& Objects.equals(this.description, that.description)
|
||||
&& Objects.equals(this.createTime, that.createTime)
|
||||
&& Objects.equals(this.finishedTime, that.finishedTime)
|
||||
&& Objects.equals(this.establishedModelMemory, that.establishedModelMemory)
|
||||
&& Objects.equals(this.analysisConfig, that.analysisConfig)
|
||||
&& Objects.equals(this.analysisLimits, that.analysisLimits)
|
||||
&& Objects.equals(this.dataDescription, that.dataDescription)
|
||||
@ -381,7 +363,7 @@ public class Job implements ToXContentObject {
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return Objects.hash(jobId, jobType, groups, description, createTime, finishedTime, establishedModelMemory,
|
||||
return Objects.hash(jobId, jobType, groups, description, createTime, finishedTime,
|
||||
analysisConfig, analysisLimits, dataDescription, modelPlotConfig, renormalizationWindowDays,
|
||||
backgroundPersistInterval, modelSnapshotRetentionDays, resultsRetentionDays, customSettings,
|
||||
modelSnapshotId, resultsIndexName, deleting);
|
||||
@ -407,7 +389,6 @@ public class Job implements ToXContentObject {
|
||||
private DataDescription dataDescription;
|
||||
private Date createTime;
|
||||
private Date finishedTime;
|
||||
private Long establishedModelMemory;
|
||||
private ModelPlotConfig modelPlotConfig;
|
||||
private Long renormalizationWindowDays;
|
||||
private TimeValue backgroundPersistInterval;
|
||||
@ -435,7 +416,6 @@ public class Job implements ToXContentObject {
|
||||
this.dataDescription = job.getDataDescription();
|
||||
this.createTime = job.getCreateTime();
|
||||
this.finishedTime = job.getFinishedTime();
|
||||
this.establishedModelMemory = job.getEstablishedModelMemory();
|
||||
this.modelPlotConfig = job.getModelPlotConfig();
|
||||
this.renormalizationWindowDays = job.getRenormalizationWindowDays();
|
||||
this.backgroundPersistInterval = job.getBackgroundPersistInterval();
|
||||
@ -496,11 +476,6 @@ public class Job implements ToXContentObject {
|
||||
return this;
|
||||
}
|
||||
|
||||
public Builder setEstablishedModelMemory(Long establishedModelMemory) {
|
||||
this.establishedModelMemory = establishedModelMemory;
|
||||
return this;
|
||||
}
|
||||
|
||||
public Builder setDataDescription(DataDescription.Builder description) {
|
||||
dataDescription = Objects.requireNonNull(description, DATA_DESCRIPTION.getPreferredName()).build();
|
||||
return this;
|
||||
@ -555,7 +530,7 @@ public class Job implements ToXContentObject {
|
||||
Objects.requireNonNull(id, "[" + ID.getPreferredName() + "] must not be null");
|
||||
Objects.requireNonNull(jobType, "[" + JOB_TYPE.getPreferredName() + "] must not be null");
|
||||
return new Job(
|
||||
id, jobType, groups, description, createTime, finishedTime, establishedModelMemory,
|
||||
id, jobType, groups, description, createTime, finishedTime,
|
||||
analysisConfig, analysisLimits, dataDescription, modelPlotConfig, renormalizationWindowDays,
|
||||
backgroundPersistInterval, modelSnapshotRetentionDays, resultsRetentionDays, customSettings,
|
||||
modelSnapshotId, resultsIndexName, deleting);
|
||||
|
@ -125,9 +125,6 @@ public class JobTests extends AbstractXContentTestCase<Job> {
|
||||
if (randomBoolean()) {
|
||||
builder.setFinishedTime(new Date(randomNonNegativeLong()));
|
||||
}
|
||||
if (randomBoolean()) {
|
||||
builder.setEstablishedModelMemory(randomNonNegativeLong());
|
||||
}
|
||||
builder.setAnalysisConfig(AnalysisConfigTests.createRandomized());
|
||||
builder.setAnalysisLimits(AnalysisLimitsTests.createRandomized());
|
||||
|
||||
|
@ -42,11 +42,6 @@ so do not set the `background_persist_interval` value too low.
|
||||
`description`::
|
||||
(string) An optional description of the job.
|
||||
|
||||
`established_model_memory`::
|
||||
(long) The approximate amount of memory resources that have been used for
|
||||
analytical processing. This field is present only when the analytics have used
|
||||
a stable amount of memory for several consecutive buckets.
|
||||
|
||||
`finished_time`::
|
||||
(string) If the job closed or failed, this is the time the job finished,
|
||||
otherwise it is `null`. This property is informational; you cannot change its
|
||||
|
@ -65,6 +65,7 @@ import org.elasticsearch.xpack.core.indexlifecycle.action.RetryAction;
|
||||
import org.elasticsearch.xpack.core.logstash.LogstashFeatureSetUsage;
|
||||
import org.elasticsearch.xpack.core.ml.MachineLearningFeatureSetUsage;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.action.CloseJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.DeleteCalendarAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.DeleteCalendarEventAction;
|
||||
@ -363,9 +364,9 @@ public class XPackClientPlugin extends Plugin implements ActionPlugin, NetworkPl
|
||||
new NamedWriteableRegistry.Entry(MetaData.Custom.class, "ml", MlMetadata::new),
|
||||
new NamedWriteableRegistry.Entry(NamedDiff.class, "ml", MlMetadata.MlMetadataDiff::new),
|
||||
// ML - Persistent action requests
|
||||
new NamedWriteableRegistry.Entry(PersistentTaskParams.class, StartDatafeedAction.TASK_NAME,
|
||||
new NamedWriteableRegistry.Entry(PersistentTaskParams.class, MlTasks.DATAFEED_TASK_NAME,
|
||||
StartDatafeedAction.DatafeedParams::new),
|
||||
new NamedWriteableRegistry.Entry(PersistentTaskParams.class, OpenJobAction.TASK_NAME,
|
||||
new NamedWriteableRegistry.Entry(PersistentTaskParams.class, MlTasks.JOB_TASK_NAME,
|
||||
OpenJobAction.JobParams::new),
|
||||
// ML - Task states
|
||||
new NamedWriteableRegistry.Entry(PersistentTaskState.class, JobTaskState.NAME, JobTaskState::new),
|
||||
@ -433,9 +434,9 @@ public class XPackClientPlugin extends Plugin implements ActionPlugin, NetworkPl
|
||||
new NamedXContentRegistry.Entry(MetaData.Custom.class, new ParseField("ml"),
|
||||
parser -> MlMetadata.LENIENT_PARSER.parse(parser, null).build()),
|
||||
// ML - Persistent action requests
|
||||
new NamedXContentRegistry.Entry(PersistentTaskParams.class, new ParseField(StartDatafeedAction.TASK_NAME),
|
||||
new NamedXContentRegistry.Entry(PersistentTaskParams.class, new ParseField(MlTasks.DATAFEED_TASK_NAME),
|
||||
StartDatafeedAction.DatafeedParams::fromXContent),
|
||||
new NamedXContentRegistry.Entry(PersistentTaskParams.class, new ParseField(OpenJobAction.TASK_NAME),
|
||||
new NamedXContentRegistry.Entry(PersistentTaskParams.class, new ParseField(MlTasks.JOB_TASK_NAME),
|
||||
OpenJobAction.JobParams::fromXContent),
|
||||
// ML - Task states
|
||||
new NamedXContentRegistry.Entry(PersistentTaskState.class, new ParseField(DatafeedState.NAME), DatafeedState::fromXContent),
|
||||
|
@ -21,8 +21,6 @@ public final class MlMetaIndex {
|
||||
*/
|
||||
public static final String INDEX_NAME = ".ml-meta";
|
||||
|
||||
public static final String INCLUDE_TYPE_KEY = "include_type";
|
||||
|
||||
public static final String TYPE = "doc";
|
||||
|
||||
private MlMetaIndex() {}
|
||||
|
@ -5,7 +5,6 @@
|
||||
*/
|
||||
package org.elasticsearch.xpack.core.ml;
|
||||
|
||||
import org.elasticsearch.ResourceAlreadyExistsException;
|
||||
import org.elasticsearch.ResourceNotFoundException;
|
||||
import org.elasticsearch.Version;
|
||||
import org.elasticsearch.cluster.AbstractDiffable;
|
||||
@ -146,7 +145,6 @@ public class MlMetadata implements XPackPlugin.XPackMetaDataCustom {
|
||||
datafeeds.put(in.readString(), new DatafeedConfig(in));
|
||||
}
|
||||
this.datafeeds = datafeeds;
|
||||
|
||||
this.groupOrJobLookup = new GroupOrJobLookup(jobs.values());
|
||||
}
|
||||
|
||||
@ -167,7 +165,7 @@ public class MlMetadata implements XPackPlugin.XPackMetaDataCustom {
|
||||
@Override
|
||||
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
|
||||
DelegatingMapParams extendedParams =
|
||||
new DelegatingMapParams(Collections.singletonMap(ToXContentParams.FOR_CLUSTER_STATE, "true"), params);
|
||||
new DelegatingMapParams(Collections.singletonMap(ToXContentParams.FOR_INTERNAL_STORAGE, "true"), params);
|
||||
mapValuesToXContent(JOBS_FIELD, jobs, builder, extendedParams);
|
||||
mapValuesToXContent(DATAFEEDS_FIELD, datafeeds, builder, extendedParams);
|
||||
return builder;
|
||||
@ -196,9 +194,14 @@ public class MlMetadata implements XPackPlugin.XPackMetaDataCustom {
|
||||
this.jobs = DiffableUtils.readJdkMapDiff(in, DiffableUtils.getStringKeySerializer(), Job::new,
|
||||
MlMetadataDiff::readJobDiffFrom);
|
||||
this.datafeeds = DiffableUtils.readJdkMapDiff(in, DiffableUtils.getStringKeySerializer(), DatafeedConfig::new,
|
||||
MlMetadataDiff::readSchedulerDiffFrom);
|
||||
MlMetadataDiff::readDatafeedDiffFrom);
|
||||
}
|
||||
|
||||
/**
|
||||
* Merge the diff with the ML metadata.
|
||||
* @param part The current ML metadata.
|
||||
* @return The new ML metadata.
|
||||
*/
|
||||
@Override
|
||||
public MetaData.Custom apply(MetaData.Custom part) {
|
||||
TreeMap<String, Job> newJobs = new TreeMap<>(jobs.apply(((MlMetadata) part).jobs));
|
||||
@ -221,7 +224,7 @@ public class MlMetadata implements XPackPlugin.XPackMetaDataCustom {
|
||||
return AbstractDiffable.readDiffFrom(Job::new, in);
|
||||
}
|
||||
|
||||
static Diff<DatafeedConfig> readSchedulerDiffFrom(StreamInput in) throws IOException {
|
||||
static Diff<DatafeedConfig> readDatafeedDiffFrom(StreamInput in) throws IOException {
|
||||
return AbstractDiffable.readDiffFrom(DatafeedConfig::new, in);
|
||||
}
|
||||
}
|
||||
@ -295,7 +298,7 @@ public class MlMetadata implements XPackPlugin.XPackMetaDataCustom {
|
||||
|
||||
public Builder putDatafeed(DatafeedConfig datafeedConfig, Map<String, String> headers) {
|
||||
if (datafeeds.containsKey(datafeedConfig.getId())) {
|
||||
throw new ResourceAlreadyExistsException("A datafeed with id [" + datafeedConfig.getId() + "] already exists");
|
||||
throw ExceptionsHelper.datafeedAlreadyExists(datafeedConfig.getId());
|
||||
}
|
||||
String jobId = datafeedConfig.getJobId();
|
||||
checkJobIsAvailableForDatafeed(jobId);
|
||||
@ -369,14 +372,14 @@ public class MlMetadata implements XPackPlugin.XPackMetaDataCustom {
|
||||
}
|
||||
}
|
||||
|
||||
private Builder putJobs(Collection<Job> jobs) {
|
||||
public Builder putJobs(Collection<Job> jobs) {
|
||||
for (Job job : jobs) {
|
||||
putJob(job, true);
|
||||
}
|
||||
return this;
|
||||
}
|
||||
|
||||
private Builder putDatafeeds(Collection<DatafeedConfig> datafeeds) {
|
||||
public Builder putDatafeeds(Collection<DatafeedConfig> datafeeds) {
|
||||
for (DatafeedConfig datafeed : datafeeds) {
|
||||
this.datafeeds.put(datafeed.getId(), datafeed);
|
||||
}
|
||||
@ -421,8 +424,6 @@ public class MlMetadata implements XPackPlugin.XPackMetaDataCustom {
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
||||
public static MlMetadata getMlMetadata(ClusterState state) {
|
||||
MlMetadata mlMetadata = (state == null) ? null : state.getMetaData().custom(TYPE);
|
||||
if (mlMetadata == null) {
|
||||
|
@ -12,8 +12,19 @@ import org.elasticsearch.xpack.core.ml.datafeed.DatafeedState;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobState;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobTaskState;
|
||||
|
||||
import java.util.Collections;
|
||||
import java.util.List;
|
||||
import java.util.Set;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
public final class MlTasks {
|
||||
|
||||
public static final String JOB_TASK_NAME = "xpack/ml/job";
|
||||
public static final String DATAFEED_TASK_NAME = "xpack/ml/datafeed";
|
||||
|
||||
private static final String JOB_TASK_ID_PREFIX = "job-";
|
||||
private static final String DATAFEED_TASK_ID_PREFIX = "datafeed-";
|
||||
|
||||
private MlTasks() {
|
||||
}
|
||||
|
||||
@ -22,7 +33,7 @@ public final class MlTasks {
|
||||
* A datafeed id can be used as a job id, because they are stored separately in cluster state.
|
||||
*/
|
||||
public static String jobTaskId(String jobId) {
|
||||
return "job-" + jobId;
|
||||
return JOB_TASK_ID_PREFIX + jobId;
|
||||
}
|
||||
|
||||
/**
|
||||
@ -30,7 +41,7 @@ public final class MlTasks {
|
||||
* A job id can be used as a datafeed id, because they are stored separately in cluster state.
|
||||
*/
|
||||
public static String datafeedTaskId(String datafeedId) {
|
||||
return "datafeed-" + datafeedId;
|
||||
return DATAFEED_TASK_ID_PREFIX + datafeedId;
|
||||
}
|
||||
|
||||
@Nullable
|
||||
@ -67,4 +78,64 @@ public final class MlTasks {
|
||||
return DatafeedState.STOPPED;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* The job Ids of anomaly detector job tasks.
|
||||
* All anomaly detector jobs are returned regardless of the status of the
|
||||
* task (OPEN, CLOSED, FAILED etc).
|
||||
*
|
||||
* @param tasks Persistent tasks. If null an empty set is returned.
|
||||
* @return The job Ids of anomaly detector job tasks
|
||||
*/
|
||||
public static Set<String> openJobIds(@Nullable PersistentTasksCustomMetaData tasks) {
|
||||
if (tasks == null) {
|
||||
return Collections.emptySet();
|
||||
}
|
||||
|
||||
return tasks.findTasks(JOB_TASK_NAME, task -> true)
|
||||
.stream()
|
||||
.map(t -> t.getId().substring(JOB_TASK_ID_PREFIX.length()))
|
||||
.collect(Collectors.toSet());
|
||||
}
|
||||
|
||||
/**
|
||||
* The datafeed Ids of started datafeed tasks
|
||||
*
|
||||
* @param tasks Persistent tasks. If null an empty set is returned.
|
||||
* @return The Ids of running datafeed tasks
|
||||
*/
|
||||
public static Set<String> startedDatafeedIds(@Nullable PersistentTasksCustomMetaData tasks) {
|
||||
if (tasks == null) {
|
||||
return Collections.emptySet();
|
||||
}
|
||||
|
||||
return tasks.findTasks(DATAFEED_TASK_NAME, task -> true)
|
||||
.stream()
|
||||
.map(t -> t.getId().substring(DATAFEED_TASK_ID_PREFIX.length()))
|
||||
.collect(Collectors.toSet());
|
||||
}
|
||||
|
||||
/**
|
||||
* Is there an ml anomaly detector job task for the job {@code jobId}?
|
||||
* @param jobId The job id
|
||||
* @param tasks Persistent tasks
|
||||
* @return True if the job has a task
|
||||
*/
|
||||
public static boolean taskExistsForJob(String jobId, PersistentTasksCustomMetaData tasks) {
|
||||
return openJobIds(tasks).contains(jobId);
|
||||
}
|
||||
|
||||
/**
|
||||
* Read the active anomaly detector job tasks.
|
||||
* Active tasks are not {@code JobState.CLOSED} or {@code JobState.FAILED}.
|
||||
*
|
||||
* @param tasks Persistent tasks
|
||||
* @return The job tasks excluding closed and failed jobs
|
||||
*/
|
||||
public static List<PersistentTasksCustomMetaData.PersistentTask<?>> activeJobTasks(PersistentTasksCustomMetaData tasks) {
|
||||
return tasks.findTasks(JOB_TASK_NAME, task -> true)
|
||||
.stream()
|
||||
.filter(task -> ((JobTaskState) task.getState()).getState().isAnyOf(JobState.CLOSED, JobState.FAILED) == false)
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
}
|
||||
|
@ -13,6 +13,7 @@ import org.elasticsearch.action.support.master.AcknowledgedResponse;
|
||||
import org.elasticsearch.action.support.master.MasterNodeRequest;
|
||||
import org.elasticsearch.client.ElasticsearchClient;
|
||||
import org.elasticsearch.cluster.metadata.MetaData;
|
||||
import org.elasticsearch.common.Nullable;
|
||||
import org.elasticsearch.common.ParseField;
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.io.stream.StreamInput;
|
||||
@ -26,6 +27,7 @@ import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.tasks.Task;
|
||||
import org.elasticsearch.xpack.core.XPackPlugin;
|
||||
import org.elasticsearch.xpack.core.ml.MachineLearningField;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
|
||||
@ -36,7 +38,7 @@ public class OpenJobAction extends Action<AcknowledgedResponse> {
|
||||
|
||||
public static final OpenJobAction INSTANCE = new OpenJobAction();
|
||||
public static final String NAME = "cluster:admin/xpack/ml/job/open";
|
||||
public static final String TASK_NAME = "xpack/ml/job";
|
||||
|
||||
|
||||
private OpenJobAction() {
|
||||
super(NAME);
|
||||
@ -132,15 +134,16 @@ public class OpenJobAction extends Action<AcknowledgedResponse> {
|
||||
|
||||
/** TODO Remove in 7.0.0 */
|
||||
public static final ParseField IGNORE_DOWNTIME = new ParseField("ignore_downtime");
|
||||
|
||||
public static final ParseField TIMEOUT = new ParseField("timeout");
|
||||
public static ObjectParser<JobParams, Void> PARSER = new ObjectParser<>(TASK_NAME, true, JobParams::new);
|
||||
public static final ParseField JOB = new ParseField("job");
|
||||
|
||||
public static ObjectParser<JobParams, Void> PARSER = new ObjectParser<>(MlTasks.JOB_TASK_NAME, true, JobParams::new);
|
||||
static {
|
||||
PARSER.declareString(JobParams::setJobId, Job.ID);
|
||||
PARSER.declareBoolean((p, v) -> {}, IGNORE_DOWNTIME);
|
||||
PARSER.declareString((params, val) ->
|
||||
params.setTimeout(TimeValue.parseTimeValue(val, TIMEOUT.getPreferredName())), TIMEOUT);
|
||||
PARSER.declareObject(JobParams::setJob, (p, c) -> Job.LENIENT_PARSER.apply(p, c).build(), JOB);
|
||||
}
|
||||
|
||||
public static JobParams fromXContent(XContentParser parser) {
|
||||
@ -159,6 +162,7 @@ public class OpenJobAction extends Action<AcknowledgedResponse> {
|
||||
// A big state can take a while to restore. For symmetry with the _close endpoint any
|
||||
// changes here should be reflected there too.
|
||||
private TimeValue timeout = MachineLearningField.STATE_PERSIST_RESTORE_TIMEOUT;
|
||||
private Job job;
|
||||
|
||||
JobParams() {
|
||||
}
|
||||
@ -170,6 +174,9 @@ public class OpenJobAction extends Action<AcknowledgedResponse> {
|
||||
public JobParams(StreamInput in) throws IOException {
|
||||
jobId = in.readString();
|
||||
timeout = TimeValue.timeValueMillis(in.readVLong());
|
||||
if (in.getVersion().onOrAfter(Version.V_6_6_0)) {
|
||||
job = in.readOptionalWriteable(Job::new);
|
||||
}
|
||||
}
|
||||
|
||||
public String getJobId() {
|
||||
@ -188,15 +195,27 @@ public class OpenJobAction extends Action<AcknowledgedResponse> {
|
||||
this.timeout = timeout;
|
||||
}
|
||||
|
||||
@Nullable
|
||||
public Job getJob() {
|
||||
return job;
|
||||
}
|
||||
|
||||
public void setJob(Job job) {
|
||||
this.job = job;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getWriteableName() {
|
||||
return TASK_NAME;
|
||||
return MlTasks.JOB_TASK_NAME;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void writeTo(StreamOutput out) throws IOException {
|
||||
out.writeString(jobId);
|
||||
out.writeVLong(timeout.millis());
|
||||
if (out.getVersion().onOrAfter(Version.V_6_6_0)) {
|
||||
out.writeOptionalWriteable(job);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -204,13 +223,17 @@ public class OpenJobAction extends Action<AcknowledgedResponse> {
|
||||
builder.startObject();
|
||||
builder.field(Job.ID.getPreferredName(), jobId);
|
||||
builder.field(TIMEOUT.getPreferredName(), timeout.getStringRep());
|
||||
if (job != null) {
|
||||
builder.field("job", job);
|
||||
}
|
||||
builder.endObject();
|
||||
// The job field is streamed but not persisted
|
||||
return builder;
|
||||
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return Objects.hash(jobId, timeout);
|
||||
return Objects.hash(jobId, timeout, job);
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -223,7 +246,8 @@ public class OpenJobAction extends Action<AcknowledgedResponse> {
|
||||
}
|
||||
OpenJobAction.JobParams other = (OpenJobAction.JobParams) obj;
|
||||
return Objects.equals(jobId, other.jobId) &&
|
||||
Objects.equals(timeout, other.timeout);
|
||||
Objects.equals(timeout, other.timeout) &&
|
||||
Objects.equals(job, other.job);
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@ -138,9 +138,7 @@ public class PutDatafeedAction extends Action<PutDatafeedAction.Response> {
|
||||
|
||||
@Override
|
||||
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
|
||||
builder.startObject();
|
||||
datafeed.doXContentBody(builder, params);
|
||||
builder.endObject();
|
||||
datafeed.toXContent(builder, params);
|
||||
return builder;
|
||||
}
|
||||
|
||||
|
@ -26,11 +26,15 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.index.mapper.DateFieldMapper;
|
||||
import org.elasticsearch.xpack.core.XPackPlugin;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Collections;
|
||||
import java.util.List;
|
||||
import java.util.Objects;
|
||||
import java.util.function.LongSupplier;
|
||||
|
||||
@ -42,7 +46,6 @@ public class StartDatafeedAction extends Action<AcknowledgedResponse> {
|
||||
|
||||
public static final StartDatafeedAction INSTANCE = new StartDatafeedAction();
|
||||
public static final String NAME = "cluster:admin/xpack/ml/datafeed/start";
|
||||
public static final String TASK_NAME = "xpack/ml/datafeed";
|
||||
|
||||
private StartDatafeedAction() {
|
||||
super(NAME);
|
||||
@ -141,8 +144,9 @@ public class StartDatafeedAction extends Action<AcknowledgedResponse> {
|
||||
|
||||
public static class DatafeedParams implements XPackPlugin.XPackPersistentTaskParams {
|
||||
|
||||
public static ObjectParser<DatafeedParams, Void> PARSER = new ObjectParser<>(TASK_NAME, true, DatafeedParams::new);
|
||||
public static final ParseField INDICES = new ParseField("indices");
|
||||
|
||||
public static ObjectParser<DatafeedParams, Void> PARSER = new ObjectParser<>(MlTasks.DATAFEED_TASK_NAME, true, DatafeedParams::new);
|
||||
static {
|
||||
PARSER.declareString((params, datafeedId) -> params.datafeedId = datafeedId, DatafeedConfig.ID);
|
||||
PARSER.declareString((params, startTime) -> params.startTime = parseDateOrThrow(
|
||||
@ -150,6 +154,8 @@ public class StartDatafeedAction extends Action<AcknowledgedResponse> {
|
||||
PARSER.declareString(DatafeedParams::setEndTime, END_TIME);
|
||||
PARSER.declareString((params, val) ->
|
||||
params.setTimeout(TimeValue.parseTimeValue(val, TIMEOUT.getPreferredName())), TIMEOUT);
|
||||
PARSER.declareString(DatafeedParams::setJobId, Job.ID);
|
||||
PARSER.declareStringArray(DatafeedParams::setDatafeedIndices, INDICES);
|
||||
}
|
||||
|
||||
static long parseDateOrThrow(String date, ParseField paramName, LongSupplier now) {
|
||||
@ -189,6 +195,10 @@ public class StartDatafeedAction extends Action<AcknowledgedResponse> {
|
||||
startTime = in.readVLong();
|
||||
endTime = in.readOptionalLong();
|
||||
timeout = TimeValue.timeValueMillis(in.readVLong());
|
||||
if (in.getVersion().onOrAfter(Version.V_6_6_0)) {
|
||||
jobId = in.readOptionalString();
|
||||
datafeedIndices = in.readList(StreamInput::readString);
|
||||
}
|
||||
}
|
||||
|
||||
DatafeedParams() {
|
||||
@ -198,6 +208,9 @@ public class StartDatafeedAction extends Action<AcknowledgedResponse> {
|
||||
private long startTime;
|
||||
private Long endTime;
|
||||
private TimeValue timeout = TimeValue.timeValueSeconds(20);
|
||||
private List<String> datafeedIndices = Collections.emptyList();
|
||||
private String jobId;
|
||||
|
||||
|
||||
public String getDatafeedId() {
|
||||
return datafeedId;
|
||||
@ -227,9 +240,25 @@ public class StartDatafeedAction extends Action<AcknowledgedResponse> {
|
||||
this.timeout = timeout;
|
||||
}
|
||||
|
||||
public String getJobId() {
|
||||
return jobId;
|
||||
}
|
||||
|
||||
public void setJobId(String jobId) {
|
||||
this.jobId = jobId;
|
||||
}
|
||||
|
||||
public List<String> getDatafeedIndices() {
|
||||
return datafeedIndices;
|
||||
}
|
||||
|
||||
public void setDatafeedIndices(List<String> datafeedIndices) {
|
||||
this.datafeedIndices = datafeedIndices;
|
||||
}
|
||||
|
||||
@Override
|
||||
public String getWriteableName() {
|
||||
return TASK_NAME;
|
||||
return MlTasks.DATAFEED_TASK_NAME;
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -243,6 +272,10 @@ public class StartDatafeedAction extends Action<AcknowledgedResponse> {
|
||||
out.writeVLong(startTime);
|
||||
out.writeOptionalLong(endTime);
|
||||
out.writeVLong(timeout.millis());
|
||||
if (out.getVersion().onOrAfter(Version.V_6_6_0)) {
|
||||
out.writeOptionalString(jobId);
|
||||
out.writeStringList(datafeedIndices);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -254,13 +287,19 @@ public class StartDatafeedAction extends Action<AcknowledgedResponse> {
|
||||
builder.field(END_TIME.getPreferredName(), String.valueOf(endTime));
|
||||
}
|
||||
builder.field(TIMEOUT.getPreferredName(), timeout.getStringRep());
|
||||
if (jobId != null) {
|
||||
builder.field(Job.ID.getPreferredName(), jobId);
|
||||
}
|
||||
if (datafeedIndices.isEmpty() == false) {
|
||||
builder.field(INDICES.getPreferredName(), datafeedIndices);
|
||||
}
|
||||
builder.endObject();
|
||||
return builder;
|
||||
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return Objects.hash(datafeedId, startTime, endTime, timeout);
|
||||
return Objects.hash(datafeedId, startTime, endTime, timeout, jobId, datafeedIndices);
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -275,7 +314,9 @@ public class StartDatafeedAction extends Action<AcknowledgedResponse> {
|
||||
return Objects.equals(datafeedId, other.datafeedId) &&
|
||||
Objects.equals(startTime, other.startTime) &&
|
||||
Objects.equals(endTime, other.endTime) &&
|
||||
Objects.equals(timeout, other.timeout);
|
||||
Objects.equals(timeout, other.timeout) &&
|
||||
Objects.equals(jobId, other.jobId) &&
|
||||
Objects.equals(datafeedIndices, other.datafeedIndices);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -49,7 +49,6 @@ public class UpdateJobAction extends Action<PutJobAction.Response> {
|
||||
|
||||
/** Indicates an update that was not triggered by a user */
|
||||
private boolean isInternal;
|
||||
private boolean waitForAck = true;
|
||||
|
||||
public Request(String jobId, JobUpdate update) {
|
||||
this(jobId, update, false);
|
||||
@ -83,14 +82,6 @@ public class UpdateJobAction extends Action<PutJobAction.Response> {
|
||||
return isInternal;
|
||||
}
|
||||
|
||||
public boolean isWaitForAck() {
|
||||
return waitForAck;
|
||||
}
|
||||
|
||||
public void setWaitForAck(boolean waitForAck) {
|
||||
this.waitForAck = waitForAck;
|
||||
}
|
||||
|
||||
@Override
|
||||
public ActionRequestValidationException validate() {
|
||||
return null;
|
||||
@ -106,10 +97,8 @@ public class UpdateJobAction extends Action<PutJobAction.Response> {
|
||||
} else {
|
||||
isInternal = false;
|
||||
}
|
||||
if (in.getVersion().onOrAfter(Version.V_6_3_0)) {
|
||||
waitForAck = in.readBoolean();
|
||||
} else {
|
||||
waitForAck = true;
|
||||
if (in.getVersion().onOrAfter(Version.V_6_3_0) && in.getVersion().before(Version.V_7_0_0)) {
|
||||
in.readBoolean(); // was waitForAck
|
||||
}
|
||||
}
|
||||
|
||||
@ -121,8 +110,8 @@ public class UpdateJobAction extends Action<PutJobAction.Response> {
|
||||
if (out.getVersion().onOrAfter(Version.V_6_2_2)) {
|
||||
out.writeBoolean(isInternal);
|
||||
}
|
||||
if (out.getVersion().onOrAfter(Version.V_6_3_0)) {
|
||||
out.writeBoolean(waitForAck);
|
||||
if (out.getVersion().onOrAfter(Version.V_6_3_0) && out.getVersion().before(Version.V_7_0_0)) {
|
||||
out.writeBoolean(false); // was waitForAck
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -13,7 +13,7 @@ import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.xcontent.ObjectParser;
|
||||
import org.elasticsearch.common.xcontent.ToXContentObject;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetaIndex;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Arrays;
|
||||
@ -111,7 +111,7 @@ public class Calendar implements ToXContentObject, Writeable {
|
||||
if (description != null) {
|
||||
builder.field(DESCRIPTION.getPreferredName(), description);
|
||||
}
|
||||
if (params.paramAsBoolean(MlMetaIndex.INCLUDE_TYPE_KEY, false)) {
|
||||
if (params.paramAsBoolean(ToXContentParams.INCLUDE_TYPE, false)) {
|
||||
builder.field(TYPE.getPreferredName(), CALENDAR_TYPE);
|
||||
}
|
||||
builder.endObject();
|
||||
|
@ -15,7 +15,6 @@ import org.elasticsearch.common.xcontent.ObjectParser;
|
||||
import org.elasticsearch.common.xcontent.ToXContentObject;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetaIndex;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.DetectionRule;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Operator;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.RuleAction;
|
||||
@ -23,6 +22,7 @@ import org.elasticsearch.xpack.core.ml.job.config.RuleCondition;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.core.ml.utils.Intervals;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;
|
||||
import org.elasticsearch.xpack.core.ml.utils.time.TimeUtils;
|
||||
|
||||
import java.io.IOException;
|
||||
@ -170,7 +170,7 @@ public class ScheduledEvent implements ToXContentObject, Writeable {
|
||||
if (eventId != null) {
|
||||
builder.field(EVENT_ID.getPreferredName(), eventId);
|
||||
}
|
||||
if (params.paramAsBoolean(MlMetaIndex.INCLUDE_TYPE_KEY, false)) {
|
||||
if (params.paramAsBoolean(ToXContentParams.INCLUDE_TYPE, false)) {
|
||||
builder.field(TYPE.getPreferredName(), SCHEDULED_EVENT_TYPE);
|
||||
}
|
||||
builder.endObject();
|
||||
|
@ -110,6 +110,7 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
|
||||
|
||||
// Used for QueryPage
|
||||
public static final ParseField RESULTS_FIELD = new ParseField("datafeeds");
|
||||
public static String TYPE = "datafeed";
|
||||
|
||||
/**
|
||||
* The field name used to specify document counts in Elasticsearch
|
||||
@ -118,6 +119,7 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
|
||||
public static final String DOC_COUNT = "doc_count";
|
||||
|
||||
public static final ParseField ID = new ParseField("datafeed_id");
|
||||
public static final ParseField CONFIG_TYPE = new ParseField("config_type");
|
||||
public static final ParseField QUERY_DELAY = new ParseField("query_delay");
|
||||
public static final ParseField FREQUENCY = new ParseField("frequency");
|
||||
public static final ParseField INDEXES = new ParseField("indexes");
|
||||
@ -156,6 +158,7 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
|
||||
ObjectParser<Builder, Void> parser = new ObjectParser<>("datafeed_config", ignoreUnknownFields, Builder::new);
|
||||
|
||||
parser.declareString(Builder::setId, ID);
|
||||
parser.declareString((c, s) -> {}, CONFIG_TYPE);
|
||||
parser.declareString(Builder::setJobId, Job.ID);
|
||||
parser.declareStringArray(Builder::setIndices, INDEXES);
|
||||
parser.declareStringArray(Builder::setIndices, INDICES);
|
||||
@ -292,6 +295,16 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
|
||||
this.aggSupplier = new CachedSupplier<>(() -> lazyAggParser.apply(aggregations, id, new ArrayList<>()));
|
||||
}
|
||||
|
||||
/**
|
||||
* The name of datafeed configuration document name from the datafeed ID.
|
||||
*
|
||||
* @param datafeedId The datafeed ID
|
||||
* @return The ID of document the datafeed config is persisted in
|
||||
*/
|
||||
public static String documentId(String datafeedId) {
|
||||
return TYPE + "-" + datafeedId;
|
||||
}
|
||||
|
||||
public String getId() {
|
||||
return id;
|
||||
}
|
||||
@ -300,6 +313,10 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
|
||||
return jobId;
|
||||
}
|
||||
|
||||
public String getConfigType() {
|
||||
return TYPE;
|
||||
}
|
||||
|
||||
public TimeValue getQueryDelay() {
|
||||
return queryDelay;
|
||||
}
|
||||
@ -441,14 +458,11 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
|
||||
@Override
|
||||
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
|
||||
builder.startObject();
|
||||
doXContentBody(builder, params);
|
||||
builder.endObject();
|
||||
return builder;
|
||||
}
|
||||
|
||||
public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {
|
||||
builder.field(ID.getPreferredName(), id);
|
||||
builder.field(Job.ID.getPreferredName(), jobId);
|
||||
if (params.paramAsBoolean(ToXContentParams.INCLUDE_TYPE, false) == true) {
|
||||
builder.field(CONFIG_TYPE.getPreferredName(), TYPE);
|
||||
}
|
||||
builder.field(QUERY_DELAY.getPreferredName(), queryDelay.getStringRep());
|
||||
if (frequency != null) {
|
||||
builder.field(FREQUENCY.getPreferredName(), frequency.getStringRep());
|
||||
@ -470,12 +484,13 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
|
||||
if (chunkingConfig != null) {
|
||||
builder.field(CHUNKING_CONFIG.getPreferredName(), chunkingConfig);
|
||||
}
|
||||
if (headers.isEmpty() == false && params.paramAsBoolean(ToXContentParams.FOR_CLUSTER_STATE, false) == true) {
|
||||
if (headers.isEmpty() == false && params.paramAsBoolean(ToXContentParams.FOR_INTERNAL_STORAGE, false) == true) {
|
||||
builder.field(HEADERS.getPreferredName(), headers);
|
||||
}
|
||||
if (delayedDataCheckConfig != null) {
|
||||
builder.field(DELAYED_DATA_CHECK_CONFIG.getPreferredName(), delayedDataCheckConfig);
|
||||
}
|
||||
builder.endObject();
|
||||
return builder;
|
||||
}
|
||||
|
||||
@ -621,6 +636,10 @@ public class DatafeedConfig extends AbstractDiffable<DatafeedConfig> implements
|
||||
id = ExceptionsHelper.requireNonNull(datafeedId, ID.getPreferredName());
|
||||
}
|
||||
|
||||
public String getId() {
|
||||
return id;
|
||||
}
|
||||
|
||||
public void setJobId(String jobId) {
|
||||
this.jobId = ExceptionsHelper.requireNonNull(jobId, Job.ID.getPreferredName());
|
||||
}
|
||||
|
@ -12,7 +12,7 @@ import org.elasticsearch.common.xcontent.ConstructingObjectParser;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.persistent.PersistentTaskState;
|
||||
import org.elasticsearch.xpack.core.ml.action.StartDatafeedAction;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Locale;
|
||||
@ -23,7 +23,7 @@ public enum DatafeedState implements PersistentTaskState {
|
||||
|
||||
STARTED, STOPPED, STARTING, STOPPING;
|
||||
|
||||
public static final String NAME = StartDatafeedAction.TASK_NAME;
|
||||
public static final String NAME = MlTasks.DATAFEED_TASK_NAME;
|
||||
|
||||
private static final ConstructingObjectParser<DatafeedState, Void> PARSER =
|
||||
new ConstructingObjectParser<>(NAME, args -> fromString((String) args[0]));
|
||||
|
@ -210,7 +210,7 @@ public class DatafeedUpdate implements Writeable, ToXContentObject {
|
||||
}
|
||||
}
|
||||
|
||||
String getJobId() {
|
||||
public String getJobId() {
|
||||
return jobId;
|
||||
}
|
||||
|
||||
|
@ -53,15 +53,15 @@ public class AnalysisConfig implements ToXContentObject, Writeable {
|
||||
* Serialisation names
|
||||
*/
|
||||
public static final ParseField ANALYSIS_CONFIG = new ParseField("analysis_config");
|
||||
private static final ParseField BUCKET_SPAN = new ParseField("bucket_span");
|
||||
private static final ParseField CATEGORIZATION_FIELD_NAME = new ParseField("categorization_field_name");
|
||||
static final ParseField CATEGORIZATION_FILTERS = new ParseField("categorization_filters");
|
||||
private static final ParseField CATEGORIZATION_ANALYZER = CategorizationAnalyzerConfig.CATEGORIZATION_ANALYZER;
|
||||
private static final ParseField LATENCY = new ParseField("latency");
|
||||
private static final ParseField SUMMARY_COUNT_FIELD_NAME = new ParseField("summary_count_field_name");
|
||||
private static final ParseField DETECTORS = new ParseField("detectors");
|
||||
private static final ParseField INFLUENCERS = new ParseField("influencers");
|
||||
private static final ParseField MULTIVARIATE_BY_FIELDS = new ParseField("multivariate_by_fields");
|
||||
public static final ParseField BUCKET_SPAN = new ParseField("bucket_span");
|
||||
public static final ParseField CATEGORIZATION_FIELD_NAME = new ParseField("categorization_field_name");
|
||||
public static final ParseField CATEGORIZATION_FILTERS = new ParseField("categorization_filters");
|
||||
public static final ParseField CATEGORIZATION_ANALYZER = CategorizationAnalyzerConfig.CATEGORIZATION_ANALYZER;
|
||||
public static final ParseField LATENCY = new ParseField("latency");
|
||||
public static final ParseField SUMMARY_COUNT_FIELD_NAME = new ParseField("summary_count_field_name");
|
||||
public static final ParseField DETECTORS = new ParseField("detectors");
|
||||
public static final ParseField INFLUENCERS = new ParseField("influencers");
|
||||
public static final ParseField MULTIVARIATE_BY_FIELDS = new ParseField("multivariate_by_fields");
|
||||
|
||||
public static final String ML_CATEGORY_FIELD = "mlcategory";
|
||||
public static final Set<String> AUTO_CREATED_FIELDS = new HashSet<>(Collections.singletonList(ML_CATEGORY_FIELD));
|
||||
|
@ -53,9 +53,9 @@ import java.util.Objects;
|
||||
public class CategorizationAnalyzerConfig implements ToXContentFragment, Writeable {
|
||||
|
||||
public static final ParseField CATEGORIZATION_ANALYZER = new ParseField("categorization_analyzer");
|
||||
private static final ParseField TOKENIZER = RestAnalyzeAction.Fields.TOKENIZER;
|
||||
private static final ParseField TOKEN_FILTERS = RestAnalyzeAction.Fields.TOKEN_FILTERS;
|
||||
private static final ParseField CHAR_FILTERS = RestAnalyzeAction.Fields.CHAR_FILTERS;
|
||||
public static final ParseField TOKENIZER = RestAnalyzeAction.Fields.TOKENIZER;
|
||||
public static final ParseField TOKEN_FILTERS = RestAnalyzeAction.Fields.TOKEN_FILTERS;
|
||||
public static final ParseField CHAR_FILTERS = RestAnalyzeAction.Fields.CHAR_FILTERS;
|
||||
|
||||
/**
|
||||
* This method is only used in the unit tests - in production code this config is always parsed as a fragment.
|
||||
|
@ -77,12 +77,12 @@ public class DataDescription implements ToXContentObject, Writeable {
|
||||
}
|
||||
}
|
||||
|
||||
private static final ParseField DATA_DESCRIPTION_FIELD = new ParseField("data_description");
|
||||
private static final ParseField FORMAT_FIELD = new ParseField("format");
|
||||
private static final ParseField TIME_FIELD_NAME_FIELD = new ParseField("time_field");
|
||||
private static final ParseField TIME_FORMAT_FIELD = new ParseField("time_format");
|
||||
private static final ParseField FIELD_DELIMITER_FIELD = new ParseField("field_delimiter");
|
||||
private static final ParseField QUOTE_CHARACTER_FIELD = new ParseField("quote_character");
|
||||
public static final ParseField DATA_DESCRIPTION_FIELD = new ParseField("data_description");
|
||||
public static final ParseField FORMAT_FIELD = new ParseField("format");
|
||||
public static final ParseField TIME_FIELD_NAME_FIELD = new ParseField("time_field");
|
||||
public static final ParseField TIME_FORMAT_FIELD = new ParseField("time_format");
|
||||
public static final ParseField FIELD_DELIMITER_FIELD = new ParseField("field_delimiter");
|
||||
public static final ParseField QUOTE_CHARACTER_FIELD = new ParseField("quote_character");
|
||||
|
||||
/**
|
||||
* Special time format string for epoch times (seconds)
|
||||
|
@@ -283,7 +283,7 @@ public class Detector implements ToXContentObject, Writeable {
// negative means "unknown", which should only happen for a 5.4 job
if (detectorIndex >= 0
// no point writing this to cluster state, as the indexes will get reassigned on reload anyway
&& params.paramAsBoolean(ToXContentParams.FOR_CLUSTER_STATE, false) == false) {
&& params.paramAsBoolean(ToXContentParams.FOR_INTERNAL_STORAGE, false) == false) {
builder.field(DETECTOR_INDEX.getPreferredName(), detectorIndex);
}
builder.endObject();
@ -5,6 +5,7 @@
|
||||
*/
|
||||
package org.elasticsearch.xpack.core.ml.job.config;
|
||||
|
||||
import org.elasticsearch.ResourceAlreadyExistsException;
|
||||
import org.elasticsearch.Version;
|
||||
import org.elasticsearch.cluster.AbstractDiffable;
|
||||
import org.elasticsearch.common.Nullable;
|
||||
@ -67,7 +68,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
public static final ParseField DATA_DESCRIPTION = new ParseField("data_description");
|
||||
public static final ParseField DESCRIPTION = new ParseField("description");
|
||||
public static final ParseField FINISHED_TIME = new ParseField("finished_time");
|
||||
public static final ParseField ESTABLISHED_MODEL_MEMORY = new ParseField("established_model_memory");
|
||||
public static final ParseField MODEL_PLOT_CONFIG = new ParseField("model_plot_config");
|
||||
public static final ParseField RENORMALIZATION_WINDOW_DAYS = new ParseField("renormalization_window_days");
|
||||
public static final ParseField BACKGROUND_PERSIST_INTERVAL = new ParseField("background_persist_interval");
|
||||
@ -102,7 +102,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
p -> TimeUtils.parseTimeField(p, CREATE_TIME.getPreferredName()), CREATE_TIME, ValueType.VALUE);
|
||||
parser.declareField(Builder::setFinishedTime,
|
||||
p -> TimeUtils.parseTimeField(p, FINISHED_TIME.getPreferredName()), FINISHED_TIME, ValueType.VALUE);
|
||||
parser.declareLong(Builder::setEstablishedModelMemory, ESTABLISHED_MODEL_MEMORY);
|
||||
parser.declareObject(Builder::setAnalysisConfig, ignoreUnknownFields ? AnalysisConfig.LENIENT_PARSER : AnalysisConfig.STRICT_PARSER,
|
||||
ANALYSIS_CONFIG);
|
||||
parser.declareObject(Builder::setAnalysisLimits, ignoreUnknownFields ? AnalysisLimits.LENIENT_PARSER : AnalysisLimits.STRICT_PARSER,
|
||||
@ -140,7 +139,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
// TODO: Use java.time for the Dates here: x-pack-elasticsearch#829
|
||||
private final Date createTime;
|
||||
private final Date finishedTime;
|
||||
private final Long establishedModelMemory;
|
||||
private final AnalysisConfig analysisConfig;
|
||||
private final AnalysisLimits analysisLimits;
|
||||
private final DataDescription dataDescription;
|
||||
@ -156,7 +154,7 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
private final boolean deleting;
|
||||
|
||||
private Job(String jobId, String jobType, Version jobVersion, List<String> groups, String description,
|
||||
Date createTime, Date finishedTime, Long establishedModelMemory,
|
||||
Date createTime, Date finishedTime,
|
||||
AnalysisConfig analysisConfig, AnalysisLimits analysisLimits, DataDescription dataDescription,
|
||||
ModelPlotConfig modelPlotConfig, Long renormalizationWindowDays, TimeValue backgroundPersistInterval,
|
||||
Long modelSnapshotRetentionDays, Long resultsRetentionDays, Map<String, Object> customSettings,
|
||||
@ -169,7 +167,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
this.description = description;
|
||||
this.createTime = createTime;
|
||||
this.finishedTime = finishedTime;
|
||||
this.establishedModelMemory = establishedModelMemory;
|
||||
this.analysisConfig = analysisConfig;
|
||||
this.analysisLimits = analysisLimits;
|
||||
this.dataDescription = dataDescription;
|
||||
@ -203,10 +200,9 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
in.readVLong();
|
||||
}
|
||||
}
|
||||
if (in.getVersion().onOrAfter(Version.V_6_1_0)) {
|
||||
establishedModelMemory = in.readOptionalLong();
|
||||
} else {
|
||||
establishedModelMemory = null;
|
||||
// for removed establishedModelMemory field
|
||||
if (in.getVersion().onOrAfter(Version.V_6_1_0) && in.getVersion().before(Version.V_7_0_0)) {
|
||||
in.readOptionalLong();
|
||||
}
|
||||
analysisConfig = new AnalysisConfig(in);
|
||||
analysisLimits = in.readOptionalWriteable(AnalysisLimits::new);
|
||||
@@ -228,6 +224,25 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
deleting = in.readBoolean();
}

/**
* Get the persisted job document name from the Job Id.
* Throws if {@code jobId} is not a valid job Id.
*
* @param jobId The job id
* @return The id of the document the job is persisted in
*/
public static String documentId(String jobId) {
if (!MlStrings.isValidId(jobId)) {
throw new IllegalArgumentException(Messages.getMessage(Messages.INVALID_ID, ID.getPreferredName(), jobId));
}
if (!MlStrings.hasValidLengthForId(jobId)) {
throw new IllegalArgumentException(Messages.getMessage(Messages.JOB_CONFIG_ID_TOO_LONG, MlStrings.ID_LENGTH_LIMIT));
}

return ANOMALY_DETECTOR_JOB_TYPE + "-" + jobId;
}

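For orientation, a minimal usage sketch of the new document id helper. The job id "my-job" is invented, and the concrete prefix is simply whatever ANOMALY_DETECTOR_JOB_TYPE resolves to in this codebase; this is not part of the change itself.

    // Hypothetical illustration of Job.documentId()
    String docId = Job.documentId("my-job");   // -> "<anomaly-detector-job-type>-my-job"
    Job.documentId("Invalid Job Id!");         // throws IllegalArgumentException: not a valid ML id
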
/**
|
||||
* Return the Job Id.
|
||||
*
|
||||
@ -295,16 +310,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
return finishedTime;
|
||||
}
|
||||
|
||||
/**
|
||||
* The established model memory of the job, or <code>null</code> if model
|
||||
* memory has not reached equilibrium yet.
|
||||
*
|
||||
* @return The established model memory of the job
|
||||
*/
|
||||
public Long getEstablishedModelMemory() {
|
||||
return establishedModelMemory;
|
||||
}
|
||||
|
||||
/**
|
||||
* The analysis configuration object
|
||||
*
|
||||
@@ -411,21 +416,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
return allFields;
}

/**
* Make a best estimate of the job's memory footprint using the information available.
* If a job has an established model memory size, then this is the best estimate.
* Otherwise, assume the maximum model memory limit will eventually be required.
* In either case, a fixed overhead is added to account for the memory required by the
* program code and stack.
* @return an estimate of the memory requirement of this job, in bytes
*/
public long estimateMemoryFootprint() {
if (establishedModelMemory != null && establishedModelMemory > 0) {
return establishedModelMemory + PROCESS_MEMORY_OVERHEAD.getBytes();
}
return ByteSizeUnit.MB.toBytes(analysisLimits.getModelMemoryLimit()) + PROCESS_MEMORY_OVERHEAD.getBytes();
}

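For reference, the estimate the removed helper produced can be compressed into one expression; this is only a reading of the lines above (note the unit conversion: model_memory_limit is in MB, the overhead constant is in bytes), not new behaviour.

    // Sketch of the removed estimate: prefer the established model memory when known,
    // otherwise fall back to the configured limit, converted from MB to bytes.
    long estimateBytes = (establishedModelMemory != null && establishedModelMemory > 0)
            ? establishedModelMemory + PROCESS_MEMORY_OVERHEAD.getBytes()
            : ByteSizeUnit.MB.toBytes(analysisLimits.getModelMemoryLimit()) + PROCESS_MEMORY_OVERHEAD.getBytes();
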
/**
|
||||
* Returns the timestamp before which data is not accepted by the job.
|
||||
* This is the latest record timestamp minus the job latency.
|
||||
@@ -468,8 +458,9 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
if (out.getVersion().before(Version.V_7_0_0)) {
out.writeBoolean(false);
}
if (out.getVersion().onOrAfter(Version.V_6_1_0)) {
out.writeOptionalLong(establishedModelMemory);
// for removed establishedModelMemory field
if (out.getVersion().onOrAfter(Version.V_6_1_0) && out.getVersion().before(Version.V_7_0_0)) {
out.writeOptionalLong(null);
}
analysisConfig.writeTo(out);
out.writeOptionalWriteable(analysisLimits);
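The write side above and the read side in the earlier constructor hunk have to mirror each other so the wire format stays aligned with 6.1 - 6.x nodes that still expect the removed field. A minimal sketch of the pattern, lifted from this diff (the in/out stream variables are the usual StreamInput/StreamOutput arguments):

    // Write side: still emit a null optional long to nodes on [6.1, 7.0).
    if (out.getVersion().onOrAfter(Version.V_6_1_0) && out.getVersion().before(Version.V_7_0_0)) {
        out.writeOptionalLong(null);    // placeholder for the removed establishedModelMemory
    }
    // Read side: consume and discard the same placeholder when such a node sends it.
    if (in.getVersion().onOrAfter(Version.V_6_1_0) && in.getVersion().before(Version.V_7_0_0)) {
        in.readOptionalLong();
    }
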
@ -520,9 +511,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
builder.timeField(FINISHED_TIME.getPreferredName(), FINISHED_TIME.getPreferredName() + humanReadableSuffix,
|
||||
finishedTime.getTime());
|
||||
}
|
||||
if (establishedModelMemory != null) {
|
||||
builder.field(ESTABLISHED_MODEL_MEMORY.getPreferredName(), establishedModelMemory);
|
||||
}
|
||||
builder.field(ANALYSIS_CONFIG.getPreferredName(), analysisConfig, params);
|
||||
if (analysisLimits != null) {
|
||||
builder.field(ANALYSIS_LIMITS.getPreferredName(), analysisLimits, params);
|
||||
@ -579,7 +567,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
&& Objects.equals(this.description, that.description)
|
||||
&& Objects.equals(this.createTime, that.createTime)
|
||||
&& Objects.equals(this.finishedTime, that.finishedTime)
|
||||
&& Objects.equals(this.establishedModelMemory, that.establishedModelMemory)
|
||||
&& Objects.equals(this.analysisConfig, that.analysisConfig)
|
||||
&& Objects.equals(this.analysisLimits, that.analysisLimits)
|
||||
&& Objects.equals(this.dataDescription, that.dataDescription)
|
||||
@ -597,7 +584,7 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return Objects.hash(jobId, jobType, jobVersion, groups, description, createTime, finishedTime, establishedModelMemory,
|
||||
return Objects.hash(jobId, jobType, jobVersion, groups, description, createTime, finishedTime,
|
||||
analysisConfig, analysisLimits, dataDescription, modelPlotConfig, renormalizationWindowDays,
|
||||
backgroundPersistInterval, modelSnapshotRetentionDays, resultsRetentionDays, customSettings,
|
||||
modelSnapshotId, modelSnapshotMinVersion, resultsIndexName, deleting);
|
||||
@ -638,7 +625,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
private DataDescription dataDescription;
|
||||
private Date createTime;
|
||||
private Date finishedTime;
|
||||
private Long establishedModelMemory;
|
||||
private ModelPlotConfig modelPlotConfig;
|
||||
private Long renormalizationWindowDays;
|
||||
private TimeValue backgroundPersistInterval;
|
||||
@ -668,7 +654,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
this.dataDescription = job.getDataDescription();
|
||||
this.createTime = job.getCreateTime();
|
||||
this.finishedTime = job.getFinishedTime();
|
||||
this.establishedModelMemory = job.getEstablishedModelMemory();
|
||||
this.modelPlotConfig = job.getModelPlotConfig();
|
||||
this.renormalizationWindowDays = job.getRenormalizationWindowDays();
|
||||
this.backgroundPersistInterval = job.getBackgroundPersistInterval();
|
||||
@ -699,8 +684,9 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
in.readVLong();
|
||||
}
|
||||
}
|
||||
if (in.getVersion().onOrAfter(Version.V_6_1_0)) {
|
||||
establishedModelMemory = in.readOptionalLong();
|
||||
// for removed establishedModelMemory field
|
||||
if (in.getVersion().onOrAfter(Version.V_6_1_0) && in.getVersion().before(Version.V_7_0_0)) {
|
||||
in.readOptionalLong();
|
||||
}
|
||||
analysisConfig = in.readOptionalWriteable(AnalysisConfig::new);
|
||||
analysisLimits = in.readOptionalWriteable(AnalysisLimits::new);
|
||||
@ -746,6 +732,10 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
this.groups = groups == null ? Collections.emptyList() : groups;
|
||||
}
|
||||
|
||||
public List<String> getGroups() {
|
||||
return groups;
|
||||
}
|
||||
|
||||
public Builder setCustomSettings(Map<String, Object> customSettings) {
|
||||
this.customSettings = customSettings;
|
||||
return this;
|
||||
@ -780,11 +770,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
return this;
|
||||
}
|
||||
|
||||
public Builder setEstablishedModelMemory(Long establishedModelMemory) {
|
||||
this.establishedModelMemory = establishedModelMemory;
|
||||
return this;
|
||||
}
|
||||
|
||||
public Builder setDataDescription(DataDescription.Builder description) {
|
||||
dataDescription = ExceptionsHelper.requireNonNull(description, DATA_DESCRIPTION.getPreferredName()).build();
|
||||
return this;
|
||||
@ -825,7 +810,7 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
return this;
|
||||
}
|
||||
|
||||
public Builder setModelSnapshotMinVersion(String modelSnapshotMinVersion) {
|
||||
Builder setModelSnapshotMinVersion(String modelSnapshotMinVersion) {
|
||||
this.modelSnapshotMinVersion = Version.fromString(modelSnapshotMinVersion);
|
||||
return this;
|
||||
}
|
||||
@ -890,8 +875,9 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
if (out.getVersion().before(Version.V_7_0_0)) {
|
||||
out.writeBoolean(false);
|
||||
}
|
||||
if (out.getVersion().onOrAfter(Version.V_6_1_0)) {
|
||||
out.writeOptionalLong(establishedModelMemory);
|
||||
// for removed establishedModelMemory field
|
||||
if (out.getVersion().onOrAfter(Version.V_6_1_0) && out.getVersion().before(Version.V_7_0_0)) {
|
||||
out.writeOptionalLong(null);
|
||||
}
|
||||
out.writeOptionalWriteable(analysisConfig);
|
||||
out.writeOptionalWriteable(analysisLimits);
|
||||
@ -934,9 +920,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
if (finishedTime != null) {
|
||||
builder.field(FINISHED_TIME.getPreferredName(), finishedTime.getTime());
|
||||
}
|
||||
if (establishedModelMemory != null) {
|
||||
builder.field(ESTABLISHED_MODEL_MEMORY.getPreferredName(), establishedModelMemory);
|
||||
}
|
||||
if (analysisConfig != null) {
|
||||
builder.field(ANALYSIS_CONFIG.getPreferredName(), analysisConfig, params);
|
||||
}
|
||||
@ -997,7 +980,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
&& Objects.equals(this.dataDescription, that.dataDescription)
|
||||
&& Objects.equals(this.createTime, that.createTime)
|
||||
&& Objects.equals(this.finishedTime, that.finishedTime)
|
||||
&& Objects.equals(this.establishedModelMemory, that.establishedModelMemory)
|
||||
&& Objects.equals(this.modelPlotConfig, that.modelPlotConfig)
|
||||
&& Objects.equals(this.renormalizationWindowDays, that.renormalizationWindowDays)
|
||||
&& Objects.equals(this.backgroundPersistInterval, that.backgroundPersistInterval)
|
||||
@ -1013,7 +995,7 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return Objects.hash(id, jobType, jobVersion, groups, description, analysisConfig, analysisLimits, dataDescription,
|
||||
createTime, finishedTime, establishedModelMemory, modelPlotConfig, renormalizationWindowDays,
|
||||
createTime, finishedTime, modelPlotConfig, renormalizationWindowDays,
|
||||
backgroundPersistInterval, modelSnapshotRetentionDays, resultsRetentionDays, customSettings, modelSnapshotId,
|
||||
modelSnapshotMinVersion, resultsIndexName, deleting);
|
||||
}
|
||||
@ -1072,6 +1054,10 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
if (MlStrings.isValidId(group) == false) {
|
||||
throw new IllegalArgumentException(Messages.getMessage(Messages.INVALID_GROUP, group));
|
||||
}
|
||||
if (this.id.equals(group)) {
|
||||
// cannot have a group name the same as the job id
|
||||
throw new ResourceAlreadyExistsException(Messages.getMessage(Messages.JOB_AND_GROUP_NAMES_MUST_BE_UNIQUE, group));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -1085,11 +1071,6 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
public Job build(Date createTime) {
|
||||
setCreateTime(createTime);
|
||||
setJobVersion(Version.CURRENT);
|
||||
// TODO: Maybe we _could_ accept a value for this supplied at create time - it would
|
||||
// mean cloned jobs that hadn't been edited much would start with an accurate expected size.
|
||||
// But on the other hand it would mean jobs that were cloned and then completely changed
|
||||
// would start with a size that was completely wrong.
|
||||
setEstablishedModelMemory(null);
|
||||
return build();
|
||||
}
|
||||
|
||||
@ -1125,7 +1106,7 @@ public class Job extends AbstractDiffable<Job> implements Writeable, ToXContentO
|
||||
}
|
||||
|
||||
return new Job(
|
||||
id, jobType, jobVersion, groups, description, createTime, finishedTime, establishedModelMemory,
|
||||
id, jobType, jobVersion, groups, description, createTime, finishedTime,
|
||||
analysisConfig, analysisLimits, dataDescription, modelPlotConfig, renormalizationWindowDays,
|
||||
backgroundPersistInterval, modelSnapshotRetentionDays, resultsRetentionDays, customSettings,
|
||||
modelSnapshotId, modelSnapshotMinVersion, resultsIndexName, deleting);
|
||||
|
@ -14,7 +14,7 @@ import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.persistent.PersistentTaskState;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData.PersistentTask;
|
||||
import org.elasticsearch.xpack.core.ml.action.OpenJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Objects;
|
||||
@ -23,7 +23,7 @@ import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constru
|
||||
|
||||
public class JobTaskState implements PersistentTaskState {
|
||||
|
||||
public static final String NAME = OpenJobAction.TASK_NAME;
|
||||
public static final String NAME = MlTasks.JOB_TASK_NAME;
|
||||
|
||||
private static ParseField STATE = new ParseField("state");
|
||||
private static ParseField ALLOCATION_ID = new ParseField("allocation_id");
|
||||
|
@ -29,6 +29,7 @@ import java.util.TreeSet;
|
||||
|
||||
public class JobUpdate implements Writeable, ToXContentObject {
|
||||
public static final ParseField DETECTORS = new ParseField("detectors");
|
||||
public static final ParseField CLEAR_JOB_FINISH_TIME = new ParseField("clear_job_finish_time");
|
||||
|
||||
// For internal updates
|
||||
static final ConstructingObjectParser<Builder, Void> INTERNAL_PARSER = new ConstructingObjectParser<>(
|
||||
@ -56,9 +57,9 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
}
|
||||
// These fields should not be set by a REST request
|
||||
INTERNAL_PARSER.declareString(Builder::setModelSnapshotId, Job.MODEL_SNAPSHOT_ID);
|
||||
INTERNAL_PARSER.declareLong(Builder::setEstablishedModelMemory, Job.ESTABLISHED_MODEL_MEMORY);
|
||||
INTERNAL_PARSER.declareString(Builder::setModelSnapshotMinVersion, Job.MODEL_SNAPSHOT_MIN_VERSION);
|
||||
INTERNAL_PARSER.declareString(Builder::setJobVersion, Job.JOB_VERSION);
|
||||
INTERNAL_PARSER.declareBoolean(Builder::setClearFinishTime, CLEAR_JOB_FINISH_TIME);
|
||||
}
|
||||
|
||||
private final String jobId;
|
||||
@ -75,8 +76,8 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
private final Map<String, Object> customSettings;
|
||||
private final String modelSnapshotId;
|
||||
private final Version modelSnapshotMinVersion;
|
||||
private final Long establishedModelMemory;
|
||||
private final Version jobVersion;
|
||||
private final Boolean clearJobFinishTime;
|
||||
|
||||
private JobUpdate(String jobId, @Nullable List<String> groups, @Nullable String description,
|
||||
@Nullable List<DetectorUpdate> detectorUpdates, @Nullable ModelPlotConfig modelPlotConfig,
|
||||
@ -84,8 +85,7 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
@Nullable Long renormalizationWindowDays, @Nullable Long resultsRetentionDays,
|
||||
@Nullable Long modelSnapshotRetentionDays, @Nullable List<String> categorisationFilters,
|
||||
@Nullable Map<String, Object> customSettings, @Nullable String modelSnapshotId,
|
||||
@Nullable Version modelSnapshotMinVersion, @Nullable Long establishedModelMemory,
|
||||
@Nullable Version jobVersion) {
|
||||
@Nullable Version modelSnapshotMinVersion, @Nullable Version jobVersion, @Nullable Boolean clearJobFinishTime) {
|
||||
this.jobId = jobId;
|
||||
this.groups = groups;
|
||||
this.description = description;
|
||||
@ -100,8 +100,8 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
this.customSettings = customSettings;
|
||||
this.modelSnapshotId = modelSnapshotId;
|
||||
this.modelSnapshotMinVersion = modelSnapshotMinVersion;
|
||||
this.establishedModelMemory = establishedModelMemory;
|
||||
this.jobVersion = jobVersion;
|
||||
this.clearJobFinishTime = clearJobFinishTime;
|
||||
}
|
||||
|
||||
public JobUpdate(StreamInput in) throws IOException {
|
||||
@ -131,16 +131,20 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
}
|
||||
customSettings = in.readMap();
|
||||
modelSnapshotId = in.readOptionalString();
|
||||
if (in.getVersion().onOrAfter(Version.V_6_1_0)) {
|
||||
establishedModelMemory = in.readOptionalLong();
|
||||
} else {
|
||||
establishedModelMemory = null;
|
||||
// was establishedModelMemory
|
||||
if (in.getVersion().onOrAfter(Version.V_6_1_0) && in.getVersion().before(Version.V_7_0_0)) {
|
||||
in.readOptionalLong();
|
||||
}
|
||||
if (in.getVersion().onOrAfter(Version.V_6_3_0) && in.readBoolean()) {
|
||||
jobVersion = Version.readVersion(in);
|
||||
} else {
|
||||
jobVersion = null;
|
||||
}
|
||||
if (in.getVersion().onOrAfter(Version.V_6_6_0)) {
|
||||
clearJobFinishTime = in.readOptionalBoolean();
|
||||
} else {
|
||||
clearJobFinishTime = null;
|
||||
}
|
||||
if (in.getVersion().onOrAfter(Version.V_7_0_0) && in.readBoolean()) {
|
||||
modelSnapshotMinVersion = Version.readVersion(in);
|
||||
} else {
|
||||
@ -172,8 +176,9 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
}
|
||||
out.writeMap(customSettings);
|
||||
out.writeOptionalString(modelSnapshotId);
|
||||
if (out.getVersion().onOrAfter(Version.V_6_1_0)) {
|
||||
out.writeOptionalLong(establishedModelMemory);
|
||||
// was establishedModelMemory
|
||||
if (out.getVersion().onOrAfter(Version.V_6_1_0) && out.getVersion().before(Version.V_7_0_0)) {
|
||||
out.writeOptionalLong(null);
|
||||
}
|
||||
if (out.getVersion().onOrAfter(Version.V_6_3_0)) {
|
||||
if (jobVersion != null) {
|
||||
@ -183,6 +188,9 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
out.writeBoolean(false);
|
||||
}
|
||||
}
|
||||
if (out.getVersion().onOrAfter(Version.V_6_6_0)) {
|
||||
out.writeOptionalBoolean(clearJobFinishTime);
|
||||
}
|
||||
if (out.getVersion().onOrAfter(Version.V_7_0_0)) {
|
||||
if (modelSnapshotMinVersion != null) {
|
||||
out.writeBoolean(true);
|
||||
@ -249,14 +257,14 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
return modelSnapshotMinVersion;
|
||||
}
|
||||
|
||||
public Long getEstablishedModelMemory() {
|
||||
return establishedModelMemory;
|
||||
}
|
||||
|
||||
public Version getJobVersion() {
|
||||
return jobVersion;
|
||||
}
|
||||
|
||||
public Boolean getClearJobFinishTime() {
|
||||
return clearJobFinishTime;
|
||||
}
|
||||
|
||||
public boolean isAutodetectProcessUpdate() {
|
||||
return modelPlotConfig != null || detectorUpdates != null || groups != null;
|
||||
}
|
||||
@ -304,12 +312,12 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
if (modelSnapshotMinVersion != null) {
|
||||
builder.field(Job.MODEL_SNAPSHOT_MIN_VERSION.getPreferredName(), modelSnapshotMinVersion);
|
||||
}
|
||||
if (establishedModelMemory != null) {
|
||||
builder.field(Job.ESTABLISHED_MODEL_MEMORY.getPreferredName(), establishedModelMemory);
|
||||
}
|
||||
if (jobVersion != null) {
|
||||
builder.field(Job.JOB_VERSION.getPreferredName(), jobVersion);
|
||||
}
|
||||
if (clearJobFinishTime != null) {
|
||||
builder.field(CLEAR_JOB_FINISH_TIME.getPreferredName(), clearJobFinishTime);
|
||||
}
|
||||
builder.endObject();
|
||||
return builder;
|
||||
}
|
||||
@ -355,9 +363,6 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
if (modelSnapshotMinVersion != null) {
|
||||
updateFields.add(Job.MODEL_SNAPSHOT_MIN_VERSION.getPreferredName());
|
||||
}
|
||||
if (establishedModelMemory != null) {
|
||||
updateFields.add(Job.ESTABLISHED_MODEL_MEMORY.getPreferredName());
|
||||
}
|
||||
if (jobVersion != null) {
|
||||
updateFields.add(Job.JOB_VERSION.getPreferredName());
|
||||
}
|
||||
@ -433,18 +438,14 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
if (modelSnapshotMinVersion != null) {
|
||||
builder.setModelSnapshotMinVersion(modelSnapshotMinVersion);
|
||||
}
|
||||
if (establishedModelMemory != null) {
|
||||
// An established model memory of zero means we don't actually know the established model memory
|
||||
if (establishedModelMemory > 0) {
|
||||
builder.setEstablishedModelMemory(establishedModelMemory);
|
||||
} else {
|
||||
builder.setEstablishedModelMemory(null);
|
||||
}
|
||||
}
|
||||
if (jobVersion != null) {
|
||||
builder.setJobVersion(jobVersion);
|
||||
}
|
||||
|
||||
if (clearJobFinishTime != null && clearJobFinishTime) {
|
||||
builder.setFinishedTime(null);
|
||||
}
|
||||
|
||||
builder.setAnalysisConfig(newAnalysisConfig);
|
||||
return builder.build();
|
||||
}
|
||||
@ -464,8 +465,8 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
&& (customSettings == null || Objects.equals(customSettings, job.getCustomSettings()))
|
||||
&& (modelSnapshotId == null || Objects.equals(modelSnapshotId, job.getModelSnapshotId()))
|
||||
&& (modelSnapshotMinVersion == null || Objects.equals(modelSnapshotMinVersion, job.getModelSnapshotMinVersion()))
|
||||
&& (establishedModelMemory == null || Objects.equals(establishedModelMemory, job.getEstablishedModelMemory()))
|
||||
&& (jobVersion == null || Objects.equals(jobVersion, job.getJobVersion()));
|
||||
&& (jobVersion == null || Objects.equals(jobVersion, job.getJobVersion()))
|
||||
&& ((clearJobFinishTime == null || clearJobFinishTime == false) || job.getFinishedTime() == null);
|
||||
}
|
||||
|
||||
boolean updatesDetectors(Job job) {
|
||||
@ -512,15 +513,15 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
&& Objects.equals(this.customSettings, that.customSettings)
|
||||
&& Objects.equals(this.modelSnapshotId, that.modelSnapshotId)
|
||||
&& Objects.equals(this.modelSnapshotMinVersion, that.modelSnapshotMinVersion)
|
||||
&& Objects.equals(this.establishedModelMemory, that.establishedModelMemory)
|
||||
&& Objects.equals(this.jobVersion, that.jobVersion);
|
||||
&& Objects.equals(this.jobVersion, that.jobVersion)
|
||||
&& Objects.equals(this.clearJobFinishTime, that.clearJobFinishTime);
|
||||
}
|
||||
|
||||
@Override
|
||||
public int hashCode() {
|
||||
return Objects.hash(jobId, groups, description, detectorUpdates, modelPlotConfig, analysisLimits, renormalizationWindowDays,
|
||||
backgroundPersistInterval, modelSnapshotRetentionDays, resultsRetentionDays, categorizationFilters, customSettings,
|
||||
modelSnapshotId, modelSnapshotMinVersion, establishedModelMemory, jobVersion);
|
||||
modelSnapshotId, modelSnapshotMinVersion, jobVersion, clearJobFinishTime);
|
||||
}
|
||||
|
||||
public static class DetectorUpdate implements Writeable, ToXContentObject {
|
||||
@ -630,8 +631,8 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
private Map<String, Object> customSettings;
|
||||
private String modelSnapshotId;
|
||||
private Version modelSnapshotMinVersion;
|
||||
private Long establishedModelMemory;
|
||||
private Version jobVersion;
|
||||
private Boolean clearJobFinishTime;
|
||||
|
||||
public Builder(String jobId) {
|
||||
this.jobId = jobId;
|
||||
@ -712,11 +713,6 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
return this;
|
||||
}
|
||||
|
||||
public Builder setEstablishedModelMemory(Long establishedModelMemory) {
|
||||
this.establishedModelMemory = establishedModelMemory;
|
||||
return this;
|
||||
}
|
||||
|
||||
public Builder setJobVersion(Version version) {
|
||||
this.jobVersion = version;
|
||||
return this;
|
||||
@ -727,10 +723,15 @@ public class JobUpdate implements Writeable, ToXContentObject {
|
||||
return this;
|
||||
}
|
||||
|
||||
public Builder setClearFinishTime(boolean clearJobFinishTime) {
|
||||
this.clearJobFinishTime = clearJobFinishTime;
|
||||
return this;
|
||||
}
|
||||
|
||||
public JobUpdate build() {
|
||||
return new JobUpdate(jobId, groups, description, detectorUpdates, modelPlotConfig, analysisLimits, backgroundPersistInterval,
|
||||
renormalizationWindowDays, resultsRetentionDays, modelSnapshotRetentionDays, categorizationFilters, customSettings,
|
||||
modelSnapshotId, modelSnapshotMinVersion, establishedModelMemory, jobVersion);
|
||||
modelSnapshotId, modelSnapshotMinVersion, jobVersion, clearJobFinishTime);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -14,10 +14,10 @@ import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.xcontent.ObjectParser;
|
||||
import org.elasticsearch.common.xcontent.ToXContentObject;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetaIndex;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.core.ml.utils.MlStrings;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.Arrays;
|
||||
@ -101,7 +101,7 @@ public class MlFilter implements ToXContentObject, Writeable {
|
||||
builder.field(DESCRIPTION.getPreferredName(), description);
|
||||
}
|
||||
builder.field(ITEMS.getPreferredName(), items);
|
||||
if (params.paramAsBoolean(MlMetaIndex.INCLUDE_TYPE_KEY, false)) {
|
||||
if (params.paramAsBoolean(ToXContentParams.INCLUDE_TYPE, false)) {
|
||||
builder.field(TYPE.getPreferredName(), FILTER_TYPE);
|
||||
}
|
||||
builder.endObject();
|
||||
|
@ -18,8 +18,8 @@ import java.util.Objects;
|
||||
|
||||
public class ModelPlotConfig implements ToXContentObject, Writeable {
|
||||
|
||||
private static final ParseField TYPE_FIELD = new ParseField("model_plot_config");
|
||||
private static final ParseField ENABLED_FIELD = new ParseField("enabled");
|
||||
public static final ParseField TYPE_FIELD = new ParseField("model_plot_config");
|
||||
public static final ParseField ENABLED_FIELD = new ParseField("enabled");
|
||||
public static final ParseField TERMS_FIELD = new ParseField("terms");
|
||||
|
||||
// These parsers follow the pattern that metadata is parsed leniently (to allow for enhancements), whilst config is parsed strictly
|
||||
|
@@ -48,9 +48,11 @@ public final class Messages {
public static final String DATAFEED_MISSING_MAX_AGGREGATION_FOR_TIME_FIELD = "Missing max aggregation for time_field [{0}]";
public static final String DATAFEED_FREQUENCY_MUST_BE_MULTIPLE_OF_AGGREGATIONS_INTERVAL =
"Datafeed frequency [{0}] must be a multiple of the aggregation interval [{1}]";
public static final String DATAFEED_ID_ALREADY_TAKEN = "A datafeed with id [{0}] already exists";

public static final String FILTER_NOT_FOUND = "No filter with id [{0}] exists";
public static final String FILTER_CANNOT_DELETE = "Cannot delete filter [{0}] currently used by jobs {1}";
public static final String FILTER_CONTAINS_TOO_MANY_ITEMS = "Filter [{0}] contains too many items; up to [{1}] items are allowed";
public static final String FILTER_NOT_FOUND = "No filter with id [{0}] exists";

public static final String INCONSISTENT_ID =
"Inconsistent {0}; ''{1}'' specified in the body differs from ''{2}'' specified as a URL argument";
@@ -5,9 +5,6 @@
*/
package org.elasticsearch.xpack.core.ml.job.persistence;

import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.xpack.core.ml.MlMetadata;

/**
* Methods for handling index naming related functions
*/
@@ -40,15 +37,6 @@ public final class AnomalyDetectorsIndex {
return AnomalyDetectorsIndexFields.RESULTS_INDEX_PREFIX + ".write-" + jobId;
}

/**
* Retrieves the currently defined physical index from the job state
* @param jobId Job Id
* @return The index name
*/
public static String getPhysicalIndexFromState(ClusterState state, String jobId) {
return MlMetadata.getMlMetadata(state).getJobs().get(jobId).getResultsIndexName();
}

/**
* The name of the default index where a job's state is stored
* @return The index name
@@ -56,4 +44,14 @@ public final class AnomalyDetectorsIndex {
public static String jobStateIndexName() {
return AnomalyDetectorsIndexFields.STATE_INDEX_NAME;
}

/**
* The name of the index where job and datafeed configuration
* is stored
* @return The index name
*/
public static String configIndexName() {
return AnomalyDetectorsIndexFields.CONFIG_INDEX;
}

}
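A hypothetical sketch of how the new accessor can be combined with Job.documentId(String) when fetching a single job config document; the job id and the search plumbing (SearchRequest, SearchSourceBuilder, QueryBuilders) are illustrative assumptions, not code from this change.

    // Hypothetical: look up one job config document in the config index.
    SearchRequest search = new SearchRequest(AnomalyDetectorsIndex.configIndexName())
            .source(new SearchSourceBuilder()
                    .query(QueryBuilders.idsQuery().addIds(Job.documentId("my-job")))
                    .size(1));
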
@ -7,6 +7,7 @@ package org.elasticsearch.xpack.core.ml.job.persistence;
|
||||
|
||||
public final class AnomalyDetectorsIndexFields {
|
||||
|
||||
public static final String CONFIG_INDEX = ".ml-config";
|
||||
public static final String RESULTS_INDEX_PREFIX = ".ml-anomalies-";
|
||||
public static final String STATE_INDEX_NAME = ".ml-state";
|
||||
public static final String RESULTS_INDEX_DEFAULT = "shared";
|
||||
|
@ -7,8 +7,18 @@ package org.elasticsearch.xpack.core.ml.job.persistence;
|
||||
|
||||
import org.elasticsearch.Version;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.ChunkingConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DelayedDataCheckConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.AnalysisLimits;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.DataDescription;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.DetectionRule;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Detector;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.ModelPlotConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Operator;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.RuleCondition;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSizeStats;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSnapshot;
|
||||
@ -34,8 +44,8 @@ import java.util.Collections;
|
||||
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
|
||||
|
||||
/**
|
||||
* Static methods to create Elasticsearch mappings for the autodetect
|
||||
* persisted objects/documents
|
||||
* Static methods to create Elasticsearch index mappings for the autodetect
|
||||
* persisted objects/documents and configurations
|
||||
* <p>
|
||||
* ElasticSearch automatically recognises array types so they are
|
||||
* not explicitly mapped as such. For arrays of objects the type
|
||||
@@ -79,6 +89,11 @@ public class ElasticsearchMappings {
*/
public static final String ES_DOC = "_doc";

/**
* The configuration document type
*/
public static final String CONFIG_TYPE = "config_type";

/**
* Elasticsearch data types
*/
@@ -95,6 +110,261 @@ public class ElasticsearchMappings {
private ElasticsearchMappings() {
}

public static XContentBuilder configMapping() throws IOException {
XContentBuilder builder = jsonBuilder();
builder.startObject();
builder.startObject(DOC_TYPE);
addMetaInformation(builder);
addDefaultMapping(builder);
builder.startObject(PROPERTIES);

addJobConfigFields(builder);
addDatafeedConfigFields(builder);

builder.endObject()
.endObject()
.endObject();
return builder;
}

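A hedged sketch of how this mapping could be applied when the config index is created. The CreateIndexRequest call shape and the use of the DOC_TYPE constant are assumptions for illustration only; the actual index bootstrap lives elsewhere in the feature branch.

    // Hypothetical: create the config index with the mapping built above
    // (inside a method that declares throws IOException).
    CreateIndexRequest createIndex = new CreateIndexRequest(AnomalyDetectorsIndex.configIndexName());
    createIndex.mapping(ElasticsearchMappings.DOC_TYPE, ElasticsearchMappings.configMapping());
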
public static void addJobConfigFields(XContentBuilder builder) throws IOException {
|
||||
|
||||
builder.startObject(CONFIG_TYPE)
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Job.ID.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Job.JOB_TYPE.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Job.JOB_VERSION.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Job.GROUPS.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Job.ANALYSIS_CONFIG.getPreferredName())
|
||||
.startObject(PROPERTIES)
|
||||
.startObject(AnalysisConfig.BUCKET_SPAN.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(AnalysisConfig.CATEGORIZATION_FIELD_NAME.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(AnalysisConfig.CATEGORIZATION_FILTERS.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(AnalysisConfig.CATEGORIZATION_ANALYZER.getPreferredName())
|
||||
.field(ENABLED, false)
|
||||
.endObject()
|
||||
.startObject(AnalysisConfig.LATENCY.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(AnalysisConfig.SUMMARY_COUNT_FIELD_NAME.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(AnalysisConfig.DETECTORS.getPreferredName())
|
||||
.startObject(PROPERTIES)
|
||||
.startObject(Detector.DETECTOR_DESCRIPTION_FIELD.getPreferredName())
|
||||
.field(TYPE, TEXT)
|
||||
.endObject()
|
||||
.startObject(Detector.FUNCTION_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Detector.FIELD_NAME_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Detector.BY_FIELD_NAME_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Detector.OVER_FIELD_NAME_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Detector.PARTITION_FIELD_NAME_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Detector.USE_NULL_FIELD.getPreferredName())
|
||||
.field(TYPE, BOOLEAN)
|
||||
.endObject()
|
||||
.startObject(Detector.EXCLUDE_FREQUENT_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Detector.CUSTOM_RULES_FIELD.getPreferredName())
|
||||
.field(TYPE, NESTED)
|
||||
.startObject(PROPERTIES)
|
||||
.startObject(DetectionRule.ACTIONS_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
// RuleScope is a map
|
||||
.startObject(DetectionRule.SCOPE_FIELD.getPreferredName())
|
||||
.field(ENABLED, false)
|
||||
.endObject()
|
||||
.startObject(DetectionRule.CONDITIONS_FIELD.getPreferredName())
|
||||
.field(TYPE, NESTED)
|
||||
.startObject(PROPERTIES)
|
||||
.startObject(RuleCondition.APPLIES_TO_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Operator.OPERATOR_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(RuleCondition.VALUE_FIELD.getPreferredName())
|
||||
.field(TYPE, DOUBLE)
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
.startObject(Detector.DETECTOR_INDEX.getPreferredName())
|
||||
.field(TYPE, INTEGER)
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
|
||||
.startObject(AnalysisConfig.INFLUENCERS.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(AnalysisConfig.MULTIVARIATE_BY_FIELDS.getPreferredName())
|
||||
.field(TYPE, BOOLEAN)
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
|
||||
.startObject(Job.ANALYSIS_LIMITS.getPreferredName())
|
||||
.startObject(PROPERTIES)
|
||||
.startObject(AnalysisLimits.MODEL_MEMORY_LIMIT.getPreferredName())
|
||||
.field(TYPE, KEYWORD) // TODO Should be a ByteSizeValue
|
||||
.endObject()
|
||||
.startObject(AnalysisLimits.CATEGORIZATION_EXAMPLES_LIMIT.getPreferredName())
|
||||
.field(TYPE, LONG)
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
|
||||
.startObject(Job.CREATE_TIME.getPreferredName())
|
||||
.field(TYPE, DATE)
|
||||
.endObject()
|
||||
|
||||
.startObject(Job.CUSTOM_SETTINGS.getPreferredName())
|
||||
// Custom settings are an untyped map
|
||||
.field(ENABLED, false)
|
||||
.endObject()
|
||||
|
||||
.startObject(Job.DATA_DESCRIPTION.getPreferredName())
|
||||
.startObject(PROPERTIES)
|
||||
.startObject(DataDescription.FORMAT_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(DataDescription.TIME_FIELD_NAME_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(DataDescription.TIME_FORMAT_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(DataDescription.FIELD_DELIMITER_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(DataDescription.QUOTE_CHARACTER_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
|
||||
.startObject(Job.DESCRIPTION.getPreferredName())
|
||||
.field(TYPE, TEXT)
|
||||
.endObject()
|
||||
.startObject(Job.FINISHED_TIME.getPreferredName())
|
||||
.field(TYPE, DATE)
|
||||
.endObject()
|
||||
|
||||
.startObject(Job.MODEL_PLOT_CONFIG.getPreferredName())
|
||||
.startObject(PROPERTIES)
|
||||
.startObject(ModelPlotConfig.ENABLED_FIELD.getPreferredName())
|
||||
.field(TYPE, BOOLEAN)
|
||||
.endObject()
|
||||
.startObject(ModelPlotConfig.TERMS_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
|
||||
.startObject(Job.RENORMALIZATION_WINDOW_DAYS.getPreferredName())
|
||||
.field(TYPE, LONG) // TODO should be TimeValue
|
||||
.endObject()
|
||||
.startObject(Job.BACKGROUND_PERSIST_INTERVAL.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Job.MODEL_SNAPSHOT_RETENTION_DAYS.getPreferredName())
|
||||
.field(TYPE, LONG) // TODO should be TimeValue
|
||||
.endObject()
|
||||
.startObject(Job.RESULTS_RETENTION_DAYS.getPreferredName())
|
||||
.field(TYPE, LONG) // TODO should be TimeValue
|
||||
.endObject()
|
||||
.startObject(Job.MODEL_SNAPSHOT_ID.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Job.MODEL_SNAPSHOT_MIN_VERSION.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(Job.RESULTS_INDEX_NAME.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject();
|
||||
}
|
||||
|
||||
public static void addDatafeedConfigFields(XContentBuilder builder) throws IOException {
|
||||
builder.startObject(DatafeedConfig.ID.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.QUERY_DELAY.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.FREQUENCY.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.INDICES.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.TYPES.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.QUERY.getPreferredName())
|
||||
.field(ENABLED, false)
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.SCROLL_SIZE.getPreferredName())
|
||||
.field(TYPE, LONG)
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.AGGREGATIONS.getPreferredName())
|
||||
.field(ENABLED, false)
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.SCRIPT_FIELDS.getPreferredName())
|
||||
.field(ENABLED, false)
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.CHUNKING_CONFIG.getPreferredName())
|
||||
.startObject(PROPERTIES)
|
||||
.startObject(ChunkingConfig.MODE_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.startObject(ChunkingConfig.TIME_SPAN_FIELD.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.DELAYED_DATA_CHECK_CONFIG.getPreferredName())
|
||||
.startObject(PROPERTIES)
|
||||
.startObject(DelayedDataCheckConfig.ENABLED.getPreferredName())
|
||||
.field(TYPE, BOOLEAN)
|
||||
.endObject()
|
||||
.startObject(DelayedDataCheckConfig.CHECK_WINDOW.getPreferredName())
|
||||
.field(TYPE, KEYWORD)
|
||||
.endObject()
|
||||
.endObject()
|
||||
.endObject()
|
||||
.startObject(DatafeedConfig.HEADERS.getPreferredName())
|
||||
.field(ENABLED, false)
|
||||
.endObject();
|
||||
}
|
||||
|
||||
/**
|
||||
* Creates a default mapping which has a dynamic template that
|
||||
* treats all dynamically added fields as keywords. This is needed
|
||||
@ -129,11 +399,11 @@ public class ElasticsearchMappings {
|
||||
.endObject();
|
||||
}
|
||||
|
||||
public static XContentBuilder docMapping() throws IOException {
|
||||
return docMapping(Collections.emptyList());
|
||||
public static XContentBuilder resultsMapping() throws IOException {
|
||||
return resultsMapping(Collections.emptyList());
|
||||
}
|
||||
|
||||
public static XContentBuilder docMapping(Collection<String> extraTermFields) throws IOException {
|
||||
public static XContentBuilder resultsMapping(Collection<String> extraTermFields) throws IOException {
|
||||
XContentBuilder builder = jsonBuilder();
|
||||
builder.startObject();
|
||||
builder.startObject(DOC_TYPE);
|
||||
|
@ -5,8 +5,18 @@
|
||||
*/
|
||||
package org.elasticsearch.xpack.core.ml.job.results;
|
||||
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.ChunkingConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DelayedDataCheckConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.AnalysisLimits;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.DataDescription;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.DetectionRule;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Detector;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.ModelPlotConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Operator;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.RuleCondition;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.ElasticsearchMappings;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSizeStats;
|
||||
@ -36,7 +46,7 @@ public final class ReservedFieldNames {
|
||||
* 2.x requires mappings for given fields be consistent across all types
|
||||
* in a given index.)
|
||||
*/
|
||||
private static final String[] RESERVED_FIELD_NAME_ARRAY = {
|
||||
private static final String[] RESERVED_RESULT_FIELD_NAME_ARRAY = {
|
||||
ElasticsearchMappings.ALL_FIELD_VALUES,
|
||||
|
||||
Job.ID.getPreferredName(),
|
||||
@ -164,25 +174,116 @@ public final class ReservedFieldNames {
|
||||
};
|
||||
|
||||
/**
|
||||
* Test if fieldName is one of the reserved names or if it contains dots then
|
||||
* that the segment before the first dot is not a reserved name. A fieldName
|
||||
* containing dots represents nested fields in which case we only care about
|
||||
* the top level.
|
||||
* This array should be updated to contain all the field names that appear
|
||||
* in any documents we store in our config index.
|
||||
*/
|
||||
private static final String[] RESERVED_CONFIG_FIELD_NAME_ARRAY = {
|
||||
Job.ID.getPreferredName(),
|
||||
Job.JOB_TYPE.getPreferredName(),
|
||||
Job.JOB_VERSION.getPreferredName(),
|
||||
Job.GROUPS.getPreferredName(),
|
||||
Job.ANALYSIS_CONFIG.getPreferredName(),
|
||||
Job.ANALYSIS_LIMITS.getPreferredName(),
|
||||
Job.CREATE_TIME.getPreferredName(),
|
||||
Job.CUSTOM_SETTINGS.getPreferredName(),
|
||||
Job.DATA_DESCRIPTION.getPreferredName(),
|
||||
Job.DESCRIPTION.getPreferredName(),
|
||||
Job.FINISHED_TIME.getPreferredName(),
|
||||
Job.MODEL_PLOT_CONFIG.getPreferredName(),
|
||||
Job.RENORMALIZATION_WINDOW_DAYS.getPreferredName(),
|
||||
Job.BACKGROUND_PERSIST_INTERVAL.getPreferredName(),
|
||||
Job.MODEL_SNAPSHOT_RETENTION_DAYS.getPreferredName(),
|
||||
Job.RESULTS_RETENTION_DAYS.getPreferredName(),
|
||||
Job.MODEL_SNAPSHOT_ID.getPreferredName(),
|
||||
Job.MODEL_SNAPSHOT_MIN_VERSION.getPreferredName(),
|
||||
Job.RESULTS_INDEX_NAME.getPreferredName(),
|
||||
|
||||
AnalysisConfig.BUCKET_SPAN.getPreferredName(),
|
||||
AnalysisConfig.CATEGORIZATION_FIELD_NAME.getPreferredName(),
|
||||
AnalysisConfig.CATEGORIZATION_FILTERS.getPreferredName(),
|
||||
AnalysisConfig.CATEGORIZATION_ANALYZER.getPreferredName(),
|
||||
AnalysisConfig.LATENCY.getPreferredName(),
|
||||
AnalysisConfig.SUMMARY_COUNT_FIELD_NAME.getPreferredName(),
|
||||
AnalysisConfig.DETECTORS.getPreferredName(),
|
||||
AnalysisConfig.INFLUENCERS.getPreferredName(),
|
||||
AnalysisConfig.MULTIVARIATE_BY_FIELDS.getPreferredName(),
|
||||
|
||||
AnalysisLimits.MODEL_MEMORY_LIMIT.getPreferredName(),
|
||||
AnalysisLimits.CATEGORIZATION_EXAMPLES_LIMIT.getPreferredName(),
|
||||
|
||||
Detector.DETECTOR_DESCRIPTION_FIELD.getPreferredName(),
|
||||
Detector.FUNCTION_FIELD.getPreferredName(),
|
||||
Detector.FIELD_NAME_FIELD.getPreferredName(),
|
||||
Detector.BY_FIELD_NAME_FIELD.getPreferredName(),
|
||||
Detector.OVER_FIELD_NAME_FIELD.getPreferredName(),
|
||||
Detector.PARTITION_FIELD_NAME_FIELD.getPreferredName(),
|
||||
Detector.USE_NULL_FIELD.getPreferredName(),
|
||||
Detector.EXCLUDE_FREQUENT_FIELD.getPreferredName(),
|
||||
Detector.CUSTOM_RULES_FIELD.getPreferredName(),
|
||||
Detector.DETECTOR_INDEX.getPreferredName(),
|
||||
|
||||
DetectionRule.ACTIONS_FIELD.getPreferredName(),
|
||||
DetectionRule.CONDITIONS_FIELD.getPreferredName(),
|
||||
DetectionRule.SCOPE_FIELD.getPreferredName(),
|
||||
RuleCondition.APPLIES_TO_FIELD.getPreferredName(),
|
||||
RuleCondition.VALUE_FIELD.getPreferredName(),
|
||||
Operator.OPERATOR_FIELD.getPreferredName(),
|
||||
|
||||
DataDescription.FORMAT_FIELD.getPreferredName(),
|
||||
DataDescription.TIME_FIELD_NAME_FIELD.getPreferredName(),
|
||||
DataDescription.TIME_FORMAT_FIELD.getPreferredName(),
|
||||
DataDescription.FIELD_DELIMITER_FIELD.getPreferredName(),
|
||||
DataDescription.QUOTE_CHARACTER_FIELD.getPreferredName(),
|
||||
|
||||
ModelPlotConfig.ENABLED_FIELD.getPreferredName(),
|
||||
ModelPlotConfig.TERMS_FIELD.getPreferredName(),
|
||||
|
||||
DatafeedConfig.ID.getPreferredName(),
|
||||
DatafeedConfig.QUERY_DELAY.getPreferredName(),
|
||||
DatafeedConfig.FREQUENCY.getPreferredName(),
|
||||
DatafeedConfig.INDICES.getPreferredName(),
|
||||
DatafeedConfig.TYPES.getPreferredName(),
|
||||
DatafeedConfig.QUERY.getPreferredName(),
|
||||
DatafeedConfig.SCROLL_SIZE.getPreferredName(),
|
||||
DatafeedConfig.AGGREGATIONS.getPreferredName(),
|
||||
DatafeedConfig.SCRIPT_FIELDS.getPreferredName(),
|
||||
DatafeedConfig.CHUNKING_CONFIG.getPreferredName(),
|
||||
DatafeedConfig.HEADERS.getPreferredName(),
|
||||
DatafeedConfig.DELAYED_DATA_CHECK_CONFIG.getPreferredName(),
|
||||
DelayedDataCheckConfig.ENABLED.getPreferredName(),
|
||||
DelayedDataCheckConfig.CHECK_WINDOW.getPreferredName(),
|
||||
|
||||
ChunkingConfig.MODE_FIELD.getPreferredName(),
|
||||
ChunkingConfig.TIME_SPAN_FIELD.getPreferredName(),
|
||||
|
||||
ElasticsearchMappings.CONFIG_TYPE
|
||||
};
|
||||
|
||||
/**
* Test if fieldName is one of the reserved result fieldnames or if it contains
* dots then that the segment before the first dot is not a reserved results
* fieldname. A fieldName containing dots represents nested fields in which
* case we only care about the top level.
*
* @param fieldName Document field name. This may contain dots '.'
* @return True if fieldName is not a reserved name or the top level segment
* @return True if fieldName is not a reserved results fieldname or the top level segment
* is not a reserved name.
*/
public static boolean isValidFieldName(String fieldName) {
String[] segments = DOT_PATTERN.split(fieldName);
return !RESERVED_FIELD_NAMES.contains(segments[0]);
return RESERVED_RESULT_FIELD_NAMES.contains(segments[0]) == false;
}

/**
* A set of all reserved field names in our results. Fields from the raw
* data with these names are not added to any result.
*/
public static final Set<String> RESERVED_FIELD_NAMES = new HashSet<>(Arrays.asList(RESERVED_FIELD_NAME_ARRAY));
public static final Set<String> RESERVED_RESULT_FIELD_NAMES = new HashSet<>(Arrays.asList(RESERVED_RESULT_FIELD_NAME_ARRAY));

/**
* A set of all reserved field names in our config.
*/
public static final Set<String> RESERVED_CONFIG_FIELD_NAMES = new HashSet<>(Arrays.asList(RESERVED_CONFIG_FIELD_NAME_ARRAY));

private ReservedFieldNames() {
}

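To make the renamed check concrete, a small hedged example; "airline" is an invented data field, while "job_id" is reserved because Job.ID appears in the results array shown earlier in this file. Only the segment before the first dot is compared.

    // Hypothetical calls illustrating the top-level-segment rule described above.
    ReservedFieldNames.isValidFieldName("airline");         // true  - not a reserved results field name
    ReservedFieldNames.isValidFieldName("job_id");          // false - Job.ID is in the reserved results array
    ReservedFieldNames.isValidFieldName("job_id.keyword");  // false - top-level segment is reserved
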
@@ -30,6 +30,10 @@ public class ExceptionsHelper {
return new ResourceNotFoundException(Messages.getMessage(Messages.DATAFEED_NOT_FOUND, datafeedId));
}

public static ResourceAlreadyExistsException datafeedAlreadyExists(String datafeedId) {
return new ResourceAlreadyExistsException(Messages.getMessage(Messages.DATAFEED_ID_ALREADY_TAKEN, datafeedId));
}

public static ElasticsearchException serverError(String msg) {
return new ElasticsearchException(msg);
}
@@ -54,6 +58,11 @@ public class ExceptionsHelper {
return new ElasticsearchStatusException(msg, RestStatus.BAD_REQUEST, args);
}

public static ElasticsearchStatusException configHasNotBeenMigrated(String verb, String id) {
return new ElasticsearchStatusException("cannot {} as the configuration [{}] is temporarily pending migration",
RestStatus.SERVICE_UNAVAILABLE, verb, id);
}

/**
* Creates an error message that explains there are shard failures, displays info
* for the first failure (shard/reason) and kindly asks to see more info in the logs
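A brief hypothetical call site for the new helper; the verb string and the jobId variable are illustrative, not taken from this change.

    // Hypothetical: reject an update while the job's config is still being migrated.
    throw ExceptionsHelper.configHasNotBeenMigrated("update job", jobId);
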
@@ -12,9 +12,17 @@ package org.elasticsearch.xpack.core.ml.utils;
public final class ToXContentParams {

/**
* Parameter to indicate whether we are serialising to X Content for cluster state output.
* Parameter to indicate whether we are serialising to X Content for
* internal storage. Certain fields need to be persisted but should
* not be visible everywhere.
*/
public static final String FOR_CLUSTER_STATE = "for_cluster_state";
public static final String FOR_INTERNAL_STORAGE = "for_internal_storage";

/**
* When serialising POJOs to X Content this indicates whether the type field
* should be included or not
*/
public static final String INCLUDE_TYPE = "include_type";

private ToXContentParams() {
}
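A hedged sketch of how the renamed parameter is typically passed when persisting a config document. The MapParams wrapper and jsonBuilder() call are standard x-content plumbing; the job variable is illustrative.

    // Hypothetical: serialise a Job for internal storage; fields guarded by
    // FOR_INTERNAL_STORAGE (such as detector_index in the Detector hunk above) are then skipped.
    ToXContent.Params params = new ToXContent.MapParams(
            Collections.singletonMap(ToXContentParams.FOR_INTERNAL_STORAGE, "true"));
    XContentBuilder source = job.toXContent(XContentFactory.jsonBuilder(), params);
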
@@ -0,0 +1,126 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */

package org.elasticsearch.xpack.core.ml;

import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.xpack.core.ml.action.OpenJobAction;
import org.elasticsearch.xpack.core.ml.action.StartDatafeedAction;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedState;
import org.elasticsearch.xpack.core.ml.job.config.JobState;
import org.elasticsearch.xpack.core.ml.job.config.JobTaskState;

import static org.hamcrest.Matchers.containsInAnyOrder;
import static org.hamcrest.Matchers.empty;

public class MlTasksTests extends ESTestCase {
    public void testGetJobState() {
        PersistentTasksCustomMetaData.Builder tasksBuilder = PersistentTasksCustomMetaData.builder();
        // A missing task is a closed job
        assertEquals(JobState.CLOSED, MlTasks.getJobState("foo", tasksBuilder.build()));
        // A task with no status is opening
        tasksBuilder.addTask(MlTasks.jobTaskId("foo"), MlTasks.JOB_TASK_NAME, new OpenJobAction.JobParams("foo"),
                new PersistentTasksCustomMetaData.Assignment("bar", "test assignment"));
        assertEquals(JobState.OPENING, MlTasks.getJobState("foo", tasksBuilder.build()));

        tasksBuilder.updateTaskState(MlTasks.jobTaskId("foo"), new JobTaskState(JobState.OPENED, tasksBuilder.getLastAllocationId()));
        assertEquals(JobState.OPENED, MlTasks.getJobState("foo", tasksBuilder.build()));
    }

    public void testGetJobState_GivenNull() {
        assertEquals(JobState.CLOSED, MlTasks.getJobState("foo", null));
    }

    public void testGetDatefeedState() {
        PersistentTasksCustomMetaData.Builder tasksBuilder = PersistentTasksCustomMetaData.builder();
        // A missing task is a stopped datafeed
        assertEquals(DatafeedState.STOPPED, MlTasks.getDatafeedState("foo", tasksBuilder.build()));

        tasksBuilder.addTask(MlTasks.datafeedTaskId("foo"), MlTasks.DATAFEED_TASK_NAME,
                new StartDatafeedAction.DatafeedParams("foo", 0L),
                new PersistentTasksCustomMetaData.Assignment("bar", "test assignment"));
        assertEquals(DatafeedState.STOPPED, MlTasks.getDatafeedState("foo", tasksBuilder.build()));

        tasksBuilder.updateTaskState(MlTasks.datafeedTaskId("foo"), DatafeedState.STARTED);
        assertEquals(DatafeedState.STARTED, MlTasks.getDatafeedState("foo", tasksBuilder.build()));
    }

    public void testGetJobTask() {
        assertNull(MlTasks.getJobTask("foo", null));

        PersistentTasksCustomMetaData.Builder tasksBuilder = PersistentTasksCustomMetaData.builder();
        tasksBuilder.addTask(MlTasks.jobTaskId("foo"), MlTasks.JOB_TASK_NAME, new OpenJobAction.JobParams("foo"),
                new PersistentTasksCustomMetaData.Assignment("bar", "test assignment"));

        assertNotNull(MlTasks.getJobTask("foo", tasksBuilder.build()));
        assertNull(MlTasks.getJobTask("other", tasksBuilder.build()));
    }

    public void testGetDatafeedTask() {
        assertNull(MlTasks.getDatafeedTask("foo", null));

        PersistentTasksCustomMetaData.Builder tasksBuilder = PersistentTasksCustomMetaData.builder();
        tasksBuilder.addTask(MlTasks.datafeedTaskId("foo"), MlTasks.DATAFEED_TASK_NAME,
                new StartDatafeedAction.DatafeedParams("foo", 0L),
                new PersistentTasksCustomMetaData.Assignment("bar", "test assignment"));

        assertNotNull(MlTasks.getDatafeedTask("foo", tasksBuilder.build()));
        assertNull(MlTasks.getDatafeedTask("other", tasksBuilder.build()));
    }

    public void testOpenJobIds() {
        PersistentTasksCustomMetaData.Builder tasksBuilder = PersistentTasksCustomMetaData.builder();
        assertThat(MlTasks.openJobIds(tasksBuilder.build()), empty());

        tasksBuilder.addTask(MlTasks.jobTaskId("foo-1"), MlTasks.JOB_TASK_NAME, new OpenJobAction.JobParams("foo-1"),
                new PersistentTasksCustomMetaData.Assignment("node-1", "test assignment"));
        tasksBuilder.addTask(MlTasks.jobTaskId("bar"), MlTasks.JOB_TASK_NAME, new OpenJobAction.JobParams("bar"),
                new PersistentTasksCustomMetaData.Assignment("node-1", "test assignment"));
        tasksBuilder.addTask(MlTasks.datafeedTaskId("df"), MlTasks.DATAFEED_TASK_NAME,
                new StartDatafeedAction.DatafeedParams("df", 0L),
                new PersistentTasksCustomMetaData.Assignment("node-1", "test assignment"));

        assertThat(MlTasks.openJobIds(tasksBuilder.build()), containsInAnyOrder("foo-1", "bar"));
    }

    public void testOpenJobIds_GivenNull() {
        assertThat(MlTasks.openJobIds(null), empty());
    }

    public void testStartedDatafeedIds() {
        PersistentTasksCustomMetaData.Builder tasksBuilder = PersistentTasksCustomMetaData.builder();
        assertThat(MlTasks.openJobIds(tasksBuilder.build()), empty());

        tasksBuilder.addTask(MlTasks.jobTaskId("job-1"), MlTasks.JOB_TASK_NAME, new OpenJobAction.JobParams("foo-1"),
                new PersistentTasksCustomMetaData.Assignment("node-1", "test assignment"));
        tasksBuilder.addTask(MlTasks.datafeedTaskId("df1"), MlTasks.DATAFEED_TASK_NAME,
                new StartDatafeedAction.DatafeedParams("df1", 0L),
                new PersistentTasksCustomMetaData.Assignment("node-1", "test assignment"));
        tasksBuilder.addTask(MlTasks.datafeedTaskId("df2"), MlTasks.DATAFEED_TASK_NAME,
                new StartDatafeedAction.DatafeedParams("df2", 0L),
                new PersistentTasksCustomMetaData.Assignment("node-2", "test assignment"));

        assertThat(MlTasks.startedDatafeedIds(tasksBuilder.build()), containsInAnyOrder("df1", "df2"));
    }

    public void testStartedDatafeedIds_GivenNull() {
        assertThat(MlTasks.startedDatafeedIds(null), empty());
    }

    public void testTaskExistsForJob() {
        PersistentTasksCustomMetaData.Builder tasksBuilder = PersistentTasksCustomMetaData.builder();
        assertFalse(MlTasks.taskExistsForJob("job-1", tasksBuilder.build()));

        tasksBuilder.addTask(MlTasks.jobTaskId("foo"), MlTasks.JOB_TASK_NAME, new OpenJobAction.JobParams("foo"),
                new PersistentTasksCustomMetaData.Assignment("node-1", "test assignment"));
        tasksBuilder.addTask(MlTasks.jobTaskId("bar"), MlTasks.JOB_TASK_NAME, new OpenJobAction.JobParams("bar"),
                new PersistentTasksCustomMetaData.Assignment("node-1", "test assignment"));

        assertFalse(MlTasks.taskExistsForJob("job-1", tasksBuilder.build()));
        assertTrue(MlTasks.taskExistsForJob("foo", tasksBuilder.build()));
    }
}
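A hedged sketch of the intended call pattern for these helpers outside the tests, assuming a ClusterState named clusterState is in scope (method names and the CLOSED-when-missing behaviour come from the tests above):

    PersistentTasksCustomMetaData tasks = clusterState.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
    JobState state = MlTasks.getJobState("my-job", tasks);          // CLOSED when no persistent task exists
    if (MlTasks.openJobIds(tasks).contains("my-job")) {
        // the job has a persistent task, so it must not be migrated or deleted yet
    }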
@@ -12,6 +12,7 @@ import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractSerializingTestCase;

import java.io.IOException;
import java.util.Arrays;

public class DatafeedParamsTests extends AbstractSerializingTestCase<StartDatafeedAction.DatafeedParams> {
    @Override
@@ -28,6 +29,13 @@ public class DatafeedParamsTests extends AbstractSerializingTestCase<StartDatafeedAction.DatafeedParams> {
        if (randomBoolean()) {
            params.setTimeout(TimeValue.timeValueMillis(randomNonNegativeLong()));
        }
        if (randomBoolean()) {
            params.setJobId(randomAlphaOfLength(10));
        }
        if (randomBoolean()) {
            params.setDatafeedIndices(Arrays.asList(randomAlphaOfLength(10), randomAlphaOfLength(10)));
        }

        return params;
    }

@@ -10,8 +10,10 @@ import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.test.AbstractSerializingTestCase;
import org.elasticsearch.xpack.core.ml.job.config.JobTests;

import java.io.IOException;
import java.util.function.Predicate;

public class JobParamsTests extends AbstractSerializingTestCase<OpenJobAction.JobParams> {

@@ -25,6 +27,9 @@ public class JobParamsTests extends AbstractSerializingTestCase<OpenJobAction.JobParams> {
        if (randomBoolean()) {
            params.setTimeout(TimeValue.timeValueMillis(randomNonNegativeLong()));
        }
        if (randomBoolean()) {
            params.setJob(JobTests.createRandomizedJob());
        }
        return params;
    }

@@ -42,4 +47,12 @@ public class JobParamsTests extends AbstractSerializingTestCase<OpenJobAction.JobParams> {
    protected boolean supportsUnknownFields() {
        return true;
    }

    @Override
    protected Predicate<String> getRandomFieldsExcludeFilter() {
        // Don't insert random fields into the job object as the
        // custom_fields member accepts arbitrary fields and new
        // fields inserted there will result in object inequality
        return path -> path.startsWith(OpenJobAction.JobParams.JOB.getPreferredName());
    }
}
@@ -18,8 +18,14 @@ public class UpdateJobActionRequestTests
        // no need to randomize JobUpdate this is already tested in: JobUpdateTests
        JobUpdate.Builder jobUpdate = new JobUpdate.Builder(jobId);
        jobUpdate.setAnalysisLimits(new AnalysisLimits(100L, 100L));
        UpdateJobAction.Request request = new UpdateJobAction.Request(jobId, jobUpdate.build());
        request.setWaitForAck(randomBoolean());
        UpdateJobAction.Request request;
        if (randomBoolean()) {
            request = new UpdateJobAction.Request(jobId, jobUpdate.build());
        } else {
            // this call sets isInternal = true
            request = UpdateJobAction.Request.internal(jobId, jobUpdate.build());
        }

        return request;
    }

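A hedged aside: the new Request.internal factory exercised above is how process-internal job updates are expected to be built, for example:

    // Illustrative only; jobId and jobUpdate as in the test above.
    UpdateJobAction.Request internalRequest = UpdateJobAction.Request.internal(jobId, jobUpdate.build());
    // Unlike the public constructor, this marks the request as internal (isInternal = true).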
@ -17,7 +17,9 @@ import org.elasticsearch.common.io.stream.Writeable;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.unit.TimeValue;
|
||||
import org.elasticsearch.common.xcontent.DeprecationHandler;
|
||||
import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
|
||||
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
|
||||
import org.elasticsearch.common.xcontent.ToXContent;
|
||||
import org.elasticsearch.common.xcontent.XContentFactory;
|
||||
import org.elasticsearch.common.xcontent.XContentHelper;
|
||||
import org.elasticsearch.common.xcontent.XContentParseException;
|
||||
@ -45,18 +47,23 @@ import org.elasticsearch.test.AbstractSerializingTestCase;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.ChunkingConfig.Mode;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;
|
||||
import org.joda.time.DateTimeZone;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collections;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.TimeZone;
|
||||
|
||||
import static org.hamcrest.Matchers.containsString;
|
||||
import static org.hamcrest.Matchers.equalTo;
|
||||
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
|
||||
import static org.hamcrest.Matchers.hasEntry;
|
||||
import static org.hamcrest.Matchers.hasItem;
|
||||
import static org.hamcrest.Matchers.hasSize;
|
||||
import static org.hamcrest.Matchers.is;
|
||||
import static org.hamcrest.Matchers.lessThan;
|
||||
import static org.hamcrest.Matchers.not;
|
||||
@ -75,6 +82,10 @@ public class DatafeedConfigTests extends AbstractSerializingTestCase<DatafeedCon
|
||||
}
|
||||
|
||||
public static DatafeedConfig createRandomizedDatafeedConfig(String jobId, long bucketSpanMillis) {
|
||||
return createRandomizedDatafeedConfigBuilder(jobId, bucketSpanMillis).build();
|
||||
}
|
||||
|
||||
private static DatafeedConfig.Builder createRandomizedDatafeedConfigBuilder(String jobId, long bucketSpanMillis) {
|
||||
DatafeedConfig.Builder builder = new DatafeedConfig.Builder(randomValidDatafeedId(), jobId);
|
||||
builder.setIndices(randomStringList(1, 10));
|
||||
builder.setTypes(randomStringList(0, 10));
|
||||
@ -124,7 +135,7 @@ public class DatafeedConfigTests extends AbstractSerializingTestCase<DatafeedCon
|
||||
if (randomBoolean()) {
|
||||
builder.setDelayedDataCheckConfig(DelayedDataCheckConfigTests.createRandomizedConfig(bucketSpanMillis));
|
||||
}
|
||||
return builder.build();
|
||||
return builder;
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -257,6 +268,33 @@ public class DatafeedConfigTests extends AbstractSerializingTestCase<DatafeedCon
|
||||
assertNotNull(DatafeedConfig.LENIENT_PARSER.apply(parser, null).build());
|
||||
}
|
||||
|
||||
public void testToXContentForInternalStorage() throws IOException {
|
||||
DatafeedConfig.Builder builder = createRandomizedDatafeedConfigBuilder("foo", 300);
|
||||
|
||||
// headers are only persisted to cluster state
|
||||
Map<String, String> headers = new HashMap<>();
|
||||
headers.put("header-name", "header-value");
|
||||
builder.setHeaders(headers);
|
||||
DatafeedConfig config = builder.build();
|
||||
|
||||
ToXContent.MapParams params = new ToXContent.MapParams(Collections.singletonMap(ToXContentParams.FOR_INTERNAL_STORAGE, "true"));
|
||||
|
||||
BytesReference forClusterstateXContent = XContentHelper.toXContent(config, XContentType.JSON, params, false);
|
||||
XContentParser parser = XContentFactory.xContent(XContentType.JSON)
|
||||
.createParser(xContentRegistry(), LoggingDeprecationHandler.INSTANCE, forClusterstateXContent.streamInput());
|
||||
|
||||
DatafeedConfig parsedConfig = DatafeedConfig.LENIENT_PARSER.apply(parser, null).build();
|
||||
assertThat(parsedConfig.getHeaders(), hasEntry("header-name", "header-value"));
|
||||
|
||||
// headers are not written without the FOR_INTERNAL_STORAGE param
|
||||
BytesReference nonClusterstateXContent = XContentHelper.toXContent(config, XContentType.JSON, ToXContent.EMPTY_PARAMS, false);
|
||||
parser = XContentFactory.xContent(XContentType.JSON)
|
||||
.createParser(xContentRegistry(), LoggingDeprecationHandler.INSTANCE, nonClusterstateXContent.streamInput());
|
||||
|
||||
parsedConfig = DatafeedConfig.LENIENT_PARSER.apply(parser, null).build();
|
||||
assertThat(parsedConfig.getHeaders().entrySet(), hasSize(0));
|
||||
}
|
||||
|
||||
public void testCopyConstructor() {
|
||||
for (int i = 0; i < NUMBER_OF_TEST_RUNS; i++) {
|
||||
DatafeedConfig datafeedConfig = createTestInstance();
|
||||
|
@ -7,6 +7,7 @@ package org.elasticsearch.xpack.core.ml.job.config;
|
||||
|
||||
import com.carrotsearch.randomizedtesting.generators.CodepointSetGenerator;
|
||||
import org.elasticsearch.ElasticsearchStatusException;
|
||||
import org.elasticsearch.ResourceAlreadyExistsException;
|
||||
import org.elasticsearch.Version;
|
||||
import org.elasticsearch.common.bytes.BytesReference;
|
||||
import org.elasticsearch.common.io.stream.Writeable;
|
||||
@ -435,10 +436,9 @@ public class JobTests extends AbstractSerializingTestCase<Job> {
|
||||
public void testBuilder_buildWithCreateTime() {
|
||||
Job.Builder builder = buildJobBuilder("foo");
|
||||
Date now = new Date();
|
||||
Job job = builder.setEstablishedModelMemory(randomNonNegativeLong()).build(now);
|
||||
Job job = builder.build(now);
|
||||
assertEquals(now, job.getCreateTime());
|
||||
assertEquals(Version.CURRENT, job.getJobVersion());
|
||||
assertNull(job.getEstablishedModelMemory());
|
||||
}
|
||||
|
||||
public void testJobWithoutVersion() throws IOException {
|
||||
@ -506,37 +506,11 @@ public class JobTests extends AbstractSerializingTestCase<Job> {
|
||||
assertThat(e.getMessage(), containsString("Invalid group id '$$$'"));
|
||||
}
|
||||
|
||||
public void testEstimateMemoryFootprint_GivenEstablished() {
|
||||
Job.Builder builder = buildJobBuilder("established");
|
||||
long establishedModelMemory = randomIntBetween(10_000, 2_000_000_000);
|
||||
builder.setEstablishedModelMemory(establishedModelMemory);
|
||||
if (randomBoolean()) {
|
||||
builder.setAnalysisLimits(new AnalysisLimits(randomNonNegativeLong(), null));
|
||||
}
|
||||
assertEquals(establishedModelMemory + Job.PROCESS_MEMORY_OVERHEAD.getBytes(), builder.build().estimateMemoryFootprint());
|
||||
}
|
||||
|
||||
public void testEstimateMemoryFootprint_GivenLimitAndNotEstablished() {
|
||||
Job.Builder builder = buildJobBuilder("limit");
|
||||
if (rarely()) {
|
||||
// An "established" model memory of 0 means "not established". Generally this won't be set, so getEstablishedModelMemory()
|
||||
// will return null, but if it returns 0 we shouldn't estimate the job's memory requirement to be 0.
|
||||
builder.setEstablishedModelMemory(0L);
|
||||
}
|
||||
ByteSizeValue limit = new ByteSizeValue(randomIntBetween(100, 10000), ByteSizeUnit.MB);
|
||||
builder.setAnalysisLimits(new AnalysisLimits(limit.getMb(), null));
|
||||
assertEquals(limit.getBytes() + Job.PROCESS_MEMORY_OVERHEAD.getBytes(), builder.build().estimateMemoryFootprint());
|
||||
}
|
||||
|
||||
public void testEstimateMemoryFootprint_GivenNoLimitAndNotEstablished() {
|
||||
Job.Builder builder = buildJobBuilder("nolimit");
|
||||
if (rarely()) {
|
||||
// An "established" model memory of 0 means "not established". Generally this won't be set, so getEstablishedModelMemory()
|
||||
// will return null, but if it returns 0 we shouldn't estimate the job's memory requirement to be 0.
|
||||
builder.setEstablishedModelMemory(0L);
|
||||
}
|
||||
assertEquals(ByteSizeUnit.MB.toBytes(AnalysisLimits.PRE_6_1_DEFAULT_MODEL_MEMORY_LIMIT_MB)
|
||||
+ Job.PROCESS_MEMORY_OVERHEAD.getBytes(), builder.build().estimateMemoryFootprint());
|
||||
public void testInvalidGroup_matchesJobId() {
|
||||
Job.Builder builder = buildJobBuilder("foo");
|
||||
builder.setGroups(Collections.singletonList("foo"));
|
||||
ResourceAlreadyExistsException e = expectThrows(ResourceAlreadyExistsException.class, builder::build);
|
||||
assertEquals(e.getMessage(), "job and group names must be unique but job [foo] and group [foo] have the same name");
|
||||
}
|
||||
|
||||
public void testEarliestValidTimestamp_GivenEmptyDataCounts() {
|
||||
@ -617,9 +591,6 @@ public class JobTests extends AbstractSerializingTestCase<Job> {
|
||||
if (randomBoolean()) {
|
||||
builder.setFinishedTime(new Date(randomNonNegativeLong()));
|
||||
}
|
||||
if (randomBoolean()) {
|
||||
builder.setEstablishedModelMemory(randomNonNegativeLong());
|
||||
}
|
||||
builder.setAnalysisConfig(AnalysisConfigTests.createRandomized());
|
||||
builder.setAnalysisLimits(AnalysisLimits.validateAndSetDefaults(AnalysisLimitsTests.createRandomized(), null,
|
||||
AnalysisLimits.DEFAULT_MODEL_MEMORY_LIMIT_MB));
|
||||
|
@@ -90,12 +90,12 @@ public class JobUpdateTests extends AbstractSerializingTestCase<JobUpdate> {
        if (useInternalParser && randomBoolean()) {
            update.setModelSnapshotMinVersion(Version.CURRENT);
        }
        if (useInternalParser && randomBoolean()) {
            update.setEstablishedModelMemory(randomNonNegativeLong());
        }
        if (useInternalParser && randomBoolean()) {
            update.setJobVersion(randomFrom(Version.CURRENT, Version.V_6_2_0, Version.V_6_1_0));
        }
        if (useInternalParser) {
            update.setClearFinishTime(randomBoolean());
        }

        return update.build();
    }

@ -13,6 +13,9 @@ import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.test.ESTestCase;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.ModelPlotConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSizeStats;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSnapshot;
|
||||
@ -28,25 +31,28 @@ import java.io.IOException;
|
||||
import java.nio.charset.StandardCharsets;
|
||||
import java.util.Arrays;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
|
||||
|
||||
public class ElasticsearchMappingsTests extends ESTestCase {
|
||||
|
||||
public void testReservedFields() throws Exception {
|
||||
Set<String> overridden = new HashSet<>();
|
||||
// These are not reserved because they're Elasticsearch keywords, not
|
||||
// field names
|
||||
private static List<String> KEYWORDS = Arrays.asList(
|
||||
ElasticsearchMappings.ANALYZER,
|
||||
ElasticsearchMappings.COPY_TO,
|
||||
ElasticsearchMappings.DYNAMIC,
|
||||
ElasticsearchMappings.ENABLED,
|
||||
ElasticsearchMappings.NESTED,
|
||||
ElasticsearchMappings.PROPERTIES,
|
||||
ElasticsearchMappings.TYPE,
|
||||
ElasticsearchMappings.WHITESPACE
|
||||
);
|
||||
|
||||
// These are not reserved because they're Elasticsearch keywords, not
|
||||
// field names
|
||||
overridden.add(ElasticsearchMappings.ANALYZER);
|
||||
overridden.add(ElasticsearchMappings.COPY_TO);
|
||||
overridden.add(ElasticsearchMappings.DYNAMIC);
|
||||
overridden.add(ElasticsearchMappings.ENABLED);
|
||||
overridden.add(ElasticsearchMappings.NESTED);
|
||||
overridden.add(ElasticsearchMappings.PROPERTIES);
|
||||
overridden.add(ElasticsearchMappings.TYPE);
|
||||
overridden.add(ElasticsearchMappings.WHITESPACE);
|
||||
public void testResultsMapppingReservedFields() throws Exception {
|
||||
Set<String> overridden = new HashSet<>(KEYWORDS);
|
||||
|
||||
// These are not reserved because they're data types, not field names
|
||||
overridden.add(Result.TYPE.getPreferredName());
|
||||
@ -57,25 +63,44 @@ public class ElasticsearchMappingsTests extends ESTestCase {
|
||||
overridden.add(Quantiles.TYPE.getPreferredName());
|
||||
|
||||
Set<String> expected = collectResultsDocFieldNames();
|
||||
|
||||
expected.removeAll(overridden);
|
||||
|
||||
if (ReservedFieldNames.RESERVED_FIELD_NAMES.size() != expected.size()) {
|
||||
Set<String> diff = new HashSet<>(ReservedFieldNames.RESERVED_FIELD_NAMES);
|
||||
compareFields(expected, ReservedFieldNames.RESERVED_RESULT_FIELD_NAMES);
|
||||
}
|
||||
|
||||
public void testConfigMapppingReservedFields() throws Exception {
|
||||
Set<String> overridden = new HashSet<>(KEYWORDS);
|
||||
|
||||
// These are not reserved because they're data types, not field names
|
||||
overridden.add(Job.TYPE);
|
||||
overridden.add(DatafeedConfig.TYPE);
|
||||
// ModelPlotConfig has an 'enabled' the same as one of the keywords
|
||||
overridden.remove(ModelPlotConfig.ENABLED_FIELD.getPreferredName());
|
||||
|
||||
Set<String> expected = collectConfigDocFieldNames();
|
||||
expected.removeAll(overridden);
|
||||
|
||||
compareFields(expected, ReservedFieldNames.RESERVED_CONFIG_FIELD_NAMES);
|
||||
}
|
||||
|
||||
|
||||
private void compareFields(Set<String> expected, Set<String> reserved) {
|
||||
if (reserved.size() != expected.size()) {
|
||||
Set<String> diff = new HashSet<>(reserved);
|
||||
diff.removeAll(expected);
|
||||
StringBuilder errorMessage = new StringBuilder("Fields in ReservedFieldNames but not in expected: ").append(diff);
|
||||
|
||||
diff = new HashSet<>(expected);
|
||||
diff.removeAll(ReservedFieldNames.RESERVED_FIELD_NAMES);
|
||||
diff.removeAll(reserved);
|
||||
errorMessage.append("\nFields in expected but not in ReservedFieldNames: ").append(diff);
|
||||
fail(errorMessage.toString());
|
||||
}
|
||||
assertEquals(ReservedFieldNames.RESERVED_FIELD_NAMES.size(), expected.size());
|
||||
assertEquals(reserved.size(), expected.size());
|
||||
|
||||
for (String s : expected) {
|
||||
// By comparing like this the failure messages say which string is missing
|
||||
String reserved = ReservedFieldNames.RESERVED_FIELD_NAMES.contains(s) ? s : null;
|
||||
assertEquals(s, reserved);
|
||||
String reservedField = reserved.contains(s) ? s : null;
|
||||
assertEquals(s, reservedField);
|
||||
}
|
||||
}
|
||||
|
||||
@ -105,10 +130,17 @@ public class ElasticsearchMappingsTests extends ESTestCase {
|
||||
|
||||
private Set<String> collectResultsDocFieldNames() throws IOException {
|
||||
// Only the mappings for the results index should be added below. Do NOT add mappings for other indexes here.
|
||||
return collectFieldNames(ElasticsearchMappings.resultsMapping());
|
||||
}
|
||||
|
||||
XContentBuilder builder = ElasticsearchMappings.docMapping();
|
||||
private Set<String> collectConfigDocFieldNames() throws IOException {
|
||||
// Only the mappings for the config index should be added below. Do NOT add mappings for other indexes here.
|
||||
return collectFieldNames(ElasticsearchMappings.configMapping());
|
||||
}
|
||||
|
||||
private Set<String> collectFieldNames(XContentBuilder mapping) throws IOException {
|
||||
BufferedInputStream inputStream =
|
||||
new BufferedInputStream(new ByteArrayInputStream(Strings.toString(builder).getBytes(StandardCharsets.UTF_8)));
|
||||
new BufferedInputStream(new ByteArrayInputStream(Strings.toString(mapping).getBytes(StandardCharsets.UTF_8)));
|
||||
JsonParser parser = new JsonFactory().createParser(inputStream);
|
||||
Set<String> fieldNames = new HashSet<>();
|
||||
boolean isAfterPropertiesStart = false;
|
||||
|
@@ -20,20 +20,38 @@ import org.elasticsearch.xpack.core.ml.notifications.AuditorField;

import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

public final class XPackRestTestHelper {

    public static final List<String> ML_PRE_V660_TEMPLATES = Collections.unmodifiableList(
            Arrays.asList(AuditorField.NOTIFICATIONS_INDEX,
                    MlMetaIndex.INDEX_NAME,
                    AnomalyDetectorsIndex.jobStateIndexName(),
                    AnomalyDetectorsIndex.jobResultsIndexPrefix()));

    public static final List<String> ML_POST_V660_TEMPLATES = Collections.unmodifiableList(
            Arrays.asList(AuditorField.NOTIFICATIONS_INDEX,
                    MlMetaIndex.INDEX_NAME,
                    AnomalyDetectorsIndex.jobStateIndexName(),
                    AnomalyDetectorsIndex.jobResultsIndexPrefix(),
                    AnomalyDetectorsIndex.configIndexName()));

    private XPackRestTestHelper() {
    }

    /**
     * Waits for the Machine Learning templates to be created
     * and check the version is up to date
     * For each template name wait for the template to be created and
     * for the template version to be equal to the master node version.
     *
     * @param client The rest client
     * @param templateNames Names of the templates to wait for
     * @throws InterruptedException If the wait is interrupted
     */
    public static void waitForMlTemplates(RestClient client) throws InterruptedException {
    public static void waitForTemplates(RestClient client, List<String> templateNames) throws InterruptedException {
        AtomicReference<Version> masterNodeVersion = new AtomicReference<>();
        ESTestCase.awaitBusy(() -> {
            String response;
@@ -53,8 +71,6 @@ public final class XPackRestTestHelper {
            return false;
        });

        final List<String> templateNames = Arrays.asList(AuditorField.NOTIFICATIONS_INDEX, MlMetaIndex.INDEX_NAME,
                AnomalyDetectorsIndex.jobStateIndexName(), AnomalyDetectorsIndex.jobResultsIndexPrefix());
        for (String template : templateNames) {
            ESTestCase.awaitBusy(() -> {
                Map<?, ?> response;
@@ -74,5 +90,4 @@ public final class XPackRestTestHelper {
            });
        }
    }

}
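A hedged sketch of how an upgrade test is expected to pick the right template list (both constants and waitForTemplates come from the hunk above; the oldClusterVersion check and the client() accessor are assumptions about the surrounding test class):

    List<String> templates = oldClusterVersion.before(Version.V_6_6_0)
            ? XPackRestTestHelper.ML_PRE_V660_TEMPLATES
            : XPackRestTestHelper.ML_POST_V660_TEMPLATES;
    XPackRestTestHelper.waitForTemplates(client(), templates);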
@@ -6,13 +6,11 @@
package org.elasticsearch.xpack.ml.integration;

import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.xpack.core.ml.action.GetJobsStatsAction;
import org.elasticsearch.xpack.core.ml.action.GetRecordsAction;
import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;
import org.elasticsearch.xpack.core.ml.job.config.DataDescription;
import org.elasticsearch.xpack.core.ml.job.config.Detector;
import org.elasticsearch.xpack.core.ml.job.config.Job;
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSizeStats;
import org.elasticsearch.xpack.core.ml.job.results.AnomalyRecord;
import org.junit.After;

@@ -29,7 +27,7 @@ import static org.hamcrest.Matchers.greaterThan;
public class BasicRenormalizationIT extends MlNativeAutodetectIntegTestCase {

    @After
    public void tearDownData() throws Exception {
    public void tearDownData() {
        cleanUp();
    }

@@ -52,15 +50,6 @@ public class BasicRenormalizationIT extends MlNativeAutodetectIntegTestCase {
        // This is the key assertion: if renormalization never happened then the record_score would
        // be the same as the initial_record_score on the anomaly record that happened earlier
        assertThat(earlierRecord.getInitialRecordScore(), greaterThan(earlierRecord.getRecordScore()));

        // Since this job ran for 50 buckets, it's a good place to assert
        // that established model memory matches model memory in the job stats
        assertBusy(() -> {
            GetJobsStatsAction.Response.JobStats jobStats = getJobStats(jobId).get(0);
            ModelSizeStats modelSizeStats = jobStats.getModelSizeStats();
            Job updatedJob = getJob(jobId).get(0);
            assertThat(updatedJob.getEstablishedModelMemory(), equalTo(modelSizeStats.getModelBytes()));
        });
    }

    public void testRenormalizationDisabled() throws Exception {
@@ -94,7 +83,7 @@ public class BasicRenormalizationIT extends MlNativeAutodetectIntegTestCase {
        closeJob(job.getId());
    }

    private Job.Builder buildAndRegisterJob(String jobId, TimeValue bucketSpan, Long renormalizationWindow) throws Exception {
    private Job.Builder buildAndRegisterJob(String jobId, TimeValue bucketSpan, Long renormalizationWindow) {
        Detector.Builder detector = new Detector.Builder("count", null);
        AnalysisConfig.Builder analysisConfig = new AnalysisConfig.Builder(Arrays.asList(detector.build()));
        analysisConfig.setBucketSpan(bucketSpan);
@@ -15,7 +15,6 @@ import org.elasticsearch.common.util.concurrent.ConcurrentCollections;
import org.elasticsearch.common.util.concurrent.ConcurrentMapLong;
import org.elasticsearch.xpack.core.ml.action.DeleteDatafeedAction;
import org.elasticsearch.xpack.core.ml.action.GetDatafeedsStatsAction;
import org.elasticsearch.xpack.core.ml.action.GetJobsStatsAction;
import org.elasticsearch.xpack.core.ml.action.KillProcessAction;
import org.elasticsearch.xpack.core.ml.action.PutJobAction;
import org.elasticsearch.xpack.core.ml.action.StopDatafeedAction;
@@ -25,7 +24,6 @@ import org.elasticsearch.xpack.core.ml.datafeed.DatafeedState;
import org.elasticsearch.xpack.core.ml.job.config.Job;
import org.elasticsearch.xpack.core.ml.job.config.JobState;
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSizeStats;
import org.junit.After;

import java.util.ArrayList;
@@ -92,15 +90,6 @@ public class DatafeedJobsIT extends MlNativeAutodetectIntegTestCase {
        }, 60, TimeUnit.SECONDS);

        waitUntilJobIsClosed(job.getId());

        // Since this job ran for 168 buckets, it's a good place to assert
        // that established model memory matches model memory in the job stats
        assertBusy(() -> {
            GetJobsStatsAction.Response.JobStats jobStats = getJobStats(job.getId()).get(0);
            ModelSizeStats modelSizeStats = jobStats.getModelSizeStats();
            Job updatedJob = getJob(job.getId()).get(0);
            assertThat(updatedJob.getEstablishedModelMemory(), equalTo(modelSizeStats.getModelBytes()));
        });
    }

    public void testRealtime() throws Exception {
@@ -949,7 +949,7 @@ public class DatafeedJobsRestIT extends ESRestTestCase {
        response = e.getResponse();
        assertThat(response.getStatusLine().getStatusCode(), equalTo(409));
        assertThat(EntityUtils.toString(response.getEntity()),
                containsString("Cannot delete job [" + jobId + "] because datafeed [" + datafeedId + "] refers to it"));
                containsString("Cannot delete job [" + jobId + "] because the job is opened"));

        response = client().performRequest(new Request("POST", MachineLearning.BASE_PATH + "datafeeds/" + datafeedId + "/_stop"));
        assertThat(response.getStatusLine().getStatusCode(), equalTo(200));
@@ -95,8 +95,8 @@ public class DeleteExpiredDataIT extends MlNativeAutodetectIntegTestCase {
    }

    public void testDeleteExpiredData() throws Exception {
        registerJob(newJobBuilder("no-retention").setResultsRetentionDays(null).setModelSnapshotRetentionDays(null));
        registerJob(newJobBuilder("results-retention").setResultsRetentionDays(1L).setModelSnapshotRetentionDays(null));
        registerJob(newJobBuilder("no-retention").setResultsRetentionDays(null).setModelSnapshotRetentionDays(1000L));
        registerJob(newJobBuilder("results-retention").setResultsRetentionDays(1L).setModelSnapshotRetentionDays(1000L));
        registerJob(newJobBuilder("snapshots-retention").setResultsRetentionDays(null).setModelSnapshotRetentionDays(2L));
        registerJob(newJobBuilder("snapshots-retention-with-retain").setResultsRetentionDays(null).setModelSnapshotRetentionDays(2L));
        registerJob(newJobBuilder("results-and-snapshots-retention").setResultsRetentionDays(1L).setModelSnapshotRetentionDays(2L));
@@ -33,7 +33,7 @@ import static org.hamcrest.Matchers.is;
public class InterimResultsDeletedAfterReopeningJobIT extends MlNativeAutodetectIntegTestCase {

    @After
    public void cleanUpTest() throws Exception {
    public void cleanUpTest() {
        cleanUp();
    }

@@ -42,6 +42,7 @@ import org.elasticsearch.xpack.core.LocalStateCompositeXPackPlugin;
import org.elasticsearch.xpack.core.XPackClientPlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.ml.MlMetadata;
import org.elasticsearch.xpack.core.ml.MlTasks;
import org.elasticsearch.xpack.core.ml.action.CloseJobAction;
import org.elasticsearch.xpack.core.ml.action.DeleteDatafeedAction;
import org.elasticsearch.xpack.core.ml.action.DeleteJobAction;
@@ -445,9 +446,9 @@ abstract class MlNativeAutodetectIntegTestCase extends ESIntegTestCase {
        List<NamedWriteableRegistry.Entry> entries = new ArrayList<>(ClusterModule.getNamedWriteables());
        entries.addAll(new SearchModule(Settings.EMPTY, true, Collections.emptyList()).getNamedWriteables());
        entries.add(new NamedWriteableRegistry.Entry(MetaData.Custom.class, "ml", MlMetadata::new));
        entries.add(new NamedWriteableRegistry.Entry(PersistentTaskParams.class, StartDatafeedAction.TASK_NAME,
        entries.add(new NamedWriteableRegistry.Entry(PersistentTaskParams.class, MlTasks.DATAFEED_TASK_NAME,
                StartDatafeedAction.DatafeedParams::new));
        entries.add(new NamedWriteableRegistry.Entry(PersistentTaskParams.class, OpenJobAction.TASK_NAME,
        entries.add(new NamedWriteableRegistry.Entry(PersistentTaskParams.class, MlTasks.JOB_TASK_NAME,
                OpenJobAction.JobParams::new));
        entries.add(new NamedWriteableRegistry.Entry(PersistentTaskState.class, JobTaskState.NAME, JobTaskState::new));
        entries.add(new NamedWriteableRegistry.Entry(PersistentTaskState.class, DatafeedState.NAME, DatafeedState::fromStream));
@@ -7,14 +7,12 @@ package org.elasticsearch.xpack.ml.integration;

import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.xpack.core.ml.action.GetBucketsAction;
import org.elasticsearch.xpack.core.ml.action.GetJobsStatsAction;
import org.elasticsearch.xpack.core.ml.action.GetOverallBucketsAction;
import org.elasticsearch.xpack.core.ml.action.util.PageParams;
import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;
import org.elasticsearch.xpack.core.ml.job.config.DataDescription;
import org.elasticsearch.xpack.core.ml.job.config.Detector;
import org.elasticsearch.xpack.core.ml.job.config.Job;
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSizeStats;
import org.junit.After;

import java.util.ArrayList;
@@ -36,7 +34,7 @@ public class OverallBucketsIT extends MlNativeAutodetectIntegTestCase {
    private static final long BUCKET_SPAN_SECONDS = 3600;

    @After
    public void cleanUpTest() throws Exception {
    public void cleanUpTest() {
        cleanUp();
    }

@@ -99,15 +97,6 @@ public class OverallBucketsIT extends MlNativeAutodetectIntegTestCase {
                GetOverallBucketsAction.INSTANCE, filteredOverallBucketsRequest).actionGet();
        assertThat(filteredOverallBucketsResponse.getOverallBuckets().count(), equalTo(2L));
        }

        // Since this job ran for 3000 buckets, it's a good place to assert
        // that established model memory matches model memory in the job stats
        assertBusy(() -> {
            GetJobsStatsAction.Response.JobStats jobStats = getJobStats(job.getId()).get(0);
            ModelSizeStats modelSizeStats = jobStats.getModelSizeStats();
            Job updatedJob = getJob(job.getId()).get(0);
            assertThat(updatedJob.getEstablishedModelMemory(), equalTo(modelSizeStats.getModelBytes()));
        });
    }

    private static Map<String, Object> createRecord(long timestamp) {
@@ -7,12 +7,10 @@ package org.elasticsearch.xpack.ml.integration;

import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.xpack.core.ml.action.GetJobsStatsAction;
import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;
import org.elasticsearch.xpack.core.ml.job.config.DataDescription;
import org.elasticsearch.xpack.core.ml.job.config.Detector;
import org.elasticsearch.xpack.core.ml.job.config.Job;
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSizeStats;
import org.elasticsearch.xpack.core.ml.job.results.ForecastRequestStats;
import org.junit.After;

@@ -82,15 +80,6 @@ public class RestoreModelSnapshotIT extends MlNativeAutodetectIntegTestCase {
        });

        closeJob(job.getId());

        // Since these jobs ran for 72 buckets, it's a good place to assert
        // that established model memory matches model memory in the job stats
        assertBusy(() -> {
            GetJobsStatsAction.Response.JobStats jobStats = getJobStats(job.getId()).get(0);
            ModelSizeStats modelSizeStats = jobStats.getModelSizeStats();
            Job updatedJob = getJob(job.getId()).get(0);
            assertThat(updatedJob.getEstablishedModelMemory(), equalTo(modelSizeStats.getModelBytes()));
        });
    }

    private Job.Builder buildAndRegisterJob(String jobId, TimeValue bucketSpan) throws Exception {
@ -162,10 +162,12 @@ import org.elasticsearch.xpack.ml.action.TransportValidateDetectorAction;
|
||||
import org.elasticsearch.xpack.ml.action.TransportValidateJobConfigAction;
|
||||
import org.elasticsearch.xpack.ml.datafeed.DatafeedJobBuilder;
|
||||
import org.elasticsearch.xpack.ml.datafeed.DatafeedManager;
|
||||
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.JobManager;
|
||||
import org.elasticsearch.xpack.ml.job.UpdateJobProcessNotifier;
|
||||
import org.elasticsearch.xpack.ml.job.categorization.MlClassicTokenizer;
|
||||
import org.elasticsearch.xpack.ml.job.categorization.MlClassicTokenizerFactory;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobDataCountsPersister;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobResultsPersister;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider;
|
||||
@ -179,6 +181,7 @@ import org.elasticsearch.xpack.ml.job.process.normalizer.NativeNormalizerProcess
|
||||
import org.elasticsearch.xpack.ml.job.process.normalizer.NormalizerFactory;
|
||||
import org.elasticsearch.xpack.ml.job.process.normalizer.NormalizerProcessFactory;
|
||||
import org.elasticsearch.xpack.ml.notifications.Auditor;
|
||||
import org.elasticsearch.xpack.ml.process.MlMemoryTracker;
|
||||
import org.elasticsearch.xpack.ml.process.NativeController;
|
||||
import org.elasticsearch.xpack.ml.process.NativeControllerHolder;
|
||||
import org.elasticsearch.xpack.ml.rest.RestDeleteExpiredDataAction;
|
||||
@ -264,7 +267,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
|
||||
public static final Setting<Integer> MAX_MACHINE_MEMORY_PERCENT =
|
||||
Setting.intSetting("xpack.ml.max_machine_memory_percent", 30, 5, 90, Property.Dynamic, Property.NodeScope);
|
||||
public static final Setting<Integer> MAX_LAZY_ML_NODES =
|
||||
Setting.intSetting("xpack.ml.max_lazy_ml_nodes", 0, 0, 3, Property.Dynamic, Property.NodeScope);
|
||||
Setting.intSetting("xpack.ml.max_lazy_ml_nodes", 0, 0, 3, Property.Dynamic, Property.NodeScope);
|
||||
|
||||
private static final Logger logger = LogManager.getLogger(XPackPlugin.class);
|
||||
|
||||
@ -275,6 +278,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
|
||||
|
||||
private final SetOnce<AutodetectProcessManager> autodetectProcessManager = new SetOnce<>();
|
||||
private final SetOnce<DatafeedManager> datafeedManager = new SetOnce<>();
|
||||
private final SetOnce<MlMemoryTracker> memoryTracker = new SetOnce<>();
|
||||
|
||||
public MachineLearning(Settings settings, Path configPath) {
|
||||
this.settings = settings;
|
||||
@ -299,7 +303,8 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
|
||||
AutodetectBuilder.MAX_ANOMALY_RECORDS_SETTING_DYNAMIC,
|
||||
AutodetectProcessManager.MAX_RUNNING_JOBS_PER_NODE,
|
||||
AutodetectProcessManager.MAX_OPEN_JOBS_PER_NODE,
|
||||
AutodetectProcessManager.MIN_DISK_SPACE_OFF_HEAP));
|
||||
AutodetectProcessManager.MIN_DISK_SPACE_OFF_HEAP,
|
||||
MlConfigMigrationEligibilityCheck.ENABLE_CONFIG_MIGRATION));
|
||||
}
|
||||
|
||||
public Settings additionalSettings() {
|
||||
@ -367,8 +372,10 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
|
||||
|
||||
Auditor auditor = new Auditor(client, clusterService.getNodeName());
|
||||
JobResultsProvider jobResultsProvider = new JobResultsProvider(client, settings);
|
||||
JobConfigProvider jobConfigProvider = new JobConfigProvider(client);
|
||||
DatafeedConfigProvider datafeedConfigProvider = new DatafeedConfigProvider(client, xContentRegistry);
|
||||
UpdateJobProcessNotifier notifier = new UpdateJobProcessNotifier(client, clusterService, threadPool);
|
||||
JobManager jobManager = new JobManager(env, settings, jobResultsProvider, clusterService, auditor, client, notifier);
|
||||
JobManager jobManager = new JobManager(env, settings, jobResultsProvider, clusterService, auditor, threadPool, client, notifier);
|
||||
|
||||
JobDataCountsPersister jobDataCountsPersister = new JobDataCountsPersister(client);
|
||||
JobResultsPersister jobResultsPersister = new JobResultsPersister(client);
|
||||
@ -406,12 +413,15 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
|
||||
jobManager, jobResultsProvider, jobResultsPersister, jobDataCountsPersister, autodetectProcessFactory,
|
||||
normalizerFactory, xContentRegistry, auditor);
|
||||
this.autodetectProcessManager.set(autodetectProcessManager);
|
||||
DatafeedJobBuilder datafeedJobBuilder = new DatafeedJobBuilder(client, jobResultsProvider, auditor, System::currentTimeMillis);
|
||||
DatafeedJobBuilder datafeedJobBuilder = new DatafeedJobBuilder(client, settings, xContentRegistry,
|
||||
auditor, System::currentTimeMillis);
|
||||
DatafeedManager datafeedManager = new DatafeedManager(threadPool, client, clusterService, datafeedJobBuilder,
|
||||
System::currentTimeMillis, auditor);
|
||||
this.datafeedManager.set(datafeedManager);
|
||||
MlLifeCycleService mlLifeCycleService = new MlLifeCycleService(environment, clusterService, datafeedManager,
|
||||
autodetectProcessManager);
|
||||
MlMemoryTracker memoryTracker = new MlMemoryTracker(settings, clusterService, threadPool, jobManager, jobResultsProvider);
|
||||
this.memoryTracker.set(memoryTracker);
|
||||
|
||||
// This object's constructor attaches to the license state, so there's no need to retain another reference to it
|
||||
new InvalidLicenseEnforcer(getLicenseState(), threadPool, datafeedManager, autodetectProcessManager);
|
||||
@ -422,13 +432,16 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
|
||||
return Arrays.asList(
|
||||
mlLifeCycleService,
|
||||
jobResultsProvider,
|
||||
jobConfigProvider,
|
||||
datafeedConfigProvider,
|
||||
jobManager,
|
||||
autodetectProcessManager,
|
||||
new MlInitializationService(settings, threadPool, clusterService, client),
|
||||
jobDataCountsPersister,
|
||||
datafeedManager,
|
||||
auditor,
|
||||
new MlAssignmentNotifier(auditor, clusterService)
|
||||
new MlAssignmentNotifier(settings, auditor, threadPool, client, clusterService),
|
||||
memoryTracker
|
||||
);
|
||||
}
|
||||
|
||||
@ -441,8 +454,9 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
|
||||
}
|
||||
|
||||
return Arrays.asList(
|
||||
new TransportOpenJobAction.OpenJobPersistentTasksExecutor(settings, clusterService, autodetectProcessManager.get()),
|
||||
new TransportStartDatafeedAction.StartDatafeedPersistentTasksExecutor(datafeedManager.get())
|
||||
new TransportOpenJobAction.OpenJobPersistentTasksExecutor(settings, clusterService, autodetectProcessManager.get(),
|
||||
memoryTracker.get(), client),
|
||||
new TransportStartDatafeedAction.StartDatafeedPersistentTasksExecutor( datafeedManager.get())
|
||||
);
|
||||
}
|
||||
|
||||
@ -648,6 +662,23 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
|
||||
logger.warn("Error loading the template for the " + MlMetaIndex.INDEX_NAME + " index", e);
|
||||
}
|
||||
|
||||
try (XContentBuilder configMapping = ElasticsearchMappings.configMapping()) {
|
||||
IndexTemplateMetaData configTemplate = IndexTemplateMetaData.builder(AnomalyDetectorsIndex.configIndexName())
|
||||
.patterns(Collections.singletonList(AnomalyDetectorsIndex.configIndexName()))
|
||||
.settings(Settings.builder()
|
||||
// Our indexes are small and one shard puts the
|
||||
// least possible burden on Elasticsearch
|
||||
.put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)
|
||||
.put(IndexMetaData.SETTING_AUTO_EXPAND_REPLICAS, "0-1")
|
||||
.put(UnassignedInfo.INDEX_DELAYED_NODE_LEFT_TIMEOUT_SETTING.getKey(), delayedNodeTimeOutSetting))
|
||||
.version(Version.CURRENT.id)
|
||||
.putMapping(ElasticsearchMappings.DOC_TYPE, Strings.toString(configMapping))
|
||||
.build();
|
||||
templates.put(AnomalyDetectorsIndex.configIndexName(), configTemplate);
|
||||
} catch (IOException e) {
|
||||
logger.warn("Error loading the template for the " + AnomalyDetectorsIndex.configIndexName() + " index", e);
|
||||
}
|
||||
|
||||
try (XContentBuilder stateMapping = ElasticsearchMappings.stateMapping()) {
|
||||
IndexTemplateMetaData stateTemplate = IndexTemplateMetaData.builder(AnomalyDetectorsIndex.jobStateIndexName())
|
||||
.patterns(Collections.singletonList(AnomalyDetectorsIndex.jobStateIndexName()))
|
||||
@ -663,7 +694,7 @@ public class MachineLearning extends Plugin implements ActionPlugin, AnalysisPlu
|
||||
logger.error("Error loading the template for the " + AnomalyDetectorsIndex.jobStateIndexName() + " index", e);
|
||||
}
|
||||
|
||||
try (XContentBuilder docMapping = ElasticsearchMappings.docMapping()) {
|
||||
try (XContentBuilder docMapping = ElasticsearchMappings.resultsMapping()) {
|
||||
IndexTemplateMetaData jobResultsTemplate = IndexTemplateMetaData.builder(AnomalyDetectorsIndex.jobResultsIndexPrefix())
|
||||
.patterns(Collections.singletonList(AnomalyDetectorsIndex.jobResultsIndexPrefix() + "*"))
|
||||
.settings(Settings.builder()
|
||||
|
@ -7,68 +7,75 @@ package org.elasticsearch.xpack.ml;
|
||||
|
||||
import org.apache.logging.log4j.LogManager;
|
||||
import org.apache.logging.log4j.Logger;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.ClusterChangedEvent;
|
||||
import org.elasticsearch.cluster.ClusterStateListener;
|
||||
import org.elasticsearch.cluster.LocalNodeMasterListener;
|
||||
import org.elasticsearch.cluster.node.DiscoveryNode;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.action.OpenJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.StartDatafeedAction;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData.Assignment;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData.PersistentTask;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.action.OpenJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.StartDatafeedAction;
|
||||
import org.elasticsearch.xpack.ml.notifications.Auditor;
|
||||
|
||||
import java.util.Objects;
|
||||
import java.util.concurrent.atomic.AtomicBoolean;
|
||||
|
||||
public class MlAssignmentNotifier implements ClusterStateListener, LocalNodeMasterListener {
|
||||
|
||||
public class MlAssignmentNotifier implements ClusterStateListener {
|
||||
private static final Logger logger = LogManager.getLogger(MlAssignmentNotifier.class);
|
||||
|
||||
private final Auditor auditor;
|
||||
private final ClusterService clusterService;
|
||||
private final MlConfigMigrator mlConfigMigrator;
|
||||
private final ThreadPool threadPool;
|
||||
|
||||
private final AtomicBoolean enabled = new AtomicBoolean(false);
|
||||
|
||||
MlAssignmentNotifier(Auditor auditor, ClusterService clusterService) {
|
||||
MlAssignmentNotifier(Settings settings, Auditor auditor, ThreadPool threadPool, Client client, ClusterService clusterService) {
|
||||
this.auditor = auditor;
|
||||
this.clusterService = clusterService;
|
||||
clusterService.addLocalNodeMasterListener(this);
|
||||
this.mlConfigMigrator = new MlConfigMigrator(settings, client, clusterService);
|
||||
this.threadPool = threadPool;
|
||||
clusterService.addListener(this);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onMaster() {
|
||||
if (enabled.compareAndSet(false, true)) {
|
||||
clusterService.addListener(this);
|
||||
}
|
||||
MlAssignmentNotifier(Auditor auditor, ThreadPool threadPool, MlConfigMigrator mlConfigMigrator, ClusterService clusterService) {
|
||||
this.auditor = auditor;
|
||||
this.mlConfigMigrator = mlConfigMigrator;
|
||||
this.threadPool = threadPool;
|
||||
clusterService.addListener(this);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void offMaster() {
|
||||
if (enabled.compareAndSet(true, false)) {
|
||||
clusterService.removeListener(this);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public String executorName() {
|
||||
private String executorName() {
|
||||
return ThreadPool.Names.GENERIC;
|
||||
}
|
||||
|
||||
@Override
|
||||
public void clusterChanged(ClusterChangedEvent event) {
|
||||
if (enabled.get() == false) {
|
||||
|
||||
if (event.localNodeMaster() == false) {
|
||||
return;
|
||||
}
|
||||
|
||||
mlConfigMigrator.migrateConfigsWithoutTasks(event.state(), ActionListener.wrap(
|
||||
response -> threadPool.executor(executorName()).execute(() -> auditChangesToMlTasks(event)),
|
||||
e -> {
|
||||
logger.error("error migrating ml configurations", e);
|
||||
threadPool.executor(executorName()).execute(() -> auditChangesToMlTasks(event));
|
||||
}
|
||||
));
|
||||
}
|
||||
|
||||
private void auditChangesToMlTasks(ClusterChangedEvent event) {
|
||||
|
||||
if (event.metaDataChanged() == false) {
|
||||
return;
|
||||
}
|
||||
|
||||
PersistentTasksCustomMetaData previous = event.previousState().getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
PersistentTasksCustomMetaData current = event.state().getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
|
||||
if (Objects.equals(previous, current)) {
|
||||
return;
|
||||
}
|
||||
@ -80,7 +87,7 @@ public class MlAssignmentNotifier implements ClusterStateListener, LocalNodeMast
|
||||
if (Objects.equals(currentAssignment, previousAssignment)) {
|
||||
continue;
|
||||
}
|
||||
if (OpenJobAction.TASK_NAME.equals(currentTask.getTaskName())) {
|
||||
if (MlTasks.JOB_TASK_NAME.equals(currentTask.getTaskName())) {
|
||||
String jobId = ((OpenJobAction.JobParams) currentTask.getParams()).getJobId();
|
||||
if (currentAssignment.getExecutorNode() == null) {
|
||||
auditor.warning(jobId, "No node found to open job. Reasons [" + currentAssignment.getExplanation() + "]");
|
||||
@ -88,17 +95,21 @@ public class MlAssignmentNotifier implements ClusterStateListener, LocalNodeMast
|
||||
DiscoveryNode node = event.state().nodes().get(currentAssignment.getExecutorNode());
|
||||
auditor.info(jobId, "Opening job on node [" + node.toString() + "]");
|
||||
}
|
||||
} else if (StartDatafeedAction.TASK_NAME.equals(currentTask.getTaskName())) {
|
||||
String datafeedId = ((StartDatafeedAction.DatafeedParams) currentTask.getParams()).getDatafeedId();
|
||||
DatafeedConfig datafeedConfig = MlMetadata.getMlMetadata(event.state()).getDatafeed(datafeedId);
|
||||
} else if (MlTasks.DATAFEED_TASK_NAME.equals(currentTask.getTaskName())) {
|
||||
StartDatafeedAction.DatafeedParams datafeedParams = (StartDatafeedAction.DatafeedParams) currentTask.getParams();
|
||||
String jobId = datafeedParams.getJobId();
|
||||
if (currentAssignment.getExecutorNode() == null) {
|
||||
String msg = "No node found to start datafeed [" + datafeedId +"]. Reasons [" +
|
||||
String msg = "No node found to start datafeed [" + datafeedParams.getDatafeedId() +"]. Reasons [" +
|
||||
currentAssignment.getExplanation() + "]";
|
||||
logger.warn("[{}] {}", datafeedConfig.getJobId(), msg);
|
||||
auditor.warning(datafeedConfig.getJobId(), msg);
|
||||
logger.warn("[{}] {}", jobId, msg);
|
||||
if (jobId != null) {
|
||||
auditor.warning(jobId, msg);
|
||||
}
|
||||
} else {
|
||||
DiscoveryNode node = event.state().nodes().get(currentAssignment.getExecutorNode());
|
||||
auditor.info(datafeedConfig.getJobId(), "Starting datafeed [" + datafeedId + "] on node [" + node + "]");
|
||||
if (jobId != null) {
|
||||
auditor.info(jobId, "Starting datafeed [" + datafeedParams.getDatafeedId() + "] on node [" + node + "]");
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -0,0 +1,112 @@
|
||||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
package org.elasticsearch.xpack.ml;
|
||||
|
||||
import org.elasticsearch.Version;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.settings.Setting;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
|
||||
/**
|
||||
* Checks whether migration can start and whether ML resources (e.g. jobs, datafeeds)
|
||||
* are eligible to be migrated from the cluster state into the config index
|
||||
*/
|
||||
public class MlConfigMigrationEligibilityCheck {
|
||||
|
||||
private static final Version MIN_NODE_VERSION = Version.V_6_6_0;
|
||||
|
||||
public static final Setting<Boolean> ENABLE_CONFIG_MIGRATION = Setting.boolSetting(
|
||||
"xpack.ml.enable_config_migration", true, Setting.Property.Dynamic, Setting.Property.NodeScope);
|
||||
|
||||
private volatile boolean isConfigMigrationEnabled;
|
||||
|
||||
public MlConfigMigrationEligibilityCheck(Settings settings, ClusterService clusterService) {
|
||||
isConfigMigrationEnabled = ENABLE_CONFIG_MIGRATION.get(settings);
|
||||
clusterService.getClusterSettings().addSettingsUpdateConsumer(ENABLE_CONFIG_MIGRATION, this::setConfigMigrationEnabled);
|
||||
}
|
||||
|
||||
private void setConfigMigrationEnabled(boolean configMigrationEnabled) {
|
||||
this.isConfigMigrationEnabled = configMigrationEnabled;
|
||||
}
|
||||
|
||||
/**
|
||||
* Can migration start? Returns:
|
||||
* False if config migration is disabled via the setting {@link #ENABLE_CONFIG_MIGRATION}
|
||||
* False if the min node version of the cluster is before {@link #MIN_NODE_VERSION}
|
||||
* True otherwise
|
||||
* @param clusterState The cluster state
|
||||
* @return A boolean that dictates if config migration can start
|
||||
*/
|
||||
public boolean canStartMigration(ClusterState clusterState) {
|
||||
if (isConfigMigrationEnabled == false) {
|
||||
return false;
|
||||
}
|
||||
|
||||
Version minNodeVersion = clusterState.nodes().getMinNodeVersion();
|
||||
if (minNodeVersion.before(MIN_NODE_VERSION)) {
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* Is the job eligible for migration? Returns:
|
||||
* False if {@link #canStartMigration(ClusterState)} returns {@code false}
|
||||
* False if the job is being deleted, i.e. {@link Job#isDeleting()} is {@code true}
|
||||
* False if the job has a persistent task
|
||||
* True otherwise i.e. the job is present, not deleting
|
||||
* and does not have a persistent task.
|
||||
*
|
||||
* @param jobId The job Id
|
||||
* @param clusterState The cluster state
|
||||
* @return A boolean depending on the conditions listed above
|
||||
*/
|
||||
public boolean jobIsEligibleForMigration(String jobId, ClusterState clusterState) {
|
||||
if (canStartMigration(clusterState) == false) {
|
||||
return false;
|
||||
}
|
||||
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
Job job = mlMetadata.getJobs().get(jobId);
|
||||
|
||||
if (job == null || job.isDeleting()) {
|
||||
return false;
|
||||
}
|
||||
|
||||
PersistentTasksCustomMetaData persistentTasks = clusterState.metaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
return MlTasks.openJobIds(persistentTasks).contains(jobId) == false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Is the datafeed eligible for migration? Returns:
|
||||
* False if {@link #canStartMigration(ClusterState)} returns {@code false}
|
||||
* False if the datafeed is not in the cluster state
|
||||
* False if the datafeed has a persistent task
|
||||
* True otherwise i.e. the datafeed is present and does not have a persistent task.
|
||||
*
|
||||
* @param datafeedId The datafeed Id
|
||||
* @param clusterState The cluster state
|
||||
* @return A boolean depending on the conditions listed above
|
||||
*/
|
||||
public boolean datafeedIsEligibleForMigration(String datafeedId, ClusterState clusterState) {
|
||||
if (canStartMigration(clusterState) == false) {
|
||||
return false;
|
||||
}
|
||||
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
if (mlMetadata.getDatafeeds().containsKey(datafeedId) == false) {
|
||||
return false;
|
||||
}
|
||||
|
||||
PersistentTasksCustomMetaData persistentTasks = clusterState.metaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
return MlTasks.startedDatafeedIds(persistentTasks).contains(datafeedId) == false;
|
||||
}
|
||||
}
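The check above is what the transport actions later in this change gate on before touching a config that may still live in the clusterstate. The fragment below is a minimal illustrative sketch of that pattern, not part of the patch: the method name and listener wiring are assumed, and it reuses the ExceptionsHelper.configHasNotBeenMigrated helper that the delete actions further down call.

// Illustrative sketch only (not part of this change)
void guardDatafeedDelete(String datafeedId, ClusterState state,
                         MlConfigMigrationEligibilityCheck migrationCheck,
                         ActionListener<Boolean> listener) {
    if (migrationCheck.datafeedIsEligibleForMigration(datafeedId, state)) {
        // the config is still clusterstate-only and about to be migrated; fail fast
        listener.onFailure(ExceptionsHelper.configHasNotBeenMigrated("delete datafeed", datafeedId));
        return;
    }
    // safe to act on the .ml-config index document from here on
    listener.onResponse(true);
}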
|
@@ -0,0 +1,538 @@
|
||||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
package org.elasticsearch.xpack.ml;
|
||||
|
||||
import org.apache.logging.log4j.LogManager;
|
||||
import org.apache.logging.log4j.Logger;
|
||||
import org.elasticsearch.Version;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.DocWriteRequest;
|
||||
import org.elasticsearch.action.DocWriteResponse;
|
||||
import org.elasticsearch.action.bulk.BulkItemResponse;
|
||||
import org.elasticsearch.action.bulk.BulkRequestBuilder;
|
||||
import org.elasticsearch.action.bulk.BulkResponse;
|
||||
import org.elasticsearch.action.index.IndexRequest;
|
||||
import org.elasticsearch.action.index.IndexRequestBuilder;
|
||||
import org.elasticsearch.action.index.IndexResponse;
|
||||
import org.elasticsearch.action.support.WriteRequest;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.ClusterStateUpdateTask;
|
||||
import org.elasticsearch.cluster.metadata.MetaData;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.util.concurrent.EsExecutors;
|
||||
import org.elasticsearch.common.xcontent.ToXContent;
|
||||
import org.elasticsearch.common.xcontent.ToXContentObject;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentFactory;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.AnomalyDetectorsIndex;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.ElasticsearchMappings;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;
|
||||
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.utils.ChainTaskExecutor;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collection;
|
||||
import java.util.Collections;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.Iterator;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Objects;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.atomic.AtomicBoolean;
|
||||
import java.util.concurrent.atomic.AtomicReference;
|
||||
import java.util.function.Function;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin;
|
||||
|
||||
/**
|
||||
* Migrates job and datafeed configurations from the clusterstate to
|
||||
* index documents.
|
||||
*
|
||||
* There are 3 steps to the migration process
|
||||
* 1. Read config from the clusterstate
|
||||
* - If a job or datafeed is added after this call it will be added to the index
|
||||
* - If deleted then it's possible the config will be copied before it is deleted.
|
||||
* Mitigate against this by filtering out jobs marked as deleting
|
||||
* 2. Copy the config to the index
|
||||
* - The index operation could fail, don't delete from clusterstate in this case
|
||||
* 3. Remove config from the clusterstate
|
||||
* - Before this happens config is duplicated in index and clusterstate, all ops
|
||||
* must prefer to use the index config at this stage
|
||||
* - If the clusterstate update fails then the config will remain duplicated
|
||||
* and the migration process should try again
|
||||
*
|
||||
* If there was an error in step 3 and the config is in both the clusterstate and
|
||||
* index then when the migrator retries it must not overwrite an existing job config
|
||||
* document as once the index document is present all update operations will function
|
||||
* on that rather than the clusterstate.
|
||||
*
|
||||
* The number of configs indexed in each bulk operation is limited by {@link #MAX_BULK_WRITE_SIZE};
|
||||
* pairs of datafeeds and jobs are migrated together.
|
||||
*/
|
||||
public class MlConfigMigrator {
|
||||
|
||||
private static final Logger logger = LogManager.getLogger(MlConfigMigrator.class);
|
||||
|
||||
public static final String MIGRATED_FROM_VERSION = "migrated from version";
|
||||
|
||||
static final int MAX_BULK_WRITE_SIZE = 100;
|
||||
|
||||
private final Client client;
|
||||
private final ClusterService clusterService;
|
||||
private final MlConfigMigrationEligibilityCheck migrationEligibilityCheck;
|
||||
|
||||
private final AtomicBoolean migrationInProgress;
|
||||
private final AtomicBoolean tookConfigSnapshot;
|
||||
|
||||
public MlConfigMigrator(Settings settings, Client client, ClusterService clusterService) {
|
||||
this.client = Objects.requireNonNull(client);
|
||||
this.clusterService = Objects.requireNonNull(clusterService);
|
||||
this.migrationEligibilityCheck = new MlConfigMigrationEligibilityCheck(settings, clusterService);
|
||||
this.migrationInProgress = new AtomicBoolean(false);
|
||||
this.tookConfigSnapshot = new AtomicBoolean(false);
|
||||
}
|
||||
|
||||
/**
|
||||
* Migrate ml job and datafeed configurations from the clusterstate
|
||||
* to index documents.
|
||||
*
|
||||
* Configs to be migrated are read from the cluster state then bulk
|
||||
* indexed into .ml-config. Those successfully indexed are then removed
|
||||
* from the clusterstate.
|
||||
*
|
||||
* Migrated jobs have the job version set to v6.6.0 and the custom settings
|
||||
* map has an entry added recording the fact the job was migrated and its
|
||||
* original version e.g.
|
||||
* "migrated from version" : v6.1.0
|
||||
*
|
||||
*
|
||||
* @param clusterState The current clusterstate
|
||||
* @param listener The success listener
|
||||
*/
|
||||
public void migrateConfigsWithoutTasks(ClusterState clusterState, ActionListener<Boolean> listener) {
|
||||
|
||||
if (migrationEligibilityCheck.canStartMigration(clusterState) == false) {
|
||||
listener.onResponse(false);
|
||||
return;
|
||||
}
|
||||
|
||||
if (migrationInProgress.compareAndSet(false, true) == false) {
|
||||
listener.onResponse(Boolean.FALSE);
|
||||
return;
|
||||
}
|
||||
|
||||
logger.debug("migrating ml configurations");
|
||||
|
||||
ActionListener<Boolean> unMarkMigrationInProgress = ActionListener.wrap(
|
||||
response -> {
|
||||
migrationInProgress.set(false);
|
||||
listener.onResponse(response);
|
||||
},
|
||||
e -> {
|
||||
migrationInProgress.set(false);
|
||||
listener.onFailure(e);
|
||||
}
|
||||
);
|
||||
|
||||
snapshotMlMeta(MlMetadata.getMlMetadata(clusterState), ActionListener.wrap(
|
||||
response -> {
|
||||
// We have successfully snapshotted the ML configs so we don't need to try again
|
||||
tookConfigSnapshot.set(true);
|
||||
|
||||
List<JobsAndDatafeeds> batches = splitInBatches(clusterState);
|
||||
if (batches.isEmpty()) {
|
||||
unMarkMigrationInProgress.onResponse(Boolean.FALSE);
|
||||
return;
|
||||
}
|
||||
migrateBatches(batches, unMarkMigrationInProgress);
|
||||
},
|
||||
unMarkMigrationInProgress::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
private void migrateBatches(List<JobsAndDatafeeds> batches, ActionListener<Boolean> listener) {
|
||||
ChainTaskExecutor chainTaskExecutor = new ChainTaskExecutor(EsExecutors.newDirectExecutorService(), true);
|
||||
for (JobsAndDatafeeds batch : batches) {
|
||||
chainTaskExecutor.add(chainedListener -> writeConfigToIndex(batch.datafeedConfigs, batch.jobs, ActionListener.wrap(
|
||||
failedDocumentIds -> {
|
||||
List<String> successfulJobWrites = filterFailedJobConfigWrites(failedDocumentIds, batch.jobs);
|
||||
List<String> successfulDatafeedWrites =
|
||||
filterFailedDatafeedConfigWrites(failedDocumentIds, batch.datafeedConfigs);
|
||||
removeFromClusterState(successfulJobWrites, successfulDatafeedWrites, chainedListener);
|
||||
},
|
||||
chainedListener::onFailure
|
||||
)));
|
||||
}
|
||||
chainTaskExecutor.execute(ActionListener.wrap(aVoid -> listener.onResponse(true), listener::onFailure));
|
||||
}
|
||||
|
||||
// Exposed for testing
|
||||
public void writeConfigToIndex(Collection<DatafeedConfig> datafeedsToMigrate,
|
||||
Collection<Job> jobsToMigrate,
|
||||
ActionListener<Set<String>> listener) {
|
||||
|
||||
BulkRequestBuilder bulkRequestBuilder = client.prepareBulk();
|
||||
addJobIndexRequests(jobsToMigrate, bulkRequestBuilder);
|
||||
addDatafeedIndexRequests(datafeedsToMigrate, bulkRequestBuilder);
|
||||
bulkRequestBuilder.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, bulkRequestBuilder.request(),
|
||||
ActionListener.<BulkResponse>wrap(
|
||||
bulkResponse -> {
|
||||
Set<String> failedDocumentIds = documentsNotWritten(bulkResponse);
|
||||
listener.onResponse(failedDocumentIds);
|
||||
},
|
||||
listener::onFailure),
|
||||
client::bulk
|
||||
);
|
||||
}
|
||||
|
||||
private void removeFromClusterState(List<String> jobsToRemoveIds, List<String> datafeedsToRemoveIds,
|
||||
ActionListener<Void> listener) {
|
||||
if (jobsToRemoveIds.isEmpty() && datafeedsToRemoveIds.isEmpty()) {
|
||||
listener.onResponse(null);
|
||||
return;
|
||||
}
|
||||
|
||||
AtomicReference<RemovalResult> removedConfigs = new AtomicReference<>();
|
||||
|
||||
clusterService.submitStateUpdateTask("remove-migrated-ml-configs", new ClusterStateUpdateTask() {
|
||||
@Override
|
||||
public ClusterState execute(ClusterState currentState) {
|
||||
RemovalResult removed = removeJobsAndDatafeeds(jobsToRemoveIds, datafeedsToRemoveIds,
|
||||
MlMetadata.getMlMetadata(currentState));
|
||||
removedConfigs.set(removed);
|
||||
ClusterState.Builder newState = ClusterState.builder(currentState);
|
||||
newState.metaData(MetaData.builder(currentState.getMetaData())
|
||||
.putCustom(MlMetadata.TYPE, removed.mlMetadata)
|
||||
.build());
|
||||
return newState.build();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(String source, Exception e) {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {
|
||||
if (removedConfigs.get() != null) {
|
||||
if (removedConfigs.get().removedJobIds.isEmpty() == false) {
|
||||
logger.info("ml job configurations migrated: {}", removedConfigs.get().removedJobIds);
|
||||
}
|
||||
if (removedConfigs.get().removedDatafeedIds.isEmpty() == false) {
|
||||
logger.info("ml datafeed configurations migrated: {}", removedConfigs.get().removedDatafeedIds);
|
||||
}
|
||||
}
|
||||
listener.onResponse(null);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
static class RemovalResult {
|
||||
MlMetadata mlMetadata;
|
||||
List<String> removedJobIds;
|
||||
List<String> removedDatafeedIds;
|
||||
|
||||
RemovalResult(MlMetadata mlMetadata, List<String> removedJobIds, List<String> removedDatafeedIds) {
|
||||
this.mlMetadata = mlMetadata;
|
||||
this.removedJobIds = removedJobIds;
|
||||
this.removedDatafeedIds = removedDatafeedIds;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Remove the datafeeds and jobs listed in the parameters from
|
||||
* mlMetadata if they exist. An account of removed jobs and datafeeds
|
||||
* is returned in the result structure alongside a new MlMetadata
|
||||
* with the config removed.
|
||||
*
|
||||
* @param jobsToRemove Jobs
|
||||
* @param datafeedsToRemove Datafeeds
|
||||
* @param mlMetadata MlMetadata
|
||||
* @return Structure tracking which jobs and datafeeds were actually removed
|
||||
* and the new MlMetadata
|
||||
*/
|
||||
static RemovalResult removeJobsAndDatafeeds(List<String> jobsToRemove, List<String> datafeedsToRemove, MlMetadata mlMetadata) {
|
||||
Map<String, Job> currentJobs = new HashMap<>(mlMetadata.getJobs());
|
||||
List<String> removedJobIds = new ArrayList<>();
|
||||
for (String jobId : jobsToRemove) {
|
||||
if (currentJobs.remove(jobId) != null) {
|
||||
removedJobIds.add(jobId);
|
||||
}
|
||||
}
|
||||
|
||||
Map<String, DatafeedConfig> currentDatafeeds = new HashMap<>(mlMetadata.getDatafeeds());
|
||||
List<String> removedDatafeedIds = new ArrayList<>();
|
||||
for (String datafeedId : datafeedsToRemove) {
|
||||
if (currentDatafeeds.remove(datafeedId) != null) {
|
||||
removedDatafeedIds.add(datafeedId);
|
||||
}
|
||||
}
|
||||
|
||||
MlMetadata.Builder builder = new MlMetadata.Builder();
|
||||
builder.putJobs(currentJobs.values())
|
||||
.putDatafeeds(currentDatafeeds.values());
|
||||
|
||||
return new RemovalResult(builder.build(), removedJobIds, removedDatafeedIds);
|
||||
}
|
||||
|
||||
private void addJobIndexRequests(Collection<Job> jobs, BulkRequestBuilder bulkRequestBuilder) {
|
||||
ToXContent.Params params = new ToXContent.MapParams(JobConfigProvider.TO_XCONTENT_PARAMS);
|
||||
for (Job job : jobs) {
|
||||
bulkRequestBuilder.add(indexRequest(job, Job.documentId(job.getId()), params));
|
||||
}
|
||||
}
|
||||
|
||||
private void addDatafeedIndexRequests(Collection<DatafeedConfig> datafeedConfigs, BulkRequestBuilder bulkRequestBuilder) {
|
||||
ToXContent.Params params = new ToXContent.MapParams(DatafeedConfigProvider.TO_XCONTENT_PARAMS);
|
||||
for (DatafeedConfig datafeedConfig : datafeedConfigs) {
|
||||
bulkRequestBuilder.add(indexRequest(datafeedConfig, DatafeedConfig.documentId(datafeedConfig.getId()), params));
|
||||
}
|
||||
}
|
||||
|
||||
private IndexRequest indexRequest(ToXContentObject source, String documentId, ToXContent.Params params) {
|
||||
IndexRequest indexRequest = new IndexRequest(AnomalyDetectorsIndex.configIndexName(), ElasticsearchMappings.DOC_TYPE, documentId);
|
||||
|
||||
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
|
||||
indexRequest.source(source.toXContent(builder, params));
|
||||
} catch (IOException e) {
|
||||
throw new IllegalStateException("failed to serialise object [" + documentId + "]", e);
|
||||
}
|
||||
return indexRequest;
|
||||
}
|
||||
|
||||
|
||||
// public for testing
|
||||
public void snapshotMlMeta(MlMetadata mlMetadata, ActionListener<Boolean> listener) {
|
||||
|
||||
if (tookConfigSnapshot.get()) {
|
||||
listener.onResponse(true);
|
||||
return;
|
||||
}
|
||||
|
||||
if (mlMetadata.getJobs().isEmpty() && mlMetadata.getDatafeeds().isEmpty()) {
|
||||
listener.onResponse(true);
|
||||
return;
|
||||
}
|
||||
|
||||
logger.debug("taking a snapshot of ml_metadata");
|
||||
String documentId = "ml-config";
|
||||
IndexRequestBuilder indexRequest = client.prepareIndex(AnomalyDetectorsIndex.jobStateIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, documentId)
|
||||
.setOpType(DocWriteRequest.OpType.CREATE);
|
||||
|
||||
ToXContent.MapParams params = new ToXContent.MapParams(Collections.singletonMap(ToXContentParams.FOR_INTERNAL_STORAGE, "true"));
|
||||
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
|
||||
builder.startObject();
|
||||
mlMetadata.toXContent(builder, params);
|
||||
builder.endObject();
|
||||
|
||||
indexRequest.setSource(builder);
|
||||
} catch (IOException e) {
|
||||
logger.error("failed to serialise ml_metadata", e);
|
||||
listener.onFailure(e);
|
||||
return;
|
||||
}
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, indexRequest.request(),
|
||||
ActionListener.<IndexResponse>wrap(
|
||||
indexResponse -> {
|
||||
listener.onResponse(indexResponse.getResult() == DocWriteResponse.Result.CREATED);
|
||||
},
|
||||
listener::onFailure),
|
||||
client::index
|
||||
);
|
||||
}
|
||||
|
||||
|
||||
public static Job updateJobForMigration(Job job) {
|
||||
Job.Builder builder = new Job.Builder(job);
|
||||
Map<String, Object> custom = job.getCustomSettings() == null ? new HashMap<>() : new HashMap<>(job.getCustomSettings());
|
||||
custom.put(MIGRATED_FROM_VERSION, job.getJobVersion());
|
||||
builder.setCustomSettings(custom);
|
||||
// Pre v5.5 (ml beta) jobs do not have a version.
|
||||
// These jobs cannot be opened, we rely on the missing version
|
||||
// to indicate this.
|
||||
// See TransportOpenJobAction.validate()
|
||||
if (job.getJobVersion() != null) {
|
||||
builder.setJobVersion(Version.CURRENT);
|
||||
}
|
||||
return builder.build();
|
||||
}
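// Illustrative example, not part of this change, assuming 'clusterStateJob' is a closed job
// read from the clusterstate that was originally created on 6.1.0:
//     Job migrated = updateJobForMigration(clusterStateJob);
// The migrated job's custom settings then carry "migrated from version" -> 6.1.0 and its
// job version is bumped to Version.CURRENT, while a pre-5.5 job keeps its missing version
// so TransportOpenJobAction.validate() can continue to reject it.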
|
||||
|
||||
/**
|
||||
* Filter the given jobs, returning only those that
|
||||
* are not marked as deleting.
|
||||
*
|
||||
* @param jobs The jobs to filter
|
||||
* @return Jobs not marked as deleting
|
||||
*/
|
||||
public static List<Job> nonDeletingJobs(List<Job> jobs) {
|
||||
return jobs.stream()
|
||||
.filter(job -> job.isDeleting() == false)
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
|
||||
/**
|
||||
* Find the configurations for all closed jobs in the cluster state.
|
||||
* Closed jobs are those that do not have an associated persistent task.
|
||||
*
|
||||
* @param clusterState The cluster state
|
||||
* @return The closed job configurations
|
||||
*/
|
||||
public static List<Job> closedJobConfigs(ClusterState clusterState) {
|
||||
PersistentTasksCustomMetaData persistentTasks = clusterState.metaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
Set<String> openJobIds = MlTasks.openJobIds(persistentTasks);
|
||||
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
return mlMetadata.getJobs().values().stream()
|
||||
.filter(job -> openJobIds.contains(job.getId()) == false)
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
|
||||
/**
|
||||
* Find the configurations for stopped datafeeds in the cluster state.
|
||||
* Stopped datafeeds are those that do not have an associated persistent task.
|
||||
*
|
||||
* @param clusterState The cluster state
|
||||
* @return The stopped datafeed configurations
|
||||
*/
|
||||
public static List<DatafeedConfig> stoppedDatafeedConfigs(ClusterState clusterState) {
|
||||
PersistentTasksCustomMetaData persistentTasks = clusterState.metaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
Set<String> startedDatafeedIds = MlTasks.startedDatafeedIds(persistentTasks);
|
||||
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
return mlMetadata.getDatafeeds().values().stream()
|
||||
.filter(datafeedConfig-> startedDatafeedIds.contains(datafeedConfig.getId()) == false)
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
|
||||
public static class JobsAndDatafeeds {
|
||||
List<Job> jobs;
|
||||
List<DatafeedConfig> datafeedConfigs;
|
||||
|
||||
private JobsAndDatafeeds() {
|
||||
jobs = new ArrayList<>();
|
||||
datafeedConfigs = new ArrayList<>();
|
||||
}
|
||||
|
||||
public int totalCount() {
|
||||
return jobs.size() + datafeedConfigs.size();
|
||||
}
|
||||
}
|
||||
|
||||
public static List<JobsAndDatafeeds> splitInBatches(ClusterState clusterState) {
|
||||
Collection<DatafeedConfig> stoppedDatafeeds = stoppedDatafeedConfigs(clusterState);
|
||||
Map<String, Job> eligibleJobs = nonDeletingJobs(closedJobConfigs(clusterState)).stream()
|
||||
.map(MlConfigMigrator::updateJobForMigration)
|
||||
.collect(Collectors.toMap(Job::getId, Function.identity(), (a, b) -> a));
|
||||
|
||||
List<JobsAndDatafeeds> batches = new ArrayList<>();
|
||||
while (stoppedDatafeeds.isEmpty() == false || eligibleJobs.isEmpty() == false) {
|
||||
JobsAndDatafeeds batch = limitWrites(stoppedDatafeeds, eligibleJobs);
|
||||
batches.add(batch);
|
||||
stoppedDatafeeds.removeAll(batch.datafeedConfigs);
|
||||
batch.jobs.forEach(job -> eligibleJobs.remove(job.getId()));
|
||||
}
|
||||
return batches;
|
||||
}
|
||||
|
||||
/**
|
||||
* Return at most {@link #MAX_BULK_WRITE_SIZE} configs favouring
|
||||
* datafeed and job pairs, so that if a datafeed is chosen so is its job.
|
||||
*
|
||||
* @param datafeedsToMigrate Datafeed configs
|
||||
* @param jobsToMigrate Job configs
|
||||
* @return Job and datafeed configs
|
||||
*/
|
||||
public static JobsAndDatafeeds limitWrites(Collection<DatafeedConfig> datafeedsToMigrate, Map<String, Job> jobsToMigrate) {
|
||||
JobsAndDatafeeds jobsAndDatafeeds = new JobsAndDatafeeds();
|
||||
|
||||
if (datafeedsToMigrate.size() + jobsToMigrate.size() <= MAX_BULK_WRITE_SIZE) {
|
||||
jobsAndDatafeeds.jobs.addAll(jobsToMigrate.values());
|
||||
jobsAndDatafeeds.datafeedConfigs.addAll(datafeedsToMigrate);
|
||||
return jobsAndDatafeeds;
|
||||
}
|
||||
|
||||
int count = 0;
|
||||
|
||||
// prioritise datafeed and job pairs
|
||||
for (DatafeedConfig datafeedConfig : datafeedsToMigrate) {
|
||||
if (count < MAX_BULK_WRITE_SIZE) {
|
||||
jobsAndDatafeeds.datafeedConfigs.add(datafeedConfig);
|
||||
count++;
|
||||
Job datafeedsJob = jobsToMigrate.remove(datafeedConfig.getJobId());
|
||||
if (datafeedsJob != null) {
|
||||
jobsAndDatafeeds.jobs.add(datafeedsJob);
|
||||
count++;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// are there jobs without datafeeds to migrate
|
||||
Iterator<Job> iter = jobsToMigrate.values().iterator();
|
||||
while (iter.hasNext() && count < MAX_BULK_WRITE_SIZE) {
|
||||
jobsAndDatafeeds.jobs.add(iter.next());
|
||||
count++;
|
||||
}
|
||||
|
||||
return jobsAndDatafeeds;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check for failures in the bulk response and return the
|
||||
* Ids of any documents not written to the index
|
||||
*
|
||||
* If the index operation failed because the document already
|
||||
* exists this is not considered an error.
|
||||
*
|
||||
* @param response BulkResponse
|
||||
* @return The set of document Ids not written by the bulk request
|
||||
*/
|
||||
static Set<String> documentsNotWritten(BulkResponse response) {
|
||||
Set<String> failedDocumentIds = new HashSet<>();
|
||||
|
||||
for (BulkItemResponse itemResponse : response.getItems()) {
|
||||
if (itemResponse.isFailed()) {
|
||||
BulkItemResponse.Failure failure = itemResponse.getFailure();
|
||||
failedDocumentIds.add(itemResponse.getFailure().getId());
|
||||
logger.info("failed to index ml configuration [" + itemResponse.getFailure().getId() + "], " +
|
||||
itemResponse.getFailure().getMessage());
|
||||
} else {
|
||||
logger.info("ml configuration [" + itemResponse.getId() + "] indexed");
|
||||
}
|
||||
}
|
||||
return failedDocumentIds;
|
||||
}
|
||||
|
||||
static List<String> filterFailedJobConfigWrites(Set<String> failedDocumentIds, List<Job> jobs) {
|
||||
return jobs.stream()
|
||||
.map(Job::getId)
|
||||
.filter(id -> failedDocumentIds.contains(Job.documentId(id)) == false)
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
|
||||
static List<String> filterFailedDatafeedConfigWrites(Set<String> failedDocumentIds, Collection<DatafeedConfig> datafeeds) {
|
||||
return datafeeds.stream()
|
||||
.map(DatafeedConfig::getId)
|
||||
.filter(id -> failedDocumentIds.contains(DatafeedConfig.documentId(id)) == false)
|
||||
.collect(Collectors.toList());
|
||||
}
|
||||
}
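As a rough usage sketch (not part of the patch; 'clusterState' and 'logger' are assumed to be in scope), splitInBatches keeps each datafeed with its job while capping a batch at MAX_BULK_WRITE_SIZE configs, so 60 stopped datafeed/job pairs plus 30 other closed jobs come out as a batch of 100 configs followed by a batch of 50:

// Illustrative sketch only (not part of this change)
List<MlConfigMigrator.JobsAndDatafeeds> batches = MlConfigMigrator.splitInBatches(clusterState);
for (MlConfigMigrator.JobsAndDatafeeds batch : batches) {
    // limitWrites() never splits a datafeed from its job, so a batch can exceed
    // MAX_BULK_WRITE_SIZE by at most the one job paired with the last datafeed
    logger.debug("migration batch holds {} configs", batch.totalCount());
}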
|
@@ -11,6 +11,7 @@ import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.ClusterChangedEvent;
|
||||
import org.elasticsearch.cluster.ClusterStateListener;
|
||||
import org.elasticsearch.cluster.LocalNodeMasterListener;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.component.LifecycleListener;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
@@ -18,7 +19,7 @@ import org.elasticsearch.gateway.GatewayService;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.xpack.core.ml.annotations.AnnotationIndex;
|
||||
|
||||
class MlInitializationService implements ClusterStateListener {
|
||||
class MlInitializationService implements LocalNodeMasterListener, ClusterStateListener {
|
||||
|
||||
private static final Logger logger = LogManager.getLogger(MlInitializationService.class);
|
||||
|
||||
@@ -37,6 +38,16 @@ class MlInitializationService implements ClusterStateListener {
|
||||
clusterService.addListener(this);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onMaster() {
|
||||
installDailyMaintenanceService();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void offMaster() {
|
||||
uninstallDailyMaintenanceService();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void clusterChanged(ClusterChangedEvent event) {
|
||||
if (event.state().blocks().hasGlobalBlock(GatewayService.STATE_NOT_RECOVERED_BLOCK)) {
|
||||
@@ -45,7 +56,6 @@ class MlInitializationService implements ClusterStateListener {
|
||||
}
|
||||
|
||||
if (event.localNodeMaster()) {
|
||||
installDailyMaintenanceService();
|
||||
AnnotationIndex.createAnnotationsIndex(settings, client, event.state(), ActionListener.wrap(
|
||||
r -> {
|
||||
if (r) {
|
||||
@@ -53,11 +63,14 @@ class MlInitializationService implements ClusterStateListener {
|
||||
}
|
||||
},
|
||||
e -> logger.error("Error creating ML annotations index or aliases", e)));
|
||||
} else {
|
||||
uninstallDailyMaintenanceService();
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public String executorName() {
|
||||
return ThreadPool.Names.GENERIC;
|
||||
}
|
||||
|
||||
private void installDailyMaintenanceService() {
|
||||
if (mlDailyMaintenanceService == null) {
|
||||
mlDailyMaintenanceService = new MlDailyMaintenanceService(clusterService.getClusterName(), threadPool, client);
|
||||
|
@@ -6,7 +6,6 @@
|
||||
package org.elasticsearch.xpack.ml.action;
|
||||
|
||||
import org.elasticsearch.ElasticsearchException;
|
||||
import org.elasticsearch.ResourceNotFoundException;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.ActionListenerResponseHandler;
|
||||
import org.elasticsearch.action.FailedNodeException;
|
||||
@@ -26,32 +25,26 @@ import org.elasticsearch.persistent.PersistentTasksService;
|
||||
import org.elasticsearch.tasks.Task;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.action.CloseJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.FinalizeJobExecutionAction;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedState;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobState;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobTaskState;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.ml.MachineLearning;
|
||||
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.notifications.Auditor;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collection;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Optional;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.atomic.AtomicInteger;
|
||||
import java.util.function.Consumer;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin;
|
||||
|
||||
public class TransportCloseJobAction extends TransportTasksAction<TransportOpenJobAction.JobTask, CloseJobAction.Request,
|
||||
CloseJobAction.Response, CloseJobAction.Response> {
|
||||
|
||||
@@ -60,11 +53,14 @@ public class TransportCloseJobAction extends TransportTasksAction<TransportOpenJ
|
||||
private final ClusterService clusterService;
|
||||
private final Auditor auditor;
|
||||
private final PersistentTasksService persistentTasksService;
|
||||
private final JobConfigProvider jobConfigProvider;
|
||||
private final DatafeedConfigProvider datafeedConfigProvider;
|
||||
|
||||
@Inject
|
||||
public TransportCloseJobAction(TransportService transportService, ThreadPool threadPool, ActionFilters actionFilters,
|
||||
ClusterService clusterService, Client client, Auditor auditor,
|
||||
PersistentTasksService persistentTasksService) {
|
||||
PersistentTasksService persistentTasksService, JobConfigProvider jobConfigProvider,
|
||||
DatafeedConfigProvider datafeedConfigProvider) {
|
||||
// We fork in innerTaskOperation(...), so we can use ThreadPool.Names.SAME here:
|
||||
super(CloseJobAction.NAME, clusterService, transportService, actionFilters,
|
||||
CloseJobAction.Request::new, CloseJobAction.Response::new, CloseJobAction.Response::new, ThreadPool.Names.SAME);
|
||||
@@ -73,50 +69,158 @@ public class TransportCloseJobAction extends TransportTasksAction<TransportOpenJ
|
||||
this.clusterService = clusterService;
|
||||
this.auditor = auditor;
|
||||
this.persistentTasksService = persistentTasksService;
|
||||
this.jobConfigProvider = jobConfigProvider;
|
||||
this.datafeedConfigProvider = datafeedConfigProvider;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void doExecute(Task task, CloseJobAction.Request request, ActionListener<CloseJobAction.Response> listener) {
|
||||
final ClusterState state = clusterService.state();
|
||||
final DiscoveryNodes nodes = state.nodes();
|
||||
if (request.isLocal() == false && nodes.isLocalNodeElectedMaster() == false) {
|
||||
// Delegates close job to elected master node, so it becomes the coordinating node.
|
||||
// See comment in OpenJobAction.Transport class for more information.
|
||||
if (nodes.getMasterNode() == null) {
|
||||
listener.onFailure(new MasterNotDiscoveredException("no known master node"));
|
||||
} else {
|
||||
transportService.sendRequest(nodes.getMasterNode(), actionName, request,
|
||||
new ActionListenerResponseHandler<>(listener, CloseJobAction.Response::new));
|
||||
}
|
||||
} else {
|
||||
/*
|
||||
* Closing of multiple jobs:
|
||||
*
|
||||
* 1. Resolve and validate jobs first: if any job does not meet the
|
||||
* criteria (e.g. open datafeed), fail immediately, do not close any
|
||||
* job
|
||||
*
|
||||
* 2. Internally a task request is created for every open job, so there
|
||||
* are n inner tasks for 1 user request
|
||||
*
|
||||
* 3. No task is created for closing jobs but those will be waited on
|
||||
*
|
||||
* 4. Collect n inner task results or failures and send 1 outer
|
||||
* result/failure
|
||||
*/
|
||||
|
||||
PersistentTasksCustomMetaData tasksMetaData = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
jobConfigProvider.expandJobsIds(request.getJobId(), request.allowNoJobs(), true, ActionListener.wrap(
|
||||
expandedJobIds -> {
|
||||
validate(expandedJobIds, request.isForce(), tasksMetaData, ActionListener.wrap(
|
||||
response -> {
|
||||
request.setOpenJobIds(response.openJobIds.toArray(new String[0]));
|
||||
if (response.openJobIds.isEmpty() && response.closingJobIds.isEmpty()) {
|
||||
listener.onResponse(new CloseJobAction.Response(true));
|
||||
return;
|
||||
}
|
||||
|
||||
if (request.isForce() == false) {
|
||||
Set<String> executorNodes = new HashSet<>();
|
||||
PersistentTasksCustomMetaData tasks = state.metaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
for (String resolvedJobId : request.getOpenJobIds()) {
|
||||
PersistentTasksCustomMetaData.PersistentTask<?> jobTask =
|
||||
MlTasks.getJobTask(resolvedJobId, tasks);
|
||||
|
||||
if (jobTask == null || jobTask.isAssigned() == false) {
|
||||
String message = "Cannot close job [" + resolvedJobId + "] because the job does not have "
|
||||
+ "an assigned node. Use force close to close the job";
|
||||
listener.onFailure(ExceptionsHelper.conflictStatusException(message));
|
||||
return;
|
||||
} else {
|
||||
executorNodes.add(jobTask.getExecutorNode());
|
||||
}
|
||||
}
|
||||
request.setNodes(executorNodes.toArray(new String[executorNodes.size()]));
|
||||
}
|
||||
|
||||
if (request.isForce()) {
|
||||
List<String> jobIdsToForceClose = new ArrayList<>(response.openJobIds);
|
||||
jobIdsToForceClose.addAll(response.closingJobIds);
|
||||
forceCloseJob(state, request, jobIdsToForceClose, listener);
|
||||
} else {
|
||||
normalCloseJob(state, task, request, response.openJobIds, response.closingJobIds, listener);
|
||||
}
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
class OpenAndClosingIds {
|
||||
OpenAndClosingIds() {
|
||||
openJobIds = new ArrayList<>();
|
||||
closingJobIds = new ArrayList<>();
|
||||
}
|
||||
List<String> openJobIds;
|
||||
List<String> closingJobIds;
|
||||
}
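// Illustrative summary, not part of this change: with configs in the index the close path
// becomes jobConfigProvider.expandJobsIds() -> validate(), which runs
// checkDatafeedsHaveStopped() and then addJobAccordingToState() for every expanded id,
// before doExecute() picks forceCloseJob() or normalCloseJob() for the partitioned ids.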
|
||||
|
||||
/**
|
||||
* Resolve the requested jobs and add their IDs to one of the list arguments
|
||||
* depending on job state.
|
||||
* Separate the job Ids into open and closing job Ids and validate.
|
||||
* If a job has failed it will not be closed unless the force parameter
|
||||
* in request is true.
|
||||
* It is an error if the datafeed the job uses is not stopped
|
||||
*
|
||||
* Opened jobs are added to {@code openJobIds} and closing jobs added to {@code closingJobIds}. Failed jobs are added
|
||||
* to {@code openJobIds} if force close is requested, otherwise an exception is thrown.
|
||||
* @param request The close job request
|
||||
* @param state Cluster state
|
||||
* @param openJobIds Opened or failed jobs are added to this list
|
||||
* @param closingJobIds Closing jobs are added to this list
|
||||
* @param expandedJobIds The job ids
|
||||
* @param forceClose Force close the job(s)
|
||||
* @param tasksMetaData Persistent tasks
|
||||
* @param listener Resolved job Ids listener
|
||||
*/
|
||||
static void resolveAndValidateJobId(CloseJobAction.Request request, ClusterState state, List<String> openJobIds,
|
||||
List<String> closingJobIds) {
|
||||
PersistentTasksCustomMetaData tasksMetaData = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
final MlMetadata mlMetadata = MlMetadata.getMlMetadata(state);
|
||||
void validate(Collection<String> expandedJobIds, boolean forceClose, PersistentTasksCustomMetaData tasksMetaData,
|
||||
ActionListener<OpenAndClosingIds> listener) {
|
||||
|
||||
List<String> failedJobs = new ArrayList<>();
|
||||
checkDatafeedsHaveStopped(expandedJobIds, tasksMetaData, ActionListener.wrap(
|
||||
response -> {
|
||||
OpenAndClosingIds ids = new OpenAndClosingIds();
|
||||
List<String> failedJobs = new ArrayList<>();
|
||||
|
||||
Consumer<String> jobIdProcessor = id -> {
|
||||
validateJobAndTaskState(id, mlMetadata, tasksMetaData);
|
||||
Job job = mlMetadata.getJobs().get(id);
|
||||
if (job.isDeleting()) {
|
||||
return;
|
||||
}
|
||||
addJobAccordingToState(id, tasksMetaData, openJobIds, closingJobIds, failedJobs);
|
||||
};
|
||||
for (String jobId : expandedJobIds) {
|
||||
addJobAccordingToState(jobId, tasksMetaData, ids.openJobIds, ids.closingJobIds, failedJobs);
|
||||
}
|
||||
|
||||
Set<String> expandedJobIds = mlMetadata.expandJobIds(request.getJobId(), request.allowNoJobs());
|
||||
expandedJobIds.forEach(jobIdProcessor::accept);
|
||||
if (request.isForce() == false && failedJobs.size() > 0) {
|
||||
if (expandedJobIds.size() == 1) {
|
||||
throw ExceptionsHelper.conflictStatusException("cannot close job [{}] because it failed, use force close",
|
||||
expandedJobIds.iterator().next());
|
||||
}
|
||||
throw ExceptionsHelper.conflictStatusException("one or more jobs have state failed, use force close");
|
||||
}
|
||||
if (forceClose == false && failedJobs.size() > 0) {
|
||||
if (expandedJobIds.size() == 1) {
|
||||
listener.onFailure(
|
||||
ExceptionsHelper.conflictStatusException("cannot close job [{}] because it failed, use force close",
|
||||
expandedJobIds.iterator().next()));
|
||||
return;
|
||||
}
|
||||
listener.onFailure(
|
||||
ExceptionsHelper.conflictStatusException("one or more jobs have state failed, use force close"));
|
||||
return;
|
||||
}
|
||||
|
||||
// allowFailed == true
|
||||
openJobIds.addAll(failedJobs);
|
||||
// If there are failed jobs force close is true
|
||||
ids.openJobIds.addAll(failedJobs);
|
||||
listener.onResponse(ids);
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
private static void addJobAccordingToState(String jobId, PersistentTasksCustomMetaData tasksMetaData,
|
||||
void checkDatafeedsHaveStopped(Collection<String> jobIds, PersistentTasksCustomMetaData tasksMetaData,
|
||||
ActionListener<Boolean> listener) {
|
||||
datafeedConfigProvider.findDatafeedsForJobIds(jobIds, ActionListener.wrap(
|
||||
datafeedIds -> {
|
||||
for (String datafeedId : datafeedIds) {
|
||||
DatafeedState datafeedState = MlTasks.getDatafeedState(datafeedId, tasksMetaData);
|
||||
if (datafeedState != DatafeedState.STOPPED) {
|
||||
listener.onFailure(ExceptionsHelper.conflictStatusException(
|
||||
"cannot close job datafeed [{}] hasn't been stopped", datafeedId));
|
||||
return;
|
||||
}
|
||||
}
|
||||
listener.onResponse(Boolean.TRUE);
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
static void addJobAccordingToState(String jobId, PersistentTasksCustomMetaData tasksMetaData,
|
||||
List<String> openJobs, List<String> closingJobs, List<String> failedJobs) {
|
||||
|
||||
JobState jobState = MlTasks.getJobState(jobId, tasksMetaData);
|
||||
@@ -158,98 +262,6 @@ public class TransportCloseJobAction extends TransportTasksAction<TransportOpenJ
|
||||
return waitForCloseRequest;
|
||||
}
|
||||
|
||||
/**
|
||||
* Validate the close request. Throws an exception on any of these conditions:
|
||||
* <ul>
|
||||
* <li>If the job does not exist</li>
|
||||
* <li>If the job has a data feed the feed must be closed first</li>
|
||||
* <li>If the job is opening</li>
|
||||
* </ul>
|
||||
*
|
||||
* @param jobId Job Id
|
||||
* @param mlMetadata ML MetaData
|
||||
* @param tasks Persistent tasks
|
||||
*/
|
||||
static void validateJobAndTaskState(String jobId, MlMetadata mlMetadata, PersistentTasksCustomMetaData tasks) {
|
||||
Job job = mlMetadata.getJobs().get(jobId);
|
||||
if (job == null) {
|
||||
throw new ResourceNotFoundException("cannot close job, because job [" + jobId + "] does not exist");
|
||||
}
|
||||
|
||||
Optional<DatafeedConfig> datafeed = mlMetadata.getDatafeedByJobId(jobId);
|
||||
if (datafeed.isPresent()) {
|
||||
DatafeedState datafeedState = MlTasks.getDatafeedState(datafeed.get().getId(), tasks);
|
||||
if (datafeedState != DatafeedState.STOPPED) {
|
||||
throw ExceptionsHelper.conflictStatusException("cannot close job [{}], datafeed hasn't been stopped", jobId);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void doExecute(Task task, CloseJobAction.Request request, ActionListener<CloseJobAction.Response> listener) {
|
||||
final ClusterState state = clusterService.state();
|
||||
final DiscoveryNodes nodes = state.nodes();
|
||||
if (request.isLocal() == false && nodes.isLocalNodeElectedMaster() == false) {
|
||||
// Delegates close job to elected master node, so it becomes the coordinating node.
|
||||
// See comment in OpenJobAction.Transport class for more information.
|
||||
if (nodes.getMasterNode() == null) {
|
||||
listener.onFailure(new MasterNotDiscoveredException("no known master node"));
|
||||
} else {
|
||||
transportService.sendRequest(nodes.getMasterNode(), actionName, request,
|
||||
new ActionListenerResponseHandler<>(listener, CloseJobAction.Response::new));
|
||||
}
|
||||
} else {
|
||||
/*
|
||||
* Closing of multiple jobs:
|
||||
*
|
||||
* 1. Resolve and validate jobs first: if any job does not meet the
|
||||
* criteria (e.g. open datafeed), fail immediately, do not close any
|
||||
* job
|
||||
*
|
||||
* 2. Internally a task request is created for every open job, so there
|
||||
* are n inner tasks for 1 user request
|
||||
*
|
||||
* 3. No task is created for closing jobs but those will be waited on
|
||||
*
|
||||
* 4. Collect n inner task results or failures and send 1 outer
|
||||
* result/failure
|
||||
*/
|
||||
|
||||
List<String> openJobIds = new ArrayList<>();
|
||||
List<String> closingJobIds = new ArrayList<>();
|
||||
resolveAndValidateJobId(request, state, openJobIds, closingJobIds);
|
||||
request.setOpenJobIds(openJobIds.toArray(new String[0]));
|
||||
if (openJobIds.isEmpty() && closingJobIds.isEmpty()) {
|
||||
listener.onResponse(new CloseJobAction.Response(true));
|
||||
return;
|
||||
}
|
||||
|
||||
if (request.isForce() == false) {
|
||||
Set<String> executorNodes = new HashSet<>();
|
||||
PersistentTasksCustomMetaData tasks = state.metaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
for (String resolvedJobId : request.getOpenJobIds()) {
|
||||
PersistentTasksCustomMetaData.PersistentTask<?> jobTask = MlTasks.getJobTask(resolvedJobId, tasks);
|
||||
if (jobTask == null || jobTask.isAssigned() == false) {
|
||||
String message = "Cannot close job [" + resolvedJobId + "] because the job does not have an assigned node." +
|
||||
" Use force close to close the job";
|
||||
listener.onFailure(ExceptionsHelper.conflictStatusException(message));
|
||||
return;
|
||||
} else {
|
||||
executorNodes.add(jobTask.getExecutorNode());
|
||||
}
|
||||
}
|
||||
request.setNodes(executorNodes.toArray(new String[executorNodes.size()]));
|
||||
}
|
||||
|
||||
if (request.isForce()) {
|
||||
List<String> jobIdsToForceClose = new ArrayList<>(openJobIds);
|
||||
jobIdsToForceClose.addAll(closingJobIds);
|
||||
forceCloseJob(state, request, jobIdsToForceClose, listener);
|
||||
} else {
|
||||
normalCloseJob(state, task, request, openJobIds, closingJobIds, listener);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void taskOperation(CloseJobAction.Request request, TransportOpenJobAction.JobTask jobTask,
|
||||
@@ -403,10 +415,7 @@ public class TransportCloseJobAction extends TransportTasksAction<TransportOpenJ
|
||||
}, request.getCloseTimeout(), new ActionListener<Boolean>() {
|
||||
@Override
|
||||
public void onResponse(Boolean result) {
|
||||
FinalizeJobExecutionAction.Request finalizeRequest = new FinalizeJobExecutionAction.Request(
|
||||
waitForCloseRequest.jobsToFinalize.toArray(new String[0]));
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, FinalizeJobExecutionAction.INSTANCE, finalizeRequest,
|
||||
ActionListener.wrap(r -> listener.onResponse(response), listener::onFailure));
|
||||
listener.onResponse(response);
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@@ -61,8 +61,11 @@ public class TransportDeleteCalendarAction extends HandledTransportAction<Delete
|
||||
listener.onFailure(new ResourceNotFoundException("No calendar with id [" + calendarId + "]"));
|
||||
return;
|
||||
}
|
||||
jobManager.updateProcessOnCalendarChanged(calendar.getJobIds());
|
||||
listener.onResponse(new AcknowledgedResponse(true));
|
||||
|
||||
jobManager.updateProcessOnCalendarChanged(calendar.getJobIds(), ActionListener.wrap(
|
||||
r -> listener.onResponse(new AcknowledgedResponse(true)),
|
||||
listener::onFailure
|
||||
));
|
||||
},
|
||||
listener::onFailure));
|
||||
},
|
||||
|
@@ -100,8 +100,10 @@ public class TransportDeleteCalendarEventAction extends HandledTransportAction<D
|
||||
if (response.status() == RestStatus.NOT_FOUND) {
|
||||
listener.onFailure(new ResourceNotFoundException("No event with id [" + eventId + "]"));
|
||||
} else {
|
||||
jobManager.updateProcessOnCalendarChanged(calendar.getJobIds());
|
||||
listener.onResponse(new AcknowledgedResponse(true));
|
||||
jobManager.updateProcessOnCalendarChanged(calendar.getJobIds(), ActionListener.wrap(
|
||||
r -> listener.onResponse(new AcknowledgedResponse(true)),
|
||||
listener::onFailure
|
||||
));
|
||||
}
|
||||
}
|
||||
|
||||
|
@@ -11,41 +11,51 @@ import org.elasticsearch.action.support.ActionFilters;
|
||||
import org.elasticsearch.action.support.master.AcknowledgedResponse;
|
||||
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockException;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockLevel;
|
||||
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
|
||||
import org.elasticsearch.cluster.metadata.MetaData;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
|
||||
import org.elasticsearch.persistent.PersistentTasksService;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.XPackPlugin;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.action.DeleteDatafeedAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.IsolateDatafeedAction;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedState;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.ml.MlConfigMigrationEligibilityCheck;
|
||||
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
|
||||
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin;
|
||||
|
||||
public class TransportDeleteDatafeedAction extends TransportMasterNodeAction<DeleteDatafeedAction.Request, AcknowledgedResponse> {
|
||||
|
||||
private Client client;
|
||||
private PersistentTasksService persistentTasksService;
|
||||
private final Client client;
|
||||
private final DatafeedConfigProvider datafeedConfigProvider;
|
||||
private final ClusterService clusterService;
|
||||
private final PersistentTasksService persistentTasksService;
|
||||
private final MlConfigMigrationEligibilityCheck migrationEligibilityCheck;
|
||||
|
||||
@Inject
|
||||
public TransportDeleteDatafeedAction(TransportService transportService, ClusterService clusterService,
|
||||
public TransportDeleteDatafeedAction(Settings settings, TransportService transportService, ClusterService clusterService,
|
||||
ThreadPool threadPool, ActionFilters actionFilters,
|
||||
IndexNameExpressionResolver indexNameExpressionResolver,
|
||||
Client client, PersistentTasksService persistentTasksService) {
|
||||
Client client, PersistentTasksService persistentTasksService,
|
||||
NamedXContentRegistry xContentRegistry) {
|
||||
super(DeleteDatafeedAction.NAME, transportService, clusterService, threadPool, actionFilters,
|
||||
indexNameExpressionResolver, DeleteDatafeedAction.Request::new);
|
||||
this.client = client;
|
||||
this.datafeedConfigProvider = new DatafeedConfigProvider(client, xContentRegistry);
|
||||
this.persistentTasksService = persistentTasksService;
|
||||
this.clusterService = clusterService;
|
||||
this.migrationEligibilityCheck = new MlConfigMigrationEligibilityCheck(settings, clusterService);
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -60,18 +70,24 @@ public class TransportDeleteDatafeedAction extends TransportMasterNodeAction<Del
|
||||
|
||||
@Override
|
||||
protected void masterOperation(DeleteDatafeedAction.Request request, ClusterState state,
|
||||
ActionListener<AcknowledgedResponse> listener) throws Exception {
|
||||
ActionListener<AcknowledgedResponse> listener) {
|
||||
|
||||
if (migrationEligibilityCheck.datafeedIsEligibleForMigration(request.getDatafeedId(), state)) {
|
||||
listener.onFailure(ExceptionsHelper.configHasNotBeenMigrated("delete datafeed", request.getDatafeedId()));
|
||||
return;
|
||||
}
|
||||
|
||||
if (request.isForce()) {
|
||||
forceDeleteDatafeed(request, state, listener);
|
||||
} else {
|
||||
deleteDatafeedFromMetadata(request, listener);
|
||||
deleteDatafeedConfig(request, listener);
|
||||
}
|
||||
}
|
||||
|
||||
private void forceDeleteDatafeed(DeleteDatafeedAction.Request request, ClusterState state,
|
||||
ActionListener<AcknowledgedResponse> listener) {
|
||||
ActionListener<Boolean> finalListener = ActionListener.wrap(
|
||||
response -> deleteDatafeedFromMetadata(request, listener),
|
||||
response -> deleteDatafeedConfig(request, listener),
|
||||
listener::onFailure
|
||||
);
|
||||
|
||||
@@ -110,28 +126,19 @@ public class TransportDeleteDatafeedAction extends TransportMasterNodeAction<Del
|
||||
}
|
||||
}
|
||||
|
||||
private void deleteDatafeedFromMetadata(DeleteDatafeedAction.Request request, ActionListener<AcknowledgedResponse> listener) {
|
||||
clusterService.submitStateUpdateTask("delete-datafeed-" + request.getDatafeedId(),
|
||||
new AckedClusterStateUpdateTask<AcknowledgedResponse>(request, listener) {
|
||||
private void deleteDatafeedConfig(DeleteDatafeedAction.Request request, ActionListener<AcknowledgedResponse> listener) {
|
||||
// Check datafeed is stopped
|
||||
PersistentTasksCustomMetaData tasks = clusterService.state().getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
if (MlTasks.getDatafeedTask(request.getDatafeedId(), tasks) != null) {
|
||||
listener.onFailure(ExceptionsHelper.conflictStatusException(
|
||||
Messages.getMessage(Messages.DATAFEED_CANNOT_DELETE_IN_CURRENT_STATE, request.getDatafeedId(), DatafeedState.STARTED)));
|
||||
return;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected AcknowledgedResponse newResponse(boolean acknowledged) {
|
||||
return new AcknowledgedResponse(acknowledged);
|
||||
}
|
||||
|
||||
@Override
|
||||
public ClusterState execute(ClusterState currentState) {
|
||||
XPackPlugin.checkReadyForXPackCustomMetadata(currentState);
|
||||
MlMetadata currentMetadata = MlMetadata.getMlMetadata(currentState);
|
||||
PersistentTasksCustomMetaData persistentTasks =
|
||||
currentState.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
MlMetadata newMetadata = new MlMetadata.Builder(currentMetadata)
|
||||
.removeDatafeed(request.getDatafeedId(), persistentTasks).build();
|
||||
return ClusterState.builder(currentState).metaData(
|
||||
MetaData.builder(currentState.getMetaData()).putCustom(MlMetadata.TYPE, newMetadata).build())
|
||||
.build();
|
||||
}
|
||||
});
|
||||
datafeedConfigProvider.deleteDatafeedConfig(request.getDatafeedId(), ActionListener.wrap(
|
||||
deleteResponse -> listener.onResponse(new AcknowledgedResponse(true)),
|
||||
listener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@@ -56,9 +56,9 @@ public class TransportDeleteExpiredDataAction extends HandledTransportAction<Del
|
||||
private void deleteExpiredData(ActionListener<DeleteExpiredDataAction.Response> listener) {
|
||||
Auditor auditor = new Auditor(client, clusterService.getNodeName());
|
||||
List<MlDataRemover> dataRemovers = Arrays.asList(
|
||||
new ExpiredResultsRemover(client, clusterService, auditor),
|
||||
new ExpiredResultsRemover(client, auditor),
|
||||
new ExpiredForecastsRemover(client, threadPool),
|
||||
new ExpiredModelSnapshotsRemover(client, threadPool, clusterService),
|
||||
new ExpiredModelSnapshotsRemover(client, threadPool),
|
||||
new UnusedStateRemover(client, clusterService)
|
||||
);
|
||||
Iterator<MlDataRemover> dataRemoversIterator = new VolatileCursorIterator<>(dataRemovers);
|
||||
|
@@ -16,23 +16,21 @@ import org.elasticsearch.action.support.HandledTransportAction;
|
||||
import org.elasticsearch.action.support.WriteRequest;
|
||||
import org.elasticsearch.action.support.master.AcknowledgedResponse;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.rest.RestStatus;
|
||||
import org.elasticsearch.tasks.Task;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetaIndex;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.action.DeleteFilterAction;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Detector;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.MlFilter;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.function.Supplier;
|
||||
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
|
||||
@@ -41,25 +39,39 @@ import static org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin;
|
||||
public class TransportDeleteFilterAction extends HandledTransportAction<DeleteFilterAction.Request, AcknowledgedResponse> {
|
||||
|
||||
private final Client client;
|
||||
private final ClusterService clusterService;
|
||||
private final JobConfigProvider jobConfigProvider;
|
||||
|
||||
@Inject
|
||||
public TransportDeleteFilterAction(TransportService transportService, ActionFilters actionFilters,
|
||||
ClusterService clusterService, Client client) {
|
||||
public TransportDeleteFilterAction(TransportService transportService,
|
||||
ActionFilters actionFilters, Client client,
|
||||
JobConfigProvider jobConfigProvider) {
|
||||
super(DeleteFilterAction.NAME, transportService, actionFilters,
|
||||
(Supplier<DeleteFilterAction.Request>) DeleteFilterAction.Request::new);
|
||||
this.clusterService = clusterService;
|
||||
this.client = client;
|
||||
this.jobConfigProvider = jobConfigProvider;
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void doExecute(Task task, DeleteFilterAction.Request request, ActionListener<AcknowledgedResponse> listener) {
|
||||
|
||||
final String filterId = request.getFilterId();
|
||||
ClusterState state = clusterService.state();
|
||||
Map<String, Job> jobs = MlMetadata.getMlMetadata(state).getJobs();
|
||||
jobConfigProvider.findJobsWithCustomRules(ActionListener.wrap(
|
||||
jobs-> {
|
||||
List<String> currentlyUsedBy = findJobsUsingFilter(jobs, filterId);
|
||||
if (!currentlyUsedBy.isEmpty()) {
|
||||
listener.onFailure(ExceptionsHelper.conflictStatusException(
|
||||
Messages.getMessage(Messages.FILTER_CANNOT_DELETE, filterId, currentlyUsedBy)));
|
||||
} else {
|
||||
deleteFilter(filterId, listener);
|
||||
}
|
||||
},
|
||||
listener::onFailure
|
||||
)
|
||||
);
|
||||
}
|
||||
|
||||
private static List<String> findJobsUsingFilter(List<Job> jobs, String filterId) {
List<String> currentlyUsedBy = new ArrayList<>();
for (Job job : jobs.values()) {
for (Job job : jobs) {
List<Detector> detectors = job.getAnalysisConfig().getDetectors();
for (Detector detector : detectors) {
if (detector.extractReferencedFilters().contains(filterId)) {
@@ -68,31 +80,31 @@ public class TransportDeleteFilterAction extends HandledTransportAction<DeleteFi
}
}
}
if (!currentlyUsedBy.isEmpty()) {
throw ExceptionsHelper.conflictStatusException("Cannot delete filter, currently used by jobs: "
+ currentlyUsedBy);
}
return currentlyUsedBy;
}

DeleteRequest deleteRequest = new DeleteRequest(MlMetaIndex.INDEX_NAME, MlMetaIndex.TYPE, MlFilter.documentId(filterId));
private void deleteFilter(String filterId, ActionListener<AcknowledgedResponse> listener) {
DeleteRequest deleteRequest = new DeleteRequest(MlMetaIndex.INDEX_NAME, MlMetaIndex.TYPE,
MlFilter.documentId(filterId));
BulkRequestBuilder bulkRequestBuilder = client.prepareBulk();
bulkRequestBuilder.add(deleteRequest);
bulkRequestBuilder.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
executeAsyncWithOrigin(client, ML_ORIGIN, BulkAction.INSTANCE, bulkRequestBuilder.request(),
new ActionListener<BulkResponse>() {
@Override
public void onResponse(BulkResponse bulkResponse) {
if (bulkResponse.getItems()[0].status() == RestStatus.NOT_FOUND) {
listener.onFailure(new ResourceNotFoundException("Could not delete filter with ID [" + filterId
+ "] because it does not exist"));
} else {
listener.onResponse(new AcknowledgedResponse(true));
}
new ActionListener<BulkResponse>() {
@Override
public void onResponse(BulkResponse bulkResponse) {
if (bulkResponse.getItems()[0].status() == RestStatus.NOT_FOUND) {
listener.onFailure(new ResourceNotFoundException("Could not delete filter with ID [" + filterId
+ "] because it does not exist"));
} else {
listener.onResponse(new AcknowledgedResponse(true));
}
}

@Override
public void onFailure(Exception e) {
listener.onFailure(ExceptionsHelper.serverError("Could not delete filter with ID [" + filterId + "]", e));
}
});
@Override
public void onFailure(Exception e) {
listener.onFailure(ExceptionsHelper.serverError("Could not delete filter with ID [" + filterId + "]", e));
}
});
}
}
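For a concrete feel of the reference check above, here is a small standalone re-implementation of the "which jobs still use this filter" scan and the resulting conflict decision. `SimpleJob` and `SimpleDetector` are invented stand-ins for the real `Job`/`Detector` config classes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Invented, minimal stand-ins for the real job/detector config classes.
record SimpleDetector(Set<String> referencedFilters) { }
record SimpleJob(String id, List<SimpleDetector> detectors) { }

public class FilterUsageSketch {

    // Collect the ids of all jobs whose detectors reference the given filter.
    static List<String> jobsUsingFilter(List<SimpleJob> jobs, String filterId) {
        List<String> currentlyUsedBy = new ArrayList<>();
        for (SimpleJob job : jobs) {
            for (SimpleDetector detector : job.detectors()) {
                if (detector.referencedFilters().contains(filterId)) {
                    currentlyUsedBy.add(job.id());
                    break; // one hit per job is enough
                }
            }
        }
        return currentlyUsedBy;
    }

    public static void main(String[] args) {
        List<SimpleJob> jobs = List.of(
                new SimpleJob("job-a", List.of(new SimpleDetector(Set.of("safe_ips")))),
                new SimpleJob("job-b", List.of(new SimpleDetector(Set.of()))));

        // A delete request for "safe_ips" should be rejected with a conflict naming job-a.
        List<String> usedBy = jobsUsingFilter(jobs, "safe_ips");
        if (usedBy.isEmpty() == false) {
            System.out.println("409 Conflict: filter in use by " + usedBy);
        } else {
            System.out.println("filter can be deleted");
        }
    }
}
```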
@ -24,18 +24,16 @@ import org.elasticsearch.action.support.master.AcknowledgedResponse;
|
||||
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.client.ParentTaskAssigningClient;
|
||||
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.ClusterStateUpdateTask;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockException;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockLevel;
|
||||
import org.elasticsearch.cluster.metadata.AliasMetaData;
|
||||
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
|
||||
import org.elasticsearch.cluster.metadata.MetaData;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.CheckedConsumer;
|
||||
import org.elasticsearch.common.Nullable;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.common.settings.Settings;
|
||||
import org.elasticsearch.index.IndexNotFoundException;
|
||||
import org.elasticsearch.index.query.ConstantScoreQueryBuilder;
|
||||
import org.elasticsearch.index.query.IdsQueryBuilder;
|
||||
@ -51,31 +49,38 @@ import org.elasticsearch.tasks.Task;
|
||||
import org.elasticsearch.tasks.TaskId;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.action.DeleteJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.GetModelSnapshotsAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.KillProcessAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.util.PageParams;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobState;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobTaskState;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.AnomalyDetectorsIndex;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.AnomalyDetectorsIndexFields;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.CategorizerState;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSnapshot;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.Quantiles;
|
||||
import org.elasticsearch.xpack.ml.job.JobManager;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.ml.MlConfigMigrationEligibilityCheck;
|
||||
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobDataDeleter;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider;
|
||||
import org.elasticsearch.xpack.ml.notifications.Auditor;
|
||||
import org.elasticsearch.xpack.ml.process.MlMemoryTracker;
|
||||
import org.elasticsearch.xpack.ml.utils.MlIndicesUtils;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collections;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.atomic.AtomicReference;
|
||||
import java.util.function.Consumer;
|
||||
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
|
||||
@@ -89,6 +94,10 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
private final PersistentTasksService persistentTasksService;
private final Auditor auditor;
private final JobResultsProvider jobResultsProvider;
private final JobConfigProvider jobConfigProvider;
private final DatafeedConfigProvider datafeedConfigProvider;
private final MlMemoryTracker memoryTracker;
private final MlConfigMigrationEligibilityCheck migrationEligibilityCheck;

/**
* A map of task listeners by job_id.
@@ -99,16 +108,22 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
private final Map<String, List<ActionListener<AcknowledgedResponse>>> listenersByJobId;

@Inject
public TransportDeleteJobAction(TransportService transportService, ClusterService clusterService,
public TransportDeleteJobAction(Settings settings, TransportService transportService, ClusterService clusterService,
ThreadPool threadPool, ActionFilters actionFilters,
IndexNameExpressionResolver indexNameExpressionResolver, PersistentTasksService persistentTasksService,
Client client, Auditor auditor, JobResultsProvider jobResultsProvider) {
Client client, Auditor auditor, JobResultsProvider jobResultsProvider,
JobConfigProvider jobConfigProvider, DatafeedConfigProvider datafeedConfigProvider,
MlMemoryTracker memoryTracker) {
super(DeleteJobAction.NAME, transportService, clusterService, threadPool, actionFilters,
indexNameExpressionResolver, DeleteJobAction.Request::new);
this.client = client;
this.persistentTasksService = persistentTasksService;
this.auditor = auditor;
this.jobResultsProvider = jobResultsProvider;
this.jobConfigProvider = jobConfigProvider;
this.datafeedConfigProvider = datafeedConfigProvider;
this.memoryTracker = memoryTracker;
this.migrationEligibilityCheck = new MlConfigMigrationEligibilityCheck(settings, clusterService);
this.listenersByJobId = new HashMap<>();
}

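The `listenersByJobId` field, together with the `notifyListeners` helper later in this file, suggests that concurrent delete requests for the same job are coalesced: the first caller drives the deletion and every caller is notified when it finishes. The sketch below is a guess at that bookkeeping using plain Java types, not the action's actual implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class CoalescedDeleteSketch {

    private final Map<String, List<Consumer<String>>> listenersByJobId = new HashMap<>();

    // Returns true if the caller should actually start the deletion,
    // false if a deletion for this job is already in flight.
    synchronized boolean register(String jobId, Consumer<String> listener) {
        List<Consumer<String>> existing = listenersByJobId.get(jobId);
        if (existing != null) {
            existing.add(listener);      // piggy-back on the running deletion
            return false;
        }
        List<Consumer<String>> fresh = new ArrayList<>();
        fresh.add(listener);
        listenersByJobId.put(jobId, fresh);
        return true;
    }

    // Called once the (single) deletion finishes: notify everyone who asked.
    synchronized void notifyListeners(String jobId, String result) {
        List<Consumer<String>> listeners = listenersByJobId.remove(jobId);
        if (listeners != null) {
            listeners.forEach(l -> l.accept(result));
        }
    }

    public static void main(String[] args) {
        CoalescedDeleteSketch sketch = new CoalescedDeleteSketch();
        System.out.println(sketch.register("job-1", ack -> System.out.println("caller 1: " + ack))); // true -> start delete
        System.out.println(sketch.register("job-1", ack -> System.out.println("caller 2: " + ack))); // false -> wait
        sketch.notifyListeners("job-1", "acknowledged");
    }
}
```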
@@ -135,9 +150,17 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
@Override
protected void masterOperation(Task task, DeleteJobAction.Request request, ClusterState state,
ActionListener<AcknowledgedResponse> listener) {

if (migrationEligibilityCheck.jobIsEligibleForMigration(request.getJobId(), state)) {
listener.onFailure(ExceptionsHelper.configHasNotBeenMigrated("delete job", request.getJobId()));
return;
}

logger.debug("Deleting job '{}'", request.getJobId());

JobManager.getJobOrThrowIfUnknown(request.getJobId(), state);
if (request.isForce() == false) {
checkJobIsNotOpen(request.getJobId(), state);
}

TaskId taskId = new TaskId(clusterService.localNode().getId(), task.getId());
ParentTaskAssigningClient parentTaskClient = new ParentTaskAssigningClient(client, taskId);
@@ -156,8 +179,6 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
}
}

auditor.info(request.getJobId(), Messages.getMessage(Messages.JOB_AUDIT_DELETING, taskId));

// The listener that will be executed at the end of the chain will notify all listeners
ActionListener<AcknowledgedResponse> finalListener = ActionListener.wrap(
ack -> notifyListeners(request.getJobId(), ack, null),
@@ -177,7 +198,16 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
finalListener.onFailure(e);
});

markJobAsDeleting(request.getJobId(), markAsDeletingListener, request.isForce());
ActionListener<Boolean> jobExistsListener = ActionListener.wrap(
response -> {
auditor.info(request.getJobId(), Messages.getMessage(Messages.JOB_AUDIT_DELETING, taskId));
markJobAsDeletingIfNotUsed(request.getJobId(), markAsDeletingListener);
},
e -> finalListener.onFailure(e));

// First check that the job exists, because we don't want to audit
// the beginning of its deletion if it didn't exist in the first place
jobConfigProvider.jobExists(request.getJobId(), true, jobExistsListener);
}

private void notifyListeners(String jobId, @Nullable AcknowledgedResponse ack, @Nullable Exception error) {
|
||||
@ -201,6 +231,9 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
|
||||
ActionListener<AcknowledgedResponse> listener) {
|
||||
String jobId = request.getJobId();
|
||||
|
||||
// We clean up the memory tracker on delete rather than close as close is not a master node action
|
||||
memoryTracker.removeJob(jobId);
|
||||
|
||||
// Step 4. When the job has been removed from the cluster state, return a response
|
||||
// -------
|
||||
CheckedConsumer<Boolean, Exception> apiResponseHandler = jobDeleted -> {
|
||||
@ -213,33 +246,15 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
|
||||
}
|
||||
};
|
||||
|
||||
// Step 3. When the physical storage has been deleted, remove from Cluster State
|
||||
// Step 3. When the physical storage has been deleted, delete the job config document
|
||||
// -------
|
||||
CheckedConsumer<Boolean, Exception> deleteJobStateHandler = response -> clusterService.submitStateUpdateTask(
|
||||
"delete-job-" + jobId,
|
||||
new AckedClusterStateUpdateTask<Boolean>(request, ActionListener.wrap(apiResponseHandler, listener::onFailure)) {
|
||||
|
||||
@Override
|
||||
protected Boolean newResponse(boolean acknowledged) {
|
||||
return acknowledged && response;
|
||||
}
|
||||
|
||||
@Override
|
||||
public ClusterState execute(ClusterState currentState) {
|
||||
MlMetadata currentMlMetadata = MlMetadata.getMlMetadata(currentState);
|
||||
if (currentMlMetadata.getJobs().containsKey(jobId) == false) {
|
||||
// We wouldn't have got here if the job never existed so
|
||||
// the Job must have been deleted by another action.
|
||||
// Don't error in this case
|
||||
return currentState;
|
||||
}
|
||||
|
||||
MlMetadata.Builder builder = new MlMetadata.Builder(currentMlMetadata);
|
||||
builder.deleteJob(jobId, currentState.getMetaData().custom(PersistentTasksCustomMetaData.TYPE));
|
||||
return buildNewClusterState(currentState, builder);
|
||||
}
|
||||
});
|
||||
|
||||
// Don't report an error if the document has already been deleted
|
||||
CheckedConsumer<Boolean, Exception> deleteJobStateHandler = response -> jobConfigProvider.deleteJob(jobId, false,
|
||||
ActionListener.wrap(
|
||||
deleteResponse -> apiResponseHandler.accept(Boolean.TRUE),
|
||||
listener::onFailure
|
||||
)
|
||||
);
|
||||
|
||||
// Step 2. Remove the job from any calendars
|
||||
CheckedConsumer<Boolean, Exception> removeFromCalendarsHandler = response -> jobResultsProvider.removeJobFromCalendars(jobId,
|
||||
@ -253,26 +268,26 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
|
||||
private void deleteJobDocuments(ParentTaskAssigningClient parentTaskClient, String jobId,
|
||||
CheckedConsumer<Boolean, Exception> finishedHandler, Consumer<Exception> failureHandler) {
|
||||
|
||||
final String indexName = AnomalyDetectorsIndex.getPhysicalIndexFromState(clusterService.state(), jobId);
|
||||
final String indexPattern = indexName + "-*";
|
||||
AtomicReference<String> indexName = new AtomicReference<>();
|
||||
|
||||
final ActionListener<AcknowledgedResponse> completionHandler = ActionListener.wrap(
|
||||
response -> finishedHandler.accept(response.isAcknowledged()),
|
||||
failureHandler);
|
||||
|
||||
// Step 7. If we did not drop the index and after DBQ state done, we delete the aliases
|
||||
// Step 8. If we did not drop the index and after DBQ state done, we delete the aliases
|
||||
ActionListener<BulkByScrollResponse> dbqHandler = ActionListener.wrap(
|
||||
bulkByScrollResponse -> {
|
||||
if (bulkByScrollResponse == null) { // no action was taken by DBQ, assume Index was deleted
|
||||
completionHandler.onResponse(new AcknowledgedResponse(true));
|
||||
} else {
|
||||
if (bulkByScrollResponse.isTimedOut()) {
|
||||
logger.warn("[{}] DeleteByQuery for indices [{}, {}] timed out.", jobId, indexName, indexPattern);
|
||||
logger.warn("[{}] DeleteByQuery for indices [{}, {}] timed out.", jobId, indexName.get(),
|
||||
indexName.get() + "-*");
|
||||
}
|
||||
if (!bulkByScrollResponse.getBulkFailures().isEmpty()) {
|
||||
logger.warn("[{}] {} failures and {} conflicts encountered while running DeleteByQuery on indices [{}, {}].",
|
||||
jobId, bulkByScrollResponse.getBulkFailures().size(), bulkByScrollResponse.getVersionConflicts(),
|
||||
indexName, indexPattern);
|
||||
indexName.get(), indexName.get() + "-*");
|
||||
for (BulkItemResponse.Failure failure : bulkByScrollResponse.getBulkFailures()) {
|
||||
logger.warn("DBQ failure: " + failure);
|
||||
}
|
||||
@ -282,12 +297,13 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
|
||||
},
|
||||
failureHandler);
|
||||
|
||||
// Step 6. If we did not delete the index, we run a delete by query
|
||||
// Step 7. If we did not delete the index, we run a delete by query
|
||||
ActionListener<Boolean> deleteByQueryExecutor = ActionListener.wrap(
|
||||
response -> {
|
||||
if (response) {
|
||||
logger.info("Running DBQ on [" + indexName + "," + indexPattern + "] for job [" + jobId + "]");
|
||||
DeleteByQueryRequest request = new DeleteByQueryRequest(indexName, indexPattern);
|
||||
String indexPattern = indexName.get() + "-*";
|
||||
logger.info("Running DBQ on [" + indexName.get() + "," + indexPattern + "] for job [" + jobId + "]");
|
||||
DeleteByQueryRequest request = new DeleteByQueryRequest(indexName.get(), indexPattern);
|
||||
ConstantScoreQueryBuilder query =
|
||||
new ConstantScoreQueryBuilder(new TermQueryBuilder(Job.ID.getPreferredName(), jobId));
|
||||
request.setQuery(query);
|
||||
@ -303,15 +319,15 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
|
||||
},
|
||||
failureHandler);
|
||||
|
||||
// Step 5. If we have any hits, that means we are NOT the only job on this index, and should not delete it
|
||||
// Step 6. If we have any hits, that means we are NOT the only job on this index, and should not delete it
|
||||
// if we do not have any hits, we can drop the index and then skip the DBQ and alias deletion
|
||||
ActionListener<SearchResponse> customIndexSearchHandler = ActionListener.wrap(
|
||||
searchResponse -> {
|
||||
if (searchResponse == null || searchResponse.getHits().getTotalHits().value > 0) {
|
||||
deleteByQueryExecutor.onResponse(true); // We need to run DBQ and alias deletion
|
||||
} else {
|
||||
logger.info("Running DELETE Index on [" + indexName + "] for job [" + jobId + "]");
|
||||
DeleteIndexRequest request = new DeleteIndexRequest(indexName);
|
||||
logger.info("Running DELETE Index on [" + indexName.get() + "] for job [" + jobId + "]");
|
||||
DeleteIndexRequest request = new DeleteIndexRequest(indexName.get());
|
||||
request.indicesOptions(IndicesOptions.lenientExpandOpen());
|
||||
// If we have deleted the index, then we don't need to delete the aliases or run the DBQ
|
||||
executeAsyncWithOrigin(
|
||||
@ -333,10 +349,12 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
|
||||
}
|
||||
);
|
||||
|
||||
// Step 4. Determine if we are on a shared index by looking at `.ml-anomalies-shared` or the custom index's aliases
|
||||
ActionListener<Boolean> deleteCategorizerStateHandler = ActionListener.wrap(
|
||||
response -> {
|
||||
if (indexName.equals(AnomalyDetectorsIndexFields.RESULTS_INDEX_PREFIX +
|
||||
// Step 5. Determine if we are on a shared index by looking at `.ml-anomalies-shared` or the custom index's aliases
|
||||
ActionListener<Job.Builder> getJobHandler = ActionListener.wrap(
|
||||
builder -> {
|
||||
Job job = builder.build();
|
||||
indexName.set(job.getResultsIndexName());
|
||||
if (indexName.get().equals(AnomalyDetectorsIndexFields.RESULTS_INDEX_PREFIX +
|
||||
AnomalyDetectorsIndexFields.RESULTS_INDEX_DEFAULT)) {
|
||||
//don't bother searching the index any further, we are on the default shared
|
||||
customIndexSearchHandler.onResponse(null);
|
||||
@ -347,7 +365,7 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
|
||||
.query(QueryBuilders.boolQuery().filter(
|
||||
QueryBuilders.boolQuery().mustNot(QueryBuilders.termQuery(Job.ID.getPreferredName(), jobId))));
|
||||
|
||||
SearchRequest searchRequest = new SearchRequest(indexName);
|
||||
SearchRequest searchRequest = new SearchRequest(indexName.get());
|
||||
searchRequest.source(source);
|
||||
executeAsyncWithOrigin(parentTaskClient, ML_ORIGIN, SearchAction.INSTANCE, searchRequest, customIndexSearchHandler);
|
||||
}
|
||||
@ -355,6 +373,14 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
|
||||
failureHandler
|
||||
);
|
||||
|
||||
// Step 4. Get the job as the result index name is required
|
||||
ActionListener<Boolean> deleteCategorizerStateHandler = ActionListener.wrap(
|
||||
response -> {
|
||||
jobConfigProvider.getJob(jobId, getJobHandler);
|
||||
},
|
||||
failureHandler
|
||||
);
|
||||
|
||||
// Step 3. Delete quantiles done, delete the categorizer state
|
||||
ActionListener<Boolean> deleteQuantilesHandler = ActionListener.wrap(
|
||||
response -> deleteCategorizerState(parentTaskClient, jobId, 1, deleteCategorizerStateHandler),
|
||||
@@ -557,36 +583,28 @@ public class TransportDeleteJobAction extends TransportMasterNodeAction<DeleteJo
}
}

private void markJobAsDeleting(String jobId, ActionListener<Boolean> listener, boolean force) {
clusterService.submitStateUpdateTask("mark-job-as-deleted", new ClusterStateUpdateTask() {
@Override
public ClusterState execute(ClusterState currentState) {
PersistentTasksCustomMetaData tasks = currentState.metaData().custom(PersistentTasksCustomMetaData.TYPE);
MlMetadata.Builder builder = new MlMetadata.Builder(MlMetadata.getMlMetadata(currentState));
builder.markJobAsDeleting(jobId, tasks, force);
return buildNewClusterState(currentState, builder);
}

@Override
public void onFailure(String source, Exception e) {
listener.onFailure(e);
}

@Override
public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {
logger.debug("Job [" + jobId + "] is successfully marked as deleted");
listener.onResponse(true);
}
});
private void checkJobIsNotOpen(String jobId, ClusterState state) {
PersistentTasksCustomMetaData tasks = state.metaData().custom(PersistentTasksCustomMetaData.TYPE);
PersistentTasksCustomMetaData.PersistentTask<?> jobTask = MlTasks.getJobTask(jobId, tasks);
if (jobTask != null) {
JobTaskState jobTaskState = (JobTaskState) jobTask.getState();
throw ExceptionsHelper.conflictStatusException("Cannot delete job [" + jobId + "] because the job is "
+ ((jobTaskState == null) ? JobState.OPENING : jobTaskState.getState()));
}
}

static boolean jobIsDeletedFromState(String jobId, ClusterState clusterState) {
return !MlMetadata.getMlMetadata(clusterState).getJobs().containsKey(jobId);
}
private void markJobAsDeletingIfNotUsed(String jobId, ActionListener<Boolean> listener) {

private static ClusterState buildNewClusterState(ClusterState currentState, MlMetadata.Builder builder) {
ClusterState.Builder newState = ClusterState.builder(currentState);
newState.metaData(MetaData.builder(currentState.getMetaData()).putCustom(MlMetadata.TYPE, builder.build()).build());
return newState.build();
datafeedConfigProvider.findDatafeedsForJobIds(Collections.singletonList(jobId), ActionListener.wrap(
datafeedIds -> {
if (datafeedIds.isEmpty() == false) {
listener.onFailure(ExceptionsHelper.conflictStatusException("Cannot delete job [" + jobId + "] because datafeed ["
+ datafeedIds.iterator().next() + "] refers to it"));
return;
}
jobConfigProvider.markJobAsDeleting(jobId, listener);
},
listener::onFailure
));
}
}

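The new `markJobAsDeletingIfNotUsed` above refuses the delete while a datafeed still refers to the job, instead of relying on a cluster-state update to enforce this. A compact standalone sketch of that guard, with invented in-memory maps standing in for `DatafeedConfigProvider`/`JobConfigProvider`:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

public class DeleteGuardSketch {

    // Invented in-memory stand-in: datafeed id -> the job id it feeds.
    private final Map<String, String> datafeedToJob = Map.of("df-1", "job-1");
    private final Set<String> jobsMarkedAsDeleting = new java.util.HashSet<>();

    // Mirrors markJobAsDeletingIfNotUsed: refuse if a datafeed still refers to the job.
    Optional<String> markJobAsDeletingIfNotUsed(String jobId) {
        List<String> referencingDatafeeds = datafeedToJob.entrySet().stream()
                .filter(e -> e.getValue().equals(jobId))
                .map(Map.Entry::getKey)
                .toList();
        if (referencingDatafeeds.isEmpty() == false) {
            return Optional.of("Cannot delete job [" + jobId + "] because datafeed ["
                    + referencingDatafeeds.get(0) + "] refers to it");
        }
        jobsMarkedAsDeleting.add(jobId);
        return Optional.empty(); // no conflict, job is now marked as deleting
    }

    public static void main(String[] args) {
        DeleteGuardSketch sketch = new DeleteGuardSketch();
        System.out.println(sketch.markJobAsDeletingIfNotUsed("job-1")); // conflict: df-1 refers to it
        System.out.println(sketch.markJobAsDeletingIfNotUsed("job-2")); // Optional.empty -> proceed
    }
}
```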
@ -12,12 +12,10 @@ import org.elasticsearch.action.support.ActionFilters;
|
||||
import org.elasticsearch.action.support.HandledTransportAction;
|
||||
import org.elasticsearch.action.support.master.AcknowledgedResponse;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.tasks.Task;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.ml.action.DeleteModelSnapshotAction;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSnapshot;
|
||||
import org.elasticsearch.xpack.ml.job.JobManager;
|
||||
@ -32,18 +30,18 @@ public class TransportDeleteModelSnapshotAction extends HandledTransportAction<D
|
||||
AcknowledgedResponse> {
|
||||
|
||||
private final Client client;
|
||||
private final JobManager jobManager;
|
||||
private final JobResultsProvider jobResultsProvider;
|
||||
private final ClusterService clusterService;
|
||||
private final Auditor auditor;
|
||||
|
||||
@Inject
|
||||
public TransportDeleteModelSnapshotAction(TransportService transportService, ActionFilters actionFilters,
|
||||
JobResultsProvider jobResultsProvider, ClusterService clusterService, Client client,
|
||||
JobResultsProvider jobResultsProvider, Client client, JobManager jobManager,
|
||||
Auditor auditor) {
|
||||
super(DeleteModelSnapshotAction.NAME, transportService, actionFilters, DeleteModelSnapshotAction.Request::new);
|
||||
this.client = client;
|
||||
this.jobManager = jobManager;
|
||||
this.jobResultsProvider = jobResultsProvider;
|
||||
this.clusterService = clusterService;
|
||||
this.auditor = auditor;
|
||||
}
|
||||
|
||||
@ -68,32 +66,40 @@ public class TransportDeleteModelSnapshotAction extends HandledTransportAction<D
|
||||
ModelSnapshot deleteCandidate = deleteCandidates.get(0);
|
||||
|
||||
// Verify the snapshot is not being used
|
||||
Job job = JobManager.getJobOrThrowIfUnknown(request.getJobId(), clusterService.state());
|
||||
String currentModelInUse = job.getModelSnapshotId();
|
||||
if (currentModelInUse != null && currentModelInUse.equals(request.getSnapshotId())) {
|
||||
throw new IllegalArgumentException(Messages.getMessage(Messages.REST_CANNOT_DELETE_HIGHEST_PRIORITY,
|
||||
request.getSnapshotId(), request.getJobId()));
|
||||
}
|
||||
jobManager.getJob(request.getJobId(), ActionListener.wrap(
|
||||
job -> {
|
||||
String currentModelInUse = job.getModelSnapshotId();
|
||||
if (currentModelInUse != null && currentModelInUse.equals(request.getSnapshotId())) {
|
||||
listener.onFailure(
|
||||
new IllegalArgumentException(Messages.getMessage(Messages.REST_CANNOT_DELETE_HIGHEST_PRIORITY,
|
||||
request.getSnapshotId(), request.getJobId())));
|
||||
return;
|
||||
}
|
||||
|
||||
// Delete the snapshot and any associated state files
|
||||
JobDataDeleter deleter = new JobDataDeleter(client, request.getJobId());
|
||||
deleter.deleteModelSnapshots(Collections.singletonList(deleteCandidate), new ActionListener<BulkResponse>() {
|
||||
@Override
|
||||
public void onResponse(BulkResponse bulkResponse) {
|
||||
String msg = Messages.getMessage(Messages.JOB_AUDIT_SNAPSHOT_DELETED, deleteCandidate.getSnapshotId(),
|
||||
deleteCandidate.getDescription());
|
||||
auditor.info(request.getJobId(), msg);
|
||||
logger.debug("[{}] {}", request.getJobId(), msg);
|
||||
// We don't care about the bulk response, just that it succeeded
|
||||
listener.onResponse(new AcknowledgedResponse(true));
|
||||
}
|
||||
// Delete the snapshot and any associated state files
|
||||
JobDataDeleter deleter = new JobDataDeleter(client, request.getJobId());
|
||||
deleter.deleteModelSnapshots(Collections.singletonList(deleteCandidate),
|
||||
new ActionListener<BulkResponse>() {
|
||||
@Override
|
||||
public void onResponse(BulkResponse bulkResponse) {
|
||||
String msg = Messages.getMessage(Messages.JOB_AUDIT_SNAPSHOT_DELETED,
|
||||
deleteCandidate.getSnapshotId(), deleteCandidate.getDescription());
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
});
|
||||
auditor.info(request.getJobId(), msg);
|
||||
logger.debug("[{}] {}", request.getJobId(), msg);
|
||||
// We don't care about the bulk response, just that it succeeded
|
||||
listener.onResponse(new AcknowledgedResponse(true));
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
});
|
||||
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
}, listener::onFailure);
|
||||
}
|
||||
}
|
||||
|
@ -7,33 +7,46 @@ package org.elasticsearch.xpack.ml.action;
|
||||
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.support.ActionFilters;
|
||||
import org.elasticsearch.action.support.WriteRequest;
|
||||
import org.elasticsearch.action.support.master.AcknowledgedResponse;
|
||||
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
|
||||
import org.elasticsearch.action.update.UpdateAction;
|
||||
import org.elasticsearch.action.update.UpdateRequest;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.ClusterStateUpdateTask;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockException;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockLevel;
|
||||
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
|
||||
import org.elasticsearch.cluster.metadata.MetaData;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.XPackPlugin;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.action.FinalizeJobExecutionAction;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.AnomalyDetectorsIndex;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.ElasticsearchMappings;
|
||||
import org.elasticsearch.xpack.ml.MachineLearning;
|
||||
import org.elasticsearch.xpack.ml.utils.ChainTaskExecutor;
|
||||
|
||||
import java.util.Collections;
|
||||
import java.util.Date;
|
||||
import java.util.Map;
|
||||
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin;
|
||||
|
||||
public class TransportFinalizeJobExecutionAction extends TransportMasterNodeAction<FinalizeJobExecutionAction.Request,
|
||||
AcknowledgedResponse> {
|
||||
|
||||
private final Client client;
|
||||
|
||||
@Inject
|
||||
public TransportFinalizeJobExecutionAction(TransportService transportService, ClusterService clusterService, ThreadPool threadPool,
|
||||
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
|
||||
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
|
||||
Client client) {
|
||||
super(FinalizeJobExecutionAction.NAME, transportService, clusterService, threadPool, actionFilters,
|
||||
indexNameExpressionResolver, FinalizeJobExecutionAction.Request::new);
|
||||
this.client = client;
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -48,41 +61,37 @@ public class TransportFinalizeJobExecutionAction extends TransportMasterNodeActi

@Override
protected void masterOperation(FinalizeJobExecutionAction.Request request, ClusterState state,
ActionListener<AcknowledgedResponse> listener) throws Exception {
ActionListener<AcknowledgedResponse> listener) {
String jobIdString = String.join(",", request.getJobIds());
String source = "finalize_job_execution [" + jobIdString + "]";
logger.debug("finalizing jobs [{}]", jobIdString);
clusterService.submitStateUpdateTask(source, new ClusterStateUpdateTask() {
@Override
public ClusterState execute(ClusterState currentState) {
XPackPlugin.checkReadyForXPackCustomMetadata(currentState);
MlMetadata mlMetadata = MlMetadata.getMlMetadata(currentState);
MlMetadata.Builder mlMetadataBuilder = new MlMetadata.Builder(mlMetadata);
Date finishedTime = new Date();

for (String jobId : request.getJobIds()) {
Job.Builder jobBuilder = new Job.Builder(mlMetadata.getJobs().get(jobId));
jobBuilder.setFinishedTime(finishedTime);
mlMetadataBuilder.putJob(jobBuilder.build(), true);
}
ClusterState.Builder builder = ClusterState.builder(currentState);
return builder.metaData(new MetaData.Builder(currentState.metaData())
.putCustom(MlMetadata.TYPE, mlMetadataBuilder.build()))
.build();
}
ChainTaskExecutor chainTaskExecutor = new ChainTaskExecutor(threadPool.executor(
MachineLearning.UTILITY_THREAD_POOL_NAME), true);

@Override
public void onFailure(String source, Exception e) {
listener.onFailure(e);
}
Map<String, Object> update = Collections.singletonMap(Job.FINISHED_TIME.getPreferredName(), new Date());

@Override
public void clusterStateProcessed(String source, ClusterState oldState,
ClusterState newState) {
logger.debug("finalized job [{}]", jobIdString);
listener.onResponse(new AcknowledgedResponse(true));
}
});
for (String jobId: request.getJobIds()) {
UpdateRequest updateRequest = new UpdateRequest(AnomalyDetectorsIndex.configIndexName(),
ElasticsearchMappings.DOC_TYPE, Job.documentId(jobId));
updateRequest.retryOnConflict(3);
updateRequest.doc(update);
updateRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);

chainTaskExecutor.add(chainedListener -> {
executeAsyncWithOrigin(client, ML_ORIGIN, UpdateAction.INSTANCE, updateRequest, ActionListener.wrap(
updateResponse -> chainedListener.onResponse(null),
chainedListener::onFailure
));
});
}

chainTaskExecutor.execute(ActionListener.wrap(
aVoid -> {
logger.debug("finalized job [{}]", jobIdString);
listener.onResponse(new AcknowledgedResponse(true));
},
listener::onFailure
));
}

@Override
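The rewritten `masterOperation` above leans on `ChainTaskExecutor` to run one `UpdateRequest` per job strictly in sequence and to stop at the first failure (the `true` flag passed to its constructor). That helper is not part of this hunk, so the class below is a rough standalone approximation of the pattern for illustration; the method names mirror the usage above, but the implementation is guessed.

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class ChainTaskExecutorSketch {

    // A task receives callbacks: call onDone to continue the chain, or onFail to report an error.
    interface ChainedTask {
        void run(Consumer<Void> onDone, Consumer<Exception> onFail);
    }

    private final Queue<ChainedTask> tasks = new LinkedList<>();
    private final ExecutorService executor;
    private final boolean shortCircuit;

    ChainTaskExecutorSketch(ExecutorService executor, boolean shortCircuit) {
        this.executor = executor;
        this.shortCircuit = shortCircuit;
    }

    void add(ChainedTask task) {
        tasks.add(task);
    }

    void execute(Consumer<Void> onAllDone, Consumer<Exception> onFailure) {
        ChainedTask next = tasks.poll();
        if (next == null) {
            onAllDone.accept(null);
            return;
        }
        executor.execute(() -> next.run(
                aVoid -> execute(onAllDone, onFailure),
                e -> {
                    if (shortCircuit) {
                        onFailure.accept(e);           // abort the rest of the chain
                    } else {
                        execute(onAllDone, onFailure); // or keep running the remaining tasks
                    }
                }));
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        ChainTaskExecutorSketch chain = new ChainTaskExecutorSketch(pool, true);
        for (String jobId : new String[] {"job-1", "job-2"}) {
            chain.add((done, fail) -> {
                System.out.println("set finished_time on config doc for " + jobId);
                done.accept(null);
            });
        }
        chain.execute(aVoid -> { System.out.println("all jobs finalized"); pool.shutdown(); },
                e -> { e.printStackTrace(); pool.shutdown(); });
        pool.awaitTermination(5, java.util.concurrent.TimeUnit.SECONDS);
    }
}
```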
@ -9,7 +9,6 @@ import org.elasticsearch.ElasticsearchException;
|
||||
import org.elasticsearch.Version;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.support.ActionFilters;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.common.unit.ByteSizeUnit;
|
||||
@ -38,70 +37,80 @@ public class TransportForecastJobAction extends TransportJobTaskAction<ForecastJ
|
||||
private static final ByteSizeValue FORECAST_LOCAL_STORAGE_LIMIT = new ByteSizeValue(500, ByteSizeUnit.MB);
|
||||
|
||||
private final JobResultsProvider jobResultsProvider;
|
||||
private final JobManager jobManager;
|
||||
@Inject
|
||||
public TransportForecastJobAction(TransportService transportService, ClusterService clusterService, ActionFilters actionFilters,
|
||||
JobResultsProvider jobResultsProvider, AutodetectProcessManager processManager) {
|
||||
public TransportForecastJobAction(TransportService transportService,
|
||||
ClusterService clusterService, ActionFilters actionFilters,
|
||||
JobResultsProvider jobResultsProvider, AutodetectProcessManager processManager,
|
||||
JobManager jobManager) {
|
||||
super(ForecastJobAction.NAME, clusterService, transportService, actionFilters,
|
||||
ForecastJobAction.Request::new, ForecastJobAction.Response::new,
|
||||
ThreadPool.Names.SAME, processManager);
|
||||
this.jobResultsProvider = jobResultsProvider;
|
||||
this.jobManager = jobManager;
|
||||
// ThreadPool.Names.SAME, because operations is executed by autodetect worker thread
|
||||
}
|
||||
|
||||
@Override
|
||||
protected void taskOperation(ForecastJobAction.Request request, TransportOpenJobAction.JobTask task,
|
||||
ActionListener<ForecastJobAction.Response> listener) {
|
||||
ClusterState state = clusterService.state();
|
||||
Job job = JobManager.getJobOrThrowIfUnknown(task.getJobId(), state);
|
||||
validate(job, request);
|
||||
jobManager.getJob(task.getJobId(), ActionListener.wrap(
|
||||
job -> {
|
||||
validate(job, request);
|
||||
|
||||
ForecastParams.Builder paramsBuilder = ForecastParams.builder();
|
||||
ForecastParams.Builder paramsBuilder = ForecastParams.builder();
|
||||
|
||||
if (request.getDuration() != null) {
|
||||
paramsBuilder.duration(request.getDuration());
|
||||
}
|
||||
|
||||
if (request.getExpiresIn() != null) {
|
||||
paramsBuilder.expiresIn(request.getExpiresIn());
|
||||
}
|
||||
|
||||
// tmp storage might be null, we do not log here, because it might not be
|
||||
// required
|
||||
Path tmpStorage = processManager.tryGetTmpStorage(task, FORECAST_LOCAL_STORAGE_LIMIT);
|
||||
if (tmpStorage != null) {
|
||||
paramsBuilder.tmpStorage(tmpStorage.toString());
|
||||
}
|
||||
|
||||
ForecastParams params = paramsBuilder.build();
|
||||
processManager.forecastJob(task, params, e -> {
|
||||
if (e == null) {
|
||||
Consumer<ForecastRequestStats> forecastRequestStatsHandler = forecastRequestStats -> {
|
||||
if (forecastRequestStats == null) {
|
||||
// paranoia case, it should not happen that we do not retrieve a result
|
||||
listener.onFailure(new ElasticsearchException(
|
||||
"Cannot run forecast: internal error, please check the logs"));
|
||||
} else if (forecastRequestStats.getStatus() == ForecastRequestStats.ForecastRequestStatus.FAILED) {
|
||||
List<String> messages = forecastRequestStats.getMessages();
|
||||
if (messages.size() > 0) {
|
||||
listener.onFailure(ExceptionsHelper.badRequestException("Cannot run forecast: "
|
||||
+ messages.get(0)));
|
||||
} else {
|
||||
// paranoia case, it should not be possible to have an empty message list
|
||||
listener.onFailure(
|
||||
new ElasticsearchException(
|
||||
"Cannot run forecast: internal error, please check the logs"));
|
||||
}
|
||||
} else {
|
||||
listener.onResponse(new ForecastJobAction.Response(true, params.getForecastId()));
|
||||
if (request.getDuration() != null) {
|
||||
paramsBuilder.duration(request.getDuration());
|
||||
}
|
||||
};
|
||||
|
||||
jobResultsProvider.getForecastRequestStats(request.getJobId(), params.getForecastId(),
|
||||
forecastRequestStatsHandler, listener::onFailure);
|
||||
if (request.getExpiresIn() != null) {
|
||||
paramsBuilder.expiresIn(request.getExpiresIn());
|
||||
}
|
||||
|
||||
// tmp storage might be null, we do not log here, because it might not be
|
||||
// required
|
||||
Path tmpStorage = processManager.tryGetTmpStorage(task, FORECAST_LOCAL_STORAGE_LIMIT);
|
||||
if (tmpStorage != null) {
|
||||
paramsBuilder.tmpStorage(tmpStorage.toString());
|
||||
}
|
||||
|
||||
ForecastParams params = paramsBuilder.build();
|
||||
processManager.forecastJob(task, params, e -> {
|
||||
if (e == null) {
|
||||
; getForecastRequestStats(request.getJobId(), params.getForecastId(), listener);
|
||||
} else {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
});
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
private void getForecastRequestStats(String jobId, String forecastId, ActionListener<ForecastJobAction.Response> listener) {
|
||||
Consumer<ForecastRequestStats> forecastRequestStatsHandler = forecastRequestStats -> {
|
||||
if (forecastRequestStats == null) {
|
||||
// paranoia case, it should not happen that we do not retrieve a result
|
||||
listener.onFailure(new ElasticsearchException(
|
||||
"Cannot run forecast: internal error, please check the logs"));
|
||||
} else if (forecastRequestStats.getStatus() == ForecastRequestStats.ForecastRequestStatus.FAILED) {
|
||||
List<String> messages = forecastRequestStats.getMessages();
|
||||
if (messages.size() > 0) {
|
||||
listener.onFailure(ExceptionsHelper.badRequestException("Cannot run forecast: "
|
||||
+ messages.get(0)));
|
||||
} else {
|
||||
// paranoia case, it should not be possible to have an empty message list
|
||||
listener.onFailure(
|
||||
new ElasticsearchException(
|
||||
"Cannot run forecast: internal error, please check the logs"));
|
||||
}
|
||||
} else {
|
||||
listener.onFailure(e);
|
||||
listener.onResponse(new ForecastJobAction.Response(true, forecastId));
|
||||
}
|
||||
});
|
||||
};
|
||||
|
||||
jobResultsProvider.getForecastRequestStats(jobId, forecastId, forecastRequestStatsHandler, listener::onFailure);
|
||||
}
|
||||
|
||||
static void validate(Job job, ForecastJobAction.Request request) {
|
||||
|
@ -36,28 +36,33 @@ public class TransportGetBucketsAction extends HandledTransportAction<GetBuckets
|
||||
|
||||
@Override
|
||||
protected void doExecute(Task task, GetBucketsAction.Request request, ActionListener<GetBucketsAction.Response> listener) {
|
||||
jobManager.getJobOrThrowIfUnknown(request.getJobId());
|
||||
jobManager.jobExists(request.getJobId(), ActionListener.wrap(
|
||||
ok -> {
|
||||
BucketsQueryBuilder query =
|
||||
new BucketsQueryBuilder().expand(request.isExpand())
|
||||
.includeInterim(request.isExcludeInterim() == false)
|
||||
.start(request.getStart())
|
||||
.end(request.getEnd())
|
||||
.anomalyScoreThreshold(request.getAnomalyScore())
|
||||
.sortField(request.getSort())
|
||||
.sortDescending(request.isDescending());
|
||||
|
||||
BucketsQueryBuilder query =
|
||||
new BucketsQueryBuilder().expand(request.isExpand())
|
||||
.includeInterim(request.isExcludeInterim() == false)
|
||||
.start(request.getStart())
|
||||
.end(request.getEnd())
|
||||
.anomalyScoreThreshold(request.getAnomalyScore())
|
||||
.sortField(request.getSort())
|
||||
.sortDescending(request.isDescending());
|
||||
if (request.getPageParams() != null) {
|
||||
query.from(request.getPageParams().getFrom())
|
||||
.size(request.getPageParams().getSize());
|
||||
}
|
||||
if (request.getTimestamp() != null) {
|
||||
query.timestamp(request.getTimestamp());
|
||||
} else {
|
||||
query.start(request.getStart());
|
||||
query.end(request.getEnd());
|
||||
}
|
||||
jobResultsProvider.buckets(request.getJobId(), query, q ->
|
||||
listener.onResponse(new GetBucketsAction.Response(q)), listener::onFailure, client);
|
||||
|
||||
if (request.getPageParams() != null) {
|
||||
query.from(request.getPageParams().getFrom())
|
||||
.size(request.getPageParams().getSize());
|
||||
}
|
||||
if (request.getTimestamp() != null) {
|
||||
query.timestamp(request.getTimestamp());
|
||||
} else {
|
||||
query.start(request.getStart());
|
||||
query.end(request.getEnd());
|
||||
}
|
||||
jobResultsProvider.buckets(request.getJobId(), query, q ->
|
||||
listener.onResponse(new GetBucketsAction.Response(q)), listener::onFailure, client);
|
||||
},
|
||||
listener::onFailure
|
||||
|
||||
));
|
||||
}
|
||||
}
|
||||
|
@ -8,38 +8,36 @@ package org.elasticsearch.xpack.ml.action;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.support.ActionFilters;
|
||||
import org.elasticsearch.action.support.HandledTransportAction;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.tasks.Task;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.action.GetCalendarEventsAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.GetCalendarsAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.util.QueryPage;
|
||||
import org.elasticsearch.xpack.core.ml.calendars.ScheduledEvent;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.ScheduledEventsQueryBuilder;
|
||||
|
||||
import java.util.Collections;
|
||||
import java.util.List;
|
||||
import java.util.function.Supplier;
|
||||
|
||||
public class TransportGetCalendarEventsAction extends HandledTransportAction<GetCalendarEventsAction.Request,
|
||||
GetCalendarEventsAction.Response> {
|
||||
|
||||
private final JobResultsProvider jobResultsProvider;
|
||||
private final ClusterService clusterService;
|
||||
private final JobConfigProvider jobConfigProvider;
|
||||
|
||||
@Inject
|
||||
public TransportGetCalendarEventsAction(TransportService transportService, ActionFilters actionFilters, ClusterService clusterService,
|
||||
JobResultsProvider jobResultsProvider) {
|
||||
public TransportGetCalendarEventsAction(TransportService transportService,
|
||||
ActionFilters actionFilters, JobResultsProvider jobResultsProvider,
|
||||
JobConfigProvider jobConfigProvider) {
|
||||
super(GetCalendarEventsAction.NAME, transportService, actionFilters,
|
||||
(Supplier<GetCalendarEventsAction.Request>) GetCalendarEventsAction.Request::new);
|
||||
this.jobResultsProvider = jobResultsProvider;
|
||||
this.clusterService = clusterService;
|
||||
this.jobConfigProvider = jobConfigProvider;
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -65,26 +63,28 @@ public class TransportGetCalendarEventsAction extends HandledTransportAction<Get
|
||||
);
|
||||
|
||||
if (request.getJobId() != null) {
|
||||
ClusterState state = clusterService.state();
|
||||
MlMetadata currentMlMetadata = MlMetadata.getMlMetadata(state);
|
||||
|
||||
List<String> jobGroups;
|
||||
String requestId = request.getJobId();
|
||||
jobConfigProvider.getJob(request.getJobId(), ActionListener.wrap(
|
||||
jobBuiler -> {
|
||||
Job job = jobBuiler.build();
|
||||
jobResultsProvider.scheduledEventsForJob(request.getJobId(), job.getGroups(), query, eventsListener);
|
||||
|
||||
Job job = currentMlMetadata.getJobs().get(request.getJobId());
|
||||
if (job == null) {
|
||||
// Check if the requested id is a job group
|
||||
if (currentMlMetadata.isGroupOrJob(request.getJobId()) == false) {
|
||||
listener.onFailure(ExceptionsHelper.missingJobException(request.getJobId()));
|
||||
return;
|
||||
}
|
||||
jobGroups = Collections.singletonList(request.getJobId());
|
||||
requestId = null;
|
||||
} else {
|
||||
jobGroups = job.getGroups();
|
||||
}
|
||||
|
||||
jobResultsProvider.scheduledEventsForJob(requestId, jobGroups, query, eventsListener);
|
||||
},
|
||||
jobNotFound -> {
|
||||
// is the request Id a group?
|
||||
jobConfigProvider.groupExists(request.getJobId(), ActionListener.wrap(
|
||||
groupExists -> {
|
||||
if (groupExists) {
|
||||
jobResultsProvider.scheduledEventsForJob(
|
||||
null, Collections.singletonList(request.getJobId()), query, eventsListener);
|
||||
} else {
|
||||
listener.onFailure(ExceptionsHelper.missingJobException(request.getJobId()));
|
||||
}
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
}
|
||||
));
|
||||
} else {
|
||||
jobResultsProvider.scheduledEvents(query, eventsListener);
|
||||
}
|
||||
|
@@ -36,11 +36,14 @@ public class TransportGetCategoriesAction extends HandledTransportAction<GetCate

@Override
protected void doExecute(Task task, GetCategoriesAction.Request request, ActionListener<GetCategoriesAction.Response> listener) {
jobManager.getJobOrThrowIfUnknown(request.getJobId());

Integer from = request.getPageParams() != null ? request.getPageParams().getFrom() : null;
Integer size = request.getPageParams() != null ? request.getPageParams().getSize() : null;
jobResultsProvider.categoryDefinitions(request.getJobId(), request.getCategoryId(), true, from, size,
r -> listener.onResponse(new GetCategoriesAction.Response(r)), listener::onFailure, client);
jobManager.jobExists(request.getJobId(), ActionListener.wrap(
jobExists -> {
Integer from = request.getPageParams() != null ? request.getPageParams().getFrom() : null;
Integer size = request.getPageParams() != null ? request.getPageParams().getSize() : null;
jobResultsProvider.categoryDefinitions(request.getJobId(), request.getCategoryId(), true, from, size,
r -> listener.onResponse(new GetCategoriesAction.Response(r)), listener::onFailure, client);
},
listener::onFailure
));
}
}
@ -8,31 +8,44 @@ package org.elasticsearch.xpack.ml.action;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.support.ActionFilters;
|
||||
import org.elasticsearch.action.support.master.TransportMasterNodeReadAction;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockException;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockLevel;
|
||||
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.action.GetDatafeedsAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.util.QueryPage;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collections;
|
||||
import java.util.Comparator;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
|
||||
public class TransportGetDatafeedsAction extends TransportMasterNodeReadAction<GetDatafeedsAction.Request, GetDatafeedsAction.Response> {
|
||||
|
||||
private final DatafeedConfigProvider datafeedConfigProvider;
|
||||
|
||||
@Inject
|
||||
public TransportGetDatafeedsAction(TransportService transportService, ClusterService clusterService,
|
||||
ThreadPool threadPool, ActionFilters actionFilters,
|
||||
IndexNameExpressionResolver indexNameExpressionResolver) {
|
||||
super(GetDatafeedsAction.NAME, transportService, clusterService, threadPool, actionFilters,
|
||||
GetDatafeedsAction.Request::new, indexNameExpressionResolver);
|
||||
public TransportGetDatafeedsAction(TransportService transportService,
|
||||
ClusterService clusterService, ThreadPool threadPool,
|
||||
ActionFilters actionFilters,
|
||||
IndexNameExpressionResolver indexNameExpressionResolver,
|
||||
Client client, NamedXContentRegistry xContentRegistry) {
|
||||
super(GetDatafeedsAction.NAME, transportService, clusterService, threadPool, actionFilters,
|
||||
GetDatafeedsAction.Request::new, indexNameExpressionResolver);
|
||||
|
||||
datafeedConfigProvider = new DatafeedConfigProvider(client, xContentRegistry);
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -47,18 +60,54 @@ public class TransportGetDatafeedsAction extends TransportMasterNodeReadAction<G
|
||||
|
||||
@Override
|
||||
protected void masterOperation(GetDatafeedsAction.Request request, ClusterState state,
|
||||
ActionListener<GetDatafeedsAction.Response> listener) throws Exception {
|
||||
ActionListener<GetDatafeedsAction.Response> listener) {
|
||||
logger.debug("Get datafeed '{}'", request.getDatafeedId());
|
||||
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(state);
|
||||
Set<String> expandedDatafeedIds = mlMetadata.expandDatafeedIds(request.getDatafeedId(), request.allowNoDatafeeds());
|
||||
List<DatafeedConfig> datafeedConfigs = new ArrayList<>();
|
||||
for (String expandedDatafeedId : expandedDatafeedIds) {
|
||||
datafeedConfigs.add(mlMetadata.getDatafeed(expandedDatafeedId));
|
||||
Map<String, DatafeedConfig> clusterStateConfigs =
|
||||
expandClusterStateDatafeeds(request.getDatafeedId(), request.allowNoDatafeeds(), state);
|
||||
|
||||
datafeedConfigProvider.expandDatafeedConfigs(request.getDatafeedId(), request.allowNoDatafeeds(), ActionListener.wrap(
|
||||
datafeedBuilders -> {
|
||||
// Check for duplicate datafeeds
|
||||
for (DatafeedConfig.Builder datafeed : datafeedBuilders) {
|
||||
if (clusterStateConfigs.containsKey(datafeed.getId())) {
|
||||
listener.onFailure(new IllegalStateException("Datafeed [" + datafeed.getId() + "] configuration " +
|
||||
"exists in both clusterstate and index"));
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
// Merge cluster state and index configs
|
||||
List<DatafeedConfig> datafeeds = new ArrayList<>(datafeedBuilders.size() + clusterStateConfigs.values().size());
|
||||
for (DatafeedConfig.Builder builder: datafeedBuilders) {
|
||||
datafeeds.add(builder.build());
|
||||
}
|
||||
|
||||
datafeeds.addAll(clusterStateConfigs.values());
|
||||
Collections.sort(datafeeds, Comparator.comparing(DatafeedConfig::getId));
|
||||
listener.onResponse(new GetDatafeedsAction.Response(new QueryPage<>(datafeeds, datafeeds.size(),
|
||||
DatafeedConfig.RESULTS_FIELD)));
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
Map<String, DatafeedConfig> expandClusterStateDatafeeds(String datafeedExpression, boolean allowNoDatafeeds,
|
||||
ClusterState clusterState) {
|
||||
|
||||
Map<String, DatafeedConfig> configById = new HashMap<>();
|
||||
try {
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
Set<String> expandedDatafeedIds = mlMetadata.expandDatafeedIds(datafeedExpression, allowNoDatafeeds);
|
||||
|
||||
for (String expandedDatafeedId : expandedDatafeedIds) {
|
||||
configById.put(expandedDatafeedId, mlMetadata.getDatafeed(expandedDatafeedId));
|
||||
}
|
||||
} catch (Exception e){
|
||||
// ignore
|
||||
}
|
||||
|
||||
listener.onResponse(new GetDatafeedsAction.Response(new QueryPage<>(datafeedConfigs, datafeedConfigs.size(),
|
||||
DatafeedConfig.RESULTS_FIELD)));
|
||||
return configById;
|
||||
}
|
||||
|
||||
@Override
|
||||
|
@ -18,26 +18,29 @@ import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.action.GetDatafeedsStatsAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.util.QueryPage;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedState;
|
||||
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
|
||||
|
||||
import java.util.List;
|
||||
import java.util.Set;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
public class TransportGetDatafeedsStatsAction extends TransportMasterNodeReadAction<GetDatafeedsStatsAction.Request,
|
||||
GetDatafeedsStatsAction.Response> {
|
||||
|
||||
private final DatafeedConfigProvider datafeedConfigProvider;
|
||||
|
||||
@Inject
|
||||
public TransportGetDatafeedsStatsAction(TransportService transportService, ClusterService clusterService,
|
||||
ThreadPool threadPool, ActionFilters actionFilters,
|
||||
IndexNameExpressionResolver indexNameExpressionResolver) {
|
||||
IndexNameExpressionResolver indexNameExpressionResolver,
|
||||
DatafeedConfigProvider datafeedConfigProvider) {
|
||||
super(GetDatafeedsStatsAction.NAME, transportService, clusterService, threadPool, actionFilters,
|
||||
GetDatafeedsStatsAction.Request::new, indexNameExpressionResolver);
|
||||
this.datafeedConfigProvider = datafeedConfigProvider;
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -55,16 +58,18 @@ public class TransportGetDatafeedsStatsAction extends TransportMasterNodeReadAct
ActionListener<GetDatafeedsStatsAction.Response> listener) throws Exception {
logger.debug("Get stats for datafeed '{}'", request.getDatafeedId());

MlMetadata mlMetadata = MlMetadata.getMlMetadata(state);
Set<String> expandedDatafeedIds = mlMetadata.expandDatafeedIds(request.getDatafeedId(), request.allowNoDatafeeds());

PersistentTasksCustomMetaData tasksInProgress = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
List<GetDatafeedsStatsAction.Response.DatafeedStats> results = expandedDatafeedIds.stream()
.map(datafeedId -> getDatafeedStats(datafeedId, state, tasksInProgress))
.collect(Collectors.toList());
QueryPage<GetDatafeedsStatsAction.Response.DatafeedStats> statsPage = new QueryPage<>(results, results.size(),
DatafeedConfig.RESULTS_FIELD);
listener.onResponse(new GetDatafeedsStatsAction.Response(statsPage));
datafeedConfigProvider.expandDatafeedIds(request.getDatafeedId(), request.allowNoDatafeeds(), ActionListener.wrap(
expandedDatafeedIds -> {
PersistentTasksCustomMetaData tasksInProgress = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
List<GetDatafeedsStatsAction.Response.DatafeedStats> results = expandedDatafeedIds.stream()
.map(datafeedId -> getDatafeedStats(datafeedId, state, tasksInProgress))
.collect(Collectors.toList());
QueryPage<GetDatafeedsStatsAction.Response.DatafeedStats> statsPage = new QueryPage<>(results, results.size(),
DatafeedConfig.RESULTS_FIELD);
listener.onResponse(new GetDatafeedsStatsAction.Response(statsPage));
},
listener::onFailure
));
}

private static GetDatafeedsStatsAction.Response.DatafeedStats getDatafeedStats(String datafeedId, ClusterState state,
@@ -37,18 +37,21 @@ public class TransportGetInfluencersAction extends HandledTransportAction<GetInf

@Override
protected void doExecute(Task task, GetInfluencersAction.Request request, ActionListener<GetInfluencersAction.Response> listener) {
jobManager.getJobOrThrowIfUnknown(request.getJobId());

InfluencersQueryBuilder.InfluencersQuery query = new InfluencersQueryBuilder()
.includeInterim(request.isExcludeInterim() == false)
.start(request.getStart())
.end(request.getEnd())
.from(request.getPageParams().getFrom())
.size(request.getPageParams().getSize())
.influencerScoreThreshold(request.getInfluencerScore())
.sortField(request.getSort())
.sortDescending(request.isDescending()).build();
jobResultsProvider.influencers(request.getJobId(), query,
page -> listener.onResponse(new GetInfluencersAction.Response(page)), listener::onFailure, client);
jobManager.jobExists(request.getJobId(), ActionListener.wrap(
jobExists -> {
InfluencersQueryBuilder.InfluencersQuery query = new InfluencersQueryBuilder()
.includeInterim(request.isExcludeInterim() == false)
.start(request.getStart())
.end(request.getEnd())
.from(request.getPageParams().getFrom())
.size(request.getPageParams().getSize())
.influencerScoreThreshold(request.getInfluencerScore())
.sortField(request.getSort())
.sortDescending(request.isDescending()).build();
jobResultsProvider.influencers(request.getJobId(), query,
page -> listener.onResponse(new GetInfluencersAction.Response(page)), listener::onFailure, client);
},
listener::onFailure)
);
}
}

@@ -17,8 +17,6 @@ import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.ml.action.GetJobsAction;
import org.elasticsearch.xpack.core.ml.action.util.QueryPage;
import org.elasticsearch.xpack.core.ml.job.config.Job;
import org.elasticsearch.xpack.ml.job.JobManager;

public class TransportGetJobsAction extends TransportMasterNodeReadAction<GetJobsAction.Request, GetJobsAction.Response> {
@@ -46,11 +44,15 @@ public class TransportGetJobsAction extends TransportMasterNodeReadAction<GetJob
}

@Override
protected void masterOperation(GetJobsAction.Request request, ClusterState state, ActionListener<GetJobsAction.Response> listener)
throws Exception {
protected void masterOperation(GetJobsAction.Request request, ClusterState state,
ActionListener<GetJobsAction.Response> listener) {
logger.debug("Get job '{}'", request.getJobId());
QueryPage<Job> jobs = jobManager.expandJobs(request.getJobId(), request.allowNoJobs(), state);
listener.onResponse(new GetJobsAction.Response(jobs));
jobManager.expandJobs(request.getJobId(), request.allowNoJobs(), ActionListener.wrap(
jobs -> {
listener.onResponse(new GetJobsAction.Response(jobs));
},
listener::onFailure
));
}

@Override

@@ -21,7 +21,6 @@ import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.ml.MlMetadata;
import org.elasticsearch.xpack.core.ml.MlTasks;
import org.elasticsearch.xpack.core.ml.action.GetJobsStatsAction;
import org.elasticsearch.xpack.core.ml.action.GetJobsStatsAction.Response.JobStats;
@@ -31,6 +30,7 @@ import org.elasticsearch.xpack.core.ml.job.config.JobState;
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSizeStats;
import org.elasticsearch.xpack.core.ml.stats.ForecastStats;
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
import org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider;
import org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager;

@@ -51,25 +51,35 @@ public class TransportGetJobsStatsAction extends TransportTasksAction<TransportO
private final ClusterService clusterService;
private final AutodetectProcessManager processManager;
private final JobResultsProvider jobResultsProvider;
private final JobConfigProvider jobConfigProvider;

@Inject
public TransportGetJobsStatsAction(TransportService transportService, ActionFilters actionFilters, ClusterService clusterService,
AutodetectProcessManager processManager, JobResultsProvider jobResultsProvider) {
AutodetectProcessManager processManager, JobResultsProvider jobResultsProvider,
JobConfigProvider jobConfigProvider) {
super(GetJobsStatsAction.NAME, clusterService, transportService, actionFilters, GetJobsStatsAction.Request::new,
GetJobsStatsAction.Response::new, in -> new QueryPage<>(in, JobStats::new), ThreadPool.Names.MANAGEMENT);
this.clusterService = clusterService;
this.processManager = processManager;
this.jobResultsProvider = jobResultsProvider;
this.jobConfigProvider = jobConfigProvider;
}

@Override
protected void doExecute(Task task, GetJobsStatsAction.Request request, ActionListener<GetJobsStatsAction.Response> listener) {
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterService.state());
request.setExpandedJobsIds(new ArrayList<>(mlMetadata.expandJobIds(request.getJobId(), request.allowNoJobs())));
ActionListener<GetJobsStatsAction.Response> finalListener = listener;
listener = ActionListener.wrap(response -> gatherStatsForClosedJobs(mlMetadata,
request, response, finalListener), listener::onFailure);
super.doExecute(task, request, listener);
protected void doExecute(Task task, GetJobsStatsAction.Request request, ActionListener<GetJobsStatsAction.Response> finalListener) {
logger.debug("Get stats for job [{}]", request.getJobId());

jobConfigProvider.expandJobsIds(request.getJobId(), request.allowNoJobs(), true, ActionListener.wrap(
expandedIds -> {
request.setExpandedJobsIds(new ArrayList<>(expandedIds));
ActionListener<GetJobsStatsAction.Response> jobStatsListener = ActionListener.wrap(
response -> gatherStatsForClosedJobs(request, response, finalListener),
finalListener::onFailure
);
super.doExecute(task, request, jobStatsListener);
},
finalListener::onFailure
));
}

@Override
@@ -89,7 +99,6 @@ public class TransportGetJobsStatsAction extends TransportTasksAction<TransportO
protected void taskOperation(GetJobsStatsAction.Request request, TransportOpenJobAction.JobTask task,
ActionListener<QueryPage<JobStats>> listener) {
String jobId = task.getJobId();
logger.debug("Get stats for job [{}]", jobId);
ClusterState state = clusterService.state();
PersistentTasksCustomMetaData tasks = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
Optional<Tuple<DataCounts, ModelSizeStats>> stats = processManager.getStatistics(task);
@@ -112,21 +121,20 @@ public class TransportGetJobsStatsAction extends TransportTasksAction<TransportO

// Up until now we gathered the stats for jobs that were open,
// This method will fetch the stats for missing jobs, that was stored in the jobs index
void gatherStatsForClosedJobs(MlMetadata mlMetadata, GetJobsStatsAction.Request request, GetJobsStatsAction.Response response,
void gatherStatsForClosedJobs(GetJobsStatsAction.Request request, GetJobsStatsAction.Response response,
ActionListener<GetJobsStatsAction.Response> listener) {
List<String> jobIds = determineNonDeletedJobIdsWithoutLiveStats(mlMetadata,
request.getExpandedJobsIds(), response.getResponse().results());
if (jobIds.isEmpty()) {
List<String> closedJobIds = determineJobIdsWithoutLiveStats(request.getExpandedJobsIds(), response.getResponse().results());
if (closedJobIds.isEmpty()) {
listener.onResponse(response);
return;
}

AtomicInteger counter = new AtomicInteger(jobIds.size());
AtomicArray<JobStats> jobStats = new AtomicArray<>(jobIds.size());
AtomicInteger counter = new AtomicInteger(closedJobIds.size());
AtomicArray<GetJobsStatsAction.Response.JobStats> jobStats = new AtomicArray<>(closedJobIds.size());
PersistentTasksCustomMetaData tasks = clusterService.state().getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
for (int i = 0; i < jobIds.size(); i++) {
for (int i = 0; i < closedJobIds.size(); i++) {
int slot = i;
String jobId = jobIds.get(i);
String jobId = closedJobIds.get(i);
gatherForecastStats(jobId, forecastStats -> {
gatherDataCountsAndModelSizeStats(jobId, (dataCounts, modelSizeStats) -> {
JobState jobState = MlTasks.getJobState(jobId, tasks);
@@ -169,11 +177,9 @@ public class TransportGetJobsStatsAction extends TransportTasksAction<TransportO
}
}

static List<String> determineNonDeletedJobIdsWithoutLiveStats(MlMetadata mlMetadata,
List<String> requestedJobIds,
List<JobStats> stats) {
Set<String> excludeJobIds = stats.stream().map(JobStats::getJobId).collect(Collectors.toSet());
return requestedJobIds.stream().filter(jobId -> !excludeJobIds.contains(jobId) &&
!mlMetadata.isJobDeleting(jobId)).collect(Collectors.toList());
static List<String> determineJobIdsWithoutLiveStats(List<String> requestedJobIds,
List<GetJobsStatsAction.Response.JobStats> stats) {
Set<String> excludeJobIds = stats.stream().map(GetJobsStatsAction.Response.JobStats::getJobId).collect(Collectors.toSet());
return requestedJobIds.stream().filter(jobId -> !excludeJobIds.contains(jobId)).collect(Collectors.toList());
}
}

@@ -41,13 +41,17 @@ public class TransportGetModelSnapshotsAction extends HandledTransportAction<Get
request.getJobId(), request.getSnapshotId(), request.getPageParams().getFrom(), request.getPageParams().getSize(),
request.getStart(), request.getEnd(), request.getSort(), request.getDescOrder());

jobManager.getJobOrThrowIfUnknown(request.getJobId());

jobResultsProvider.modelSnapshots(request.getJobId(), request.getPageParams().getFrom(), request.getPageParams().getSize(),
request.getStart(), request.getEnd(), request.getSort(), request.getDescOrder(), request.getSnapshotId(),
page -> {
listener.onResponse(new GetModelSnapshotsAction.Response(clearQuantiles(page)));
}, listener::onFailure);
jobManager.jobExists(request.getJobId(), ActionListener.wrap(
ok -> {
jobResultsProvider.modelSnapshots(request.getJobId(), request.getPageParams().getFrom(),
request.getPageParams().getSize(), request.getStart(), request.getEnd(), request.getSort(),
request.getDescOrder(), request.getSnapshotId(),
page -> {
listener.onResponse(new GetModelSnapshotsAction.Response(clearQuantiles(page)));
}, listener::onFailure);
},
listener::onFailure
));
}

public static QueryPage<ModelSnapshot> clearQuantiles(QueryPage<ModelSnapshot> page) {

@@ -75,21 +75,25 @@ public class TransportGetOverallBucketsAction extends HandledTransportAction<Get
@Override
protected void doExecute(Task task, GetOverallBucketsAction.Request request,
ActionListener<GetOverallBucketsAction.Response> listener) {
QueryPage<Job> jobsPage = jobManager.expandJobs(request.getJobId(), request.allowNoJobs(), clusterService.state());
if (jobsPage.count() == 0) {
listener.onResponse(new GetOverallBucketsAction.Response());
return;
}
jobManager.expandJobs(request.getJobId(), request.allowNoJobs(), ActionListener.wrap(
jobPage -> {
if (jobPage.count() == 0) {
listener.onResponse(new GetOverallBucketsAction.Response());
return;
}

// As computing and potentially aggregating overall buckets might take a while,
// we run in a different thread to avoid blocking the network thread.
threadPool.executor(MachineLearning.UTILITY_THREAD_POOL_NAME).execute(() -> {
try {
getOverallBuckets(request, jobsPage.results(), listener);
} catch (Exception e) {
listener.onFailure(e);
}
});
// As computing and potentially aggregating overall buckets might take a while,
// we run in a different thread to avoid blocking the network thread.
threadPool.executor(MachineLearning.UTILITY_THREAD_POOL_NAME).execute(() -> {
try {
getOverallBuckets(request, jobPage.results(), listener);
} catch (Exception e) {
listener.onFailure(e);
}
});
},
listener::onFailure
));
}

private void getOverallBuckets(GetOverallBucketsAction.Request request, List<Job> jobs,

@@ -37,18 +37,21 @@ public class TransportGetRecordsAction extends HandledTransportAction<GetRecords
@Override
protected void doExecute(Task task, GetRecordsAction.Request request, ActionListener<GetRecordsAction.Response> listener) {

jobManager.getJobOrThrowIfUnknown(request.getJobId());

RecordsQueryBuilder query = new RecordsQueryBuilder()
.includeInterim(request.isExcludeInterim() == false)
.epochStart(request.getStart())
.epochEnd(request.getEnd())
.from(request.getPageParams().getFrom())
.size(request.getPageParams().getSize())
.recordScore(request.getRecordScoreFilter())
.sortField(request.getSort())
.sortDescending(request.isDescending());
jobResultsProvider.records(request.getJobId(), query, page ->
listener.onResponse(new GetRecordsAction.Response(page)), listener::onFailure, client);
jobManager.jobExists(request.getJobId(), ActionListener.wrap(
jobExists -> {
RecordsQueryBuilder query = new RecordsQueryBuilder()
.includeInterim(request.isExcludeInterim() == false)
.epochStart(request.getStart())
.epochEnd(request.getEnd())
.from(request.getPageParams().getFrom())
.size(request.getPageParams().getSize())
.recordScore(request.getRecordScoreFilter())
.sortField(request.getSort())
.sortDescending(request.isDescending());
jobResultsProvider.records(request.getJobId(), query, page ->
listener.onResponse(new GetRecordsAction.Response(page)), listener::onFailure, client);
},
listener::onFailure
));
}
}

@@ -11,7 +11,6 @@ import org.elasticsearch.action.TaskOperationFailure;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.tasks.BaseTasksResponse;
import org.elasticsearch.action.support.tasks.TransportTasksAction;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.io.stream.Writeable;
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
@@ -20,7 +19,6 @@ import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.ml.MlTasks;
import org.elasticsearch.xpack.core.ml.action.JobTaskRequest;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.ml.job.JobManager;
import org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager;

import java.util.List;
@@ -50,8 +48,6 @@ public abstract class TransportJobTaskAction<Request extends JobTaskRequest<Requ
String jobId = request.getJobId();
// We need to check whether there is at least an assigned task here, otherwise we cannot redirect to the
// node running the job task.
ClusterState state = clusterService.state();
JobManager.getJobOrThrowIfUnknown(jobId, state);
PersistentTasksCustomMetaData tasks = clusterService.state().getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
PersistentTasksCustomMetaData.PersistentTask<?> jobTask = MlTasks.getJobTask(jobId, tasks);
if (jobTask == null || jobTask.isAssigned() == false) {

@ -20,18 +20,17 @@ import org.elasticsearch.action.support.master.AcknowledgedResponse;
|
||||
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.ClusterStateUpdateTask;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockException;
|
||||
import org.elasticsearch.cluster.block.ClusterBlockLevel;
|
||||
import org.elasticsearch.cluster.metadata.AliasOrIndex;
|
||||
import org.elasticsearch.cluster.metadata.IndexMetaData;
|
||||
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
|
||||
import org.elasticsearch.cluster.metadata.MappingMetaData;
|
||||
import org.elasticsearch.cluster.metadata.MetaData;
|
||||
import org.elasticsearch.cluster.node.DiscoveryNode;
|
||||
import org.elasticsearch.cluster.routing.IndexRoutingTable;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.CheckedSupplier;
|
||||
import org.elasticsearch.common.Nullable;
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.collect.ImmutableOpenMap;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
@ -53,11 +52,12 @@ import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.XPackField;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetaIndex;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.action.FinalizeJobExecutionAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.OpenJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.PutJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.UpdateJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.AnalysisLimits;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.DetectionRule;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobState;
|
||||
@ -67,8 +67,11 @@ import org.elasticsearch.xpack.core.ml.job.persistence.AnomalyDetectorsIndex;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.ElasticsearchMappings;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.ml.MachineLearning;
|
||||
import org.elasticsearch.xpack.ml.MlConfigMigrationEligibilityCheck;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider;
|
||||
import org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProcessManager;
|
||||
import org.elasticsearch.xpack.ml.process.MlMemoryTracker;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.ArrayList;
|
||||
@ -95,26 +98,33 @@ import static org.elasticsearch.xpack.ml.job.process.autodetect.AutodetectProces
|
||||
*/
|
||||
public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAction.Request, AcknowledgedResponse> {
|
||||
|
||||
private static final PersistentTasksCustomMetaData.Assignment AWAITING_LAZY_ASSIGNMENT =
|
||||
new PersistentTasksCustomMetaData.Assignment(null, "persistent task is awaiting node assignment.");
|
||||
|
||||
private final XPackLicenseState licenseState;
|
||||
private final PersistentTasksService persistentTasksService;
|
||||
private final Client client;
|
||||
private final JobResultsProvider jobResultsProvider;
|
||||
private static final PersistentTasksCustomMetaData.Assignment AWAITING_LAZY_ASSIGNMENT =
|
||||
new PersistentTasksCustomMetaData.Assignment(null, "persistent task is awaiting node assignment.");
|
||||
|
||||
private final JobConfigProvider jobConfigProvider;
|
||||
private final MlMemoryTracker memoryTracker;
|
||||
private final MlConfigMigrationEligibilityCheck migrationEligibilityCheck;
|
||||
|
||||
@Inject
|
||||
public TransportOpenJobAction(TransportService transportService, ThreadPool threadPool,
|
||||
public TransportOpenJobAction(Settings settings, TransportService transportService, ThreadPool threadPool,
|
||||
XPackLicenseState licenseState, ClusterService clusterService,
|
||||
PersistentTasksService persistentTasksService, ActionFilters actionFilters,
|
||||
IndexNameExpressionResolver indexNameExpressionResolver, Client client,
|
||||
JobResultsProvider jobResultsProvider) {
|
||||
JobResultsProvider jobResultsProvider, JobConfigProvider jobConfigProvider,
|
||||
MlMemoryTracker memoryTracker) {
|
||||
super(OpenJobAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver,
|
||||
OpenJobAction.Request::new);
|
||||
this.licenseState = licenseState;
|
||||
this.persistentTasksService = persistentTasksService;
|
||||
this.client = client;
|
||||
this.jobResultsProvider = jobResultsProvider;
|
||||
this.jobConfigProvider = jobConfigProvider;
|
||||
this.memoryTracker = memoryTracker;
|
||||
this.migrationEligibilityCheck = new MlConfigMigrationEligibilityCheck(settings, clusterService);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -125,8 +135,7 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
* <li>check job's version is supported</li>
|
||||
* </ul>
|
||||
*/
|
||||
static void validate(String jobId, MlMetadata mlMetadata) {
|
||||
Job job = (mlMetadata == null) ? null : mlMetadata.getJobs().get(jobId);
|
||||
static void validate(String jobId, Job job) {
|
||||
if (job == null) {
|
||||
throw ExceptionsHelper.missingJobException(jobId);
|
||||
}
|
||||
@ -139,11 +148,15 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
}
|
||||
}
|
||||
|
||||
static PersistentTasksCustomMetaData.Assignment selectLeastLoadedMlNode(String jobId, ClusterState clusterState,
|
||||
static PersistentTasksCustomMetaData.Assignment selectLeastLoadedMlNode(String jobId, @Nullable Job job,
|
||||
ClusterState clusterState,
|
||||
int maxConcurrentJobAllocations,
|
||||
int fallbackMaxNumberOfOpenJobs,
|
||||
int maxMachineMemoryPercent, Logger logger) {
|
||||
List<String> unavailableIndices = verifyIndicesPrimaryShardsAreActive(jobId, clusterState);
|
||||
int maxMachineMemoryPercent,
|
||||
MlMemoryTracker memoryTracker,
|
||||
Logger logger) {
|
||||
String resultsIndexName = job != null ? job.getResultsIndexName() : null;
|
||||
List<String> unavailableIndices = verifyIndicesPrimaryShardsAreActive(resultsIndexName, clusterState);
|
||||
if (unavailableIndices.size() != 0) {
|
||||
String reason = "Not opening job [" + jobId + "], because not all primary shards are active for the following indices [" +
|
||||
String.join(",", unavailableIndices) + "]";
|
||||
@ -151,14 +164,29 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
return new PersistentTasksCustomMetaData.Assignment(null, reason);
|
||||
}
|
||||
|
||||
// Try to allocate jobs according to memory usage, but if that's not possible (maybe due to a mixed version cluster or maybe
|
||||
// because of some weird OS problem) then fall back to the old mechanism of only considering numbers of assigned jobs
|
||||
boolean allocateByMemory = true;
|
||||
|
||||
if (memoryTracker.isRecentlyRefreshed() == false) {
|
||||
|
||||
boolean scheduledRefresh = memoryTracker.asyncRefresh();
|
||||
if (scheduledRefresh) {
|
||||
String reason = "Not opening job [" + jobId + "] because job memory requirements are stale - refresh requested";
|
||||
logger.debug(reason);
|
||||
return new PersistentTasksCustomMetaData.Assignment(null, reason);
|
||||
} else {
|
||||
allocateByMemory = false;
|
||||
logger.warn("Falling back to allocating job [{}] by job counts because a memory requirement refresh could not be scheduled",
|
||||
jobId);
|
||||
}
|
||||
}
|
||||
|
||||
List<String> reasons = new LinkedList<>();
|
||||
long maxAvailableCount = Long.MIN_VALUE;
|
||||
long maxAvailableMemory = Long.MIN_VALUE;
|
||||
DiscoveryNode minLoadedNodeByCount = null;
|
||||
DiscoveryNode minLoadedNodeByMemory = null;
|
||||
// Try to allocate jobs according to memory usage, but if that's not possible (maybe due to a mixed version cluster or maybe
|
||||
// because of some weird OS problem) then fall back to the old mechanism of only considering numbers of assigned jobs
|
||||
boolean allocateByMemory = true;
|
||||
PersistentTasksCustomMetaData persistentTasks = clusterState.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
for (DiscoveryNode node : clusterState.getNodes()) {
|
||||
Map<String, String> nodeAttributes = node.getAttributes();
|
||||
@ -171,17 +199,6 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
continue;
|
||||
}
|
||||
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
Job job = mlMetadata.getJobs().get(jobId);
|
||||
Set<String> compatibleJobTypes = Job.getCompatibleJobTypes(node.getVersion());
|
||||
if (compatibleJobTypes.contains(job.getJobType()) == false) {
|
||||
String reason = "Not opening job [" + jobId + "] on node [" + nodeNameAndVersion(node) +
|
||||
"], because this node does not support jobs of type [" + job.getJobType() + "]";
|
||||
logger.trace(reason);
|
||||
reasons.add(reason);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (nodeSupportsModelSnapshotVersion(node, job) == false) {
|
||||
String reason = "Not opening job [" + jobId + "] on node [" + nodeNameAndVersion(node)
|
||||
+ "], because the job's model snapshot requires a node of version ["
|
||||
@ -191,12 +208,23 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
continue;
|
||||
}
|
||||
|
||||
if (jobHasRules(job) && node.getVersion().before(DetectionRule.VERSION_INTRODUCED)) {
|
||||
String reason = "Not opening job [" + jobId + "] on node [" + nodeNameAndVersion(node) + "], because jobs using " +
|
||||
"custom_rules require a node of version [" + DetectionRule.VERSION_INTRODUCED + "] or higher";
|
||||
logger.trace(reason);
|
||||
reasons.add(reason);
|
||||
continue;
|
||||
if (job != null) {
|
||||
Set<String> compatibleJobTypes = Job.getCompatibleJobTypes(node.getVersion());
|
||||
if (compatibleJobTypes.contains(job.getJobType()) == false) {
|
||||
String reason = "Not opening job [" + jobId + "] on node [" + nodeNameAndVersion(node) +
|
||||
"], because this node does not support jobs of type [" + job.getJobType() + "]";
|
||||
logger.trace(reason);
|
||||
reasons.add(reason);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (jobHasRules(job) && node.getVersion().before(DetectionRule.VERSION_INTRODUCED)) {
|
||||
String reason = "Not opening job [" + jobId + "] on node [" + nodeNameAndVersion(node) + "], because jobs using " +
|
||||
"custom_rules require a node of version [" + DetectionRule.VERSION_INTRODUCED + "] or higher";
|
||||
logger.trace(reason);
|
||||
reasons.add(reason);
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
long numberOfAssignedJobs = 0;
|
||||
@ -204,9 +232,8 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
long assignedJobMemory = 0;
|
||||
if (persistentTasks != null) {
|
||||
// find all the job tasks assigned to this node
|
||||
Collection<PersistentTasksCustomMetaData.PersistentTask<?>> assignedTasks =
|
||||
persistentTasks.findTasks(OpenJobAction.TASK_NAME,
|
||||
task -> node.getId().equals(task.getExecutorNode()));
|
||||
Collection<PersistentTasksCustomMetaData.PersistentTask<?>> assignedTasks = persistentTasks.findTasks(
|
||||
MlTasks.JOB_TASK_NAME, task -> node.getId().equals(task.getExecutorNode()));
|
||||
for (PersistentTasksCustomMetaData.PersistentTask<?> assignedTask : assignedTasks) {
|
||||
JobTaskState jobTaskState = (JobTaskState) assignedTask.getState();
|
||||
JobState jobState;
|
||||
@ -217,6 +244,7 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
} else {
|
||||
jobState = jobTaskState.getState();
|
||||
if (jobTaskState.isStatusStale(assignedTask)) {
|
||||
// the job is re-locating
|
||||
if (jobState == JobState.CLOSING) {
|
||||
// previous executor node failed while the job was closing - it won't
|
||||
// be reopened, so consider it CLOSED for resource usage purposes
|
||||
@ -229,13 +257,18 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
}
|
||||
}
|
||||
}
|
||||
// Don't count CLOSED or FAILED jobs, as they don't consume native memory
|
||||
if (jobState.isAnyOf(JobState.CLOSED, JobState.FAILED) == false) {
|
||||
// Don't count CLOSED or FAILED jobs, as they don't consume native memory
|
||||
++numberOfAssignedJobs;
|
||||
String assignedJobId = ((OpenJobAction.JobParams) assignedTask.getParams()).getJobId();
|
||||
Job assignedJob = mlMetadata.getJobs().get(assignedJobId);
|
||||
assert assignedJob != null;
|
||||
assignedJobMemory += assignedJob.estimateMemoryFootprint();
|
||||
OpenJobAction.JobParams params = (OpenJobAction.JobParams) assignedTask.getParams();
|
||||
Long jobMemoryRequirement = memoryTracker.getJobMemoryRequirement(params.getJobId());
|
||||
if (jobMemoryRequirement == null) {
|
||||
allocateByMemory = false;
|
||||
logger.debug("Falling back to allocating job [{}] by job counts because " +
|
||||
"the memory requirement for job [{}] was not available", jobId, params.getJobId());
|
||||
} else {
|
||||
assignedJobMemory += jobMemoryRequirement;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -285,7 +318,7 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
machineMemory = Long.parseLong(machineMemoryStr);
|
||||
} catch (NumberFormatException e) {
|
||||
String reason = "Not opening job [" + jobId + "] on node [" + nodeNameAndMlAttributes(node) + "], because " +
|
||||
MachineLearning.MACHINE_MEMORY_NODE_ATTR + " attribute [" + machineMemoryStr + "] is not a long";
|
||||
MachineLearning.MACHINE_MEMORY_NODE_ATTR + " attribute [" + machineMemoryStr + "] is not a long";
|
||||
logger.trace(reason);
|
||||
reasons.add(reason);
|
||||
continue;
|
||||
@ -295,28 +328,36 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
if (allocateByMemory) {
|
||||
if (machineMemory > 0) {
|
||||
long maxMlMemory = machineMemory * maxMachineMemoryPercent / 100;
|
||||
long estimatedMemoryFootprint = job.estimateMemoryFootprint();
|
||||
long availableMemory = maxMlMemory - assignedJobMemory;
|
||||
if (estimatedMemoryFootprint > availableMemory) {
|
||||
String reason = "Not opening job [" + jobId + "] on node [" + nodeNameAndMlAttributes(node) +
|
||||
Long estimatedMemoryFootprint = memoryTracker.getJobMemoryRequirement(jobId);
|
||||
if (estimatedMemoryFootprint != null) {
|
||||
long availableMemory = maxMlMemory - assignedJobMemory;
|
||||
if (estimatedMemoryFootprint > availableMemory) {
|
||||
String reason = "Not opening job [" + jobId + "] on node [" + nodeNameAndMlAttributes(node) +
|
||||
"], because this node has insufficient available memory. Available memory for ML [" + maxMlMemory +
|
||||
"], memory required by existing jobs [" + assignedJobMemory +
|
||||
"], estimated memory required for this job [" + estimatedMemoryFootprint + "]";
|
||||
logger.trace(reason);
|
||||
reasons.add(reason);
|
||||
continue;
|
||||
}
|
||||
logger.trace(reason);
|
||||
reasons.add(reason);
|
||||
continue;
|
||||
}
|
||||
|
||||
if (maxAvailableMemory < availableMemory) {
|
||||
maxAvailableMemory = availableMemory;
|
||||
minLoadedNodeByMemory = node;
|
||||
if (maxAvailableMemory < availableMemory) {
|
||||
maxAvailableMemory = availableMemory;
|
||||
minLoadedNodeByMemory = node;
|
||||
}
|
||||
} else {
|
||||
// If we cannot get the job memory requirement,
|
||||
// fall back to simply allocating by job count
|
||||
allocateByMemory = false;
|
||||
logger.debug("Falling back to allocating job [{}] by job counts because its memory requirement was not available",
|
||||
jobId);
|
||||
}
|
||||
} else {
|
||||
// If we cannot get the available memory on any machine in
|
||||
// the cluster, fall back to simply allocating by job count
|
||||
allocateByMemory = false;
|
||||
logger.debug("Falling back to allocating job [{}] by job counts because machine memory was not available for node [{}]",
|
||||
jobId, nodeNameAndMlAttributes(node));
|
||||
jobId, nodeNameAndMlAttributes(node));
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -358,13 +399,15 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
return builder.toString();
|
||||
}
|
||||
|
||||
static String[] indicesOfInterest(ClusterState clusterState, String job) {
|
||||
String jobResultIndex = AnomalyDetectorsIndex.getPhysicalIndexFromState(clusterState, job);
|
||||
return new String[]{AnomalyDetectorsIndex.jobStateIndexName(), jobResultIndex, MlMetaIndex.INDEX_NAME};
|
||||
static String[] indicesOfInterest(String resultsIndex) {
|
||||
if (resultsIndex == null) {
|
||||
return new String[]{AnomalyDetectorsIndex.jobStateIndexName(), MlMetaIndex.INDEX_NAME};
|
||||
}
|
||||
return new String[]{AnomalyDetectorsIndex.jobStateIndexName(), resultsIndex, MlMetaIndex.INDEX_NAME};
|
||||
}
|
||||
|
||||
static List<String> verifyIndicesPrimaryShardsAreActive(String jobId, ClusterState clusterState) {
|
||||
String[] indices = indicesOfInterest(clusterState, jobId);
|
||||
static List<String> verifyIndicesPrimaryShardsAreActive(String resultsIndex, ClusterState clusterState) {
|
||||
String[] indices = indicesOfInterest(resultsIndex);
|
||||
List<String> unavailableIndices = new ArrayList<>(indices.length);
|
||||
for (String index : indices) {
|
||||
// Indices are created on demand from templates.
|
||||
@ -463,10 +506,15 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
|
||||
@Override
|
||||
protected void masterOperation(OpenJobAction.Request request, ClusterState state, ActionListener<AcknowledgedResponse> listener) {
|
||||
if (migrationEligibilityCheck.jobIsEligibleForMigration(request.getJobParams().getJobId(), state)) {
|
||||
listener.onFailure(ExceptionsHelper.configHasNotBeenMigrated("open job", request.getJobParams().getJobId()));
|
||||
return;
|
||||
}
|
||||
|
||||
OpenJobAction.JobParams jobParams = request.getJobParams();
|
||||
if (licenseState.isMachineLearningAllowed()) {
|
||||
|
||||
// Step 6. Clear job finished time once the job is started and respond
|
||||
// Clear job finished time once the job is started and respond
|
||||
ActionListener<AcknowledgedResponse> clearJobFinishTime = ActionListener.wrap(
|
||||
response -> {
|
||||
if (response.isAcknowledged()) {
|
||||
@ -478,7 +526,7 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
listener::onFailure
|
||||
);
|
||||
|
||||
// Step 5. Wait for job to be started
|
||||
// Wait for job to be started
|
||||
ActionListener<PersistentTasksCustomMetaData.PersistentTask<OpenJobAction.JobParams>> waitForJobToStart =
|
||||
new ActionListener<PersistentTasksCustomMetaData.PersistentTask<OpenJobAction.JobParams>>() {
|
||||
@Override
|
||||
@ -496,44 +544,54 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
}
|
||||
};
|
||||
|
||||
// Step 4. Start job task
|
||||
ActionListener<PutJobAction.Response> establishedMemoryUpdateListener = ActionListener.wrap(
|
||||
response -> persistentTasksService.sendStartRequest(MlTasks.jobTaskId(jobParams.getJobId()),
|
||||
OpenJobAction.TASK_NAME, jobParams, waitForJobToStart),
|
||||
listener::onFailure
|
||||
// Start job task
|
||||
ActionListener<Long> memoryRequirementRefreshListener = ActionListener.wrap(
|
||||
mem -> persistentTasksService.sendStartRequest(MlTasks.jobTaskId(jobParams.getJobId()), MlTasks.JOB_TASK_NAME, jobParams,
|
||||
waitForJobToStart),
|
||||
listener::onFailure
|
||||
);
|
||||
|
||||
// Step 3. Update established model memory for pre-6.1 jobs that haven't had it set
|
||||
// Tell the job tracker to refresh the memory requirement for this job and all other jobs that have persistent tasks
|
||||
ActionListener<PutJobAction.Response> jobUpdateListener = ActionListener.wrap(
|
||||
response -> memoryTracker.refreshJobMemoryAndAllOthers(jobParams.getJobId(), memoryRequirementRefreshListener),
|
||||
listener::onFailure
|
||||
);
|
||||
|
||||
// Increase the model memory limit for 6.1 - 6.3 jobs
|
||||
ActionListener<Boolean> missingMappingsListener = ActionListener.wrap(
|
||||
response -> {
|
||||
Job job = MlMetadata.getMlMetadata(clusterService.state()).getJobs().get(jobParams.getJobId());
|
||||
Job job = jobParams.getJob();
|
||||
if (job != null) {
|
||||
Version jobVersion = job.getJobVersion();
|
||||
Long jobEstablishedModelMemory = job.getEstablishedModelMemory();
|
||||
if ((jobVersion == null || jobVersion.before(Version.V_6_1_0))
|
||||
&& (jobEstablishedModelMemory == null || jobEstablishedModelMemory == 0)) {
|
||||
jobResultsProvider.getEstablishedMemoryUsage(job.getId(), null, null, establishedModelMemory -> {
|
||||
if (establishedModelMemory != null && establishedModelMemory > 0) {
|
||||
JobUpdate update = new JobUpdate.Builder(job.getId())
|
||||
.setEstablishedModelMemory(establishedModelMemory).build();
|
||||
UpdateJobAction.Request updateRequest = UpdateJobAction.Request.internal(job.getId(), update);
|
||||
if (jobVersion != null &&
|
||||
(jobVersion.onOrAfter(Version.V_6_1_0) && jobVersion.before(Version.V_6_3_0))) {
|
||||
// Increase model memory limit if < 512MB
|
||||
if (job.getAnalysisLimits() != null && job.getAnalysisLimits().getModelMemoryLimit() != null &&
|
||||
job.getAnalysisLimits().getModelMemoryLimit() < 512L) {
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, UpdateJobAction.INSTANCE, updateRequest,
|
||||
establishedMemoryUpdateListener);
|
||||
} else {
|
||||
establishedMemoryUpdateListener.onResponse(null);
|
||||
}
|
||||
}, listener::onFailure);
|
||||
} else {
|
||||
establishedMemoryUpdateListener.onResponse(null);
|
||||
long updatedModelMemoryLimit = (long) (job.getAnalysisLimits().getModelMemoryLimit() * 1.3);
|
||||
AnalysisLimits limits = new AnalysisLimits(updatedModelMemoryLimit,
|
||||
job.getAnalysisLimits().getCategorizationExamplesLimit());
|
||||
|
||||
JobUpdate update = new JobUpdate.Builder(job.getId()).setJobVersion(Version.CURRENT)
|
||||
.setAnalysisLimits(limits).build();
|
||||
UpdateJobAction.Request updateRequest = UpdateJobAction.Request.internal(job.getId(), update);
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, UpdateJobAction.INSTANCE, updateRequest,
|
||||
jobUpdateListener);
|
||||
} else {
|
||||
jobUpdateListener.onResponse(null);
|
||||
}
|
||||
}
|
||||
else {
|
||||
jobUpdateListener.onResponse(null);
|
||||
}
|
||||
} else {
|
||||
establishedMemoryUpdateListener.onResponse(null);
|
||||
jobUpdateListener.onResponse(null);
|
||||
}
|
||||
}, listener::onFailure
|
||||
);
|
||||
|
||||
// Step 2. Try adding state doc mapping
|
||||
// Try adding state doc mapping
|
||||
ActionListener<Boolean> resultsPutMappingHandler = ActionListener.wrap(
|
||||
response -> {
|
||||
addDocMappingIfMissing(AnomalyDetectorsIndex.jobStateIndexName(), ElasticsearchMappings::stateMapping,
|
||||
@ -541,9 +599,21 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
}, listener::onFailure
|
||||
);
|
||||
|
||||
// Step 1. Try adding results doc mapping
|
||||
addDocMappingIfMissing(AnomalyDetectorsIndex.jobResultsAliasedName(jobParams.getJobId()), ElasticsearchMappings::docMapping,
|
||||
state, resultsPutMappingHandler);
|
||||
// Get the job config
|
||||
jobConfigProvider.getJob(jobParams.getJobId(), ActionListener.wrap(
|
||||
builder -> {
|
||||
try {
|
||||
jobParams.setJob(builder.build());
|
||||
|
||||
// Try adding results doc mapping
|
||||
addDocMappingIfMissing(AnomalyDetectorsIndex.jobResultsAliasedName(jobParams.getJobId()),
|
||||
ElasticsearchMappings::resultsMapping, state, resultsPutMappingHandler);
|
||||
} catch (Exception e) {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
} else {
|
||||
listener.onFailure(LicenseUtils.newComplianceException(XPackField.MACHINE_LEARNING));
|
||||
}
|
||||
@ -582,34 +652,18 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
}
|
||||
|
||||
private void clearJobFinishedTime(String jobId, ActionListener<AcknowledgedResponse> listener) {
|
||||
clusterService.submitStateUpdateTask("clearing-job-finish-time-for-" + jobId, new ClusterStateUpdateTask() {
|
||||
@Override
|
||||
public ClusterState execute(ClusterState currentState) {
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(currentState);
|
||||
MlMetadata.Builder mlMetadataBuilder = new MlMetadata.Builder(mlMetadata);
|
||||
Job.Builder jobBuilder = new Job.Builder(mlMetadata.getJobs().get(jobId));
|
||||
jobBuilder.setFinishedTime(null);
|
||||
JobUpdate update = new JobUpdate.Builder(jobId).setClearFinishTime(true).build();
|
||||
|
||||
mlMetadataBuilder.putJob(jobBuilder.build(), true);
|
||||
ClusterState.Builder builder = ClusterState.builder(currentState);
|
||||
return builder.metaData(new MetaData.Builder(currentState.metaData())
|
||||
.putCustom(MlMetadata.TYPE, mlMetadataBuilder.build()))
|
||||
.build();
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(String source, Exception e) {
|
||||
logger.error("[" + jobId + "] Failed to clear finished_time; source [" + source + "]", e);
|
||||
listener.onResponse(new AcknowledgedResponse(true));
|
||||
}
|
||||
|
||||
@Override
|
||||
public void clusterStateProcessed(String source, ClusterState oldState,
|
||||
ClusterState newState) {
|
||||
listener.onResponse(new AcknowledgedResponse(true));
|
||||
}
|
||||
});
|
||||
jobConfigProvider.updateJob(jobId, update, null, ActionListener.wrap(
|
||||
job -> listener.onResponse(new AcknowledgedResponse(true)),
|
||||
e -> {
|
||||
logger.error("[" + jobId + "] Failed to clear finished_time", e);
|
||||
// Not a critical error so continue
|
||||
listener.onResponse(new AcknowledgedResponse(true));
|
||||
}
|
||||
));
|
||||
}
|
||||
|
||||
private void cancelJobStart(PersistentTasksCustomMetaData.PersistentTask<OpenJobAction.JobParams> persistentTask, Exception exception,
|
||||
ActionListener<AcknowledgedResponse> listener) {
|
||||
persistentTasksService.sendRemoveRequest(persistentTask.getId(),
|
||||
@ -678,6 +732,8 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
private static final Logger logger = LogManager.getLogger(OpenJobPersistentTasksExecutor.class);
|
||||
|
||||
private final AutodetectProcessManager autodetectProcessManager;
|
||||
private final MlMemoryTracker memoryTracker;
|
||||
private final Client client;
|
||||
|
||||
/**
|
||||
* The maximum number of open jobs can be different on each node. However, nodes on older versions
|
||||
@ -691,9 +747,12 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
private volatile int maxLazyMLNodes;
|
||||
|
||||
public OpenJobPersistentTasksExecutor(Settings settings, ClusterService clusterService,
|
||||
AutodetectProcessManager autodetectProcessManager) {
|
||||
super(OpenJobAction.TASK_NAME, MachineLearning.UTILITY_THREAD_POOL_NAME);
|
||||
AutodetectProcessManager autodetectProcessManager, MlMemoryTracker memoryTracker,
|
||||
Client client) {
|
||||
super(MlTasks.JOB_TASK_NAME, MachineLearning.UTILITY_THREAD_POOL_NAME);
|
||||
this.autodetectProcessManager = autodetectProcessManager;
|
||||
this.memoryTracker = memoryTracker;
|
||||
this.client = client;
|
||||
this.fallbackMaxNumberOfOpenJobs = AutodetectProcessManager.MAX_OPEN_JOBS_PER_NODE.get(settings);
|
||||
this.maxConcurrentJobAllocations = MachineLearning.CONCURRENT_JOB_ALLOCATIONS.get(settings);
|
||||
this.maxMachineMemoryPercent = MachineLearning.MAX_MACHINE_MEMORY_PERCENT.get(settings);
|
||||
@ -708,14 +767,16 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
@Override
|
||||
public PersistentTasksCustomMetaData.Assignment getAssignment(OpenJobAction.JobParams params, ClusterState clusterState) {
|
||||
PersistentTasksCustomMetaData.Assignment assignment = selectLeastLoadedMlNode(params.getJobId(),
|
||||
params.getJob(),
|
||||
clusterState,
|
||||
maxConcurrentJobAllocations,
|
||||
fallbackMaxNumberOfOpenJobs,
|
||||
maxMachineMemoryPercent,
|
||||
memoryTracker,
|
||||
logger);
|
||||
if (assignment.getExecutorNode() == null) {
|
||||
int numMlNodes = 0;
|
||||
for(DiscoveryNode node : clusterState.getNodes()) {
|
||||
for (DiscoveryNode node : clusterState.getNodes()) {
|
||||
if (Boolean.valueOf(node.getAttributes().get(MachineLearning.ML_ENABLED_NODE_ATTR))) {
|
||||
numMlNodes++;
|
||||
}
|
||||
@ -731,11 +792,10 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
@Override
|
||||
public void validate(OpenJobAction.JobParams params, ClusterState clusterState) {
|
||||
|
||||
TransportOpenJobAction.validate(params.getJobId(), MlMetadata.getMlMetadata(clusterState));
|
||||
TransportOpenJobAction.validate(params.getJobId(), params.getJob());
|
||||
|
||||
// If we already know that we can't find an ml node because all ml nodes are running at capacity or
|
||||
// simply because there are no ml nodes in the cluster then we fail quickly here:
|
||||
|
||||
PersistentTasksCustomMetaData.Assignment assignment = getAssignment(params, clusterState);
|
||||
if (assignment.getExecutorNode() == null && assignment.equals(AWAITING_LAZY_ASSIGNMENT) == false) {
|
||||
throw makeNoSuitableNodesException(logger, params.getJobId(), assignment.getExplanation());
|
||||
@ -754,9 +814,15 @@ public class TransportOpenJobAction extends TransportMasterNodeAction<OpenJobAct
|
||||
return;
|
||||
}
|
||||
|
||||
String jobId = jobTask.getJobId();
|
||||
autodetectProcessManager.openJob(jobTask, e2 -> {
|
||||
if (e2 == null) {
|
||||
task.markAsCompleted();
|
||||
FinalizeJobExecutionAction.Request finalizeRequest = new FinalizeJobExecutionAction.Request(new String[]{jobId});
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, FinalizeJobExecutionAction.INSTANCE, finalizeRequest,
|
||||
ActionListener.wrap(
|
||||
response -> task.markAsCompleted(),
|
||||
e -> logger.error("error finalizing job [" + jobId + "]", e)
|
||||
));
|
||||
} else {
|
||||
task.markAsFailed(e2);
|
||||
}
|
||||
|
@@ -25,6 +25,7 @@ import org.elasticsearch.xpack.core.ml.action.PostCalendarEventsAction;
import org.elasticsearch.xpack.core.ml.calendars.Calendar;
import org.elasticsearch.xpack.core.ml.calendars.ScheduledEvent;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;
import org.elasticsearch.xpack.ml.job.JobManager;
import org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider;

@@ -64,7 +65,7 @@ public class TransportPostCalendarEventsAction extends HandledTransportAction<Po
IndexRequest indexRequest = new IndexRequest(MlMetaIndex.INDEX_NAME, MlMetaIndex.TYPE);
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
indexRequest.source(event.toXContent(builder,
new ToXContent.MapParams(Collections.singletonMap(MlMetaIndex.INCLUDE_TYPE_KEY,
new ToXContent.MapParams(Collections.singletonMap(ToXContentParams.INCLUDE_TYPE,
"true"))));
} catch (IOException e) {
throw new IllegalStateException("Failed to serialise event", e);
@@ -78,8 +79,10 @@ public class TransportPostCalendarEventsAction extends HandledTransportAction<Po
new ActionListener<BulkResponse>() {
@Override
public void onResponse(BulkResponse response) {
jobManager.updateProcessOnCalendarChanged(calendar.getJobIds());
listener.onResponse(new PostCalendarEventsAction.Response(events));
jobManager.updateProcessOnCalendarChanged(calendar.getJobIds(), ActionListener.wrap(
r -> listener.onResponse(new PostCalendarEventsAction.Response(events)),
listener::onFailure
));
}

@Override

@ -9,21 +9,19 @@ import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.support.ActionFilters;
|
||||
import org.elasticsearch.action.support.HandledTransportAction;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.bytes.BytesArray;
|
||||
import org.elasticsearch.common.inject.Inject;
|
||||
import org.elasticsearch.tasks.Task;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.transport.TransportService;
|
||||
import org.elasticsearch.xpack.core.ClientHelper;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.action.PreviewDatafeedAction;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.ChunkingConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.extractor.DataExtractor;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.ml.datafeed.extractor.DataExtractorFactory;
|
||||
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
|
||||
|
||||
import java.io.BufferedReader;
|
||||
import java.io.InputStream;
|
||||
@ -38,51 +36,56 @@ public class TransportPreviewDatafeedAction extends HandledTransportAction<Previ
|
||||
|
||||
private final ThreadPool threadPool;
|
||||
private final Client client;
|
||||
private final ClusterService clusterService;
|
||||
private final JobConfigProvider jobConfigProvider;
|
||||
private final DatafeedConfigProvider datafeedConfigProvider;
|
||||
|
||||
@Inject
|
||||
public TransportPreviewDatafeedAction(ThreadPool threadPool, TransportService transportService, ActionFilters actionFilters,
|
||||
Client client, ClusterService clusterService) {
|
||||
public TransportPreviewDatafeedAction(ThreadPool threadPool, TransportService transportService,
|
||||
ActionFilters actionFilters, Client client, JobConfigProvider jobConfigProvider,
|
||||
                                          DatafeedConfigProvider datafeedConfigProvider) {
        super(PreviewDatafeedAction.NAME, transportService, actionFilters,
            (Supplier<PreviewDatafeedAction.Request>) PreviewDatafeedAction.Request::new);
        this.threadPool = threadPool;
        this.client = client;
        this.clusterService = clusterService;
        this.jobConfigProvider = jobConfigProvider;
        this.datafeedConfigProvider = datafeedConfigProvider;
    }

    @Override
    protected void doExecute(Task task, PreviewDatafeedAction.Request request, ActionListener<PreviewDatafeedAction.Response> listener) {
        MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterService.state());
        DatafeedConfig datafeed = mlMetadata.getDatafeed(request.getDatafeedId());
        if (datafeed == null) {
            throw ExceptionsHelper.missingDatafeedException(request.getDatafeedId());
        }
        Job job = mlMetadata.getJobs().get(datafeed.getJobId());
        if (job == null) {
            throw ExceptionsHelper.missingJobException(datafeed.getJobId());
        }

        DatafeedConfig.Builder previewDatafeed = buildPreviewDatafeed(datafeed);
        Map<String, String> headers = threadPool.getThreadContext().getHeaders().entrySet().stream()
                .filter(e -> ClientHelper.SECURITY_HEADER_FILTERS.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        previewDatafeed.setHeaders(headers);
        // NB: this is using the client from the transport layer, NOT the internal client.
        // This is important because it means the datafeed search will fail if the user
        // requesting the preview doesn't have permission to search the relevant indices.
        DataExtractorFactory.create(client, previewDatafeed.build(), job, new ActionListener<DataExtractorFactory>() {
            @Override
            public void onResponse(DataExtractorFactory dataExtractorFactory) {
                DataExtractor dataExtractor = dataExtractorFactory.newExtractor(0, Long.MAX_VALUE);
                threadPool.generic().execute(() -> previewDatafeed(dataExtractor, listener));
            }

            @Override
            public void onFailure(Exception e) {
                listener.onFailure(e);
            }
        });
        datafeedConfigProvider.getDatafeedConfig(request.getDatafeedId(), ActionListener.wrap(
                datafeedConfigBuilder -> {
                    DatafeedConfig datafeedConfig = datafeedConfigBuilder.build();
                    jobConfigProvider.getJob(datafeedConfig.getJobId(), ActionListener.wrap(
                            jobBuilder -> {
                                DatafeedConfig.Builder previewDatafeed = buildPreviewDatafeed(datafeedConfig);
                                Map<String, String> headers = threadPool.getThreadContext().getHeaders().entrySet().stream()
                                        .filter(e -> ClientHelper.SECURITY_HEADER_FILTERS.contains(e.getKey()))
                                        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
                                previewDatafeed.setHeaders(headers);
                                // NB: this is using the client from the transport layer, NOT the internal client.
                                // This is important because it means the datafeed search will fail if the user
                                // requesting the preview doesn't have permission to search the relevant indices.
                                DataExtractorFactory.create(client, previewDatafeed.build(), jobBuilder.build(),
                                        new ActionListener<DataExtractorFactory>() {
                                            @Override
                                            public void onResponse(DataExtractorFactory dataExtractorFactory) {
                                                DataExtractor dataExtractor = dataExtractorFactory.newExtractor(0, Long.MAX_VALUE);
                                                threadPool.generic().execute(() -> previewDatafeed(dataExtractor, listener));
                                            }

                                            @Override
                                            public void onFailure(Exception e) {
                                                listener.onFailure(e);
                                            }
                                        });
                            },
                            listener::onFailure
                    ));
                },
                listener::onFailure
        ));
    }

    /** Visible for testing */
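Both variants of doExecute above copy only the security-related headers from the thread context onto the preview datafeed before running the search as the calling user. A minimal, standalone sketch of that filtering idiom follows; plain Java collections stand in for ClientHelper.SECURITY_HEADER_FILTERS and the thread-context headers, and the header names used here are illustrative, not the actual X-Pack constants.

    import java.util.Map;
    import java.util.Set;
    import java.util.stream.Collectors;

    public class HeaderFilterSketch {

        // Stand-in for ClientHelper.SECURITY_HEADER_FILTERS (illustrative values only).
        private static final Set<String> SECURITY_HEADERS =
                Set.of("_xpack_security_authentication", "es-security-runas-user");

        // Keep only the headers on the allow-list, mirroring the stream/filter/toMap chain above.
        static Map<String, String> filterSecurityHeaders(Map<String, String> threadContextHeaders) {
            return threadContextHeaders.entrySet().stream()
                    .filter(e -> SECURITY_HEADERS.contains(e.getKey()))
                    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
        }

        public static void main(String[] args) {
            Map<String, String> headers = Map.of(
                    "_xpack_security_authentication", "user-token",
                    "X-Opaque-Id", "trace-123");
            // Only the security header survives; everything else is dropped before the preview search.
            System.out.println(filterSecurityHeaders(headers));
        }
    }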
@@ -25,6 +25,7 @@ import org.elasticsearch.xpack.core.ml.MlMetaIndex;
import org.elasticsearch.xpack.core.ml.action.PutCalendarAction;
import org.elasticsearch.xpack.core.ml.calendars.Calendar;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;

import java.io.IOException;
import java.util.Collections;
@@ -51,7 +52,7 @@ public class TransportPutCalendarAction extends HandledTransportAction<PutCalend
        IndexRequest indexRequest = new IndexRequest(MlMetaIndex.INDEX_NAME, MlMetaIndex.TYPE, calendar.documentId());
        try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
            indexRequest.source(calendar.toXContent(builder,
                    new ToXContent.MapParams(Collections.singletonMap(MlMetaIndex.INCLUDE_TYPE_KEY, "true"))));
                    new ToXContent.MapParams(Collections.singletonMap(ToXContentParams.INCLUDE_TYPE, "true"))));
        } catch (IOException e) {
            throw new IllegalStateException("Failed to serialise calendar with id [" + calendar.getId() + "]", e);
        }
@@ -5,21 +5,23 @@
 */
package org.elasticsearch.xpack.ml.action;

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.search.SearchAction;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.CheckedConsumer;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.json.JsonXContent;
import org.elasticsearch.index.IndexNotFoundException;
@@ -29,10 +31,10 @@ import org.elasticsearch.tasks.Task;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.XPackField;
import org.elasticsearch.xpack.core.XPackPlugin;
import org.elasticsearch.xpack.core.XPackSettings;
import org.elasticsearch.xpack.core.ml.MlMetadata;
import org.elasticsearch.xpack.core.ml.action.PutDatafeedAction;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
import org.elasticsearch.xpack.core.rollup.action.GetRollupIndexCapsAction;
import org.elasticsearch.xpack.core.rollup.action.RollupSearchAction;
@@ -42,8 +44,11 @@ import org.elasticsearch.xpack.core.security.action.user.HasPrivilegesRequest;
import org.elasticsearch.xpack.core.security.action.user.HasPrivilegesResponse;
import org.elasticsearch.xpack.core.security.authz.RoleDescriptor;
import org.elasticsearch.xpack.core.security.support.Exceptions;
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;

import java.io.IOException;
import java.util.Collections;
import java.util.Map;

import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
@@ -53,20 +58,24 @@ public class TransportPutDatafeedAction extends TransportMasterNodeAction<PutDat

    private final XPackLicenseState licenseState;
    private final Client client;

    private final SecurityContext securityContext;
    private final DatafeedConfigProvider datafeedConfigProvider;
    private final JobConfigProvider jobConfigProvider;

    @Inject
    public TransportPutDatafeedAction(Settings settings, TransportService transportService,
                                      ClusterService clusterService, ThreadPool threadPool, Client client,
                                      XPackLicenseState licenseState, ActionFilters actionFilters,
                                      IndexNameExpressionResolver indexNameExpressionResolver) {
                                      IndexNameExpressionResolver indexNameExpressionResolver,
                                      NamedXContentRegistry xContentRegistry) {
        super(PutDatafeedAction.NAME, transportService, clusterService, threadPool,
            actionFilters, indexNameExpressionResolver, PutDatafeedAction.Request::new);
            actionFilters, indexNameExpressionResolver, PutDatafeedAction.Request::new);
        this.licenseState = licenseState;
        this.client = client;
        this.securityContext = XPackSettings.SECURITY_ENABLED.get(settings) ?
            new SecurityContext(settings, threadPool.getThreadContext()) : null;
        this.datafeedConfigProvider = new DatafeedConfigProvider(client, xContentRegistry);
        this.jobConfigProvider = new JobConfigProvider(client);
    }

    @Override
@@ -155,34 +164,61 @@ public class TransportPutDatafeedAction extends TransportMasterNodeAction<PutDat
    private void putDatafeed(PutDatafeedAction.Request request, Map<String, String> headers,
                             ActionListener<PutDatafeedAction.Response> listener) {

        String datafeedId = request.getDatafeed().getId();
        String jobId = request.getDatafeed().getJobId();
        ElasticsearchException validationError = checkConfigsAreNotDefinedInClusterState(datafeedId, jobId);
        if (validationError != null) {
            listener.onFailure(validationError);
            return;
        }
        DatafeedConfig.validateAggregations(request.getDatafeed().getParsedAggregations());
        clusterService.submitStateUpdateTask(
                "put-datafeed-" + request.getDatafeed().getId(),
                new AckedClusterStateUpdateTask<PutDatafeedAction.Response>(request, listener) {

                    @Override
                    protected PutDatafeedAction.Response newResponse(boolean acknowledged) {
                        if (acknowledged) {
                            logger.info("Created datafeed [{}]", request.getDatafeed().getId());
                        }
                        return new PutDatafeedAction.Response(request.getDatafeed());
                    }
        CheckedConsumer<Boolean, Exception> validationOk = ok -> {
            datafeedConfigProvider.putDatafeedConfig(request.getDatafeed(), headers, ActionListener.wrap(
                    indexResponse -> listener.onResponse(new PutDatafeedAction.Response(request.getDatafeed())),
                    listener::onFailure
            ));
        };

                    @Override
                    public ClusterState execute(ClusterState currentState) {
                        return putDatafeed(request, headers, currentState);
                    }
                });
        CheckedConsumer<Boolean, Exception> jobOk = ok ->
            jobConfigProvider.validateDatafeedJob(request.getDatafeed(), ActionListener.wrap(validationOk, listener::onFailure));

        checkJobDoesNotHaveADatafeed(jobId, ActionListener.wrap(jobOk, listener::onFailure));
    }

    private ClusterState putDatafeed(PutDatafeedAction.Request request, Map<String, String> headers, ClusterState clusterState) {
        XPackPlugin.checkReadyForXPackCustomMetadata(clusterState);
        MlMetadata currentMetadata = MlMetadata.getMlMetadata(clusterState);
        MlMetadata newMetadata = new MlMetadata.Builder(currentMetadata)
                .putDatafeed(request.getDatafeed(), headers).build();
        return ClusterState.builder(clusterState).metaData(
                MetaData.builder(clusterState.getMetaData()).putCustom(MlMetadata.TYPE, newMetadata).build())
                .build();
    /**
     * Returns an exception if a datafeed with the same Id is defined in the
     * cluster state or the job is in the cluster state and already has a datafeed
     */
    @Nullable
    private ElasticsearchException checkConfigsAreNotDefinedInClusterState(String datafeedId, String jobId) {
        ClusterState clusterState = clusterService.state();
        MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);

        if (mlMetadata.getDatafeed(datafeedId) != null) {
            return ExceptionsHelper.datafeedAlreadyExists(datafeedId);
        }

        if (mlMetadata.getDatafeedByJobId(jobId).isPresent()) {
            return ExceptionsHelper.conflictStatusException("Cannot create datafeed [" + datafeedId + "] as a " +
                    "job [" + jobId + "] defined in the cluster state references a datafeed with the same Id");
        }

        return null;
    }

    private void checkJobDoesNotHaveADatafeed(String jobId, ActionListener<Boolean> listener) {
        datafeedConfigProvider.findDatafeedsForJobIds(Collections.singletonList(jobId), ActionListener.wrap(
                datafeedIds -> {
                    if (datafeedIds.isEmpty()) {
                        listener.onResponse(Boolean.TRUE);
                    } else {
                        listener.onFailure(ExceptionsHelper.conflictStatusException("A datafeed [" + datafeedIds.iterator().next()
                                + "] already exists for job [" + jobId + "]"));
                    }
                },
                listener::onFailure
        ));
    }

    @Override
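The new putDatafeed path above replaces the single cluster-state update with a chain of asynchronous steps: check that the job has no datafeed, validate the datafeed against the job, then write the config document, with every failure funnelled to the same listener. A minimal sketch of that chaining pattern follows; it uses a home-grown Listener type and an in-memory set as a stand-in for the config-index lookup, not the actual Elasticsearch ActionListener or DatafeedConfigProvider APIs.

    import java.util.Set;
    import java.util.function.Consumer;

    public class AsyncChainSketch {

        // A tiny callback interface standing in for ActionListener<T>.
        interface Listener<T> {
            void onResponse(T value);
            void onFailure(Exception e);
        }

        // Hypothetical config store: which job IDs already have a datafeed.
        private final Set<String> jobsWithDatafeed = Set.of("job-with-feed");

        void checkJobDoesNotHaveADatafeed(String jobId, Listener<Boolean> listener) {
            if (jobsWithDatafeed.contains(jobId)) {
                listener.onFailure(new IllegalStateException("A datafeed already exists for job [" + jobId + "]"));
            } else {
                listener.onResponse(Boolean.TRUE);
            }
        }

        void putDatafeed(String datafeedId, String jobId, Consumer<String> onCreated, Consumer<Exception> onError) {
            // Step 1: conflict check; only on success do we move on to the next step.
            checkJobDoesNotHaveADatafeed(jobId, new Listener<Boolean>() {
                @Override
                public void onResponse(Boolean ok) {
                    // Step 2 would validate the datafeed and index the config document here.
                    onCreated.accept(datafeedId);
                }

                @Override
                public void onFailure(Exception e) {
                    onError.accept(e);
                }
            });
        }

        public static void main(String[] args) {
            new AsyncChainSketch().putDatafeed("feed-1", "job-1",
                    id -> System.out.println("created " + id),
                    e -> System.err.println("failed: " + e.getMessage()));
        }
    }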
@@ -26,6 +26,7 @@ import org.elasticsearch.xpack.core.ml.MlMetaIndex;
import org.elasticsearch.xpack.core.ml.action.PutFilterAction;
import org.elasticsearch.xpack.core.ml.job.config.MlFilter;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;

import java.io.IOException;
import java.util.Collections;
@@ -51,7 +52,7 @@ public class TransportPutFilterAction extends HandledTransportAction<PutFilterAc
        indexRequest.opType(DocWriteRequest.OpType.CREATE);
        indexRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
        try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
            ToXContent.MapParams params = new ToXContent.MapParams(Collections.singletonMap(MlMetaIndex.INCLUDE_TYPE_KEY, "true"));
            ToXContent.MapParams params = new ToXContent.MapParams(Collections.singletonMap(ToXContentParams.INCLUDE_TYPE, "true"));
            indexRequest.source(filter.toXContent(builder, params));
        } catch (IOException e) {
            throw new IllegalStateException("Failed to serialise filter with id [" + filter.getId() + "]", e);
@@ -16,16 +16,17 @@ import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.ml.MlTasks;
import org.elasticsearch.xpack.core.ml.action.RevertModelSnapshotAction;
import org.elasticsearch.xpack.core.ml.job.config.Job;
import org.elasticsearch.xpack.core.ml.job.config.JobState;
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSnapshot;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.ml.MlConfigMigrationEligibilityCheck;
import org.elasticsearch.xpack.ml.job.JobManager;
import org.elasticsearch.xpack.ml.job.persistence.JobDataCountsPersister;
import org.elasticsearch.xpack.ml.job.persistence.JobDataDeleter;
@@ -41,9 +42,10 @@ public class TransportRevertModelSnapshotAction extends TransportMasterNodeActio
    private final JobManager jobManager;
    private final JobResultsProvider jobResultsProvider;
    private final JobDataCountsPersister jobDataCountsPersister;
    private final MlConfigMigrationEligibilityCheck migrationEligibilityCheck;

    @Inject
    public TransportRevertModelSnapshotAction(ThreadPool threadPool, TransportService transportService,
    public TransportRevertModelSnapshotAction(Settings settings, ThreadPool threadPool, TransportService transportService,
                                              ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
                                              JobManager jobManager, JobResultsProvider jobResultsProvider,
                                              ClusterService clusterService, Client client, JobDataCountsPersister jobDataCountsPersister) {
@@ -53,6 +55,7 @@ public class TransportRevertModelSnapshotAction extends TransportMasterNodeActio
        this.jobManager = jobManager;
        this.jobResultsProvider = jobResultsProvider;
        this.jobDataCountsPersister = jobDataCountsPersister;
        this.migrationEligibilityCheck = new MlConfigMigrationEligibilityCheck(settings, clusterService);
    }

    @Override
@@ -68,25 +71,34 @@ public class TransportRevertModelSnapshotAction extends TransportMasterNodeActio
    @Override
    protected void masterOperation(RevertModelSnapshotAction.Request request, ClusterState state,
                                   ActionListener<RevertModelSnapshotAction.Response> listener) {
        if (migrationEligibilityCheck.jobIsEligibleForMigration(request.getJobId(), state)) {
            listener.onFailure(ExceptionsHelper.configHasNotBeenMigrated("revert model snapshot", request.getJobId()));
            return;
        }

        logger.debug("Received request to revert to snapshot id '{}' for job '{}', deleting intervening results: {}",
                request.getSnapshotId(), request.getJobId(), request.getDeleteInterveningResults());

        Job job = JobManager.getJobOrThrowIfUnknown(request.getJobId(), state);
        PersistentTasksCustomMetaData tasks = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
        JobState jobState = MlTasks.getJobState(job.getId(), tasks);
        jobManager.jobExists(request.getJobId(), ActionListener.wrap(
                exists -> {
                    PersistentTasksCustomMetaData tasks = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
                    JobState jobState = MlTasks.getJobState(request.getJobId(), tasks);

        if (jobState.equals(JobState.CLOSED) == false) {
            throw ExceptionsHelper.conflictStatusException(Messages.getMessage(Messages.REST_JOB_NOT_CLOSED_REVERT));
        }
                    if (jobState.equals(JobState.CLOSED) == false) {
                        throw ExceptionsHelper.conflictStatusException(Messages.getMessage(Messages.REST_JOB_NOT_CLOSED_REVERT));
                    }

        getModelSnapshot(request, jobResultsProvider, modelSnapshot -> {
            ActionListener<RevertModelSnapshotAction.Response> wrappedListener = listener;
            if (request.getDeleteInterveningResults()) {
                wrappedListener = wrapDeleteOldDataListener(wrappedListener, modelSnapshot, request.getJobId());
                wrappedListener = wrapRevertDataCountsListener(wrappedListener, modelSnapshot, request.getJobId());
            }
            jobManager.revertSnapshot(request, wrappedListener, modelSnapshot);
        }, listener::onFailure);
                    getModelSnapshot(request, jobResultsProvider, modelSnapshot -> {
                        ActionListener<RevertModelSnapshotAction.Response> wrappedListener = listener;
                        if (request.getDeleteInterveningResults()) {
                            wrappedListener = wrapDeleteOldDataListener(wrappedListener, modelSnapshot, request.getJobId());
                            wrappedListener = wrapRevertDataCountsListener(wrappedListener, modelSnapshot, request.getJobId());
                        }
                        jobManager.revertSnapshot(request, wrappedListener, modelSnapshot);
                    }, listener::onFailure);
                },
                listener::onFailure
        ));
    }

    private void getModelSnapshot(RevertModelSnapshotAction.Request request, JobResultsProvider provider, Consumer<ModelSnapshot> handler,
@@ -5,7 +5,6 @@
 */
package org.elasticsearch.xpack.ml.action;

import org.elasticsearch.common.Strings;
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.ElasticsearchStatusException;
import org.elasticsearch.ResourceAlreadyExistsException;
@@ -20,7 +19,9 @@ import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.common.Strings;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.license.LicenseUtils;
import org.elasticsearch.license.RemoteClusterLicenseChecker;
@@ -35,7 +36,6 @@ import org.elasticsearch.tasks.TaskId;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.XPackField;
import org.elasticsearch.xpack.core.ml.MlMetadata;
import org.elasticsearch.xpack.core.ml.MlTasks;
import org.elasticsearch.xpack.core.ml.action.StartDatafeedAction;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
@@ -45,15 +45,20 @@ import org.elasticsearch.xpack.core.ml.job.config.Job;
import org.elasticsearch.xpack.core.ml.job.config.JobState;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.ml.MlConfigMigrationEligibilityCheck;
import org.elasticsearch.xpack.ml.datafeed.DatafeedManager;
import org.elasticsearch.xpack.ml.datafeed.DatafeedNodeSelector;
import org.elasticsearch.xpack.ml.datafeed.extractor.DataExtractorFactory;
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
import org.elasticsearch.xpack.ml.notifications.Auditor;

import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;
import java.util.function.Predicate;

/* This class extends from TransportMasterNodeAction for cluster state observing purposes.
@@ -69,37 +74,36 @@ public class TransportStartDatafeedAction extends TransportMasterNodeAction<Star
    private final Client client;
    private final XPackLicenseState licenseState;
    private final PersistentTasksService persistentTasksService;
    private final JobConfigProvider jobConfigProvider;
    private final DatafeedConfigProvider datafeedConfigProvider;
    private final Auditor auditor;
    private final MlConfigMigrationEligibilityCheck migrationEligibilityCheck;

    @Inject
    public TransportStartDatafeedAction(TransportService transportService, ThreadPool threadPool,
    public TransportStartDatafeedAction(Settings settings, TransportService transportService, ThreadPool threadPool,
                                        ClusterService clusterService, XPackLicenseState licenseState,
                                        PersistentTasksService persistentTasksService,
                                        ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver,
                                        Client client, Auditor auditor) {
                                        Client client, JobConfigProvider jobConfigProvider, DatafeedConfigProvider datafeedConfigProvider,
                                        Auditor auditor) {
        super(StartDatafeedAction.NAME, transportService, clusterService, threadPool, actionFilters, indexNameExpressionResolver,
                StartDatafeedAction.Request::new);
        this.licenseState = licenseState;
        this.persistentTasksService = persistentTasksService;
        this.client = client;
        this.jobConfigProvider = jobConfigProvider;
        this.datafeedConfigProvider = datafeedConfigProvider;
        this.auditor = auditor;
        this.migrationEligibilityCheck = new MlConfigMigrationEligibilityCheck(settings, clusterService);
    }

    static void validate(String datafeedId, MlMetadata mlMetadata, PersistentTasksCustomMetaData tasks) {
        DatafeedConfig datafeed = (mlMetadata == null) ? null : mlMetadata.getDatafeed(datafeedId);
        if (datafeed == null) {
            throw ExceptionsHelper.missingDatafeedException(datafeedId);
        }
        Job job = mlMetadata.getJobs().get(datafeed.getJobId());
        if (job == null) {
            throw ExceptionsHelper.missingJobException(datafeed.getJobId());
        }
        DatafeedJobValidator.validate(datafeed, job);
        DatafeedConfig.validateAggregations(datafeed.getParsedAggregations());
        JobState jobState = MlTasks.getJobState(datafeed.getJobId(), tasks);
    static void validate(Job job, DatafeedConfig datafeedConfig, PersistentTasksCustomMetaData tasks) {
        DatafeedJobValidator.validate(datafeedConfig, job);
        DatafeedConfig.validateAggregations(datafeedConfig.getParsedAggregations());
        JobState jobState = MlTasks.getJobState(datafeedConfig.getJobId(), tasks);
        if (jobState.isAnyOf(JobState.OPENING, JobState.OPENED) == false) {
            throw ExceptionsHelper.conflictStatusException("cannot start datafeed [" + datafeedId + "] because job [" + job.getId() +
                    "] is " + jobState);
            throw ExceptionsHelper.conflictStatusException("cannot start datafeed [" + datafeedConfig.getId() +
                    "] because job [" + job.getId() + "] is " + jobState);
        }
    }

@@ -132,59 +136,94 @@ public class TransportStartDatafeedAction extends TransportMasterNodeAction<Star
    protected void masterOperation(StartDatafeedAction.Request request, ClusterState state,
                                   ActionListener<AcknowledgedResponse> listener) {
        StartDatafeedAction.DatafeedParams params = request.getParams();
        if (licenseState.isMachineLearningAllowed()) {

            ActionListener<PersistentTasksCustomMetaData.PersistentTask<StartDatafeedAction.DatafeedParams>> waitForTaskListener =
                    new ActionListener<PersistentTasksCustomMetaData.PersistentTask<StartDatafeedAction.DatafeedParams>>() {
                        @Override
                        public void onResponse(PersistentTasksCustomMetaData.PersistentTask<StartDatafeedAction.DatafeedParams>
                                                       persistentTask) {
                            waitForDatafeedStarted(persistentTask.getId(), params, listener);
                        }

                        @Override
                        public void onFailure(Exception e) {
                            if (e instanceof ResourceAlreadyExistsException) {
                                logger.debug("datafeed already started", e);
                                e = new ElasticsearchStatusException("cannot start datafeed [" + params.getDatafeedId() +
                                        "] because it has already been started", RestStatus.CONFLICT);
                            }
                            listener.onFailure(e);
                        }
                    };

            // Verify data extractor factory can be created, then start persistent task
            MlMetadata mlMetadata = MlMetadata.getMlMetadata(state);
            PersistentTasksCustomMetaData tasks = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
            validate(params.getDatafeedId(), mlMetadata, tasks);
            DatafeedConfig datafeed = mlMetadata.getDatafeed(params.getDatafeedId());
            Job job = mlMetadata.getJobs().get(datafeed.getJobId());

            auditDeprecations(datafeed, job, auditor);

            if (RemoteClusterLicenseChecker.containsRemoteIndex(datafeed.getIndices())) {
                final RemoteClusterLicenseChecker remoteClusterLicenseChecker =
                        new RemoteClusterLicenseChecker(client, XPackLicenseState::isMachineLearningAllowedForOperationMode);
                remoteClusterLicenseChecker.checkRemoteClusterLicenses(
                        RemoteClusterLicenseChecker.remoteClusterAliases(datafeed.getIndices()),
                        ActionListener.wrap(
                                response -> {
                                    if (response.isSuccess() == false) {
                                        listener.onFailure(createUnlicensedError(datafeed.getId(), response));
                                    } else {
                                        createDataExtractor(job, datafeed, params, waitForTaskListener);
                                    }
                                },
                                e -> listener.onFailure(
                                        createUnknownLicenseError(
                                                datafeed.getId(), RemoteClusterLicenseChecker.remoteIndices(datafeed.getIndices()), e))
                        ));
            } else {
                createDataExtractor(job, datafeed, params, waitForTaskListener);
            }
        } else {
        if (licenseState.isMachineLearningAllowed() == false) {
            listener.onFailure(LicenseUtils.newComplianceException(XPackField.MACHINE_LEARNING));
            return;
        }

        if (migrationEligibilityCheck.datafeedIsEligibleForMigration(request.getParams().getDatafeedId(), state)) {
            listener.onFailure(ExceptionsHelper.configHasNotBeenMigrated("start datafeed", request.getParams().getDatafeedId()));
            return;
        }

        AtomicReference<DatafeedConfig> datafeedConfigHolder = new AtomicReference<>();
        PersistentTasksCustomMetaData tasks = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);

        ActionListener<PersistentTasksCustomMetaData.PersistentTask<StartDatafeedAction.DatafeedParams>> waitForTaskListener =
                new ActionListener<PersistentTasksCustomMetaData.PersistentTask<StartDatafeedAction.DatafeedParams>>() {
                    @Override
                    public void onResponse(PersistentTasksCustomMetaData.PersistentTask<StartDatafeedAction.DatafeedParams>
                                                   persistentTask) {
                        waitForDatafeedStarted(persistentTask.getId(), params, listener);
                    }

                    @Override
                    public void onFailure(Exception e) {
                        if (e instanceof ResourceAlreadyExistsException) {
                            logger.debug("datafeed already started", e);
                            e = new ElasticsearchStatusException("cannot start datafeed [" + params.getDatafeedId() +
                                    "] because it has already been started", RestStatus.CONFLICT);
                        }
                        listener.onFailure(e);
                    }
                };

        // Verify data extractor factory can be created, then start persistent task
        Consumer<Job> createDataExtrator = job -> {
            if (RemoteClusterLicenseChecker.containsRemoteIndex(params.getDatafeedIndices())) {
                final RemoteClusterLicenseChecker remoteClusterLicenseChecker =
                        new RemoteClusterLicenseChecker(client, XPackLicenseState::isMachineLearningAllowedForOperationMode);
                remoteClusterLicenseChecker.checkRemoteClusterLicenses(
                        RemoteClusterLicenseChecker.remoteClusterAliases(params.getDatafeedIndices()),
                        ActionListener.wrap(
                                response -> {
                                    if (response.isSuccess() == false) {
                                        listener.onFailure(createUnlicensedError(params.getDatafeedId(), response));
                                    } else {
                                        createDataExtractor(job, datafeedConfigHolder.get(), params, waitForTaskListener);
                                    }
                                },
                                e -> listener.onFailure(
                                        createUnknownLicenseError(
                                                params.getDatafeedId(),
                                                RemoteClusterLicenseChecker.remoteIndices(params.getDatafeedIndices()), e))
                        )
                );
            } else {
                createDataExtractor(job, datafeedConfigHolder.get(), params, waitForTaskListener);
            }
        };

        ActionListener<Job.Builder> jobListener = ActionListener.wrap(
                jobBuilder -> {
                    try {
                        Job job = jobBuilder.build();
                        validate(job, datafeedConfigHolder.get(), tasks);
                        auditDeprecations(datafeedConfigHolder.get(), job, auditor);
                        createDataExtrator.accept(job);
                    } catch (Exception e) {
                        listener.onFailure(e);
                    }
                },
                listener::onFailure
        );

        ActionListener<DatafeedConfig.Builder> datafeedListener = ActionListener.wrap(
                datafeedBuilder -> {
                    try {
                        DatafeedConfig datafeedConfig = datafeedBuilder.build();
                        params.setDatafeedIndices(datafeedConfig.getIndices());
                        params.setJobId(datafeedConfig.getJobId());
                        datafeedConfigHolder.set(datafeedConfig);
                        jobConfigProvider.getJob(datafeedConfig.getJobId(), jobListener);
                    } catch (Exception e) {
                        listener.onFailure(e);
                    }
                },
                listener::onFailure
        );

        datafeedConfigProvider.getDatafeedConfig(params.getDatafeedId(), datafeedListener);
    }

    private void createDataExtractor(Job job, DatafeedConfig datafeed, StartDatafeedAction.DatafeedParams params,
@@ -193,7 +232,7 @@ public class TransportStartDatafeedAction extends TransportMasterNodeAction<Star
        DataExtractorFactory.create(client, datafeed, job, ActionListener.wrap(
                dataExtractorFactory ->
                        persistentTasksService.sendStartRequest(MlTasks.datafeedTaskId(params.getDatafeedId()),
                                StartDatafeedAction.TASK_NAME, params, listener)
                                MlTasks.DATAFEED_TASK_NAME, params, listener)
                , listener::onFailure));
    }

@@ -292,7 +331,7 @@ public class TransportStartDatafeedAction extends TransportMasterNodeAction<Star
        private final IndexNameExpressionResolver resolver;

        public StartDatafeedPersistentTasksExecutor(DatafeedManager datafeedManager) {
            super(StartDatafeedAction.TASK_NAME, MachineLearning.UTILITY_THREAD_POOL_NAME);
            super(MlTasks.DATAFEED_TASK_NAME, MachineLearning.UTILITY_THREAD_POOL_NAME);
            this.datafeedManager = datafeedManager;
            this.resolver = new IndexNameExpressionResolver();
        }
@@ -300,14 +339,14 @@ public class TransportStartDatafeedAction extends TransportMasterNodeAction<Star
        @Override
        public PersistentTasksCustomMetaData.Assignment getAssignment(StartDatafeedAction.DatafeedParams params,
                                                                      ClusterState clusterState) {
            return new DatafeedNodeSelector(clusterState, resolver, params.getDatafeedId()).selectNode();
            return new DatafeedNodeSelector(clusterState, resolver, params.getDatafeedId(), params.getJobId(),
                    params.getDatafeedIndices()).selectNode();
        }

        @Override
        public void validate(StartDatafeedAction.DatafeedParams params, ClusterState clusterState) {
            PersistentTasksCustomMetaData tasks = clusterState.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
            TransportStartDatafeedAction.validate(params.getDatafeedId(), MlMetadata.getMlMetadata(clusterState), tasks);
            new DatafeedNodeSelector(clusterState, resolver, params.getDatafeedId()).checkDatafeedTaskCanBeCreated();
            new DatafeedNodeSelector(clusterState, resolver, params.getDatafeedId(), params.getJobId(), params.getDatafeedIndices())
                    .checkDatafeedTaskCanBeCreated();
        }

        @Override
@@ -25,14 +25,12 @@ import org.elasticsearch.persistent.PersistentTasksService;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.ml.MlMetadata;
import org.elasticsearch.xpack.core.ml.MlTasks;
import org.elasticsearch.xpack.core.ml.action.StopDatafeedAction;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedState;
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.ml.MachineLearning;
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;

import java.util.ArrayList;
import java.util.HashSet;
@@ -47,34 +45,35 @@ public class TransportStopDatafeedAction extends TransportTasksAction<TransportS

    private final ThreadPool threadPool;
    private final PersistentTasksService persistentTasksService;
    private final DatafeedConfigProvider datafeedConfigProvider;

    @Inject
    public TransportStopDatafeedAction(TransportService transportService, ThreadPool threadPool, ActionFilters actionFilters,
                                       ClusterService clusterService, PersistentTasksService persistentTasksService) {
                                       ClusterService clusterService, PersistentTasksService persistentTasksService,
                                       DatafeedConfigProvider datafeedConfigProvider) {
        super(StopDatafeedAction.NAME, clusterService, transportService, actionFilters, StopDatafeedAction.Request::new,
                StopDatafeedAction.Response::new, StopDatafeedAction.Response::new, MachineLearning.UTILITY_THREAD_POOL_NAME);
        this.threadPool = threadPool;
        this.persistentTasksService = persistentTasksService;
        this.datafeedConfigProvider = datafeedConfigProvider;
    }

    /**
     * Resolve the requested datafeeds and add their IDs to one of the list
     * arguments depending on datafeed state.
     * Sort the datafeed IDs by their task state and add to one
     * of the list arguments depending on the state.
     *
     * @param request The stop datafeed request
     * @param mlMetadata ML Metadata
     * @param expandedDatafeedIds The expanded set of IDs
     * @param tasks Persistent task meta data
     * @param startedDatafeedIds Started datafeed ids are added to this list
     * @param stoppingDatafeedIds Stopping datafeed ids are added to this list
     */
    static void resolveDataFeedIds(StopDatafeedAction.Request request, MlMetadata mlMetadata,
                                   PersistentTasksCustomMetaData tasks,
                                   List<String> startedDatafeedIds,
                                   List<String> stoppingDatafeedIds) {
    static void sortDatafeedIdsByTaskState(Set<String> expandedDatafeedIds,
                                           PersistentTasksCustomMetaData tasks,
                                           List<String> startedDatafeedIds,
                                           List<String> stoppingDatafeedIds) {

        Set<String> expandedDatafeedIds = mlMetadata.expandDatafeedIds(request.getDatafeedId(), request.allowNoDatafeeds());
        for (String expandedDatafeedId : expandedDatafeedIds) {
            validateDatafeedTask(expandedDatafeedId, mlMetadata);
            addDatafeedTaskIdAccordingToState(expandedDatafeedId, MlTasks.getDatafeedState(expandedDatafeedId, tasks),
                    startedDatafeedIds, stoppingDatafeedIds);
        }
@@ -98,20 +97,6 @@ public class TransportStopDatafeedAction extends TransportTasksAction<TransportS
        }
    }

    /**
     * Validate the stop request.
     * Throws a {@code ResourceNotFoundException} if there is no datafeed
     * with id {@code datafeedId}
     * @param datafeedId The datafeed Id
     * @param mlMetadata ML meta data
     */
    static void validateDatafeedTask(String datafeedId, MlMetadata mlMetadata) {
        DatafeedConfig datafeed = mlMetadata.getDatafeed(datafeedId);
        if (datafeed == null) {
            throw new ResourceNotFoundException(Messages.getMessage(Messages.DATAFEED_NOT_FOUND, datafeedId));
        }
    }

    @Override
    protected void doExecute(Task task, StopDatafeedAction.Request request, ActionListener<StopDatafeedAction.Response> listener) {
        final ClusterState state = clusterService.state();
@@ -126,23 +111,27 @@ public class TransportStopDatafeedAction extends TransportTasksAction<TransportS
                    new ActionListenerResponseHandler<>(listener, StopDatafeedAction.Response::new));
            }
        } else {
            MlMetadata mlMetadata = MlMetadata.getMlMetadata(state);
            PersistentTasksCustomMetaData tasks = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
            datafeedConfigProvider.expandDatafeedIds(request.getDatafeedId(), request.allowNoDatafeeds(), ActionListener.wrap(
                    expandedIds -> {
                        PersistentTasksCustomMetaData tasks = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);

            List<String> startedDatafeeds = new ArrayList<>();
            List<String> stoppingDatafeeds = new ArrayList<>();
            resolveDataFeedIds(request, mlMetadata, tasks, startedDatafeeds, stoppingDatafeeds);
            if (startedDatafeeds.isEmpty() && stoppingDatafeeds.isEmpty()) {
                listener.onResponse(new StopDatafeedAction.Response(true));
                return;
            }
            request.setResolvedStartedDatafeedIds(startedDatafeeds.toArray(new String[startedDatafeeds.size()]));
                        List<String> startedDatafeeds = new ArrayList<>();
                        List<String> stoppingDatafeeds = new ArrayList<>();
                        sortDatafeedIdsByTaskState(expandedIds, tasks, startedDatafeeds, stoppingDatafeeds);
                        if (startedDatafeeds.isEmpty() && stoppingDatafeeds.isEmpty()) {
                            listener.onResponse(new StopDatafeedAction.Response(true));
                            return;
                        }
                        request.setResolvedStartedDatafeedIds(startedDatafeeds.toArray(new String[startedDatafeeds.size()]));

            if (request.isForce()) {
                forceStopDatafeed(request, listener, tasks, startedDatafeeds);
            } else {
                normalStopDatafeed(task, request, listener, tasks, startedDatafeeds, stoppingDatafeeds);
            }
                        if (request.isForce()) {
                            forceStopDatafeed(request, listener, tasks, startedDatafeeds);
                        } else {
                            normalStopDatafeed(task, request, listener, tasks, startedDatafeeds, stoppingDatafeeds);
                        }
                    },
                    listener::onFailure
            ));
        }
    }
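sortDatafeedIdsByTaskState above splits the expanded datafeed IDs into "started" and "stopping" buckets by looking up each ID's persistent task state. A standalone sketch of the same bucketing follows; a plain map of states stands in for PersistentTasksCustomMetaData and MlTasks.getDatafeedState, so the types here are illustrative only.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class DatafeedStateBucketsSketch {

        enum DatafeedState { STARTED, STOPPING, STOPPED }

        // Add each datafeed ID to the list matching its task state; STOPPED feeds need no action.
        static void sortDatafeedIdsByTaskState(Set<String> expandedDatafeedIds,
                                               Map<String, DatafeedState> taskStates,
                                               List<String> startedDatafeedIds,
                                               List<String> stoppingDatafeedIds) {
            for (String id : expandedDatafeedIds) {
                DatafeedState state = taskStates.getOrDefault(id, DatafeedState.STOPPED);
                if (state == DatafeedState.STARTED) {
                    startedDatafeedIds.add(id);
                } else if (state == DatafeedState.STOPPING) {
                    stoppingDatafeedIds.add(id);
                }
            }
        }

        public static void main(String[] args) {
            List<String> started = new ArrayList<>();
            List<String> stopping = new ArrayList<>();
            sortDatafeedIdsByTaskState(Set.of("feed-a", "feed-b", "feed-c"),
                    Map.of("feed-a", DatafeedState.STARTED, "feed-b", DatafeedState.STOPPING),
                    started, stopping);
            System.out.println("started=" + started + " stopping=" + stopping);
        }
    }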
@@ -39,8 +39,10 @@ public class TransportUpdateCalendarJobAction extends HandledTransportAction<Upd

        jobResultsProvider.updateCalendar(request.getCalendarId(), jobIdsToAdd, jobIdsToRemove,
                c -> {
                    jobManager.updateProcessOnCalendarChanged(c.getJobIds());
                    listener.onResponse(new PutCalendarAction.Response(c));
                    jobManager.updateProcessOnCalendarChanged(c.getJobIds(), ActionListener.wrap(
                            r -> listener.onResponse(new PutCalendarAction.Response(c)),
                            listener::onFailure
                    ));
                }, listener::onFailure);
    }
}
@@ -8,33 +8,49 @@ package org.elasticsearch.xpack.ml.action;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.CheckedConsumer;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;
import org.elasticsearch.xpack.core.ml.MlMetadata;
import org.elasticsearch.xpack.core.ml.MlTasks;
import org.elasticsearch.xpack.core.ml.action.PutDatafeedAction;
import org.elasticsearch.xpack.core.ml.action.UpdateDatafeedAction;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedUpdate;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedState;
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.ml.MlConfigMigrationEligibilityCheck;
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;

import java.util.Collections;
import java.util.Map;

public class TransportUpdateDatafeedAction extends TransportMasterNodeAction<UpdateDatafeedAction.Request, PutDatafeedAction.Response> {

    private final DatafeedConfigProvider datafeedConfigProvider;
    private final JobConfigProvider jobConfigProvider;
    private final MlConfigMigrationEligibilityCheck migrationEligibilityCheck;

    @Inject
    public TransportUpdateDatafeedAction(TransportService transportService, ClusterService clusterService,
    public TransportUpdateDatafeedAction(Settings settings, TransportService transportService, ClusterService clusterService,
                                         ThreadPool threadPool, ActionFilters actionFilters,
                                         IndexNameExpressionResolver indexNameExpressionResolver) {
                                         IndexNameExpressionResolver indexNameExpressionResolver,
                                         Client client, NamedXContentRegistry xContentRegistry) {
        super(UpdateDatafeedAction.NAME, transportService, clusterService, threadPool, actionFilters,
                indexNameExpressionResolver, UpdateDatafeedAction.Request::new);

        datafeedConfigProvider = new DatafeedConfigProvider(client, xContentRegistry);
        jobConfigProvider = new JobConfigProvider(client);
        migrationEligibilityCheck = new MlConfigMigrationEligibilityCheck(settings, clusterService);
    }

    @Override
@@ -49,34 +65,66 @@ public class TransportUpdateDatafeedAction extends TransportMasterNodeAction<Upd

    @Override
    protected void masterOperation(UpdateDatafeedAction.Request request, ClusterState state,
                                   ActionListener<PutDatafeedAction.Response> listener) {
                                   ActionListener<PutDatafeedAction.Response> listener) throws Exception {

        if (migrationEligibilityCheck.datafeedIsEligibleForMigration(request.getUpdate().getId(), state)) {
            listener.onFailure(ExceptionsHelper.configHasNotBeenMigrated("update datafeed", request.getUpdate().getId()));
            return;
        }

        final Map<String, String> headers = threadPool.getThreadContext().getHeaders();

        clusterService.submitStateUpdateTask("update-datafeed-" + request.getUpdate().getId(),
                new AckedClusterStateUpdateTask<PutDatafeedAction.Response>(request, listener) {
                    private volatile DatafeedConfig updatedDatafeed;
        // Check datafeed is stopped
        PersistentTasksCustomMetaData tasks = state.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
        if (MlTasks.getDatafeedTask(request.getUpdate().getId(), tasks) != null) {
            listener.onFailure(ExceptionsHelper.conflictStatusException(
                    Messages.getMessage(Messages.DATAFEED_CANNOT_UPDATE_IN_CURRENT_STATE,
                            request.getUpdate().getId(), DatafeedState.STARTED)));
            return;
        }

                    @Override
                    protected PutDatafeedAction.Response newResponse(boolean acknowledged) {
                        if (acknowledged) {
                            logger.info("Updated datafeed [{}]", request.getUpdate().getId());
                        }
                        return new PutDatafeedAction.Response(updatedDatafeed);
                    }
        String datafeedId = request.getUpdate().getId();

                    @Override
                    public ClusterState execute(ClusterState currentState) {
                        DatafeedUpdate update = request.getUpdate();
                        MlMetadata currentMetadata = MlMetadata.getMlMetadata(currentState);
                        PersistentTasksCustomMetaData persistentTasks =
                                currentState.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
                        MlMetadata newMetadata = new MlMetadata.Builder(currentMetadata)
                                .updateDatafeed(update, persistentTasks, headers).build();
                        updatedDatafeed = newMetadata.getDatafeed(update.getId());
                        return ClusterState.builder(currentState).metaData(
                                MetaData.builder(currentState.getMetaData()).putCustom(MlMetadata.TYPE, newMetadata).build()).build();
        CheckedConsumer<Boolean, Exception> updateConsumer = ok -> {
            datafeedConfigProvider.updateDatefeedConfig(request.getUpdate().getId(), request.getUpdate(), headers,
                    jobConfigProvider::validateDatafeedJob,
                    ActionListener.wrap(
                            updatedConfig -> listener.onResponse(new PutDatafeedAction.Response(updatedConfig)),
                            listener::onFailure
                    ));
        };

        if (request.getUpdate().getJobId() != null) {
            checkJobDoesNotHaveADifferentDatafeed(request.getUpdate().getJobId(), datafeedId,
                    ActionListener.wrap(updateConsumer, listener::onFailure));
        } else {
            updateConsumer.accept(Boolean.TRUE);
        }
    }

    /*
     * This is a check against changing the datafeed's jobId and that job
     * already having a datafeed.
     * The job the updated datafeed refers to should have no datafeed or
     * if it does have a datafeed it must be the one we are updating
     */
    private void checkJobDoesNotHaveADifferentDatafeed(String jobId, String datafeedId, ActionListener<Boolean> listener) {
        datafeedConfigProvider.findDatafeedsForJobIds(Collections.singletonList(jobId), ActionListener.wrap(
                datafeedIds -> {
                    if (datafeedIds.isEmpty()) {
                        // Ok the job does not have a datafeed
                        listener.onResponse(Boolean.TRUE);
                    } else if (datafeedIds.size() == 1 && datafeedIds.contains(datafeedId)) {
                        // Ok the job has the datafeed being updated
                        listener.onResponse(Boolean.TRUE);
                    } else {
                        listener.onFailure(ExceptionsHelper.conflictStatusException("A datafeed [" + datafeedIds.iterator().next()
                                + "] already exists for job [" + jobId + "]"));
                    }
                });
                },
                listener::onFailure
        ));
    }

    @Override
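checkJobDoesNotHaveADifferentDatafeed above enforces that the job an updated datafeed points at either has no datafeed yet, or is already attached to the very datafeed being updated. The decision itself is simple; here is a hedged sketch of just that rule, where a set of datafeed IDs stands in for the config-index lookup performed by findDatafeedsForJobIds.

    import java.util.Set;

    public class DatafeedUpdateCheckSketch {

        // Returns null if the update is allowed, otherwise a description of the conflict.
        static String checkJobDoesNotHaveADifferentDatafeed(String jobId, String datafeedId, Set<String> datafeedsForJob) {
            if (datafeedsForJob.isEmpty()) {
                return null;  // the job has no datafeed yet
            }
            if (datafeedsForJob.size() == 1 && datafeedsForJob.contains(datafeedId)) {
                return null;  // the job's only datafeed is the one being updated
            }
            return "A datafeed [" + datafeedsForJob.iterator().next() + "] already exists for job [" + jobId + "]";
        }

        public static void main(String[] args) {
            System.out.println(checkJobDoesNotHaveADifferentDatafeed("job-1", "feed-1", Set.of()));          // null: allowed
            System.out.println(checkJobDoesNotHaveADifferentDatafeed("job-1", "feed-1", Set.of("feed-1")));  // null: allowed
            System.out.println(checkJobDoesNotHaveADifferentDatafeed("job-1", "feed-1", Set.of("feed-2")));  // conflict
        }
    }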
@@ -35,6 +35,7 @@ import org.elasticsearch.xpack.core.ml.action.UpdateFilterAction;
import org.elasticsearch.xpack.core.ml.job.config.MlFilter;
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;
import org.elasticsearch.xpack.ml.job.JobManager;

import java.io.IOException;
@@ -104,7 +105,7 @@ public class TransportUpdateFilterAction extends HandledTransportAction<UpdateFi
        indexRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);

        try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
            ToXContent.MapParams params = new ToXContent.MapParams(Collections.singletonMap(MlMetaIndex.INCLUDE_TYPE_KEY, "true"));
            ToXContent.MapParams params = new ToXContent.MapParams(Collections.singletonMap(ToXContentParams.INCLUDE_TYPE, "true"));
            indexRequest.source(filter.toXContent(builder, params));
        } catch (IOException e) {
            throw new IllegalStateException("Failed to serialise filter with id [" + filter.getId() + "]", e);
@@ -113,8 +114,10 @@ public class TransportUpdateFilterAction extends HandledTransportAction<UpdateFi
        executeAsyncWithOrigin(client, ML_ORIGIN, IndexAction.INSTANCE, indexRequest, new ActionListener<IndexResponse>() {
            @Override
            public void onResponse(IndexResponse indexResponse) {
                jobManager.notifyFilterChanged(filter, request.getAddItems(), request.getRemoveItems());
                listener.onResponse(new PutFilterAction.Response(filter));
                jobManager.notifyFilterChanged(filter, request.getAddItems(), request.getRemoveItems(), ActionListener.wrap(
                        response -> listener.onResponse(new PutFilterAction.Response(filter)),
                        listener::onFailure
                ));
            }

            @Override
@@ -93,6 +93,10 @@ class DatafeedJob {
        return isIsolated;
    }

    public String getJobId() {
        return jobId;
    }

    Long runLookBack(long startTime, Long endTime) throws Exception {
        lookbackStartTimeMs = skipToStartTime(startTime);
        Optional<Long> endMs = Optional.ofNullable(endTime);
@@ -8,96 +8,158 @@ package org.elasticsearch.xpack.ml.datafeed;
import org.elasticsearch.ResourceNotFoundException;
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.xpack.core.ml.action.util.QueryPage;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedJobValidator;
import org.elasticsearch.xpack.core.ml.job.config.DataDescription;
import org.elasticsearch.xpack.core.ml.job.config.Job;
import org.elasticsearch.xpack.ml.datafeed.delayeddatacheck.DelayedDataDetector;
import org.elasticsearch.xpack.ml.datafeed.delayeddatacheck.DelayedDataDetectorFactory;
import org.elasticsearch.xpack.ml.job.persistence.BucketsQueryBuilder;
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.DataCounts;
import org.elasticsearch.xpack.core.ml.job.results.Bucket;
import org.elasticsearch.xpack.core.ml.job.results.Result;
import org.elasticsearch.xpack.ml.datafeed.delayeddatacheck.DelayedDataDetector;
import org.elasticsearch.xpack.ml.datafeed.delayeddatacheck.DelayedDataDetectorFactory;
import org.elasticsearch.xpack.ml.datafeed.extractor.DataExtractorFactory;
import org.elasticsearch.xpack.ml.datafeed.persistence.DatafeedConfigProvider;
import org.elasticsearch.xpack.ml.job.persistence.BucketsQueryBuilder;
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
import org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider;
import org.elasticsearch.xpack.ml.notifications.Auditor;

import java.util.Collections;
import java.util.Objects;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class DatafeedJobBuilder {

    private final Client client;
    private final JobResultsProvider jobResultsProvider;
    private final Settings settings;
    private final NamedXContentRegistry xContentRegistry;
    private final Auditor auditor;
    private final Supplier<Long> currentTimeSupplier;

    public DatafeedJobBuilder(Client client, JobResultsProvider jobResultsProvider, Auditor auditor, Supplier<Long> currentTimeSupplier) {
    public DatafeedJobBuilder(Client client, Settings settings, NamedXContentRegistry xContentRegistry,
                              Auditor auditor, Supplier<Long> currentTimeSupplier) {
        this.client = client;
        this.jobResultsProvider = Objects.requireNonNull(jobResultsProvider);
        this.settings = Objects.requireNonNull(settings);
        this.xContentRegistry = Objects.requireNonNull(xContentRegistry);
        this.auditor = Objects.requireNonNull(auditor);
        this.currentTimeSupplier = Objects.requireNonNull(currentTimeSupplier);
    }

    void build(Job job, DatafeedConfig datafeed, ActionListener<DatafeedJob> listener) {
    void build(String datafeedId, ActionListener<DatafeedJob> listener) {

        JobResultsProvider jobResultsProvider = new JobResultsProvider(client, settings);
        JobConfigProvider jobConfigProvider = new JobConfigProvider(client);
        DatafeedConfigProvider datafeedConfigProvider = new DatafeedConfigProvider(client, xContentRegistry);

        build(datafeedId, jobResultsProvider, jobConfigProvider, datafeedConfigProvider, listener);
    }

    /**
     * For testing only.
     * Use {@link #build(String, ActionListener)} instead
     */
    void build(String datafeedId, JobResultsProvider jobResultsProvider, JobConfigProvider jobConfigProvider,
               DatafeedConfigProvider datafeedConfigProvider, ActionListener<DatafeedJob> listener) {

        AtomicReference<Job> jobHolder = new AtomicReference<>();
        AtomicReference<DatafeedConfig> datafeedConfigHolder = new AtomicReference<>();

        // Step 5. Build datafeed job object
        Consumer<Context> contextHanlder = context -> {
            TimeValue frequency = getFrequencyOrDefault(datafeed, job);
            TimeValue queryDelay = datafeed.getQueryDelay();
            DelayedDataDetector delayedDataDetector = DelayedDataDetectorFactory.buildDetector(job, datafeed, client);
            DatafeedJob datafeedJob = new DatafeedJob(job.getId(), buildDataDescription(job), frequency.millis(), queryDelay.millis(),
            TimeValue frequency = getFrequencyOrDefault(datafeedConfigHolder.get(), jobHolder.get());
            TimeValue queryDelay = datafeedConfigHolder.get().getQueryDelay();
            DelayedDataDetector delayedDataDetector =
                    DelayedDataDetectorFactory.buildDetector(jobHolder.get(), datafeedConfigHolder.get(), client);
            DatafeedJob datafeedJob = new DatafeedJob(jobHolder.get().getId(), buildDataDescription(jobHolder.get()),
                    frequency.millis(), queryDelay.millis(),
                    context.dataExtractorFactory, client, auditor, currentTimeSupplier, delayedDataDetector,
                    context.latestFinalBucketEndMs, context.latestRecordTimeMs);

            listener.onResponse(datafeedJob);
        };

        final Context context = new Context();

        // Step 4. Context building complete - invoke final listener
        // Context building complete - invoke final listener
        ActionListener<DataExtractorFactory> dataExtractorFactoryHandler = ActionListener.wrap(
                dataExtractorFactory -> {
                    context.dataExtractorFactory = dataExtractorFactory;
                    contextHanlder.accept(context);
                }, e -> {
                    auditor.error(job.getId(), e.getMessage());
                    auditor.error(jobHolder.get().getId(), e.getMessage());
                    listener.onFailure(e);
                }
        );

        // Step 3. Create data extractor factory
        // Create data extractor factory
        Consumer<DataCounts> dataCountsHandler = dataCounts -> {
            if (dataCounts.getLatestRecordTimeStamp() != null) {
                context.latestRecordTimeMs = dataCounts.getLatestRecordTimeStamp().getTime();
            }
            DataExtractorFactory.create(client, datafeed, job, dataExtractorFactoryHandler);
            DataExtractorFactory.create(client, datafeedConfigHolder.get(), jobHolder.get(), dataExtractorFactoryHandler);
        };

        // Step 2. Collect data counts
        // Collect data counts
        Consumer<QueryPage<Bucket>> bucketsHandler = buckets -> {
            if (buckets.results().size() == 1) {
                TimeValue bucketSpan = job.getAnalysisConfig().getBucketSpan();
                TimeValue bucketSpan = jobHolder.get().getAnalysisConfig().getBucketSpan();
                context.latestFinalBucketEndMs = buckets.results().get(0).getTimestamp().getTime() + bucketSpan.millis() - 1;
            }
            jobResultsProvider.dataCounts(job.getId(), dataCountsHandler, listener::onFailure);
            jobResultsProvider.dataCounts(jobHolder.get().getId(), dataCountsHandler, listener::onFailure);
        };

        // Step 1. Collect latest bucket
        BucketsQueryBuilder latestBucketQuery = new BucketsQueryBuilder()
                .sortField(Result.TIMESTAMP.getPreferredName())
                .sortDescending(true).size(1)
                .includeInterim(false);
        jobResultsProvider.bucketsViaInternalClient(job.getId(), latestBucketQuery, bucketsHandler, e -> {
            if (e instanceof ResourceNotFoundException) {
                QueryPage<Bucket> empty = new QueryPage<>(Collections.emptyList(), 0, Bucket.RESULT_TYPE_FIELD);
                bucketsHandler.accept(empty);
            } else {
                listener.onFailure(e);
            }
        });
        // Collect latest bucket
        Consumer<String> jobIdConsumer = jobId -> {
            BucketsQueryBuilder latestBucketQuery = new BucketsQueryBuilder()
                    .sortField(Result.TIMESTAMP.getPreferredName())
                    .sortDescending(true).size(1)
                    .includeInterim(false);
            jobResultsProvider.bucketsViaInternalClient(jobId, latestBucketQuery, bucketsHandler, e -> {
                if (e instanceof ResourceNotFoundException) {
                    QueryPage<Bucket> empty = new QueryPage<>(Collections.emptyList(), 0, Bucket.RESULT_TYPE_FIELD);
                    bucketsHandler.accept(empty);
                } else {
                    listener.onFailure(e);
                }
            });
        };

        // Get the job config and re-validate
        // Re-validation is required as the config has been re-read since
        // the previous validation
        ActionListener<Job.Builder> jobConfigListener = ActionListener.wrap(
                jobBuilder -> {
                    try {
                        jobHolder.set(jobBuilder.build());
                        DatafeedJobValidator.validate(datafeedConfigHolder.get(), jobHolder.get());
                        jobIdConsumer.accept(jobHolder.get().getId());
                    } catch (Exception e) {
                        listener.onFailure(e);
                    }
                },
                listener::onFailure
        );

        // Get the datafeed config
        ActionListener<DatafeedConfig.Builder> datafeedConfigListener = ActionListener.wrap(
                configBuilder -> {
                    try {
                        datafeedConfigHolder.set(configBuilder.build());
                        jobConfigProvider.getJob(datafeedConfigHolder.get().getJobId(), jobConfigListener);
                    } catch (Exception e) {
                        listener.onFailure(e);
                    }
                },
                listener::onFailure
        );

        datafeedConfigProvider.getDatafeedConfig(datafeedId, datafeedConfigListener);
    }

    private static TimeValue getFrequencyOrDefault(DatafeedConfig datafeed, Job job) {
@ -18,19 +18,16 @@ import org.elasticsearch.common.unit.TimeValue;
|
||||
import org.elasticsearch.common.util.concurrent.AbstractRunnable;
|
||||
import org.elasticsearch.common.util.concurrent.FutureUtils;
|
||||
import org.elasticsearch.common.util.concurrent.ThreadContext;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData.PersistentTask;
|
||||
import org.elasticsearch.rest.RestStatus;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.action.CloseJobAction;
|
||||
import org.elasticsearch.xpack.core.ml.action.StartDatafeedAction;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedState;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobState;
|
||||
import org.elasticsearch.xpack.core.ml.job.messages.Messages;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData.PersistentTask;
|
||||
import org.elasticsearch.xpack.ml.MachineLearning;
|
||||
import org.elasticsearch.xpack.ml.action.TransportStartDatafeedAction;
|
||||
import org.elasticsearch.xpack.ml.notifications.Auditor;
|
||||
@ -48,9 +45,9 @@ import java.util.concurrent.locks.ReentrantLock;
|
||||
import java.util.function.Consumer;
|
||||
import java.util.function.Supplier;
|
||||
|
||||
import static org.elasticsearch.persistent.PersistentTasksService.WaitForPersistentTaskListener;
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin;
|
||||
import static org.elasticsearch.persistent.PersistentTasksService.WaitForPersistentTaskListener;
|
||||
|
||||
public class DatafeedManager {
|
||||
|
||||
@ -78,17 +75,14 @@ public class DatafeedManager {
|
||||
clusterService.addListener(taskRunner);
|
||||
}
|
||||
|
||||
public void run(TransportStartDatafeedAction.DatafeedTask task, Consumer<Exception> taskHandler) {
|
||||
String datafeedId = task.getDatafeedId();
|
||||
ClusterState state = clusterService.state();
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(state);
|
||||
|
||||
DatafeedConfig datafeed = mlMetadata.getDatafeed(datafeedId);
|
||||
Job job = mlMetadata.getJobs().get(datafeed.getJobId());
|
||||
public void run(TransportStartDatafeedAction.DatafeedTask task, Consumer<Exception> finishHandler) {
|
||||
String datafeedId = task.getDatafeedId();
|
||||
|
||||
ActionListener<DatafeedJob> datafeedJobHandler = ActionListener.wrap(
|
||||
datafeedJob -> {
|
||||
Holder holder = new Holder(task, datafeed, datafeedJob, new ProblemTracker(auditor, job.getId()), taskHandler);
|
||||
Holder holder = new Holder(task, datafeedId, datafeedJob,
|
||||
new ProblemTracker(auditor, datafeedJob.getJobId()), finishHandler);
|
||||
runningDatafeedsOnThisNode.put(task.getAllocationId(), holder);
|
||||
task.updatePersistentTaskState(DatafeedState.STARTED, new ActionListener<PersistentTask<?>>() {
|
||||
@Override
|
||||
@ -98,13 +92,13 @@ public class DatafeedManager {
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
taskHandler.accept(e);
|
||||
finishHandler.accept(e);
|
||||
}
|
||||
});
|
||||
}, taskHandler::accept
|
||||
}, finishHandler::accept
|
||||
);
|
||||
|
||||
datafeedJobBuilder.build(job, datafeed, datafeedJobHandler);
|
||||
datafeedJobBuilder.build(datafeedId, datafeedJobHandler);
|
||||
}
|
||||
|
||||
public void stopDatafeed(TransportStartDatafeedAction.DatafeedTask task, String reason, TimeValue timeout) {
|
||||
@ -159,7 +153,7 @@ public class DatafeedManager {
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
logger.error("Failed lookback import for job [" + holder.datafeed.getJobId() + "]", e);
|
||||
logger.error("Failed lookback import for job [" + holder.datafeedJob.getJobId() + "]", e);
|
||||
holder.stop("general_lookback_failure", TimeValue.timeValueSeconds(20), e);
|
||||
}
|
||||
|
||||
@ -189,17 +183,17 @@ public class DatafeedManager {
|
||||
} else {
|
||||
// Notify that a lookback-only run found no data
|
||||
String lookbackNoDataMsg = Messages.getMessage(Messages.JOB_AUDIT_DATAFEED_LOOKBACK_NO_DATA);
|
||||
logger.warn("[{}] {}", holder.datafeed.getJobId(), lookbackNoDataMsg);
|
||||
auditor.warning(holder.datafeed.getJobId(), lookbackNoDataMsg);
|
||||
logger.warn("[{}] {}", holder.datafeedJob.getJobId(), lookbackNoDataMsg);
|
||||
auditor.warning(holder.datafeedJob.getJobId(), lookbackNoDataMsg);
|
||||
}
|
||||
} catch (Exception e) {
|
||||
logger.error("Failed lookback import for job [" + holder.datafeed.getJobId() + "]", e);
|
||||
logger.error("Failed lookback import for job [" + holder.datafeedJob.getJobId() + "]", e);
|
||||
holder.stop("general_lookback_failure", TimeValue.timeValueSeconds(20), e);
|
||||
return;
|
||||
}
|
||||
if (isolated == false) {
|
||||
if (next != null) {
|
||||
doDatafeedRealtime(next, holder.datafeed.getJobId(), holder);
|
||||
doDatafeedRealtime(next, holder.datafeedJob.getJobId(), holder);
|
||||
} else {
|
||||
holder.stop("no_realtime", TimeValue.timeValueSeconds(20), null);
|
||||
holder.problemTracker.finishReport();
|
||||
@ -277,29 +271,29 @@ public class DatafeedManager {
|
||||
|
||||
private final TransportStartDatafeedAction.DatafeedTask task;
|
||||
private final long allocationId;
|
||||
private final DatafeedConfig datafeed;
|
||||
private final String datafeedId;
|
||||
// To ensure that we wait until lookback / realtime search has completed before we stop the datafeed
|
||||
private final ReentrantLock datafeedJobLock = new ReentrantLock(true);
|
||||
private final DatafeedJob datafeedJob;
|
||||
private final boolean autoCloseJob;
|
||||
private final ProblemTracker problemTracker;
|
||||
private final Consumer<Exception> handler;
|
||||
private final Consumer<Exception> finishHandler;
|
||||
volatile Future<?> future;
|
||||
private volatile boolean isRelocating;
|
||||
|
||||
Holder(TransportStartDatafeedAction.DatafeedTask task, DatafeedConfig datafeed, DatafeedJob datafeedJob,
|
||||
ProblemTracker problemTracker, Consumer<Exception> handler) {
|
||||
Holder(TransportStartDatafeedAction.DatafeedTask task, String datafeedId, DatafeedJob datafeedJob,
|
||||
ProblemTracker problemTracker, Consumer<Exception> finishHandler) {
|
||||
this.task = task;
|
||||
this.allocationId = task.getAllocationId();
|
||||
this.datafeed = datafeed;
|
||||
this.datafeedId = datafeedId;
|
||||
this.datafeedJob = datafeedJob;
|
||||
this.autoCloseJob = task.isLookbackOnly();
|
||||
this.problemTracker = problemTracker;
|
||||
this.handler = handler;
|
||||
this.finishHandler = finishHandler;
|
||||
}
|
||||
|
||||
String getJobId() {
|
||||
return datafeed.getJobId();
|
||||
return datafeedJob.getJobId();
|
||||
}
|
||||
|
||||
boolean isRunning() {
|
||||
@ -315,23 +309,23 @@ public class DatafeedManager {
|
||||
return;
|
||||
}
|
||||
|
||||
logger.info("[{}] attempt to stop datafeed [{}] for job [{}]", source, datafeed.getId(), datafeed.getJobId());
|
||||
logger.info("[{}] attempt to stop datafeed [{}] for job [{}]", source, datafeedId, datafeedJob.getJobId());
|
||||
if (datafeedJob.stop()) {
|
||||
boolean acquired = false;
|
||||
try {
|
||||
logger.info("[{}] try lock [{}] to stop datafeed [{}] for job [{}]...", source, timeout, datafeed.getId(),
|
||||
datafeed.getJobId());
|
||||
logger.info("[{}] try lock [{}] to stop datafeed [{}] for job [{}]...", source, timeout, datafeedId,
|
||||
datafeedJob.getJobId());
|
||||
acquired = datafeedJobLock.tryLock(timeout.millis(), TimeUnit.MILLISECONDS);
|
||||
} catch (InterruptedException e1) {
|
||||
Thread.currentThread().interrupt();
|
||||
} finally {
|
||||
logger.info("[{}] stopping datafeed [{}] for job [{}], acquired [{}]...", source, datafeed.getId(),
|
||||
datafeed.getJobId(), acquired);
|
||||
logger.info("[{}] stopping datafeed [{}] for job [{}], acquired [{}]...", source, datafeedId,
|
||||
datafeedJob.getJobId(), acquired);
|
||||
runningDatafeedsOnThisNode.remove(allocationId);
|
||||
FutureUtils.cancel(future);
|
||||
auditor.info(datafeed.getJobId(), Messages.getMessage(Messages.JOB_AUDIT_DATAFEED_STOPPED));
|
||||
handler.accept(e);
|
||||
logger.info("[{}] datafeed [{}] for job [{}] has been stopped{}", source, datafeed.getId(), datafeed.getJobId(),
|
||||
auditor.info(datafeedJob.getJobId(), Messages.getMessage(Messages.JOB_AUDIT_DATAFEED_STOPPED));
|
||||
finishHandler.accept(e);
|
||||
logger.info("[{}] datafeed [{}] for job [{}] has been stopped{}", source, datafeedId, datafeedJob.getJobId(),
|
||||
acquired ? "" : ", but there may be pending tasks as the timeout [" + timeout.getStringRep() + "] expired");
|
||||
if (autoCloseJob) {
|
||||
closeJob();
|
||||
@ -341,7 +335,7 @@ public class DatafeedManager {
|
||||
}
|
||||
}
|
||||
} else {
|
||||
logger.info("[{}] datafeed [{}] for job [{}] was already stopped", source, datafeed.getId(), datafeed.getJobId());
|
||||
logger.info("[{}] datafeed [{}] for job [{}] was already stopped", source, datafeedId, datafeedJob.getJobId());
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -14,9 +14,7 @@ import org.elasticsearch.cluster.routing.IndexRoutingTable;
|
||||
import org.elasticsearch.common.Nullable;
|
||||
import org.elasticsearch.license.RemoteClusterLicenseChecker;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobState;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobTaskState;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
@ -28,16 +26,20 @@ public class DatafeedNodeSelector {
|
||||
|
||||
private static final Logger LOGGER = LogManager.getLogger(DatafeedNodeSelector.class);
|
||||
|
||||
private final DatafeedConfig datafeed;
|
||||
private final String datafeedId;
|
||||
private final String jobId;
|
||||
private final List<String> datafeedIndices;
|
||||
private final PersistentTasksCustomMetaData.PersistentTask<?> jobTask;
|
||||
private final ClusterState clusterState;
|
||||
private final IndexNameExpressionResolver resolver;
|
||||
|
||||
public DatafeedNodeSelector(ClusterState clusterState, IndexNameExpressionResolver resolver, String datafeedId) {
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
public DatafeedNodeSelector(ClusterState clusterState, IndexNameExpressionResolver resolver, String datafeedId,
|
||||
String jobId, List<String> datafeedIndices) {
|
||||
PersistentTasksCustomMetaData tasks = clusterState.getMetaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
this.datafeed = mlMetadata.getDatafeed(datafeedId);
|
||||
this.jobTask = MlTasks.getJobTask(datafeed.getJobId(), tasks);
|
||||
this.datafeedId = datafeedId;
|
||||
this.jobId = jobId;
|
||||
this.datafeedIndices = datafeedIndices;
|
||||
this.jobTask = MlTasks.getJobTask(jobId, tasks);
|
||||
this.clusterState = Objects.requireNonNull(clusterState);
|
||||
this.resolver = Objects.requireNonNull(resolver);
|
||||
}
|
||||
@ -45,8 +47,8 @@ public class DatafeedNodeSelector {
|
||||
public void checkDatafeedTaskCanBeCreated() {
|
||||
AssignmentFailure assignmentFailure = checkAssignment();
|
||||
if (assignmentFailure != null && assignmentFailure.isCriticalForTaskCreation) {
|
||||
String msg = "No node found to start datafeed [" + datafeed.getId() + "], allocation explanation [" + assignmentFailure.reason
|
||||
+ "]";
|
||||
String msg = "No node found to start datafeed [" + datafeedId + "], " +
|
||||
"allocation explanation [" + assignmentFailure.reason + "]";
|
||||
LOGGER.debug(msg);
|
||||
throw ExceptionsHelper.conflictStatusException(msg);
|
||||
}
|
||||
@ -64,7 +66,7 @@ public class DatafeedNodeSelector {
|
||||
@Nullable
|
||||
private AssignmentFailure checkAssignment() {
|
||||
PriorityFailureCollector priorityFailureCollector = new PriorityFailureCollector();
|
||||
priorityFailureCollector.add(verifyIndicesActive(datafeed));
|
||||
priorityFailureCollector.add(verifyIndicesActive());
|
||||
|
||||
JobTaskState jobTaskState = null;
|
||||
JobState jobState = JobState.CLOSED;
|
||||
@ -75,13 +77,14 @@ public class DatafeedNodeSelector {
|
||||
|
||||
if (jobState.isAnyOf(JobState.OPENING, JobState.OPENED) == false) {
|
||||
// let's try again later when the job has been opened:
|
||||
String reason = "cannot start datafeed [" + datafeed.getId() + "], because job's [" + datafeed.getJobId() +
|
||||
"] state is [" + jobState + "] while state [" + JobState.OPENED + "] is required";
|
||||
String reason = "cannot start datafeed [" + datafeedId + "], because the job's [" + jobId
|
||||
+ "] state is [" + jobState + "] while state [" + JobState.OPENED + "] is required";
|
||||
priorityFailureCollector.add(new AssignmentFailure(reason, true));
|
||||
}
|
||||
|
||||
if (jobTaskState != null && jobTaskState.isStatusStale(jobTask)) {
|
||||
String reason = "cannot start datafeed [" + datafeed.getId() + "], job [" + datafeed.getJobId() + "] state is stale";
|
||||
String reason = "cannot start datafeed [" + datafeedId + "], because the job's [" + jobId
|
||||
+ "] state is stale";
|
||||
priorityFailureCollector.add(new AssignmentFailure(reason, true));
|
||||
}
|
||||
|
||||
@ -89,9 +92,8 @@ public class DatafeedNodeSelector {
|
||||
}
|
||||
|
||||
@Nullable
|
||||
private AssignmentFailure verifyIndicesActive(DatafeedConfig datafeed) {
|
||||
List<String> indices = datafeed.getIndices();
|
||||
for (String index : indices) {
|
||||
private AssignmentFailure verifyIndicesActive() {
|
||||
for (String index : datafeedIndices) {
|
||||
|
||||
if (RemoteClusterLicenseChecker.isRemoteIndex(index)) {
|
||||
// We cannot verify remote indices
|
||||
@ -99,7 +101,7 @@ public class DatafeedNodeSelector {
|
||||
}
|
||||
|
||||
String[] concreteIndices;
|
||||
String reason = "cannot start datafeed [" + datafeed.getId() + "] because index ["
|
||||
String reason = "cannot start datafeed [" + datafeedId + "] because index ["
|
||||
+ index + "] does not exist, is closed, or is still initializing.";
|
||||
|
||||
try {
|
||||
@ -115,7 +117,7 @@ public class DatafeedNodeSelector {
|
||||
for (String concreteIndex : concreteIndices) {
|
||||
IndexRoutingTable routingTable = clusterState.getRoutingTable().index(concreteIndex);
|
||||
if (routingTable == null || !routingTable.allPrimaryShardsActive()) {
|
||||
reason = "cannot start datafeed [" + datafeed.getId() + "] because index ["
|
||||
reason = "cannot start datafeed [" + datafeedId + "] because index ["
|
||||
+ concreteIndex + "] does not have all primary shards active yet.";
|
||||
return new AssignmentFailure(reason, false);
|
||||
}
|
||||
|
@ -0,0 +1,511 @@
|
||||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
package org.elasticsearch.xpack.ml.datafeed.persistence;
|
||||
|
||||
import org.apache.logging.log4j.LogManager;
|
||||
import org.apache.logging.log4j.Logger;
|
||||
import org.elasticsearch.ElasticsearchParseException;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.DocWriteRequest;
|
||||
import org.elasticsearch.action.DocWriteResponse;
|
||||
import org.elasticsearch.action.delete.DeleteAction;
|
||||
import org.elasticsearch.action.delete.DeleteRequest;
|
||||
import org.elasticsearch.action.delete.DeleteResponse;
|
||||
import org.elasticsearch.action.get.GetAction;
|
||||
import org.elasticsearch.action.get.GetRequest;
|
||||
import org.elasticsearch.action.get.GetResponse;
|
||||
import org.elasticsearch.action.index.IndexAction;
|
||||
import org.elasticsearch.action.index.IndexRequest;
|
||||
import org.elasticsearch.action.index.IndexResponse;
|
||||
import org.elasticsearch.action.search.SearchRequest;
|
||||
import org.elasticsearch.action.search.SearchResponse;
|
||||
import org.elasticsearch.action.support.IndicesOptions;
|
||||
import org.elasticsearch.action.support.WriteRequest;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.bytes.BytesReference;
|
||||
import org.elasticsearch.common.regex.Regex;
|
||||
import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
|
||||
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
|
||||
import org.elasticsearch.common.xcontent.ToXContent;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentFactory;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.common.xcontent.XContentType;
|
||||
import org.elasticsearch.index.IndexNotFoundException;
|
||||
import org.elasticsearch.index.engine.VersionConflictEngineException;
|
||||
import org.elasticsearch.index.query.BoolQueryBuilder;
|
||||
import org.elasticsearch.index.query.QueryBuilder;
|
||||
import org.elasticsearch.index.query.TermQueryBuilder;
|
||||
import org.elasticsearch.index.query.TermsQueryBuilder;
|
||||
import org.elasticsearch.index.query.WildcardQueryBuilder;
|
||||
import org.elasticsearch.search.SearchHit;
|
||||
import org.elasticsearch.search.builder.SearchSourceBuilder;
|
||||
import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;
|
||||
import org.elasticsearch.xpack.core.ClientHelper;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedUpdate;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.AnomalyDetectorsIndex;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.ElasticsearchMappings;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.ExpandedIdsMatcher;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.io.InputStream;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collection;
|
||||
import java.util.Collections;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.SortedSet;
|
||||
import java.util.TreeSet;
|
||||
import java.util.function.BiConsumer;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin;
|
||||
|
||||
public class DatafeedConfigProvider {
|
||||
|
||||
private static final Logger logger = LogManager.getLogger(DatafeedConfigProvider.class);
|
||||
private final Client client;
|
||||
private final NamedXContentRegistry xContentRegistry;
|
||||
|
||||
public static final Map<String, String> TO_XCONTENT_PARAMS;
|
||||
static {
|
||||
Map<String, String> modifiable = new HashMap<>();
|
||||
modifiable.put(ToXContentParams.FOR_INTERNAL_STORAGE, "true");
|
||||
modifiable.put(ToXContentParams.INCLUDE_TYPE, "true");
|
||||
TO_XCONTENT_PARAMS = Collections.unmodifiableMap(modifiable);
|
||||
}
|
||||
|
||||
/**
* In most cases we expect tens or hundreds of datafeeds to be defined, so
* a single search for all datafeeds should return them all.
* TODO this is a temporary fix
*/
|
||||
public int searchSize = 1000;
|
||||
|
||||
public DatafeedConfigProvider(Client client, NamedXContentRegistry xContentRegistry) {
|
||||
this.client = client;
|
||||
this.xContentRegistry = xContentRegistry;
|
||||
}
|
||||
|
||||
/**
* Persist the datafeed configuration to the config index.
* It is an error if a datafeed with the same Id already exists -
* the config will not be overwritten.
*
* @param config The datafeed configuration
* @param headers Request headers; only recognised security headers are stored with the config
* @param listener Index response listener
*/
|
||||
public void putDatafeedConfig(DatafeedConfig config, Map<String, String> headers, ActionListener<IndexResponse> listener) {
|
||||
|
||||
if (headers.isEmpty() == false) {
|
||||
// Filter any values in headers that aren't security fields
|
||||
DatafeedConfig.Builder builder = new DatafeedConfig.Builder(config);
|
||||
Map<String, String> securityHeaders = headers.entrySet().stream()
|
||||
.filter(e -> ClientHelper.SECURITY_HEADER_FILTERS.contains(e.getKey()))
|
||||
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
|
||||
builder.setHeaders(securityHeaders);
|
||||
config = builder.build();
|
||||
}
|
||||
|
||||
final String datafeedId = config.getId();
|
||||
|
||||
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
|
||||
XContentBuilder source = config.toXContent(builder, new ToXContent.MapParams(TO_XCONTENT_PARAMS));
|
||||
|
||||
IndexRequest indexRequest = client.prepareIndex(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, DatafeedConfig.documentId(datafeedId))
|
||||
.setSource(source)
|
||||
.setOpType(DocWriteRequest.OpType.CREATE)
|
||||
.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
|
||||
.request();
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, IndexAction.INSTANCE, indexRequest, ActionListener.wrap(
|
||||
listener::onResponse,
|
||||
e -> {
|
||||
if (e instanceof VersionConflictEngineException) {
|
||||
// the datafeed already exists
|
||||
listener.onFailure(ExceptionsHelper.datafeedAlreadyExists(datafeedId));
|
||||
} else {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
}
|
||||
));
|
||||
|
||||
} catch (IOException e) {
|
||||
listener.onFailure(new ElasticsearchParseException("Failed to serialise datafeed config with id [" + config.getId() + "]", e));
|
||||
}
|
||||
}
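Illustration only, not part of this change: a minimal caller sketch, assuming an already-constructed DatafeedConfigProvider named "provider" and a DatafeedConfig named "config". A version conflict on the create surfaces as the datafeed-already-exists error handled above.

// Hypothetical usage sketch; "provider" and "config" are assumed to exist.
provider.putDatafeedConfig(config, Collections.emptyMap(), ActionListener.wrap(
        indexResponse -> logger.info("datafeed [{}] config indexed", config.getId()),
        e -> logger.error("failed to index datafeed config [" + config.getId() + "]", e)
));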
|
||||
/**
* Get the datafeed config specified by {@code datafeedId}.
* If the datafeed document is missing, a {@code ResourceNotFoundException}
* is returned via the listener.
*
* If the .ml-config index does not exist it is treated as a missing datafeed
* error.
*
* @param datafeedId The datafeed ID
* @param datafeedConfigListener The config listener
*/
|
||||
public void getDatafeedConfig(String datafeedId, ActionListener<DatafeedConfig.Builder> datafeedConfigListener) {
|
||||
GetRequest getRequest = new GetRequest(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, DatafeedConfig.documentId(datafeedId));
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, GetAction.INSTANCE, getRequest, new ActionListener<GetResponse>() {
|
||||
@Override
|
||||
public void onResponse(GetResponse getResponse) {
|
||||
if (getResponse.isExists() == false) {
|
||||
datafeedConfigListener.onFailure(ExceptionsHelper.missingDatafeedException(datafeedId));
|
||||
return;
|
||||
}
|
||||
BytesReference source = getResponse.getSourceAsBytesRef();
|
||||
parseLenientlyFromSource(source, datafeedConfigListener);
|
||||
}
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
if (e.getClass() == IndexNotFoundException.class) {
|
||||
datafeedConfigListener.onFailure(ExceptionsHelper.missingDatafeedException(datafeedId));
|
||||
} else {
|
||||
datafeedConfigListener.onFailure(e);
|
||||
}
|
||||
}
|
||||
});
|
||||
}
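Illustration only, not part of this change: a hedged usage sketch; the datafeed id and the "provider" variable are illustrative.

// Hypothetical usage sketch; a missing document or a missing .ml-config index
// both arrive at the onFailure branch as a missing-datafeed error.
provider.getDatafeedConfig("my-datafeed", ActionListener.wrap(
        builder -> {
            DatafeedConfig config = builder.build();
            logger.info("loaded datafeed [{}] for job [{}]", config.getId(), config.getJobId());
        },
        e -> logger.error("datafeed config not found", e)
));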
|
||||
/**
* Find any datafeeds that are used by the jobs in {@code jobIds}, i.e. the
* datafeeds that reference any of the jobs in {@code jobIds}.
*
* In theory there should never be more than one datafeed referencing a
* particular job.
*
* @param jobIds The jobs to find the datafeeds of
* @param listener Datafeed Id listener
*/
|
||||
public void findDatafeedsForJobIds(Collection<String> jobIds, ActionListener<Set<String>> listener) {
|
||||
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(buildDatafeedJobIdsQuery(jobIds));
|
||||
sourceBuilder.fetchSource(false);
|
||||
sourceBuilder.docValueField(DatafeedConfig.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);
|
||||
|
||||
SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())
|
||||
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
|
||||
.setSize(jobIds.size())
|
||||
.setSource(sourceBuilder).request();
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, searchRequest,
|
||||
ActionListener.<SearchResponse>wrap(
|
||||
response -> {
|
||||
Set<String> datafeedIds = new HashSet<>();
|
||||
// There cannot be more than one datafeed per job
|
||||
assert response.getHits().getTotalHits().value <= jobIds.size();
|
||||
SearchHit[] hits = response.getHits().getHits();
|
||||
|
||||
for (SearchHit hit : hits) {
|
||||
datafeedIds.add(hit.field(DatafeedConfig.ID.getPreferredName()).getValue());
|
||||
}
|
||||
|
||||
listener.onResponse(datafeedIds);
|
||||
},
|
||||
listener::onFailure)
|
||||
, client::search);
|
||||
}
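Illustration only, not part of this change: a usage sketch with a hypothetical job id.

// Hypothetical usage sketch; returns the ids of the datafeeds referencing "job-1".
provider.findDatafeedsForJobIds(Collections.singletonList("job-1"), ActionListener.wrap(
        datafeedIds -> logger.info("datafeeds referencing job-1: {}", datafeedIds),
        e -> logger.error("datafeed lookup failed", e)
));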
|
||||
/**
|
||||
* Delete the datafeed config document
|
||||
*
|
||||
* @param datafeedId The datafeed id
|
||||
* @param actionListener Deleted datafeed listener
|
||||
*/
|
||||
public void deleteDatafeedConfig(String datafeedId, ActionListener<DeleteResponse> actionListener) {
|
||||
DeleteRequest request = new DeleteRequest(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, DatafeedConfig.documentId(datafeedId));
|
||||
request.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, DeleteAction.INSTANCE, request, new ActionListener<DeleteResponse>() {
|
||||
@Override
|
||||
public void onResponse(DeleteResponse deleteResponse) {
|
||||
if (deleteResponse.getResult() == DocWriteResponse.Result.NOT_FOUND) {
|
||||
actionListener.onFailure(ExceptionsHelper.missingDatafeedException(datafeedId));
|
||||
return;
|
||||
}
|
||||
assert deleteResponse.getResult() == DocWriteResponse.Result.DELETED;
|
||||
actionListener.onResponse(deleteResponse);
|
||||
}
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
actionListener.onFailure(e);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
/**
* Get the datafeed config, apply the {@code update}, then index the
* modified config, setting the version in the index request.
*
* The {@code validator} consumer can be used to perform extra validation
* but it must call the passed ActionListener. For example a no-op validator
* would be {@code (updatedConfig, listener) -> listener.onResponse(Boolean.TRUE)}
*
* @param datafeedId The Id of the datafeed to update
* @param update The update
* @param headers Datafeed headers applied with the update
* @param validator BiConsumer that accepts the updated config and can perform
* extra validations. {@code validator} must call the passed listener
* @param updatedConfigListener Updated datafeed config listener
*/
|
||||
public void updateDatefeedConfig(String datafeedId, DatafeedUpdate update, Map<String, String> headers,
|
||||
BiConsumer<DatafeedConfig, ActionListener<Boolean>> validator,
|
||||
ActionListener<DatafeedConfig> updatedConfigListener) {
|
||||
GetRequest getRequest = new GetRequest(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, DatafeedConfig.documentId(datafeedId));
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, GetAction.INSTANCE, getRequest, new ActionListener<GetResponse>() {
|
||||
@Override
|
||||
public void onResponse(GetResponse getResponse) {
|
||||
if (getResponse.isExists() == false) {
|
||||
updatedConfigListener.onFailure(ExceptionsHelper.missingDatafeedException(datafeedId));
|
||||
return;
|
||||
}
|
||||
long version = getResponse.getVersion();
|
||||
BytesReference source = getResponse.getSourceAsBytesRef();
|
||||
DatafeedConfig.Builder configBuilder;
|
||||
try {
|
||||
configBuilder = parseLenientlyFromSource(source);
|
||||
} catch (IOException e) {
|
||||
updatedConfigListener.onFailure(
|
||||
new ElasticsearchParseException("Failed to parse datafeed config [" + datafeedId + "]", e));
|
||||
return;
|
||||
}
|
||||
|
||||
DatafeedConfig updatedConfig;
|
||||
try {
|
||||
updatedConfig = update.apply(configBuilder.build(), headers);
|
||||
} catch (Exception e) {
|
||||
updatedConfigListener.onFailure(e);
|
||||
return;
|
||||
}
|
||||
|
||||
ActionListener<Boolean> validatedListener = ActionListener.wrap(
|
||||
ok -> {
|
||||
indexUpdatedConfig(updatedConfig, version, ActionListener.wrap(
|
||||
indexResponse -> {
|
||||
assert indexResponse.getResult() == DocWriteResponse.Result.UPDATED;
|
||||
updatedConfigListener.onResponse(updatedConfig);
|
||||
},
|
||||
updatedConfigListener::onFailure));
|
||||
},
|
||||
updatedConfigListener::onFailure
|
||||
);
|
||||
|
||||
validator.accept(updatedConfig, validatedListener);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
updatedConfigListener.onFailure(e);
|
||||
}
|
||||
});
|
||||
}
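Illustration only, not part of this change: a usage sketch using the no-op validator form from the Javadoc above; "update" is an assumed DatafeedUpdate instance.

// Hypothetical usage sketch; the validator simply approves the updated config.
provider.updateDatefeedConfig("my-datafeed", update, Collections.emptyMap(),
        (updatedConfig, validationListener) -> validationListener.onResponse(Boolean.TRUE),
        ActionListener.wrap(
                updatedConfig -> logger.info("datafeed [{}] updated", updatedConfig.getId()),
                e -> logger.error("datafeed update failed", e)
        ));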
|
||||
private void indexUpdatedConfig(DatafeedConfig updatedConfig, long version, ActionListener<IndexResponse> listener) {
|
||||
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
|
||||
XContentBuilder updatedSource = updatedConfig.toXContent(builder, new ToXContent.MapParams(TO_XCONTENT_PARAMS));
|
||||
IndexRequest indexRequest = client.prepareIndex(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, DatafeedConfig.documentId(updatedConfig.getId()))
|
||||
.setSource(updatedSource)
|
||||
.setVersion(version)
|
||||
.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
|
||||
.request();
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, IndexAction.INSTANCE, indexRequest, listener);
|
||||
|
||||
} catch (IOException e) {
|
||||
listener.onFailure(
|
||||
new ElasticsearchParseException("Failed to serialise datafeed config with id [" + updatedConfig.getId() + "]", e));
|
||||
}
|
||||
}
|
||||
|
||||
/**
* Expands an expression into the set of matching names. {@code expression}
* may be a wildcard, a datafeed ID or a list of those.
* If {@code expression} == 'ALL', '*' or the empty string then all
* datafeed IDs are returned.
*
* For example, given a set of names ["foo-1", "foo-2", "bar-1", "bar-2"],
* expressions resolve as follows:
* <ul>
* <li>"foo-1" : ["foo-1"]</li>
* <li>"bar-1" : ["bar-1"]</li>
* <li>"foo-1,foo-2" : ["foo-1", "foo-2"]</li>
* <li>"foo-*" : ["foo-1", "foo-2"]</li>
* <li>"*-1" : ["bar-1", "foo-1"]</li>
* <li>"*" : ["bar-1", "bar-2", "foo-1", "foo-2"]</li>
* <li>"_all" : ["bar-1", "bar-2", "foo-1", "foo-2"]</li>
* </ul>
*
* @param expression the expression to resolve
* @param allowNoDatafeeds if {@code false}, an error is thrown when no name matches the {@code expression}.
* This only applies to wildcard expressions; if {@code expression} is not a
* wildcard then setting this to true will not suppress the exception
* @param listener The expanded datafeed IDs listener
*/
|
||||
public void expandDatafeedIds(String expression, boolean allowNoDatafeeds, ActionListener<SortedSet<String>> listener) {
|
||||
String [] tokens = ExpandedIdsMatcher.tokenizeExpression(expression);
|
||||
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(buildDatafeedIdQuery(tokens));
|
||||
sourceBuilder.sort(DatafeedConfig.ID.getPreferredName());
|
||||
sourceBuilder.fetchSource(false);
|
||||
sourceBuilder.docValueField(DatafeedConfig.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);
|
||||
|
||||
SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())
|
||||
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
|
||||
.setSource(sourceBuilder).request();
|
||||
|
||||
ExpandedIdsMatcher requiredMatches = new ExpandedIdsMatcher(tokens, allowNoDatafeeds);
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, searchRequest,
|
||||
ActionListener.<SearchResponse>wrap(
|
||||
response -> {
|
||||
SortedSet<String> datafeedIds = new TreeSet<>();
|
||||
SearchHit[] hits = response.getHits().getHits();
|
||||
for (SearchHit hit : hits) {
|
||||
datafeedIds.add(hit.field(DatafeedConfig.ID.getPreferredName()).getValue());
|
||||
}
|
||||
|
||||
requiredMatches.filterMatchedIds(datafeedIds);
|
||||
if (requiredMatches.hasUnmatchedIds()) {
|
||||
// some required datafeeds were not found
|
||||
listener.onFailure(ExceptionsHelper.missingDatafeedException(requiredMatches.unmatchedIdsString()));
|
||||
return;
|
||||
}
|
||||
|
||||
listener.onResponse(datafeedIds);
|
||||
},
|
||||
listener::onFailure)
|
||||
, client::search);
|
||||
|
||||
}
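Illustration only, not part of this change: a usage sketch mirroring the wildcard examples in the Javadoc above.

// Hypothetical usage sketch; with datafeeds foo-1 and foo-2 defined this would
// report the sorted set [foo-1, foo-2].
provider.expandDatafeedIds("foo-*", true, ActionListener.wrap(
        datafeedIds -> logger.info("matched datafeed ids: {}", datafeedIds),
        e -> logger.error("datafeed id expansion failed", e)
));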
|
||||
/**
* The same logic as {@link #expandDatafeedIds(String, boolean, ActionListener)} but
* the full datafeed configuration is returned.
*
* See {@link #expandDatafeedIds(String, boolean, ActionListener)}
*
* @param expression the expression to resolve
* @param allowNoDatafeeds if {@code false}, an error is thrown when no name matches the {@code expression}.
* This only applies to wildcard expressions; if {@code expression} is not a
* wildcard then setting this to true will not suppress the exception
* @param listener The expanded datafeed config listener
*/
|
||||
// NORELEASE datafeed configs should be paged or have a mechanism to return all datafeeds if there are many of them
|
||||
public void expandDatafeedConfigs(String expression, boolean allowNoDatafeeds, ActionListener<List<DatafeedConfig.Builder>> listener) {
|
||||
String [] tokens = ExpandedIdsMatcher.tokenizeExpression(expression);
|
||||
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(buildDatafeedIdQuery(tokens));
|
||||
sourceBuilder.sort(DatafeedConfig.ID.getPreferredName());
|
||||
|
||||
SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())
|
||||
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
|
||||
.setSource(sourceBuilder)
|
||||
.setSize(searchSize)
|
||||
.request();
|
||||
|
||||
ExpandedIdsMatcher requiredMatches = new ExpandedIdsMatcher(tokens, allowNoDatafeeds);
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, searchRequest,
|
||||
ActionListener.<SearchResponse>wrap(
|
||||
response -> {
|
||||
List<DatafeedConfig.Builder> datafeeds = new ArrayList<>();
|
||||
Set<String> datafeedIds = new HashSet<>();
|
||||
SearchHit[] hits = response.getHits().getHits();
|
||||
for (SearchHit hit : hits) {
|
||||
try {
|
||||
BytesReference source = hit.getSourceRef();
|
||||
DatafeedConfig.Builder datafeed = parseLenientlyFromSource(source);
|
||||
datafeeds.add(datafeed);
|
||||
datafeedIds.add(datafeed.getId());
|
||||
} catch (IOException e) {
|
||||
// TODO A better way to handle this rather than just ignoring the error?
|
||||
logger.error("Error parsing datafeed configuration [" + hit.getId() + "]", e);
|
||||
}
|
||||
}
|
||||
|
||||
requiredMatches.filterMatchedIds(datafeedIds);
|
||||
if (requiredMatches.hasUnmatchedIds()) {
|
||||
// some required datafeeds were not found
|
||||
listener.onFailure(ExceptionsHelper.missingDatafeedException(requiredMatches.unmatchedIdsString()));
|
||||
return;
|
||||
}
|
||||
|
||||
listener.onResponse(datafeeds);
|
||||
},
|
||||
listener::onFailure)
|
||||
, client::search);
|
||||
|
||||
}
|
||||
|
||||
private QueryBuilder buildDatafeedIdQuery(String [] tokens) {
|
||||
QueryBuilder datafeedQuery = new TermQueryBuilder(DatafeedConfig.CONFIG_TYPE.getPreferredName(), DatafeedConfig.TYPE);
|
||||
if (Strings.isAllOrWildcard(tokens)) {
|
||||
// match all
|
||||
return datafeedQuery;
|
||||
}
|
||||
|
||||
BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
|
||||
boolQueryBuilder.filter(datafeedQuery);
|
||||
BoolQueryBuilder shouldQueries = new BoolQueryBuilder();
|
||||
|
||||
List<String> terms = new ArrayList<>();
|
||||
for (String token : tokens) {
|
||||
if (Regex.isSimpleMatchPattern(token)) {
|
||||
shouldQueries.should(new WildcardQueryBuilder(DatafeedConfig.ID.getPreferredName(), token));
|
||||
} else {
|
||||
terms.add(token);
|
||||
}
|
||||
}
|
||||
|
||||
if (terms.isEmpty() == false) {
|
||||
shouldQueries.should(new TermsQueryBuilder(DatafeedConfig.ID.getPreferredName(), terms));
|
||||
}
|
||||
|
||||
if (shouldQueries.should().isEmpty() == false) {
|
||||
boolQueryBuilder.filter(shouldQueries);
|
||||
}
|
||||
|
||||
return boolQueryBuilder;
|
||||
}
|
||||
|
||||
private QueryBuilder buildDatafeedJobIdsQuery(Collection<String> jobIds) {
|
||||
BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
|
||||
boolQueryBuilder.filter(new TermQueryBuilder(DatafeedConfig.CONFIG_TYPE.getPreferredName(), DatafeedConfig.TYPE));
|
||||
boolQueryBuilder.filter(new TermsQueryBuilder(Job.ID.getPreferredName(), jobIds));
|
||||
return boolQueryBuilder;
|
||||
}
|
||||
|
||||
private void parseLenientlyFromSource(BytesReference source, ActionListener<DatafeedConfig.Builder> datafeedConfigListener) {
|
||||
try (InputStream stream = source.streamInput();
|
||||
XContentParser parser = XContentFactory.xContent(XContentType.JSON)
|
||||
.createParser(xContentRegistry, LoggingDeprecationHandler.INSTANCE, stream)) {
|
||||
datafeedConfigListener.onResponse(DatafeedConfig.LENIENT_PARSER.apply(parser, null));
|
||||
} catch (Exception e) {
|
||||
datafeedConfigListener.onFailure(e);
|
||||
}
|
||||
}
|
||||
|
||||
private DatafeedConfig.Builder parseLenientlyFromSource(BytesReference source) throws IOException {
|
||||
try (InputStream stream = source.streamInput();
|
||||
XContentParser parser = XContentFactory.xContent(XContentType.JSON)
|
||||
.createParser(xContentRegistry, LoggingDeprecationHandler.INSTANCE, stream)) {
|
||||
return DatafeedConfig.LENIENT_PARSER.apply(parser, null);
|
||||
}
|
||||
}
|
||||
}
|
@ -7,14 +7,13 @@ package org.elasticsearch.xpack.ml.job;
|
||||
|
||||
import org.apache.logging.log4j.LogManager;
|
||||
import org.apache.logging.log4j.Logger;
|
||||
import org.elasticsearch.ResourceAlreadyExistsException;
|
||||
import org.elasticsearch.ResourceNotFoundException;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.index.IndexResponse;
|
||||
import org.elasticsearch.action.support.WriteRequest;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.cluster.AckedClusterStateUpdateTask;
|
||||
import org.elasticsearch.cluster.ClusterState;
|
||||
import org.elasticsearch.cluster.ClusterStateUpdateTask;
|
||||
import org.elasticsearch.cluster.metadata.MetaData;
|
||||
import org.elasticsearch.cluster.service.ClusterService;
|
||||
import org.elasticsearch.common.CheckedConsumer;
|
||||
@ -29,7 +28,7 @@ import org.elasticsearch.common.xcontent.XContentFactory;
|
||||
import org.elasticsearch.env.Environment;
|
||||
import org.elasticsearch.index.analysis.AnalysisRegistry;
|
||||
import org.elasticsearch.persistent.PersistentTasksCustomMetaData;
|
||||
import org.elasticsearch.xpack.core.XPackPlugin;
|
||||
import org.elasticsearch.threadpool.ThreadPool;
|
||||
import org.elasticsearch.xpack.core.ml.MachineLearningField;
|
||||
import org.elasticsearch.xpack.core.ml.MlMetadata;
|
||||
import org.elasticsearch.xpack.core.ml.MlTasks;
|
||||
@ -49,7 +48,9 @@ import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSizeSta
|
||||
import org.elasticsearch.xpack.core.ml.job.process.autodetect.state.ModelSnapshot;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.ml.MachineLearning;
|
||||
import org.elasticsearch.xpack.ml.MlConfigMigrationEligibilityCheck;
|
||||
import org.elasticsearch.xpack.ml.job.categorization.CategorizationAnalyzer;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobConfigProvider;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobResultsPersister;
|
||||
import org.elasticsearch.xpack.ml.job.persistence.JobResultsProvider;
|
||||
import org.elasticsearch.xpack.ml.job.process.autodetect.UpdateParams;
|
||||
@ -58,24 +59,24 @@ import org.elasticsearch.xpack.ml.utils.ChainTaskExecutor;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Collections;
|
||||
import java.util.Comparator;
|
||||
import java.util.Date;
|
||||
import java.util.HashSet;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Objects;
|
||||
import java.util.Set;
|
||||
import java.util.concurrent.atomic.AtomicReference;
|
||||
import java.util.function.Consumer;
|
||||
import java.util.regex.Matcher;
|
||||
import java.util.regex.Pattern;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
/**
|
||||
* Allows interactions with jobs. The managed interactions include:
|
||||
* <ul>
|
||||
* <li>creation</li>
|
||||
* <li>reading</li>
|
||||
* <li>deletion</li>
|
||||
* <li>updating</li>
|
||||
* <li>starting/stopping of datafeed jobs</li>
|
||||
* </ul>
|
||||
*/
|
||||
public class JobManager {
|
||||
@ -88,7 +89,10 @@ public class JobManager {
|
||||
private final ClusterService clusterService;
|
||||
private final Auditor auditor;
|
||||
private final Client client;
|
||||
private final ThreadPool threadPool;
|
||||
private final UpdateJobProcessNotifier updateJobProcessNotifier;
|
||||
private final JobConfigProvider jobConfigProvider;
|
||||
private final MlConfigMigrationEligibilityCheck migrationEligibilityCheck;
|
||||
|
||||
private volatile ByteSizeValue maxModelMemoryLimit;
|
||||
|
||||
@ -96,14 +100,17 @@ public class JobManager {
|
||||
* Create a JobManager
|
||||
*/
|
||||
public JobManager(Environment environment, Settings settings, JobResultsProvider jobResultsProvider,
|
||||
ClusterService clusterService, Auditor auditor,
|
||||
ClusterService clusterService, Auditor auditor, ThreadPool threadPool,
|
||||
Client client, UpdateJobProcessNotifier updateJobProcessNotifier) {
|
||||
this.environment = environment;
|
||||
this.jobResultsProvider = Objects.requireNonNull(jobResultsProvider);
|
||||
this.clusterService = Objects.requireNonNull(clusterService);
|
||||
this.auditor = Objects.requireNonNull(auditor);
|
||||
this.client = Objects.requireNonNull(client);
|
||||
this.threadPool = Objects.requireNonNull(threadPool);
|
||||
this.updateJobProcessNotifier = updateJobProcessNotifier;
|
||||
this.jobConfigProvider = new JobConfigProvider(client);
|
||||
this.migrationEligibilityCheck = new MlConfigMigrationEligibilityCheck(settings, clusterService);
|
||||
|
||||
maxModelMemoryLimit = MachineLearningField.MAX_MODEL_MEMORY_LIMIT.get(settings);
|
||||
clusterService.getClusterSettings()
|
||||
@ -114,35 +121,46 @@ public class JobManager {
|
||||
this.maxModelMemoryLimit = maxModelMemoryLimit;
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets the job that matches the given {@code jobId}.
|
||||
*
|
||||
* @param jobId the jobId
|
||||
* @return The {@link Job} matching the given {code jobId}
|
||||
* @throws ResourceNotFoundException if no job matches {@code jobId}
|
||||
*/
|
||||
public Job getJobOrThrowIfUnknown(String jobId) {
|
||||
return getJobOrThrowIfUnknown(jobId, clusterService.state());
|
||||
public void jobExists(String jobId, ActionListener<Boolean> listener) {
|
||||
jobConfigProvider.jobExists(jobId, true, listener);
|
||||
}
|
||||
|
||||
/**
|
||||
* Gets the job that matches the given {@code jobId}.
|
||||
*
|
||||
* @param jobId the jobId
|
||||
* @param clusterState the cluster state
|
||||
* @return The {@link Job} matching the given {code jobId}
|
||||
* @throws ResourceNotFoundException if no job matches {@code jobId}
|
||||
* @param jobListener the Job listener. If no job matches {@code jobId}
|
||||
* a ResourceNotFoundException is returned
|
||||
*/
|
||||
public static Job getJobOrThrowIfUnknown(String jobId, ClusterState clusterState) {
|
||||
Job job = MlMetadata.getMlMetadata(clusterState).getJobs().get(jobId);
|
||||
public void getJob(String jobId, ActionListener<Job> jobListener) {
|
||||
jobConfigProvider.getJob(jobId, ActionListener.wrap(
|
||||
r -> jobListener.onResponse(r.build()), // TODO JIndex we shouldn't be building the job here
|
||||
e -> {
|
||||
if (e instanceof ResourceNotFoundException) {
|
||||
// Try to get the job from the cluster state
|
||||
getJobFromClusterState(jobId, jobListener);
|
||||
} else {
|
||||
jobListener.onFailure(e);
|
||||
}
|
||||
}
|
||||
));
|
||||
}
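Illustration only, not part of this change: a usage sketch; "jobManager" is an assumed JobManager instance.

// Hypothetical usage sketch; the config index is consulted first, then the cluster state.
jobManager.getJob("my-job", ActionListener.wrap(
        job -> logger.info("found job [{}]", job.getId()),
        e -> logger.error("job not found in the config index or the cluster state", e)
));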
|
||||
/**
|
||||
* Read a job from the cluster state.
|
||||
* The job is returned on the same thread even though a listener is used.
|
||||
*
|
||||
* @param jobId the jobId
|
||||
* @param jobListener the Job listener. If no job matches {@code jobId}
|
||||
* a ResourceNotFoundException is returned
|
||||
*/
|
||||
private void getJobFromClusterState(String jobId, ActionListener<Job> jobListener) {
|
||||
Job job = MlMetadata.getMlMetadata(clusterService.state()).getJobs().get(jobId);
|
||||
if (job == null) {
|
||||
throw ExceptionsHelper.missingJobException(jobId);
|
||||
jobListener.onFailure(ExceptionsHelper.missingJobException(jobId));
|
||||
} else {
|
||||
jobListener.onResponse(job);
|
||||
}
|
||||
return job;
|
||||
}
|
||||
|
||||
private Set<String> expandJobIds(String expression, boolean allowNoJobs, ClusterState clusterState) {
|
||||
return MlMetadata.getMlMetadata(clusterState).expandJobIds(expression, allowNoJobs);
|
||||
}
|
||||
|
||||
/**
|
||||
@ -150,19 +168,49 @@ public class JobManager {
|
||||
* Note that when the {@code jobId} is {@link MetaData#ALL} all jobs are returned.
|
||||
*
|
||||
* @param expression the jobId or an expression matching jobIds
|
||||
* @param clusterState the cluster state
|
||||
* @param allowNoJobs if {@code false}, an error is thrown when no job matches the {@code jobId}
|
||||
* @return A {@link QueryPage} containing the matching {@code Job}s
|
||||
* @param jobsListener The jobs listener
|
||||
*/
|
||||
public QueryPage<Job> expandJobs(String expression, boolean allowNoJobs, ClusterState clusterState) {
|
||||
Set<String> expandedJobIds = expandJobIds(expression, allowNoJobs, clusterState);
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
List<Job> jobs = new ArrayList<>();
|
||||
for (String expandedJobId : expandedJobIds) {
|
||||
jobs.add(mlMetadata.getJobs().get(expandedJobId));
|
||||
public void expandJobs(String expression, boolean allowNoJobs, ActionListener<QueryPage<Job>> jobsListener) {
|
||||
Map<String, Job> clusterStateJobs = expandJobsFromClusterState(expression, allowNoJobs, clusterService.state());
|
||||
|
||||
jobConfigProvider.expandJobs(expression, allowNoJobs, false, ActionListener.wrap(
|
||||
jobBuilders -> {
|
||||
// Check for duplicate jobs
|
||||
for (Job.Builder jb : jobBuilders) {
|
||||
if (clusterStateJobs.containsKey(jb.getId())) {
|
||||
jobsListener.onFailure(new IllegalStateException("Job [" + jb.getId() + "] configuration " +
|
||||
"exists in both clusterstate and index"));
|
||||
return;
|
||||
}
|
||||
}
|
||||
|
||||
// Merge cluster state and index jobs
|
||||
List<Job> jobs = new ArrayList<>();
|
||||
for (Job.Builder jb : jobBuilders) {
|
||||
jobs.add(jb.build());
|
||||
}
|
||||
|
||||
jobs.addAll(clusterStateJobs.values());
|
||||
Collections.sort(jobs, Comparator.comparing(Job::getId));
|
||||
jobsListener.onResponse(new QueryPage<>(jobs, jobs.size(), Job.RESULTS_FIELD));
|
||||
},
|
||||
jobsListener::onFailure
|
||||
));
|
||||
}
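Illustration only, not part of this change: a usage sketch expanding all jobs from both the config index and the cluster state.

// Hypothetical usage sketch; "*" matches every job id.
jobManager.expandJobs("*", true, ActionListener.wrap(
        jobsPage -> logger.info("expanded [{}] jobs", jobsPage.results().size()),
        e -> logger.error("job expansion failed", e)
));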
|
||||
private Map<String, Job> expandJobsFromClusterState(String expression, boolean allowNoJobs, ClusterState clusterState) {
|
||||
Map<String, Job> jobIdToJob = new HashMap<>();
|
||||
try {
|
||||
Set<String> expandedJobIds = MlMetadata.getMlMetadata(clusterState).expandJobIds(expression, allowNoJobs);
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
for (String expandedJobId : expandedJobIds) {
|
||||
jobIdToJob.put(expandedJobId, mlMetadata.getJobs().get(expandedJobId));
|
||||
}
|
||||
} catch (Exception e) {
|
||||
// ignore
|
||||
}
|
||||
logger.debug("Returning jobs matching [" + expression + "]");
|
||||
return new QueryPage<>(jobs, jobs.size(), Job.RESULTS_FIELD);
|
||||
return jobIdToJob;
|
||||
}
|
||||
|
||||
/**
|
||||
@ -182,7 +230,7 @@ public class JobManager {
|
||||
}
|
||||
|
||||
/**
|
||||
* Stores a job in the cluster state
|
||||
* Stores the anomaly job configuration
|
||||
*/
|
||||
public void putJob(PutJobAction.Request request, AnalysisRegistry analysisRegistry, ClusterState state,
|
||||
ActionListener<PutJobAction.Response> actionListener) throws IOException {
|
||||
@ -196,9 +244,7 @@ public class JobManager {
|
||||
deprecationLogger.deprecated("Creating jobs with delimited data format is deprecated. Please use xcontent instead.");
|
||||
}
|
||||
|
||||
// pre-flight check, not necessarily required, but avoids figuring this out while on the CS update thread
|
||||
XPackPlugin.checkReadyForXPackCustomMetadata(state);
|
||||
|
||||
// Check for the job in the cluster state first
|
||||
MlMetadata currentMlMetadata = MlMetadata.getMlMetadata(state);
|
||||
if (currentMlMetadata.getJobs().containsKey(job.getId())) {
|
||||
actionListener.onFailure(ExceptionsHelper.jobAlreadyExists(job.getId()));
|
||||
@ -209,19 +255,13 @@ public class JobManager {
|
||||
@Override
|
||||
public void onResponse(Boolean indicesCreated) {
|
||||
|
||||
clusterService.submitStateUpdateTask("put-job-" + job.getId(),
|
||||
new AckedClusterStateUpdateTask<PutJobAction.Response>(request, actionListener) {
|
||||
@Override
|
||||
protected PutJobAction.Response newResponse(boolean acknowledged) {
|
||||
auditor.info(job.getId(), Messages.getMessage(Messages.JOB_AUDIT_CREATED));
|
||||
return new PutJobAction.Response(job);
|
||||
}
|
||||
|
||||
@Override
|
||||
public ClusterState execute(ClusterState currentState) {
|
||||
return updateClusterState(job, false, currentState);
|
||||
}
|
||||
});
|
||||
jobConfigProvider.putJob(job, ActionListener.wrap(
|
||||
response -> {
|
||||
auditor.info(job.getId(), Messages.getMessage(Messages.JOB_AUDIT_CREATED));
|
||||
actionListener.onResponse(new PutJobAction.Response(job));
|
||||
},
|
||||
actionListener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
@Override
|
||||
@ -240,24 +280,123 @@ public class JobManager {
|
||||
}
|
||||
};
|
||||
|
||||
ActionListener<Boolean> checkForLeftOverDocs = ActionListener.wrap(
|
||||
response -> {
|
||||
jobResultsProvider.createJobResultIndex(job, state, putJobListener);
|
||||
ActionListener<List<String>> checkForLeftOverDocs = ActionListener.wrap(
|
||||
matchedIds -> {
|
||||
if (matchedIds.isEmpty()) {
|
||||
jobResultsProvider.createJobResultIndex(job, state, putJobListener);
|
||||
} else {
|
||||
// A job has the same Id as one of the group names
|
||||
// error with the first in the list
|
||||
actionListener.onFailure(new ResourceAlreadyExistsException(
|
||||
Messages.getMessage(Messages.JOB_AND_GROUP_NAMES_MUST_BE_UNIQUE, matchedIds.get(0))));
|
||||
}
|
||||
},
|
||||
actionListener::onFailure
|
||||
);
|
||||
|
||||
jobResultsProvider.checkForLeftOverDocuments(job, checkForLeftOverDocs);
|
||||
ActionListener<Boolean> checkNoJobsWithGroupId = ActionListener.wrap(
|
||||
groupExists -> {
|
||||
if (groupExists) {
|
||||
actionListener.onFailure(new ResourceAlreadyExistsException(
|
||||
Messages.getMessage(Messages.JOB_AND_GROUP_NAMES_MUST_BE_UNIQUE, job.getId())));
|
||||
return;
|
||||
}
|
||||
if (job.getGroups().isEmpty()) {
|
||||
checkForLeftOverDocs.onResponse(Collections.emptyList());
|
||||
} else {
|
||||
jobConfigProvider.jobIdMatches(job.getGroups(), checkForLeftOverDocs);
|
||||
}
|
||||
},
|
||||
actionListener::onFailure
|
||||
);
|
||||
|
||||
ActionListener<Boolean> checkNoGroupWithTheJobId = ActionListener.wrap(
|
||||
ok -> {
|
||||
jobConfigProvider.groupExists(job.getId(), checkNoJobsWithGroupId);
|
||||
},
|
||||
actionListener::onFailure
|
||||
);
|
||||
|
||||
jobConfigProvider.jobExists(job.getId(), false, ActionListener.wrap(
|
||||
jobExists -> {
|
||||
if (jobExists) {
|
||||
actionListener.onFailure(ExceptionsHelper.jobAlreadyExists(job.getId()));
|
||||
} else {
|
||||
jobResultsProvider.checkForLeftOverDocuments(job, checkNoGroupWithTheJobId);
|
||||
}
|
||||
},
|
||||
actionListener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
public void updateJob(UpdateJobAction.Request request, ActionListener<PutJobAction.Response> actionListener) {
|
||||
Job job = getJobOrThrowIfUnknown(request.getJobId());
|
||||
validate(request.getJobUpdate(), job, ActionListener.wrap(
|
||||
nullValue -> internalJobUpdate(request, actionListener),
|
||||
actionListener::onFailure));
|
||||
|
||||
Runnable doUpdate = () -> {
|
||||
jobConfigProvider.updateJobWithValidation(request.getJobId(), request.getJobUpdate(), maxModelMemoryLimit,
|
||||
this::validate, ActionListener.wrap(
|
||||
updatedJob -> postJobUpdate(request, updatedJob, actionListener),
|
||||
actionListener::onFailure
|
||||
));
|
||||
};
|
||||
|
||||
ClusterState clusterState = clusterService.state();
|
||||
if (migrationEligibilityCheck.jobIsEligibleForMigration(request.getJobId(), clusterState)) {
|
||||
actionListener.onFailure(ExceptionsHelper.configHasNotBeenMigrated("update job", request.getJobId()));
|
||||
return;
|
||||
}
|
||||
|
||||
if (request.getJobUpdate().getGroups() != null && request.getJobUpdate().getGroups().isEmpty() == false) {
|
||||
|
||||
// check the new groups are not job Ids
|
||||
jobConfigProvider.jobIdMatches(request.getJobUpdate().getGroups(), ActionListener.wrap(
|
||||
matchingIds -> {
|
||||
if (matchingIds.isEmpty()) {
|
||||
doUpdate.run();
|
||||
} else {
|
||||
actionListener.onFailure(new ResourceAlreadyExistsException(
|
||||
Messages.getMessage(Messages.JOB_AND_GROUP_NAMES_MUST_BE_UNIQUE, matchingIds.get(0))));
|
||||
}
|
||||
},
|
||||
actionListener::onFailure
|
||||
));
|
||||
} else {
|
||||
doUpdate.run();
|
||||
}
|
||||
}
|
||||
|
||||
private void validate(JobUpdate jobUpdate, Job job, ActionListener<Void> handler) {
|
||||
private void postJobUpdate(UpdateJobAction.Request request, Job updatedJob, ActionListener<PutJobAction.Response> actionListener) {
|
||||
// Autodetect must be updated if the fields that the C++ uses are changed
|
||||
if (request.getJobUpdate().isAutodetectProcessUpdate()) {
|
||||
JobUpdate jobUpdate = request.getJobUpdate();
|
||||
if (isJobOpen(clusterService.state(), request.getJobId())) {
|
||||
updateJobProcessNotifier.submitJobUpdate(UpdateParams.fromJobUpdate(jobUpdate), ActionListener.wrap(
|
||||
isUpdated -> {
|
||||
if (isUpdated) {
|
||||
auditJobUpdatedIfNotInternal(request);
|
||||
}
|
||||
}, e -> {
|
||||
// No need to do anything
|
||||
}
|
||||
));
|
||||
}
|
||||
} else {
|
||||
logger.debug("[{}] No process update required for job update: {}", () -> request.getJobId(), () -> {
|
||||
try {
|
||||
XContentBuilder jsonBuilder = XContentFactory.jsonBuilder();
|
||||
request.getJobUpdate().toXContent(jsonBuilder, ToXContent.EMPTY_PARAMS);
|
||||
return Strings.toString(jsonBuilder);
|
||||
} catch (IOException e) {
|
||||
return "(unprintable due to " + e.getMessage() + ")";
|
||||
}
|
||||
});
|
||||
|
||||
auditJobUpdatedIfNotInternal(request);
|
||||
}
|
||||
|
||||
actionListener.onResponse(new PutJobAction.Response(updatedJob));
|
||||
}
|
||||
|
||||
private void validate(Job job, JobUpdate jobUpdate, ActionListener<Void> handler) {
|
||||
ChainTaskExecutor chainTaskExecutor = new ChainTaskExecutor(client.threadPool().executor(
|
||||
MachineLearning.UTILITY_THREAD_POOL_NAME), true);
|
||||
validateModelSnapshotIdUpdate(job, jobUpdate.getModelSnapshotId(), chainTaskExecutor);
|
||||
@ -316,86 +455,6 @@ public class JobManager {
|
||||
});
|
||||
}
|
||||
|
||||
private void internalJobUpdate(UpdateJobAction.Request request, ActionListener<PutJobAction.Response> actionListener) {
|
||||
if (request.isWaitForAck()) {
|
||||
// Use the ack cluster state update
|
||||
clusterService.submitStateUpdateTask("update-job-" + request.getJobId(),
|
||||
new AckedClusterStateUpdateTask<PutJobAction.Response>(request, actionListener) {
|
||||
private AtomicReference<Job> updatedJob = new AtomicReference<>();
|
||||
|
||||
@Override
|
||||
protected PutJobAction.Response newResponse(boolean acknowledged) {
|
||||
return new PutJobAction.Response(updatedJob.get());
|
||||
}
|
||||
|
||||
@Override
|
||||
public ClusterState execute(ClusterState currentState) {
|
||||
Job job = getJobOrThrowIfUnknown(request.getJobId(), currentState);
|
||||
updatedJob.set(request.getJobUpdate().mergeWithJob(job, maxModelMemoryLimit));
|
||||
return updateClusterState(updatedJob.get(), true, currentState);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {
|
||||
afterClusterStateUpdate(newState, request);
|
||||
}
|
||||
});
|
||||
} else {
|
||||
clusterService.submitStateUpdateTask("update-job-" + request.getJobId(), new ClusterStateUpdateTask() {
|
||||
private AtomicReference<Job> updatedJob = new AtomicReference<>();
|
||||
|
||||
@Override
|
||||
public ClusterState execute(ClusterState currentState) throws Exception {
|
||||
Job job = getJobOrThrowIfUnknown(request.getJobId(), currentState);
|
||||
updatedJob.set(request.getJobUpdate().mergeWithJob(job, maxModelMemoryLimit));
|
||||
return updateClusterState(updatedJob.get(), true, currentState);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(String source, Exception e) {
|
||||
actionListener.onFailure(e);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {
|
||||
afterClusterStateUpdate(newState, request);
|
||||
actionListener.onResponse(new PutJobAction.Response(updatedJob.get()));
|
||||
}
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
private void afterClusterStateUpdate(ClusterState newState, UpdateJobAction.Request request) {
|
||||
JobUpdate jobUpdate = request.getJobUpdate();
|
||||
|
||||
// Change is required if the fields that the C++ uses are being updated
|
||||
boolean processUpdateRequired = jobUpdate.isAutodetectProcessUpdate();
|
||||
|
||||
if (processUpdateRequired && isJobOpen(newState, request.getJobId())) {
|
||||
updateJobProcessNotifier.submitJobUpdate(UpdateParams.fromJobUpdate(jobUpdate), ActionListener.wrap(
|
||||
isUpdated -> {
|
||||
if (isUpdated) {
|
||||
auditJobUpdatedIfNotInternal(request);
|
||||
}
|
||||
}, e -> {
|
||||
// No need to do anything
|
||||
}
|
||||
));
|
||||
} else {
|
||||
logger.debug("[{}] No process update required for job update: {}", () -> request.getJobId(), () -> {
|
||||
try {
|
||||
XContentBuilder jsonBuilder = XContentFactory.jsonBuilder();
|
||||
jobUpdate.toXContent(jsonBuilder, ToXContent.EMPTY_PARAMS);
|
||||
return Strings.toString(jsonBuilder);
|
||||
} catch (IOException e) {
|
||||
return "(unprintable due to " + e.getMessage() + ")";
|
||||
}
|
||||
});
|
||||
|
||||
auditJobUpdatedIfNotInternal(request);
|
||||
}
|
||||
}
|
||||
|
||||
private void auditJobUpdatedIfNotInternal(UpdateJobAction.Request request) {
|
||||
if (request.isInternal() == false) {
|
||||
auditor.info(request.getJobId(), Messages.getMessage(Messages.JOB_AUDIT_UPDATED, request.getJobUpdate().getUpdateFields()));
|
||||
@ -408,32 +467,42 @@ public class JobManager {
|
||||
return jobState == JobState.OPENED;
|
||||
}
|
||||
|
||||
private ClusterState updateClusterState(Job job, boolean overwrite, ClusterState currentState) {
|
||||
MlMetadata.Builder builder = createMlMetadataBuilder(currentState);
|
||||
builder.putJob(job, overwrite);
|
||||
return buildNewClusterState(currentState, builder);
|
||||
private Set<String> openJobIds(ClusterState clusterState) {
|
||||
PersistentTasksCustomMetaData persistentTasks = clusterState.metaData().custom(PersistentTasksCustomMetaData.TYPE);
|
||||
return MlTasks.openJobIds(persistentTasks);
|
||||
}
|
||||
|
||||
public void notifyFilterChanged(MlFilter filter, Set<String> addedItems, Set<String> removedItems) {
|
||||
public void notifyFilterChanged(MlFilter filter, Set<String> addedItems, Set<String> removedItems,
|
||||
ActionListener<Boolean> updatedListener) {
|
||||
if (addedItems.isEmpty() && removedItems.isEmpty()) {
|
||||
updatedListener.onResponse(Boolean.TRUE);
|
||||
return;
|
||||
}
|
||||
|
||||
ClusterState clusterState = clusterService.state();
|
||||
QueryPage<Job> jobs = expandJobs("*", true, clusterService.state());
|
||||
for (Job job : jobs.results()) {
|
||||
Set<String> jobFilters = job.getAnalysisConfig().extractReferencedFilters();
|
||||
if (jobFilters.contains(filter.getId())) {
|
||||
if (isJobOpen(clusterState, job.getId())) {
|
||||
updateJobProcessNotifier.submitJobUpdate(UpdateParams.filterUpdate(job.getId(), filter),
|
||||
ActionListener.wrap(isUpdated -> {
|
||||
auditFilterChanges(job.getId(), filter.getId(), addedItems, removedItems);
|
||||
}, e -> {}));
|
||||
} else {
|
||||
auditFilterChanges(job.getId(), filter.getId(), addedItems, removedItems);
|
||||
}
|
||||
}
|
||||
}
|
||||
jobConfigProvider.findJobsWithCustomRules(ActionListener.wrap(
|
||||
jobBuilders -> {
|
||||
threadPool.executor(MachineLearning.UTILITY_THREAD_POOL_NAME).execute(() -> {
|
||||
for (Job job: jobBuilders) {
|
||||
Set<String> jobFilters = job.getAnalysisConfig().extractReferencedFilters();
|
||||
ClusterState clusterState = clusterService.state();
|
||||
if (jobFilters.contains(filter.getId())) {
|
||||
if (isJobOpen(clusterState, job.getId())) {
|
||||
updateJobProcessNotifier.submitJobUpdate(UpdateParams.filterUpdate(job.getId(), filter),
|
||||
ActionListener.wrap(isUpdated -> {
|
||||
auditFilterChanges(job.getId(), filter.getId(), addedItems, removedItems);
|
||||
}, e -> {
|
||||
}));
|
||||
} else {
|
||||
auditFilterChanges(job.getId(), filter.getId(), addedItems, removedItems);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
updatedListener.onResponse(Boolean.TRUE);
|
||||
});
|
||||
},
|
||||
updatedListener::onFailure
|
||||
));
|
||||
}
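// Hypothetical caller sketch (names such as "updatedFilter" and the item sets are illustrative
// assumptions): propagate a filter edit to any open jobs that reference the filter, completing
// the listener once the updates have been submitted.
jobManager.notifyFilterChanged(updatedFilter,
        Collections.singleton("added-item"), Collections.emptySet(),
        ActionListener.wrap(
                done -> logger.debug("filter [{}] change propagated", updatedFilter.getId()),
                e -> logger.error("failed to propagate filter change", e)));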
|
||||
|
||||
private void auditFilterChanges(String jobId, String filterId, Set<String> addedItems, Set<String> removedItems) {
|
||||
@ -463,26 +532,40 @@ public class JobManager {
|
||||
sb.append("]");
|
||||
}
|
||||
|
||||
public void updateProcessOnCalendarChanged(List<String> calendarJobIds) {
|
||||
public void updateProcessOnCalendarChanged(List<String> calendarJobIds, ActionListener<Boolean> updateListener) {
|
||||
ClusterState clusterState = clusterService.state();
|
||||
MlMetadata mlMetadata = MlMetadata.getMlMetadata(clusterState);
|
||||
|
||||
List<String> existingJobsOrGroups =
|
||||
calendarJobIds.stream().filter(mlMetadata::isGroupOrJob).collect(Collectors.toList());
|
||||
|
||||
Set<String> expandedJobIds = new HashSet<>();
|
||||
existingJobsOrGroups.forEach(jobId -> expandedJobIds.addAll(expandJobIds(jobId, true, clusterState)));
|
||||
for (String jobId : expandedJobIds) {
|
||||
if (isJobOpen(clusterState, jobId)) {
|
||||
updateJobProcessNotifier.submitJobUpdate(UpdateParams.scheduledEventsUpdate(jobId), ActionListener.wrap(
|
||||
isUpdated -> {
|
||||
if (isUpdated) {
|
||||
auditor.info(jobId, Messages.getMessage(Messages.JOB_AUDIT_CALENDARS_UPDATED_ON_PROCESS));
|
||||
}
|
||||
}, e -> {}
|
||||
));
|
||||
}
|
||||
Set<String> openJobIds = openJobIds(clusterState);
|
||||
if (openJobIds.isEmpty()) {
|
||||
updateListener.onResponse(Boolean.TRUE);
|
||||
return;
|
||||
}
|
||||
|
||||
// calendarJobIds may be a group or job
|
||||
jobConfigProvider.expandGroupIds(calendarJobIds, ActionListener.wrap(
|
||||
expandedIds -> {
|
||||
threadPool.executor(MachineLearning.UTILITY_THREAD_POOL_NAME).execute(() -> {
|
||||
// Merge the expanded group members with the request Ids.
|
||||
// Ids that aren't jobs will be filtered by isJobOpen()
|
||||
expandedIds.addAll(calendarJobIds);
|
||||
|
||||
for (String jobId : expandedIds) {
|
||||
if (isJobOpen(clusterState, jobId)) {
|
||||
updateJobProcessNotifier.submitJobUpdate(UpdateParams.scheduledEventsUpdate(jobId), ActionListener.wrap(
|
||||
isUpdated -> {
|
||||
if (isUpdated) {
|
||||
auditor.info(jobId, Messages.getMessage(Messages.JOB_AUDIT_CALENDARS_UPDATED_ON_PROCESS));
|
||||
}
|
||||
},
|
||||
e -> logger.error("[" + jobId + "] failed submitting process update on calendar change", e)
|
||||
));
|
||||
}
|
||||
}
|
||||
|
||||
updateListener.onResponse(Boolean.TRUE);
|
||||
});
|
||||
},
|
||||
updateListener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
public void revertSnapshot(RevertModelSnapshotAction.Request request, ActionListener<RevertModelSnapshotAction.Response> actionListener,
|
||||
@ -514,46 +597,19 @@ public class JobManager {
|
||||
}
|
||||
};
|
||||
|
||||
// Step 1. Do the cluster state update
|
||||
// Step 1. update the job
|
||||
// -------
|
||||
Consumer<Long> clusterStateHandler = response -> clusterService.submitStateUpdateTask("revert-snapshot-" + request.getJobId(),
|
||||
new AckedClusterStateUpdateTask<Boolean>(request, ActionListener.wrap(updateHandler, actionListener::onFailure)) {
|
||||
JobUpdate update = new JobUpdate.Builder(request.getJobId())
|
||||
.setModelSnapshotId(modelSnapshot.getSnapshotId())
|
||||
.build();
|
||||
|
||||
@Override
|
||||
protected Boolean newResponse(boolean acknowledged) {
|
||||
if (acknowledged) {
|
||||
auditor.info(request.getJobId(), Messages.getMessage(Messages.JOB_AUDIT_REVERTED, modelSnapshot.getDescription()));
|
||||
return true;
|
||||
}
|
||||
actionListener.onFailure(new IllegalStateException("Could not revert modelSnapshot on job ["
|
||||
+ request.getJobId() + "], not acknowledged by master."));
|
||||
return false;
|
||||
}
|
||||
|
||||
@Override
|
||||
public ClusterState execute(ClusterState currentState) {
|
||||
Job job = getJobOrThrowIfUnknown(request.getJobId(), currentState);
|
||||
Job.Builder builder = new Job.Builder(job);
|
||||
builder.setModelSnapshotId(modelSnapshot.getSnapshotId());
|
||||
builder.setEstablishedModelMemory(response);
|
||||
return updateClusterState(builder.build(), true, currentState);
|
||||
}
|
||||
});
|
||||
|
||||
// Step 0. Find the appropriate established model memory for the reverted job
|
||||
// -------
|
||||
jobResultsProvider.getEstablishedMemoryUsage(request.getJobId(), modelSizeStats.getTimestamp(), modelSizeStats, clusterStateHandler,
|
||||
actionListener::onFailure);
|
||||
}
|
||||
|
||||
private static MlMetadata.Builder createMlMetadataBuilder(ClusterState currentState) {
|
||||
return new MlMetadata.Builder(MlMetadata.getMlMetadata(currentState));
|
||||
}
|
||||
|
||||
private static ClusterState buildNewClusterState(ClusterState currentState, MlMetadata.Builder builder) {
|
||||
XPackPlugin.checkReadyForXPackCustomMetadata(currentState);
|
||||
ClusterState.Builder newState = ClusterState.builder(currentState);
|
||||
newState.metaData(MetaData.builder(currentState.getMetaData()).putCustom(MlMetadata.TYPE, builder.build()).build());
|
||||
return newState.build();
|
||||
jobConfigProvider.updateJob(request.getJobId(), update, maxModelMemoryLimit, ActionListener.wrap(
|
||||
job -> {
|
||||
auditor.info(request.getJobId(),
|
||||
Messages.getMessage(Messages.JOB_AUDIT_REVERTED, modelSnapshot.getDescription()));
|
||||
updateHandler.accept(Boolean.TRUE);
|
||||
},
|
||||
actionListener::onFailure
|
||||
));
|
||||
}
|
||||
}
|
||||
|
@ -0,0 +1,44 @@
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License;
 * you may not use this file except in compliance with the Elastic License.
 */
package org.elasticsearch.xpack.ml.job.persistence;

import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.TermQueryBuilder;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.xpack.core.ml.job.config.Job;

import java.io.IOException;
import java.io.InputStream;

public class BatchedJobsIterator extends BatchedDocumentsIterator<Job.Builder> {

    public BatchedJobsIterator(Client client, String index) {
        super(client, index);
    }

    @Override
    protected QueryBuilder getQuery() {
        return new TermQueryBuilder(Job.JOB_TYPE.getPreferredName(), Job.ANOMALY_DETECTOR_JOB_TYPE);
    }

    @Override
    protected Job.Builder map(SearchHit hit) {
        try (InputStream stream = hit.getSourceRef().streamInput();
             XContentParser parser = XContentFactory.xContent(XContentType.JSON)
                     .createParser(NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, stream)) {
            return Job.LENIENT_PARSER.apply(parser, null);
        } catch (IOException e) {
            throw new ElasticsearchParseException("failed to parse job document [" + hit.getId() + "]", e);
        }
    }
}
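// Illustrative usage sketch (not part of the change, and it assumes the BatchedDocumentsIterator
// base class exposes hasNext() and a next() that returns one batch of mapped documents; "client"
// and "logger" are assumed to be in scope): drain the iterator to visit every stored job config.
BatchedJobsIterator jobsIterator = new BatchedJobsIterator(client, AnomalyDetectorsIndex.configIndexName());
while (jobsIterator.hasNext()) {
    for (Job.Builder jobBuilder : jobsIterator.next()) {   // next() is assumed to return one batch
        logger.debug("found job config [{}]", jobBuilder.getId());
    }
}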
@ -0,0 +1,158 @@
|
||||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
package org.elasticsearch.xpack.ml.job.persistence;
|
||||
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.regex.Regex;
|
||||
|
||||
import java.util.Collection;
|
||||
import java.util.Iterator;
|
||||
import java.util.LinkedList;
|
||||
import java.util.List;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
/**
|
||||
* Class for tracking the set of Ids returned from some
|
||||
* function and whether they satisfy the required Ids as defined by an
|
||||
* expression that may contain wildcards.
|
||||
*
|
||||
* For example, given a set of Ids ["foo-1", "foo-2", "bar-1", "bar-2"]:
|
||||
* <ul>
|
||||
* <li>The expression foo* would be satisfied by foo-1 and foo-2</li>
|
||||
* <li>The expression bar-1 would be satisfied by bar-1</li>
|
||||
* <li>The expression bar-1,car-1 would leave car-1 unmatched</li>
|
||||
* <li>The expression * would be satisfied by anything or nothing depending on the
|
||||
* value of {@code allowNoMatchForWildcards}</li>
|
||||
* </ul>
|
||||
*/
|
||||
public final class ExpandedIdsMatcher {
|
||||
|
||||
public static final String ALL = "_all";
|
||||
|
||||
/**
|
||||
* Split {@code expression} into tokens separated by a ','
|
||||
*
|
||||
* @param expression Expression containing zero or more ','s
|
||||
* @return Array of tokens
|
||||
*/
|
||||
public static String [] tokenizeExpression(String expression) {
|
||||
return Strings.tokenizeToStringArray(expression, ",");
|
||||
}
|
||||
|
||||
private final LinkedList<IdMatcher> requiredMatches;
|
||||
|
||||
/**
|
||||
* Generate the list of required matches from the expressions in {@code tokens}
|
||||
* and initialize.
|
||||
*
|
||||
* @param tokens List of expressions that may be wildcards or full Ids
|
||||
* @param allowNoMatchForWildcards If true then it is not required for wildcard
|
||||
* expressions to match an Id meaning they are
|
||||
* not returned in the list of required matches
|
||||
*/
|
||||
public ExpandedIdsMatcher(String [] tokens, boolean allowNoMatchForWildcards) {
|
||||
requiredMatches = new LinkedList<>();
|
||||
|
||||
if (Strings.isAllOrWildcard(tokens)) {
|
||||
// if allowNoMatchForWildcards == true then any number
|
||||
// of jobs with any id is ok. Therefore no matches
|
||||
// are required
|
||||
|
||||
if (allowNoMatchForWildcards == false) {
|
||||
// require something, anything to match
|
||||
requiredMatches.add(new WildcardMatcher("*"));
|
||||
}
|
||||
return;
|
||||
}
|
||||
|
||||
if (allowNoMatchForWildcards) {
|
||||
// matches are not required for wildcards but
|
||||
// specific job Ids are
|
||||
for (String token : tokens) {
|
||||
if (Regex.isSimpleMatchPattern(token) == false) {
|
||||
requiredMatches.add(new EqualsIdMatcher(token));
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// Matches are required for wildcards
|
||||
for (String token : tokens) {
|
||||
if (Regex.isSimpleMatchPattern(token)) {
|
||||
requiredMatches.add(new WildcardMatcher(token));
|
||||
} else {
|
||||
requiredMatches.add(new EqualsIdMatcher(token));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* For each of the {@code requiredMatches} check there is an element
|
||||
* present in {@code ids} that matches. Once a match is made the
|
||||
* matcher is removed from {@code requiredMatches}.
|
||||
*/
|
||||
public void filterMatchedIds(Collection<String> ids) {
|
||||
for (String id: ids) {
|
||||
Iterator<IdMatcher> itr = requiredMatches.iterator();
|
||||
if (itr.hasNext() == false) {
|
||||
break;
|
||||
}
|
||||
while (itr.hasNext()) {
|
||||
if (itr.next().matches(id)) {
|
||||
itr.remove();
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
public boolean hasUnmatchedIds() {
|
||||
return requiredMatches.isEmpty() == false;
|
||||
}
|
||||
|
||||
public List<String> unmatchedIds() {
|
||||
return requiredMatches.stream().map(IdMatcher::getId).collect(Collectors.toList());
|
||||
}
|
||||
|
||||
public String unmatchedIdsString() {
|
||||
return requiredMatches.stream().map(IdMatcher::getId).collect(Collectors.joining(","));
|
||||
}
|
||||
|
||||
|
||||
private abstract static class IdMatcher {
|
||||
protected final String id;
|
||||
|
||||
IdMatcher(String id) {
|
||||
this.id = id;
|
||||
}
|
||||
|
||||
public String getId() {
|
||||
return id;
|
||||
}
|
||||
|
||||
public abstract boolean matches(String jobId);
|
||||
}
|
||||
|
||||
private static class EqualsIdMatcher extends IdMatcher {
|
||||
EqualsIdMatcher(String id) {
|
||||
super(id);
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean matches(String id) {
|
||||
return this.id.equals(id);
|
||||
}
|
||||
}
|
||||
|
||||
private static class WildcardMatcher extends IdMatcher {
|
||||
WildcardMatcher(String id) {
|
||||
super(id);
|
||||
}
|
||||
|
||||
@Override
|
||||
public boolean matches(String id) {
|
||||
return Regex.simpleMatch(this.id, id);
|
||||
}
|
||||
}
|
||||
}
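// Usage sketch illustrating the class javadoc examples above (added for clarity; all methods
// used here are the ones defined in this class, Arrays.asList is used for brevity):
ExpandedIdsMatcher matcher =
        new ExpandedIdsMatcher(ExpandedIdsMatcher.tokenizeExpression("foo*,bar-1,car-1"), false);
matcher.filterMatchedIds(Arrays.asList("foo-1", "foo-2", "bar-1", "bar-2"));
// "foo*" and "bar-1" were satisfied; "car-1" was not
assert matcher.hasUnmatchedIds();
assert "car-1".equals(matcher.unmatchedIdsString());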
|
@ -0,0 +1,849 @@
|
||||
/*
|
||||
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
|
||||
* or more contributor license agreements. Licensed under the Elastic License;
|
||||
* you may not use this file except in compliance with the Elastic License.
|
||||
*/
|
||||
package org.elasticsearch.xpack.ml.job.persistence;
|
||||
|
||||
import org.apache.logging.log4j.LogManager;
|
||||
import org.apache.logging.log4j.Logger;
|
||||
import org.apache.lucene.search.join.ScoreMode;
|
||||
import org.elasticsearch.ElasticsearchException;
|
||||
import org.elasticsearch.ElasticsearchParseException;
|
||||
import org.elasticsearch.action.ActionListener;
|
||||
import org.elasticsearch.action.DocWriteRequest;
|
||||
import org.elasticsearch.action.DocWriteResponse;
|
||||
import org.elasticsearch.action.delete.DeleteAction;
|
||||
import org.elasticsearch.action.delete.DeleteRequest;
|
||||
import org.elasticsearch.action.delete.DeleteResponse;
|
||||
import org.elasticsearch.action.get.GetAction;
|
||||
import org.elasticsearch.action.get.GetRequest;
|
||||
import org.elasticsearch.action.get.GetResponse;
|
||||
import org.elasticsearch.action.get.MultiGetItemResponse;
|
||||
import org.elasticsearch.action.get.MultiGetRequest;
|
||||
import org.elasticsearch.action.get.MultiGetResponse;
|
||||
import org.elasticsearch.action.index.IndexAction;
|
||||
import org.elasticsearch.action.index.IndexRequest;
|
||||
import org.elasticsearch.action.index.IndexResponse;
|
||||
import org.elasticsearch.action.search.SearchRequest;
|
||||
import org.elasticsearch.action.search.SearchResponse;
|
||||
import org.elasticsearch.action.support.IndicesOptions;
|
||||
import org.elasticsearch.action.support.WriteRequest;
|
||||
import org.elasticsearch.action.update.UpdateAction;
|
||||
import org.elasticsearch.action.update.UpdateRequest;
|
||||
import org.elasticsearch.client.Client;
|
||||
import org.elasticsearch.common.Strings;
|
||||
import org.elasticsearch.common.bytes.BytesReference;
|
||||
import org.elasticsearch.common.regex.Regex;
|
||||
import org.elasticsearch.common.unit.ByteSizeValue;
|
||||
import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
|
||||
import org.elasticsearch.common.xcontent.NamedXContentRegistry;
|
||||
import org.elasticsearch.common.xcontent.ToXContent;
|
||||
import org.elasticsearch.common.xcontent.XContentBuilder;
|
||||
import org.elasticsearch.common.xcontent.XContentFactory;
|
||||
import org.elasticsearch.common.xcontent.XContentParser;
|
||||
import org.elasticsearch.common.xcontent.XContentType;
|
||||
import org.elasticsearch.index.IndexNotFoundException;
|
||||
import org.elasticsearch.index.engine.DocumentMissingException;
|
||||
import org.elasticsearch.index.engine.VersionConflictEngineException;
|
||||
import org.elasticsearch.index.query.BoolQueryBuilder;
|
||||
import org.elasticsearch.index.query.ExistsQueryBuilder;
|
||||
import org.elasticsearch.index.query.QueryBuilder;
|
||||
import org.elasticsearch.index.query.QueryBuilders;
|
||||
import org.elasticsearch.index.query.TermQueryBuilder;
|
||||
import org.elasticsearch.index.query.TermsQueryBuilder;
|
||||
import org.elasticsearch.index.query.WildcardQueryBuilder;
|
||||
import org.elasticsearch.search.SearchHit;
|
||||
import org.elasticsearch.search.builder.SearchSourceBuilder;
|
||||
import org.elasticsearch.search.fetch.subphase.DocValueFieldsContext;
|
||||
import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
|
||||
import org.elasticsearch.search.sort.SortOrder;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedConfig;
|
||||
import org.elasticsearch.xpack.core.ml.datafeed.DatafeedJobValidator;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.AnalysisConfig;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Detector;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.Job;
|
||||
import org.elasticsearch.xpack.core.ml.job.config.JobUpdate;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.AnomalyDetectorsIndex;
|
||||
import org.elasticsearch.xpack.core.ml.job.persistence.ElasticsearchMappings;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ExceptionsHelper;
|
||||
import org.elasticsearch.xpack.core.ml.utils.ToXContentParams;
|
||||
|
||||
import java.io.IOException;
|
||||
import java.io.InputStream;
|
||||
import java.util.ArrayList;
|
||||
import java.util.Arrays;
|
||||
import java.util.Collections;
|
||||
import java.util.HashMap;
|
||||
import java.util.HashSet;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Set;
|
||||
import java.util.SortedSet;
|
||||
import java.util.TreeSet;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.ML_ORIGIN;
|
||||
import static org.elasticsearch.xpack.core.ClientHelper.executeAsyncWithOrigin;
|
||||
|
||||
/**
|
||||
* This class implements CRUD operation for the
|
||||
* anomaly detector job configuration document
|
||||
*/
|
||||
public class JobConfigProvider {
|
||||
|
||||
private static final Logger logger = LogManager.getLogger(JobConfigProvider.class);
|
||||
|
||||
public static final Map<String, String> TO_XCONTENT_PARAMS;
|
||||
static {
|
||||
Map<String, String> modifiable = new HashMap<>();
|
||||
modifiable.put(ToXContentParams.FOR_INTERNAL_STORAGE, "true");
|
||||
TO_XCONTENT_PARAMS = Collections.unmodifiableMap(modifiable);
|
||||
}
|
||||
|
||||
/**
|
||||
* In most cases we expect 10s or 100s of jobs to be defined and
|
||||
* a search for all jobs should return all.
|
||||
* TODO this is a temporary fix
|
||||
*/
|
||||
private int searchSize = 1000;
|
||||
|
||||
private final Client client;
|
||||
|
||||
public JobConfigProvider(Client client) {
|
||||
this.client = client;
|
||||
}
|
||||
|
||||
/**
|
||||
* Persist the anomaly detector job configuration to the configuration index.
|
||||
* It is an error if a job with the same Id already exists - the config will
|
||||
* not be overwritten.
|
||||
*
|
||||
* @param job The anomaly detector job configuration
|
||||
* @param listener Index response listener
|
||||
*/
|
||||
public void putJob(Job job, ActionListener<IndexResponse> listener) {
|
||||
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
|
||||
XContentBuilder source = job.toXContent(builder, new ToXContent.MapParams(TO_XCONTENT_PARAMS));
|
||||
IndexRequest indexRequest = client.prepareIndex(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, Job.documentId(job.getId()))
|
||||
.setSource(source)
|
||||
.setOpType(DocWriteRequest.OpType.CREATE)
|
||||
.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
|
||||
.request();
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, IndexAction.INSTANCE, indexRequest, ActionListener.wrap(
|
||||
listener::onResponse,
|
||||
e -> {
|
||||
if (e instanceof VersionConflictEngineException) {
|
||||
// the job already exists
|
||||
listener.onFailure(ExceptionsHelper.jobAlreadyExists(job.getId()));
|
||||
} else {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
}));
|
||||
|
||||
} catch (IOException e) {
|
||||
listener.onFailure(new ElasticsearchParseException("Failed to serialise job with id [" + job.getId() + "]", e));
|
||||
}
|
||||
}
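// Caller sketch (an assumption for illustration; "job" stands for a fully constructed Job and
// "logger" for the caller's logger): a version conflict on the create is surfaced to the caller
// as a ResourceAlreadyExistsException by the wrapper above.
jobConfigProvider.putJob(job, ActionListener.wrap(
        indexResponse -> logger.info("[{}] job configuration indexed", job.getId()),
        e -> logger.error("[" + job.getId() + "] failed to index job configuration", e)));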
|
||||
|
||||
/**
|
||||
* Get the anomaly detector job specified by {@code jobId}.
|
||||
* If the job is missing a {@code ResourceNotFoundException} is returned
|
||||
* via the listener.
|
||||
*
|
||||
* If the .ml-config index does not exist it is treated as a missing job
|
||||
* error.
|
||||
*
|
||||
* @param jobId The job ID
|
||||
* @param jobListener Job listener
|
||||
*/
|
||||
public void getJob(String jobId, ActionListener<Job.Builder> jobListener) {
|
||||
GetRequest getRequest = new GetRequest(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, Job.documentId(jobId));
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, getRequest, new ActionListener<GetResponse>() {
|
||||
@Override
|
||||
public void onResponse(GetResponse getResponse) {
|
||||
if (getResponse.isExists() == false) {
|
||||
jobListener.onFailure(ExceptionsHelper.missingJobException(jobId));
|
||||
return;
|
||||
}
|
||||
|
||||
BytesReference source = getResponse.getSourceAsBytesRef();
|
||||
parseJobLenientlyFromSource(source, jobListener);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
if (e.getClass() == IndexNotFoundException.class) {
|
||||
jobListener.onFailure(ExceptionsHelper.missingJobException(jobId));
|
||||
} else {
|
||||
jobListener.onFailure(e);
|
||||
}
|
||||
}
|
||||
}, client::get);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the list anomaly detector jobs specified by {@code jobIds}.
|
||||
*
|
||||
* WARNING: errors are silently ignored; if a job is not found a
|
||||
* {@code ResourceNotFoundException} is not thrown. Only found
|
||||
* jobs are returned, so the size of the returned jobs list could
|
||||
* be different to the size of the requested ids list.
|
||||
*
|
||||
* @param jobIds The jobs to get
|
||||
* @param listener Jobs listener
|
||||
*/
|
||||
public void getJobs(List<String> jobIds, ActionListener<List<Job.Builder>> listener) {
|
||||
MultiGetRequest multiGetRequest = new MultiGetRequest();
|
||||
jobIds.forEach(jobId -> multiGetRequest.add(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, Job.documentId(jobId)));
|
||||
|
||||
List<Job.Builder> jobs = new ArrayList<>();
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, multiGetRequest, new ActionListener<MultiGetResponse>() {
|
||||
@Override
|
||||
public void onResponse(MultiGetResponse multiGetResponse) {
|
||||
|
||||
MultiGetItemResponse[] responses = multiGetResponse.getResponses();
|
||||
for (MultiGetItemResponse response : responses) {
|
||||
GetResponse getResponse = response.getResponse();
|
||||
if (getResponse.isExists()) {
|
||||
BytesReference source = getResponse.getSourceAsBytesRef();
|
||||
try {
|
||||
Job.Builder job = parseJobLenientlyFromSource(source);
|
||||
jobs.add(job);
|
||||
} catch (IOException e) {
|
||||
logger.error("Error parsing job configuration [" + response.getId() + "]");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
listener.onResponse(jobs);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
}, client::multiGet);
|
||||
}
|
||||
|
||||
/**
|
||||
* Delete the anomaly detector job config document.
|
||||
* {@code errorIfMissing} controls whether or not an error is returned
|
||||
* if the document does not exist.
|
||||
*
|
||||
* @param jobId The job id
|
||||
* @param errorIfMissing If the job document does not exist and this is true
|
||||
* listener fails with a ResourceNotFoundException else
|
||||
* the DeleteResponse is always returned.
|
||||
* @param actionListener Deleted job listener
|
||||
*/
|
||||
public void deleteJob(String jobId, boolean errorIfMissing, ActionListener<DeleteResponse> actionListener) {
|
||||
DeleteRequest request = new DeleteRequest(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, Job.documentId(jobId));
|
||||
request.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, DeleteAction.INSTANCE, request, new ActionListener<DeleteResponse>() {
|
||||
@Override
|
||||
public void onResponse(DeleteResponse deleteResponse) {
|
||||
if (errorIfMissing) {
|
||||
if (deleteResponse.getResult() == DocWriteResponse.Result.NOT_FOUND) {
|
||||
actionListener.onFailure(ExceptionsHelper.missingJobException(jobId));
|
||||
return;
|
||||
}
|
||||
assert deleteResponse.getResult() == DocWriteResponse.Result.DELETED;
|
||||
}
|
||||
actionListener.onResponse(deleteResponse);
|
||||
}
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
actionListener.onFailure(e);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the job and update it by applying {@code update} then index the changed job
|
||||
* setting the version in the request. Applying the update may cause a validation error
|
||||
* which is returned via {@code updatedJobListener}
|
||||
*
|
||||
* @param jobId The Id of the job to update
|
||||
* @param update The job update
|
||||
* @param maxModelMemoryLimit The maximum model memory allowed. This can be {@code null}
|
||||
* if the job's {@link org.elasticsearch.xpack.core.ml.job.config.AnalysisLimits}
|
||||
* are not changed.
|
||||
* @param updatedJobListener Updated job listener
|
||||
*/
|
||||
public void updateJob(String jobId, JobUpdate update, ByteSizeValue maxModelMemoryLimit, ActionListener<Job> updatedJobListener) {
|
||||
GetRequest getRequest = new GetRequest(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, Job.documentId(jobId));
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, GetAction.INSTANCE, getRequest, new ActionListener<GetResponse>() {
|
||||
@Override
|
||||
public void onResponse(GetResponse getResponse) {
|
||||
if (getResponse.isExists() == false) {
|
||||
updatedJobListener.onFailure(ExceptionsHelper.missingJobException(jobId));
|
||||
return;
|
||||
}
|
||||
|
||||
long version = getResponse.getVersion();
|
||||
BytesReference source = getResponse.getSourceAsBytesRef();
|
||||
Job.Builder jobBuilder;
|
||||
try {
|
||||
jobBuilder = parseJobLenientlyFromSource(source);
|
||||
} catch (IOException e) {
|
||||
updatedJobListener.onFailure(
|
||||
new ElasticsearchParseException("Failed to parse job configuration [" + jobId + "]", e));
|
||||
return;
|
||||
}
|
||||
|
||||
Job updatedJob;
|
||||
try {
|
||||
// Applying the update may result in a validation error
|
||||
updatedJob = update.mergeWithJob(jobBuilder.build(), maxModelMemoryLimit);
|
||||
} catch (Exception e) {
|
||||
updatedJobListener.onFailure(e);
|
||||
return;
|
||||
}
|
||||
|
||||
indexUpdatedJob(updatedJob, version, updatedJobListener);
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
updatedJobListener.onFailure(e);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Job update validation function.
|
||||
* {@code updatedListener} must be called by implementations reporting
|
||||
* either a validation error or success.
|
||||
*/
|
||||
@FunctionalInterface
|
||||
public interface UpdateValidator {
|
||||
void validate(Job job, JobUpdate update, ActionListener<Void> updatedListener);
|
||||
}
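// Illustrative implementation (the specific check is an assumption, not the production
// validator): because UpdateValidator is a functional interface a lambda is enough, provided
// it always completes the listener with either onResponse(null) or onFailure.
UpdateValidator noSelfGroupValidator = (job, update, listener) -> {
    if (update.getGroups() != null && update.getGroups().contains(job.getId())) {
        listener.onFailure(new IllegalArgumentException(
                "job [" + job.getId() + "] cannot be a member of a group with the same name"));
    } else {
        listener.onResponse(null);
    }
};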
|
||||
|
||||
/**
|
||||
* Similar to {@link #updateJob(String, JobUpdate, ByteSizeValue, ActionListener)} but
|
||||
* with an extra validation step which is called before the update is applied.
|
||||
*
|
||||
* @param jobId The Id of the job to update
|
||||
* @param update The job update
|
||||
* @param maxModelMemoryLimit The maximum model memory allowed
|
||||
* @param validator The job update validator
|
||||
* @param updatedJobListener Updated job listener
|
||||
*/
|
||||
public void updateJobWithValidation(String jobId, JobUpdate update, ByteSizeValue maxModelMemoryLimit,
|
||||
UpdateValidator validator, ActionListener<Job> updatedJobListener) {
|
||||
GetRequest getRequest = new GetRequest(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, Job.documentId(jobId));
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, GetAction.INSTANCE, getRequest, new ActionListener<GetResponse>() {
|
||||
@Override
|
||||
public void onResponse(GetResponse getResponse) {
|
||||
if (getResponse.isExists() == false) {
|
||||
updatedJobListener.onFailure(ExceptionsHelper.missingJobException(jobId));
|
||||
return;
|
||||
}
|
||||
|
||||
long version = getResponse.getVersion();
|
||||
BytesReference source = getResponse.getSourceAsBytesRef();
|
||||
Job originalJob;
|
||||
try {
|
||||
originalJob = parseJobLenientlyFromSource(source).build();
|
||||
} catch (Exception e) {
|
||||
updatedJobListener.onFailure(
|
||||
new ElasticsearchParseException("Failed to parse job configuration [" + jobId + "]", e));
|
||||
return;
|
||||
}
|
||||
|
||||
validator.validate(originalJob, update, ActionListener.wrap(
|
||||
validated -> {
|
||||
Job updatedJob;
|
||||
try {
|
||||
// Applying the update may result in a validation error
|
||||
updatedJob = update.mergeWithJob(originalJob, maxModelMemoryLimit);
|
||||
} catch (Exception e) {
|
||||
updatedJobListener.onFailure(e);
|
||||
return;
|
||||
}
|
||||
|
||||
indexUpdatedJob(updatedJob, version, updatedJobListener);
|
||||
},
|
||||
updatedJobListener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
updatedJobListener.onFailure(e);
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
private void indexUpdatedJob(Job updatedJob, long version, ActionListener<Job> updatedJobListener) {
|
||||
try (XContentBuilder builder = XContentFactory.jsonBuilder()) {
|
||||
XContentBuilder updatedSource = updatedJob.toXContent(builder, ToXContent.EMPTY_PARAMS);
|
||||
IndexRequest indexRequest = client.prepareIndex(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, Job.documentId(updatedJob.getId()))
|
||||
.setSource(updatedSource)
|
||||
.setVersion(version)
|
||||
.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE)
|
||||
.request();
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, IndexAction.INSTANCE, indexRequest, ActionListener.wrap(
|
||||
indexResponse -> {
|
||||
assert indexResponse.getResult() == DocWriteResponse.Result.UPDATED;
|
||||
updatedJobListener.onResponse(updatedJob);
|
||||
},
|
||||
updatedJobListener::onFailure
|
||||
));
|
||||
|
||||
} catch (IOException e) {
|
||||
updatedJobListener.onFailure(
|
||||
new ElasticsearchParseException("Failed to serialise job with id [" + updatedJob.getId() + "]", e));
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Check a job exists. A job exists if it has a configuration document.
|
||||
* If the .ml-config index does not exist it is treated as a missing job
|
||||
* error.
|
||||
*
|
||||
* Depending on the value of {@code errorIfMissing} if the job does not
|
||||
* exist a ResourceNotFoundException is returned to the listener,
|
||||
* otherwise false is returned in the response.
|
||||
*
|
||||
* @param jobId The jobId to check
|
||||
* @param errorIfMissing If true and the job is missing the listener fails with
|
||||
* a ResourceNotFoundException else false is returned.
|
||||
* @param listener Exists listener
|
||||
*/
|
||||
public void jobExists(String jobId, boolean errorIfMissing, ActionListener<Boolean> listener) {
|
||||
GetRequest getRequest = new GetRequest(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, Job.documentId(jobId));
|
||||
getRequest.fetchSourceContext(FetchSourceContext.DO_NOT_FETCH_SOURCE);
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, GetAction.INSTANCE, getRequest, new ActionListener<GetResponse>() {
|
||||
@Override
|
||||
public void onResponse(GetResponse getResponse) {
|
||||
if (getResponse.isExists() == false) {
|
||||
if (errorIfMissing) {
|
||||
listener.onFailure(ExceptionsHelper.missingJobException(jobId));
|
||||
} else {
|
||||
listener.onResponse(Boolean.FALSE);
|
||||
}
|
||||
} else {
|
||||
listener.onResponse(Boolean.TRUE);
|
||||
}
|
||||
}
|
||||
|
||||
@Override
|
||||
public void onFailure(Exception e) {
|
||||
if (e.getClass() == IndexNotFoundException.class) {
|
||||
if (errorIfMissing) {
|
||||
listener.onFailure(ExceptionsHelper.missingJobException(jobId));
|
||||
} else {
|
||||
listener.onResponse(Boolean.FALSE);
|
||||
}
|
||||
} else {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
}
|
||||
});
|
||||
}
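// Caller sketch (job id and logger are illustrative): with errorIfMissing == false a missing
// job, or a missing .ml-config index, answers Boolean.FALSE rather than failing the listener.
jobConfigProvider.jobExists("possibly-missing-job", false, ActionListener.wrap(
        exists -> logger.debug("job exists: {}", exists),
        e -> logger.error("exists check failed", e)));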
|
||||
|
||||
/**
|
||||
* For the list of job Ids find all that match existing jobs Ids.
|
||||
* The response is all the job Ids in {@code ids} that match an existing
|
||||
* job Id.
|
||||
* @param ids Job Ids to find
|
||||
* @param listener The matched Ids listener
|
||||
*/
|
||||
public void jobIdMatches(List<String> ids, ActionListener<List<String>> listener) {
|
||||
BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
|
||||
boolQueryBuilder.filter(new TermQueryBuilder(Job.JOB_TYPE.getPreferredName(), Job.ANOMALY_DETECTOR_JOB_TYPE));
|
||||
boolQueryBuilder.filter(new TermsQueryBuilder(Job.ID.getPreferredName(), ids));
|
||||
|
||||
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(boolQueryBuilder);
|
||||
sourceBuilder.fetchSource(false);
|
||||
sourceBuilder.docValueField(Job.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);
|
||||
|
||||
SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())
|
||||
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
|
||||
.setSource(sourceBuilder)
|
||||
.setSize(ids.size())
|
||||
.request();
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, searchRequest,
|
||||
ActionListener.<SearchResponse>wrap(
|
||||
response -> {
|
||||
SearchHit[] hits = response.getHits().getHits();
|
||||
List<String> matchedIds = new ArrayList<>();
|
||||
for (SearchHit hit : hits) {
|
||||
matchedIds.add(hit.field(Job.ID.getPreferredName()).getValue());
|
||||
}
|
||||
listener.onResponse(matchedIds);
|
||||
},
|
||||
listener::onFailure)
|
||||
, client::search);
|
||||
}
|
||||
|
||||
/**
|
||||
* Sets the job's {@code deleting} field to true
|
||||
* @param jobId The job to mark as deleting
|
||||
* @param listener Responds with true if successful else an error
|
||||
*/
|
||||
public void markJobAsDeleting(String jobId, ActionListener<Boolean> listener) {
|
||||
UpdateRequest updateRequest = new UpdateRequest(AnomalyDetectorsIndex.configIndexName(),
|
||||
ElasticsearchMappings.DOC_TYPE, Job.documentId(jobId));
|
||||
updateRequest.retryOnConflict(3);
|
||||
updateRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
|
||||
updateRequest.doc(Collections.singletonMap(Job.DELETING.getPreferredName(), Boolean.TRUE));
|
||||
|
||||
executeAsyncWithOrigin(client, ML_ORIGIN, UpdateAction.INSTANCE, updateRequest, ActionListener.wrap(
|
||||
response -> {
|
||||
assert (response.getResult() == DocWriteResponse.Result.UPDATED) ||
|
||||
(response.getResult() == DocWriteResponse.Result.NOOP);
|
||||
listener.onResponse(Boolean.TRUE);
|
||||
},
|
||||
e -> {
|
||||
ElasticsearchException[] causes = ElasticsearchException.guessRootCauses(e);
|
||||
if (causes[0] instanceof DocumentMissingException) {
|
||||
listener.onFailure(ExceptionsHelper.missingJobException(jobId));
|
||||
} else {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
}
|
||||
));
|
||||
}
|
||||
|
||||
/**
|
||||
* Expands an expression into the set of matching names. {@code expression}
|
||||
* may be a wildcard, a job group, a job Id or a list of those.
|
||||
* If {@code expression} == 'ALL', '*' or the empty string then all
|
||||
* job Ids are returned.
|
||||
* Job groups are expanded to all the jobs Ids in that group.
|
||||
*
|
||||
* If {@code expression} contains a job Id or a Group name then it
|
||||
* is an error if the job or group does not exist.
|
||||
*
|
||||
* For example, given a set of names ["foo-1", "foo-2", "bar-1", "bar-2"],
|
||||
* expressions resolve as follows:
|
||||
* <ul>
|
||||
* <li>"foo-1" : ["foo-1"]</li>
|
||||
* <li>"bar-1" : ["bar-1"]</li>
|
||||
* <li>"foo-1,foo-2" : ["foo-1", "foo-2"]</li>
|
||||
* <li>"foo-*" : ["foo-1", "foo-2"]</li>
|
||||
* <li>"*-1" : ["bar-1", "foo-1"]</li>
|
||||
* <li>"*" : ["bar-1", "bar-2", "foo-1", "foo-2"]</li>
|
||||
* <li>"_all" : ["bar-1", "bar-2", "foo-1", "foo-2"]</li>
|
||||
* </ul>
|
||||
*
|
||||
* @param expression the expression to resolve
|
||||
* @param allowNoJobs if {@code false}, an error is thrown when no name matches the {@code expression}.
|
||||
* This only applies to wild card expressions, if {@code expression} is not a
|
||||
* wildcard then setting this true will not suppress the exception
|
||||
* @param excludeDeleting If true exclude jobs marked as deleting
|
||||
* @param listener The expanded job Ids listener
|
||||
*/
|
||||
public void expandJobsIds(String expression, boolean allowNoJobs, boolean excludeDeleting, ActionListener<SortedSet<String>> listener) {
|
||||
String [] tokens = ExpandedIdsMatcher.tokenizeExpression(expression);
|
||||
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(buildQuery(tokens, excludeDeleting));
|
||||
sourceBuilder.sort(Job.ID.getPreferredName());
|
||||
sourceBuilder.fetchSource(false);
|
||||
sourceBuilder.docValueField(Job.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);
|
||||
sourceBuilder.docValueField(Job.GROUPS.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);
|
||||
|
||||
SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())
|
||||
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
|
||||
.setSource(sourceBuilder)
|
||||
.setSize(searchSize)
|
||||
.request();
|
||||
|
||||
ExpandedIdsMatcher requiredMatches = new ExpandedIdsMatcher(tokens, allowNoJobs);
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, searchRequest,
|
||||
ActionListener.<SearchResponse>wrap(
|
||||
response -> {
|
||||
SortedSet<String> jobIds = new TreeSet<>();
|
||||
SortedSet<String> groupsIds = new TreeSet<>();
|
||||
SearchHit[] hits = response.getHits().getHits();
|
||||
for (SearchHit hit : hits) {
|
||||
jobIds.add(hit.field(Job.ID.getPreferredName()).getValue());
|
||||
List<Object> groups = hit.field(Job.GROUPS.getPreferredName()).getValues();
|
||||
if (groups != null) {
|
||||
groupsIds.addAll(groups.stream().map(Object::toString).collect(Collectors.toList()));
|
||||
}
|
||||
}
|
||||
|
||||
groupsIds.addAll(jobIds);
|
||||
requiredMatches.filterMatchedIds(groupsIds);
|
||||
if (requiredMatches.hasUnmatchedIds()) {
|
||||
// some required jobs were not found
|
||||
listener.onFailure(ExceptionsHelper.missingJobException(requiredMatches.unmatchedIdsString()));
|
||||
return;
|
||||
}
|
||||
|
||||
listener.onResponse(jobIds);
|
||||
},
|
||||
listener::onFailure)
|
||||
, client::search);
|
||||
|
||||
}
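// Caller sketch: expand a mixed expression of a wildcard and a concrete id, excluding jobs
// marked as deleting, and failing if the wildcard matches nothing (allowNoJobs == false).
// The ids and logger are illustrative.
jobConfigProvider.expandJobsIds("foo-*,bar-1", false, true, ActionListener.wrap(
        jobIds -> logger.debug("expanded job ids: {}", jobIds),
        e -> logger.error("failed to expand job ids", e)));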
|
||||
|
||||
/**
|
||||
* The same logic as {@link #expandJobsIds(String, boolean, boolean, ActionListener)} but
|
||||
* the full anomaly detector job configuration is returned.
|
||||
*
|
||||
* See {@link #expandJobsIds(String, boolean, boolean, ActionListener)}
|
||||
*
|
||||
* @param expression the expression to resolve
|
||||
* @param allowNoJobs if {@code false}, an error is thrown when no name matches the {@code expression}.
|
||||
* This only applies to wildcard expressions; if {@code expression} is not a
|
||||
* wildcard then setting this to true will not suppress the exception
|
||||
* @param excludeDeleting If true exclude jobs marked as deleting
|
||||
* @param listener The expanded jobs listener
|
||||
*/
|
||||
// NORELEASE jobs should be paged or have a mechanism to return all jobs if there are many of them
|
||||
public void expandJobs(String expression, boolean allowNoJobs, boolean excludeDeleting, ActionListener<List<Job.Builder>> listener) {
|
||||
String [] tokens = ExpandedIdsMatcher.tokenizeExpression(expression);
|
||||
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder().query(buildQuery(tokens, excludeDeleting));
|
||||
sourceBuilder.sort(Job.ID.getPreferredName());
|
||||
|
||||
SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())
|
||||
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
|
||||
.setSource(sourceBuilder)
|
||||
.setSize(searchSize)
|
||||
.request();
|
||||
|
||||
ExpandedIdsMatcher requiredMatches = new ExpandedIdsMatcher(tokens, allowNoJobs);
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, searchRequest,
|
||||
ActionListener.<SearchResponse>wrap(
|
||||
response -> {
|
||||
List<Job.Builder> jobs = new ArrayList<>();
|
||||
Set<String> jobAndGroupIds = new HashSet<>();
|
||||
|
||||
SearchHit[] hits = response.getHits().getHits();
|
||||
for (SearchHit hit : hits) {
|
||||
try {
|
||||
BytesReference source = hit.getSourceRef();
|
||||
Job.Builder job = parseJobLenientlyFromSource(source);
|
||||
jobs.add(job);
|
||||
jobAndGroupIds.add(job.getId());
|
||||
jobAndGroupIds.addAll(job.getGroups());
|
||||
} catch (IOException e) {
|
||||
// TODO A better way to handle this rather than just ignoring the error?
|
||||
logger.error("Error parsing anomaly detector job configuration [" + hit.getId() + "]", e);
|
||||
}
|
||||
}
|
||||
|
||||
requiredMatches.filterMatchedIds(jobAndGroupIds);
|
||||
if (requiredMatches.hasUnmatchedIds()) {
|
||||
// some required jobs were not found
|
||||
listener.onFailure(ExceptionsHelper.missingJobException(requiredMatches.unmatchedIdsString()));
|
||||
return;
|
||||
}
|
||||
|
||||
listener.onResponse(jobs);
|
||||
},
|
||||
listener::onFailure)
|
||||
, client::search);
|
||||
|
||||
}
|
||||
|
||||
/**
|
||||
* Expands the list of job group Ids to the set of jobs which are members of the groups.
|
||||
* Unlike {@link #expandJobsIds(String, boolean, boolean, ActionListener)} it is not an error
|
||||
* if a group Id does not exist.
|
||||
* Wildcard expansion of group Ids is not supported.
|
||||
*
|
||||
* @param groupIds Group Ids to expand
|
||||
* @param listener Expanded job Ids listener
|
||||
*/
|
||||
public void expandGroupIds(List<String> groupIds, ActionListener<SortedSet<String>> listener) {
|
||||
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
|
||||
.query(new TermsQueryBuilder(Job.GROUPS.getPreferredName(), groupIds));
|
||||
sourceBuilder.sort(Job.ID.getPreferredName(), SortOrder.DESC);
|
||||
sourceBuilder.fetchSource(false);
|
||||
sourceBuilder.docValueField(Job.ID.getPreferredName(), DocValueFieldsContext.USE_DEFAULT_FORMAT);
|
||||
|
||||
SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())
|
||||
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
|
||||
.setSource(sourceBuilder)
|
||||
.setSize(searchSize)
|
||||
.request();
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, searchRequest,
|
||||
ActionListener.<SearchResponse>wrap(
|
||||
response -> {
|
||||
SortedSet<String> jobIds = new TreeSet<>();
|
||||
SearchHit[] hits = response.getHits().getHits();
|
||||
for (SearchHit hit : hits) {
|
||||
jobIds.add(hit.field(Job.ID.getPreferredName()).getValue());
|
||||
}
|
||||
|
||||
listener.onResponse(jobIds);
|
||||
},
|
||||
listener::onFailure)
|
||||
, client::search);
|
||||
}
|
||||
|
||||
/**
|
||||
* Check if a group exists, that is, there exists a job that is a member of
|
||||
* the group. If there are one or more jobs that define the group then
|
||||
* the listener responds with true else false.
|
||||
*
|
||||
* @param groupId The group Id
|
||||
* @param listener Returns true, false or a failure
|
||||
*/
|
||||
public void groupExists(String groupId, ActionListener<Boolean> listener) {
|
||||
BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
|
||||
boolQueryBuilder.filter(new TermQueryBuilder(Job.JOB_TYPE.getPreferredName(), Job.ANOMALY_DETECTOR_JOB_TYPE));
|
||||
boolQueryBuilder.filter(new TermQueryBuilder(Job.GROUPS.getPreferredName(), groupId));
|
||||
|
||||
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
|
||||
.query(boolQueryBuilder);
|
||||
sourceBuilder.fetchSource(false);
|
||||
|
||||
SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())
|
||||
.setSize(0)
|
||||
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
|
||||
.setSource(sourceBuilder).request();
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, searchRequest,
|
||||
ActionListener.<SearchResponse>wrap(
|
||||
response -> {
|
||||
listener.onResponse(response.getHits().getTotalHits().value > 0);
|
||||
},
|
||||
listener::onFailure)
|
||||
, client::search);
|
||||
}
|
||||
|
||||
/**
|
||||
* Find jobs with custom rules defined.
|
||||
* @param listener Jobs listener
|
||||
*/
|
||||
public void findJobsWithCustomRules(ActionListener<List<Job>> listener) {
|
||||
String customRulesPath = Strings.collectionToDelimitedString(Arrays.asList(Job.ANALYSIS_CONFIG.getPreferredName(),
|
||||
AnalysisConfig.DETECTORS.getPreferredName(), Detector.CUSTOM_RULES_FIELD.getPreferredName()), ".");
|
||||
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
|
||||
.query(QueryBuilders.nestedQuery(customRulesPath, QueryBuilders.existsQuery(customRulesPath), ScoreMode.None));
|
||||
|
||||
SearchRequest searchRequest = client.prepareSearch(AnomalyDetectorsIndex.configIndexName())
|
||||
.setIndicesOptions(IndicesOptions.lenientExpandOpen())
|
||||
.setSource(sourceBuilder)
|
||||
.setSize(searchSize)
|
||||
.request();
|
||||
|
||||
executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, searchRequest,
|
||||
ActionListener.<SearchResponse>wrap(
|
||||
response -> {
|
||||
List<Job> jobs = new ArrayList<>();
|
||||
|
||||
SearchHit[] hits = response.getHits().getHits();
|
||||
for (SearchHit hit : hits) {
|
||||
try {
|
||||
BytesReference source = hit.getSourceRef();
|
||||
Job job = parseJobLenientlyFromSource(source).build();
|
||||
jobs.add(job);
|
||||
} catch (IOException e) {
|
||||
// TODO A better way to handle this rather than just ignoring the error?
|
||||
logger.error("Error parsing anomaly detector job configuration [" + hit.getId() + "]", e);
|
||||
}
|
||||
}
|
||||
|
||||
listener.onResponse(jobs);
|
||||
},
|
||||
listener::onFailure)
|
||||
, client::search);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the job reference by the datafeed and validate the datafeed config against it
|
||||
* @param config Datafeed config
|
||||
* @param listener Validation listener
|
||||
*/
|
||||
public void validateDatafeedJob(DatafeedConfig config, ActionListener<Boolean> listener) {
|
||||
getJob(config.getJobId(), ActionListener.wrap(
|
||||
jobBuilder -> {
|
||||
try {
|
||||
DatafeedJobValidator.validate(config, jobBuilder.build());
|
||||
listener.onResponse(Boolean.TRUE);
|
||||
} catch (Exception e) {
|
||||
listener.onFailure(e);
|
||||
}
|
||||
},
|
||||
listener::onFailure
|
||||
));
|
||||
}
|
||||
|
||||
private void parseJobLenientlyFromSource(BytesReference source, ActionListener<Job.Builder> jobListener) {
|
||||
try (InputStream stream = source.streamInput();
|
||||
XContentParser parser = XContentFactory.xContent(XContentType.JSON)
|
||||
.createParser(NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, stream)) {
|
||||
jobListener.onResponse(Job.LENIENT_PARSER.apply(parser, null));
|
||||
} catch (Exception e) {
|
||||
jobListener.onFailure(e);
|
||||
}
|
||||
}
|
||||
|
||||
private Job.Builder parseJobLenientlyFromSource(BytesReference source) throws IOException {
|
||||
try (InputStream stream = source.streamInput();
|
||||
XContentParser parser = XContentFactory.xContent(XContentType.JSON)
|
||||
.createParser(NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, stream)) {
|
||||
return Job.LENIENT_PARSER.apply(parser, null);
|
||||
}
|
||||
}
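// For illustration only: LENIENT_PARSER tolerates unknown fields, which keeps older nodes able
// to read config documents written by newer nodes. The BytesArray literal stands in for a stored
// document source; real callers pass the bytes fetched from the .ml-config index.
BytesReference docWithUnknownField = new BytesArray("{\"job_id\":\"example\",\"some_future_field\":true}");
try (InputStream stream = docWithUnknownField.streamInput();
     XContentParser parser = XContentFactory.xContent(XContentType.JSON)
             .createParser(NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, stream)) {
    Job.Builder builder = Job.LENIENT_PARSER.apply(parser, null);  // the unknown field is ignored, not fatal
} catch (IOException e) {
    // handling elided in this sketch
}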
|
||||
|
||||
private QueryBuilder buildQuery(String [] tokens, boolean excludeDeleting) {
|
||||
QueryBuilder jobQuery = new TermQueryBuilder(Job.JOB_TYPE.getPreferredName(), Job.ANOMALY_DETECTOR_JOB_TYPE);
|
||||
if (Strings.isAllOrWildcard(tokens) && excludeDeleting == false) {
|
||||
// match all
|
||||
return jobQuery;
|
||||
}
|
||||
|
||||
BoolQueryBuilder boolQueryBuilder = new BoolQueryBuilder();
|
||||
boolQueryBuilder.filter(jobQuery);
|
||||
BoolQueryBuilder shouldQueries = new BoolQueryBuilder();
|
||||
|
||||
if (excludeDeleting) {
|
||||
// field exists only when the job is marked as deleting
|
||||
shouldQueries.mustNot(new ExistsQueryBuilder(Job.DELETING.getPreferredName()));
|
||||
|
||||
if (Strings.isAllOrWildcard(tokens)) {
|
||||
boolQueryBuilder.filter(shouldQueries);
|
||||
return boolQueryBuilder;
|
||||
}
|
||||
}
|
||||
|
||||
List<String> terms = new ArrayList<>();
|
||||
for (String token : tokens) {
|
||||
if (Regex.isSimpleMatchPattern(token)) {
|
||||
shouldQueries.should(new WildcardQueryBuilder(Job.ID.getPreferredName(), token));
|
||||
shouldQueries.should(new WildcardQueryBuilder(Job.GROUPS.getPreferredName(), token));
|
||||
} else {
|
||||
terms.add(token);
|
||||
}
|
||||
}
|
||||
|
||||
if (terms.isEmpty() == false) {
|
||||
shouldQueries.should(new TermsQueryBuilder(Job.ID.getPreferredName(), terms));
|
||||
shouldQueries.should(new TermsQueryBuilder(Job.GROUPS.getPreferredName(), terms));
|
||||
}
|
||||
|
||||
if (shouldQueries.should().isEmpty() == false) {
|
||||
boolQueryBuilder.filter(shouldQueries);
|
||||
}
|
||||
|
||||
return boolQueryBuilder;
|
||||
}
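// Hand-built equivalent (a sketch, for illustration) of roughly the query buildQuery produces
// for tokens ["foo-*", "bar-1"] with excludeDeleting == true: wildcard tokens become should
// clauses on both the job id and groups fields, concrete tokens are collected into terms
// clauses, and the deleting marker is excluded.
BoolQueryBuilder shoulds = new BoolQueryBuilder()
        .mustNot(new ExistsQueryBuilder(Job.DELETING.getPreferredName()))
        .should(new WildcardQueryBuilder(Job.ID.getPreferredName(), "foo-*"))
        .should(new WildcardQueryBuilder(Job.GROUPS.getPreferredName(), "foo-*"))
        .should(new TermsQueryBuilder(Job.ID.getPreferredName(), "bar-1"))
        .should(new TermsQueryBuilder(Job.GROUPS.getPreferredName(), "bar-1"));
BoolQueryBuilder equivalentQuery = new BoolQueryBuilder()
        .filter(new TermQueryBuilder(Job.JOB_TYPE.getPreferredName(), Job.ANOMALY_DETECTOR_JOB_TYPE))
        .filter(shoulds);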
|
||||
}
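For context, a minimal sketch of how a caller might wire up validateDatafeedJob above; jobConfigProvider, datafeedConfig, finalListener and indexDatafeedConfig are hypothetical placeholder names, not taken from this change:

    // Hypothetical caller: validate the datafeed against its job config before persisting it.
    // Success moves on to the next step; a missing job or invalid config fails the listener.
    jobConfigProvider.validateDatafeedJob(datafeedConfig, ActionListener.wrap(
            ok -> indexDatafeedConfig(datafeedConfig, finalListener),
            finalListener::onFailure
    ));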
@@ -243,10 +243,10 @@ public class JobResultsPersister {
    /**
     * Persist a model snapshot description
     */
    public void persistModelSnapshot(ModelSnapshot modelSnapshot, WriteRequest.RefreshPolicy refreshPolicy) {
    public IndexResponse persistModelSnapshot(ModelSnapshot modelSnapshot, WriteRequest.RefreshPolicy refreshPolicy) {
        Persistable persistable = new Persistable(modelSnapshot.getJobId(), modelSnapshot, ModelSnapshot.documentId(modelSnapshot));
        persistable.setRefreshPolicy(refreshPolicy);
        persistable.persist(AnomalyDetectorsIndex.resultsWriteAlias(modelSnapshot.getJobId())).actionGet();
        return persistable.persist(AnomalyDetectorsIndex.resultsWriteAlias(modelSnapshot.getJobId())).actionGet();
    }

    /**
@@ -336,7 +336,7 @@ public class JobResultsProvider {

    private void updateIndexMappingWithTermFields(String indexName, Collection<String> termFields, ActionListener<Boolean> listener) {
        // Put the whole "doc" mapping, not just the term fields, otherwise we'll wipe the _meta section of the mapping
        try (XContentBuilder termFieldsMapping = ElasticsearchMappings.docMapping(termFields)) {
        try (XContentBuilder termFieldsMapping = ElasticsearchMappings.resultsMapping(termFields)) {
            final PutMappingRequest request = client.admin().indices().preparePutMapping(indexName).setType(ElasticsearchMappings.DOC_TYPE)
                    .setSource(termFieldsMapping).request();
            executeAsyncWithOrigin(client.threadPool().getThreadContext(), ML_ORIGIN, request, new ActionListener<AcknowledgedResponse>() {
@@ -361,10 +361,20 @@ public class AutodetectProcessManager {
            updateProcessMessage.setFilter(filter);

            if (updateParams.isUpdateScheduledEvents()) {
                Job job = jobManager.getJobOrThrowIfUnknown(jobTask.getJobId());
                DataCounts dataCounts = getStatistics(jobTask).get().v1();
                ScheduledEventsQueryBuilder query = new ScheduledEventsQueryBuilder().start(job.earliestValidTimestamp(dataCounts));
                jobResultsProvider.scheduledEventsForJob(jobTask.getJobId(), job.getGroups(), query, eventsListener);
                jobManager.getJob(jobTask.getJobId(), new ActionListener<Job>() {
                    @Override
                    public void onResponse(Job job) {
                        DataCounts dataCounts = getStatistics(jobTask).get().v1();
                        ScheduledEventsQueryBuilder query = new ScheduledEventsQueryBuilder()
                                .start(job.earliestValidTimestamp(dataCounts));
                        jobResultsProvider.scheduledEventsForJob(jobTask.getJobId(), job.getGroups(), query, eventsListener);
                    }

                    @Override
                    public void onFailure(Exception e) {
                        handler.accept(e);
                    }
                });
            } else {
                eventsListener.onResponse(null);
            }
@@ -393,71 +403,79 @@ public class AutodetectProcessManager {
        }
    }

    public void openJob(JobTask jobTask, Consumer<Exception> handler) {
    public void openJob(JobTask jobTask, Consumer<Exception> closeHandler) {
        String jobId = jobTask.getJobId();
        Job job = jobManager.getJobOrThrowIfUnknown(jobId);

        if (job.getJobVersion() == null) {
            handler.accept(ExceptionsHelper.badRequestException("Cannot open job [" + jobId
                    + "] because jobs created prior to version 5.5 are not supported"));
            return;
        }

        logger.info("Opening job [{}]", jobId);
        processByAllocation.putIfAbsent(jobTask.getAllocationId(), new ProcessContext(jobTask));
        jobResultsProvider.getAutodetectParams(job, params -> {
            // We need to fork, otherwise we restore model state from a network thread (several GET api calls):
            threadPool.executor(MachineLearning.UTILITY_THREAD_POOL_NAME).execute(new AbstractRunnable() {
                @Override
                public void onFailure(Exception e) {
                    handler.accept(e);
                }

                @Override
                protected void doRun() throws Exception {
                    ProcessContext processContext = processByAllocation.get(jobTask.getAllocationId());
                    if (processContext == null) {
                        logger.debug("Aborted opening job [{}] as it has been closed", jobId);
                        return;
                    }
                    if (processContext.getState() != ProcessContext.ProcessStateName.NOT_RUNNING) {
                        logger.debug("Cannot open job [{}] when its state is [{}]", jobId, processContext.getState().getClass().getName());
        jobManager.getJob(jobId, ActionListener.wrap(
                // NORELEASE JIndex. Should not be doing this work on the network thread
                job -> {
                    if (job.getJobVersion() == null) {
                        closeHandler.accept(ExceptionsHelper.badRequestException("Cannot open job [" + jobId
                                + "] because jobs created prior to version 5.5 are not supported"));
                        return;
                    }

                    try {
                        createProcessAndSetRunning(processContext, params, handler);
                        processContext.getAutodetectCommunicator().init(params.modelSnapshot());
                        setJobState(jobTask, JobState.OPENED);
                    } catch (Exception e1) {
                        // No need to log here as the persistent task framework will log it
                        try {
                            // Don't leave a partially initialised process hanging around
                            processContext.newKillBuilder()
                                    .setAwaitCompletion(false)
                                    .setFinish(false)
                                    .kill();
                            processByAllocation.remove(jobTask.getAllocationId());
                        } finally {
                            setJobState(jobTask, JobState.FAILED, e2 -> handler.accept(e1));
                        }
                    }
                }
            });
        }, e1 -> {
            logger.warn("Failed to gather information required to open job [" + jobId + "]", e1);
            setJobState(jobTask, JobState.FAILED, e2 -> handler.accept(e1));
        });

                    processByAllocation.putIfAbsent(jobTask.getAllocationId(), new ProcessContext(jobTask));
                    jobResultsProvider.getAutodetectParams(job, params -> {
                        // We need to fork, otherwise we restore model state from a network thread (several GET api calls):
                        threadPool.executor(MachineLearning.UTILITY_THREAD_POOL_NAME).execute(new AbstractRunnable() {
                            @Override
                            public void onFailure(Exception e) {
                                closeHandler.accept(e);
                            }

                            @Override
                            protected void doRun() {
                                ProcessContext processContext = processByAllocation.get(jobTask.getAllocationId());
                                if (processContext == null) {
                                    logger.debug("Aborted opening job [{}] as it has been closed", jobId);
                                    return;
                                }
                                if (processContext.getState() != ProcessContext.ProcessStateName.NOT_RUNNING) {
                                    logger.debug("Cannot open job [{}] when its state is [{}]",
                                            jobId, processContext.getState().getClass().getName());
                                    return;
                                }

                                try {
                                    createProcessAndSetRunning(processContext, job, params, closeHandler);
                                    processContext.getAutodetectCommunicator().init(params.modelSnapshot());
                                    setJobState(jobTask, JobState.OPENED);
                                } catch (Exception e1) {
                                    // No need to log here as the persistent task framework will log it
                                    try {
                                        // Don't leave a partially initialised process hanging around
                                        processContext.newKillBuilder()
                                                .setAwaitCompletion(false)
                                                .setFinish(false)
                                                .kill();
                                        processByAllocation.remove(jobTask.getAllocationId());
                                    } finally {
                                        setJobState(jobTask, JobState.FAILED, e2 -> closeHandler.accept(e1));
                                    }
                                }
                            }
                        });
                    }, e1 -> {
                        logger.warn("Failed to gather information required to open job [" + jobId + "]", e1);
                        setJobState(jobTask, JobState.FAILED, e2 -> closeHandler.accept(e1));
                    });
                },
                closeHandler
        ));

    }
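In rough outline (a sketch, not the exact code above), the reworked openJob now resolves the job configuration asynchronously and reuses closeHandler as the failure path; openProcessForJob is a hypothetical stand-in for the continuation that forks to the utility thread pool:

    // Sketch of the new control flow: fetch the job config via a listener instead of
    // reading it from cluster state, then continue opening off the network thread.
    jobManager.getJob(jobId, ActionListener.wrap(
            job -> openProcessForJob(jobTask, job, closeHandler),   // hypothetical continuation
            closeHandler
    ));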

    private void createProcessAndSetRunning(ProcessContext processContext, AutodetectParams params, Consumer<Exception> handler) {
    private void createProcessAndSetRunning(ProcessContext processContext, Job job, AutodetectParams params, Consumer<Exception> handler) {
        // At this point we lock the process context until the process has been started.
        // The reason behind this is to ensure closing the job does not happen before
        // the process is started as that can result to the job getting seemingly closed
        // but the actual process is hanging alive.
        processContext.tryLock();
        try {
            AutodetectCommunicator communicator = create(processContext.getJobTask(), params, handler);
            AutodetectCommunicator communicator = create(processContext.getJobTask(), job, params, handler);
            processContext.setRunning(communicator);
        } finally {
            // Now that the process is running and we have updated its state we can unlock.
@@ -467,7 +485,7 @@ public class AutodetectProcessManager {
        }
    }

    AutodetectCommunicator create(JobTask jobTask, AutodetectParams autodetectParams, Consumer<Exception> handler) {
    AutodetectCommunicator create(JobTask jobTask, Job job, AutodetectParams autodetectParams, Consumer<Exception> handler) {
        // Closing jobs can still be using some or all threads in MachineLearning.AUTODETECT_THREAD_POOL_NAME
        // that an open job uses, so include them too when considering if enough threads are available.
        int currentRunningJobs = processByAllocation.size();
@@ -492,7 +510,6 @@ public class AutodetectProcessManager {
            }
        }

        Job job = jobManager.getJobOrThrowIfUnknown(jobId);
        // A TP with no queue, so that we fail immediately if there are no threads available
        ExecutorService autoDetectExecutorService = threadPool.executor(MachineLearning.AUTODETECT_THREAD_POOL_NAME);
        DataCountsReporter dataCountsReporter = new DataCountsReporter(job, autodetectParams.dataCounts(), jobDataCountsPersister);
@@ -505,8 +522,7 @@ public class AutodetectProcessManager {
        AutodetectProcess process = autodetectProcessFactory.createAutodetectProcess(job, autodetectParams, autoDetectExecutorService,
                onProcessCrash(jobTask));
        AutoDetectResultProcessor processor = new AutoDetectResultProcessor(
                client, auditor, jobId, renormalizer, jobResultsPersister, jobResultsProvider, autodetectParams.modelSizeStats(),
                autodetectParams.modelSnapshot() != null);
                client, auditor, jobId, renormalizer, jobResultsPersister, autodetectParams.modelSizeStats());
        ExecutorService autodetectWorkerExecutor;
        try (ThreadContext.StoredContext ignore = threadPool.getThreadContext().stashContext()) {
            autodetectWorkerExecutor = createAutodetectExecutorService(autoDetectExecutorService);
Some files were not shown because too many files have changed in this diff.