38cee807b9
* Parse all multi_match options from request

Rather than hardcoding predicates that require more methods for each newly added option, this encapsulates the state into a collection of appliers that can accumulate what's needed in the QueryBuilder itself.

For example, the following:

```json
GET /_xpack/sql/translate
{
  "query": "SELECT foo,baz from i WHERE MATCH('baz', 'should clause', 'operator=AND;type=cross_fields')"
}
```

then generates the following:

```json
{
  "size" : 1000,
  "query" : {
    "multi_match" : {
      "query" : "should clause",
      "fields" : [
        "baz^1.0"
      ],
      "type" : "cross_fields",
      "operator" : "AND",
      "slop" : 0,
      "prefix_length" : 0,
      "max_expansions" : 50,
      "zero_terms_query" : "NONE",
      "auto_generate_synonyms_phrase_query" : true,
      "fuzzy_transpositions" : true,
      "boost" : 1.0
    }
  },
  "_source" : {
    "includes" : [
      "baz"
    ],
    "excludes" : [ ]
  },
  "docvalue_fields" : [
    "foo"
  ]
}
```

And when an invalid field value is used:

```json
GET /_xpack/sql
{
  "query": "SELECT foo,baz from i WHERE MATCH('baz', 'should clause', 'operator=AND;minimum_should_match=potato')"
}
```

we get what ES would usually send back:

```json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "query_shard_exception",
        "reason" : "failed to create query: {\n  \"multi_match\" : {\n    \"query\" : \"should clause\",\n    \"fields\" : [\n      \"baz^1.0\"\n    ],\n    \"type\" : \"best_fields\",\n    \"operator\" : \"AND\",\n    \"slop\" : 0,\n    \"prefix_length\" : 0,\n    \"max_expansions\" : 50,\n    \"minimum_should_match\" : \"potato\",\n    \"zero_terms_query\" : \"NONE\",\n    \"auto_generate_synonyms_phrase_query\" : true,\n    \"fuzzy_transpositions\" : true,\n    \"boost\" : 1.0\n  }\n}",
        "index_uuid" : "ef3MWf8FTUe2Qjz2FLbhoQ",
        "index" : "i"
      }
    ],
    "type" : "search_phase_execution_exception",
    "reason" : "all shards failed",
    "phase" : "query",
    "grouped" : true,
    "failed_shards" : [
      {
        "shard" : 0,
        "index" : "i",
        "node" : "VfG9zfk9TDWdWvEZu0a4Rw",
        "reason" : {
          "type" : "query_shard_exception",
          "reason" : "failed to create query: {\n  \"multi_match\" : {\n    \"query\" : \"should clause\",\n    \"fields\" : [\n      \"baz^1.0\"\n    ],\n    \"type\" : \"best_fields\",\n    \"operator\" : \"AND\",\n    \"slop\" : 0,\n    \"prefix_length\" : 0,\n    \"max_expansions\" : 50,\n    \"minimum_should_match\" : \"potato\",\n    \"zero_terms_query\" : \"NONE\",\n    \"auto_generate_synonyms_phrase_query\" : true,\n    \"fuzzy_transpositions\" : true,\n    \"boost\" : 1.0\n  }\n}",
          "index_uuid" : "ef3MWf8FTUe2Qjz2FLbhoQ",
          "index" : "i",
          "caused_by" : {
            "type" : "number_format_exception",
            "reason" : "For input string: \"potato\""
          }
        }
      }
    ]
  },
  "status" : 400
}
```

It even includes the validation that ES already does for things like `type`:

```json
GET /_xpack/sql
{
  "query": "SELECT foo,baz from i WHERE MATCH('baz', 'should clause', 'operator=AND;type=eggplant')"
}
```

```json
{
  "error" : {
    "root_cause" : [
      {
        "type" : "parse_exception",
        "reason" : "failed to parse [multi_match] query type [eggplant]. unknown type."
      }
    ],
    "type" : "parse_exception",
    "reason" : "failed to parse [multi_match] query type [eggplant]. unknown type."
  },
  "status" : 400
}
```

Resolves elastic/x-pack-elasticsearch#3257

Original commit: elastic/x-pack-elasticsearch@59f518af4a
Repository contents:

- .github
- buildSrc
- dev-tools
- docs
- license-tools
- migrate
- plugin
- qa
- sql
- test
- transport-client
- .dir-locals.el
- .gitignore
- .projectile
- GRADLE.CHEATSHEET.asciidoc
- LICENSE.txt
- NOTICE.txt
- README.asciidoc
- build.gradle
- gradle.properties
- migrate-issues.py
- migrate-plugins.sh
- settings.gradle
README.asciidoc
= Elasticsearch X-Pack

A set of Elastic's commercial plugins for Elasticsearch:

- License
- Security
- Watcher
- Monitoring
- Machine Learning
- Graph

= Setup

You must check out `x-pack-elasticsearch` and `elasticsearch` with a specific directory structure. The `elasticsearch` checkout will be used when building `x-pack-elasticsearch`. The structure is:

- /path/to/elastic/elasticsearch
- /path/to/elastic/elasticsearch-extra/x-pack-elasticsearch

== Vault Secret

The build requires a Vault Secret ID. You can use a GitHub token by following these steps:

1. Go to https://github.com/settings/tokens
2. Click *Generate new token*
3. Set permissions to `read:org`
4. Copy the token into `~/.elastic/github.token`
5. Set the token's file permissions to `600`

```
$ mkdir ~/.elastic
$ vi ~/.elastic/github.token
# Add your_token exactly as it is into the file and save it
$ chmod 600 ~/.elastic/github.token
```

If you do not create the token, you will see a failure along these lines when trying to build X-Pack:

```
* What went wrong:
Missing ~/.elastic/github.token file or VAULT_SECRET_ID environment variable, needed to authenticate with vault for secrets
```

=== Offline Mode

When running the build in offline mode (`--offline`), the Vault secret setup is not required.

== Native Code

**This is mandatory as tests depend on it**

Machine Learning requires platform-specific binaries, built from https://github.com/elastic/machine-learning-cpp via CI servers. The native artifacts are stored in S3. To retrieve them, the infra team's Vault service is used, which requires a GitHub token. Please set up a GitHub token as documented: https://github.com/elastic/infra/blob/master/docs/vault.md#github-auth

The GitHub token has to be put into `~/.elastic/github.token`, and the file permissions must be set to 0600.
= Build

- Run unit tests:
+
[source, txt]
-----
gradle clean test
-----
- Run all tests:
+
[source, txt]
-----
gradle clean check
-----
- Run integration tests:
+
[source, txt]
-----
gradle clean integTest
-----
- Package X-Pack (without running tests):
+
[source, txt]
-----
gradle clean assemble
-----
- Install X-Pack (without running tests):
+
[source, txt]
-----
gradle clean install
-----

= Building documentation

The source files in this repository can be included in either the X-Pack Reference or the Elasticsearch Reference.

NOTE: In 5.5 and later, the Elasticsearch Reference includes X-Pack-specific content when it is built from this repo.

To build the Elasticsearch Reference on your local machine:

* Use the `index.asciidoc` file in the docs/en directory.
* Specify the location of the `elasticsearch/docs` directory with the `--resource` option when you run `build_docs.pl`.

For example:

[source, txt]
-----
./docs/build_docs.pl --doc elasticsearch-extra/x-pack-elasticsearch/docs/en/index.asciidoc --resource=elasticsearch/docs --chunk 1
-----

For information about building the X-Pack Reference, see the README in the x-pack repo.

To build a release notes page for the pull requests in this repository:

* Use the `dev-tools/xes_release_notes.pl` script to pull PRs from the x-pack-elasticsearch repo. Alternatively, use the `dev-tools/xescpp_release_notes.pl` script to pull PRs from both the x-pack-elasticsearch and machine-learning-cpp repos.
* Specify the version label for which you want the release notes.
* Redirect the output to a new local file.

NOTE: You must have a personal access token called `~/.github_auth` with "repo" scope. Use steps similar to "Vault Secret" to create this file.

For example:

[source, txt]
-----
./dev-tools/xes_release_notes.pl v5.5.2 > ~/tmp/5.5.2.asciidoc
-----

== Adding Images

When you include an image in the documentation, specify the path relative to the location of the asciidoc file. By convention, we put images in an `images` subdirectory.
For example, to insert `watcher-ui-edit-watch.png` in `watcher/limitations.asciidoc`:

. Add an `images` subdirectory to the watcher directory if it doesn't already exist.
. In `limitations.asciidoc`, specify:
+
[source, txt]
-----
image::images/watcher-ui-edit-watch.png["Editing a watch"]
-----

Please note that image names and anchor IDs must be unique within the book, so do not use generic identifiers.