Merge branch 'main' into migration
This commit is contained in:
commit
5a093b8eb1
|
@ -20,7 +20,7 @@ baseurl: "" # the subpath of your site, e.g. /blog
|
|||
url: "https://docs-beta.opensearch.org" # the base hostname & protocol for your site, e.g. http://example.com
|
||||
permalink: pretty
|
||||
|
||||
opensearch_version: 1.0.0-beta1
|
||||
opensearch_version: 1.0.0-rc1
|
||||
opensearch_major_minor_version: 1.0
|
||||
|
||||
# Build settings
|
||||
|
|
|
@ -142,7 +142,7 @@ layout: table_wrappers
|
|||
{% if site.heading_anchors != false %}
|
||||
{% include vendor/anchor_headings.html html=content beforeHeading="true" anchorBody="<svg viewBox=\"0 0 16 16\" aria-hidden=\"true\"><use xlink:href=\"#svg-link\"></use></svg>" anchorClass="anchor-heading" anchorAttrs="aria-labelledby=\"%html_id%\"" %}
|
||||
{% else %}
|
||||
<p class="warning" style="margin-top: 0">Like OpenSearch itself, this documentation is a beta. It has content gaps and might contain bugs.</p>
|
||||
<p class="warning" style="margin-top: 0">This documentation remains in a beta state. It has content gaps and might contain bugs.</p>
|
||||
{{ content }}
|
||||
{% endif %}
|
||||
|
||||
|
|
|
@ -0,0 +1,53 @@
---
layout: default
title: Agents and ingestion tools
nav_order: 100
has_children: false
has_toc: false
---

# Agents and ingestion tools

Historically, many popular agents and ingestion tools have worked with Elasticsearch OSS, such as Beats, Logstash, Fluentd, Fluent Bit, and OpenTelemetry. OpenSearch aims to continue to support a broad set of agents and ingestion tools, but not all of them have been tested or have explicitly added OpenSearch compatibility.

As an intermediate solution, we are adding a [version value](https://github.com/opensearch-project/OpenSearch/issues/693) to `opensearch.yml`. This change will let you set OpenSearch 1.x clusters to report version 7.10.2 (or any other arbitrary value). By reporting 7.10.2, the cluster will be able to connect with tools that check for a particular version number.

As a longer-term solution, we plan to create an OpenSearch output plugin for Logstash. This plugin *does not exist yet*, but we've included it in the compatibility matrices below based on its expected behavior.

## Compatibility matrices

*Italicized* cells are untested; they indicate what a value should theoretically be based on existing information.

### Compatibility matrix for Logstash

| | Logstash OSS 7.x to 7.11.x | Logstash OSS 7.12.x\* | Logstash 7.13.x without OpenSearch output plugin | Logstash 7.13.x with OpenSearch output plugin\*\* |
| :--- | :--- | :--- | :--- | :--- |
| Elasticsearch OSS v7.x to v7.9.x | *Yes* | *Yes* | *No* | *Yes* |
| Elasticsearch OSS v7.10.2 | *Yes* | *Yes* | *No* | *Yes* |
| ODFE OSS v1.x to 1.12 | *Yes* | *Yes* | *No* | *Yes* |
| ODFE 1.13 | *Yes* | *Yes* | *No* | *Yes* |
| OpenSearch 1.0 | [Yes via version setting](https://github.com/opensearch-project/OpenSearch/issues/693) | [Yes via version setting](https://github.com/opensearch-project/OpenSearch/issues/693) | *No* | *Yes* |

\* Most recent version compatible with Elasticsearch OSS.

\*\* Planned, but not yet built.

### Compatibility matrix for Beats

| | Beats OSS 7.x to 7.11.x\*\* | Beats OSS 7.12.x\* | Beats 7.13.x |
| :--- | :--- | :--- | :--- |
| Elasticsearch OSS v7.x to v7.9.x | *Yes* | *Yes* | No |
| Elasticsearch OSS v7.10.2 | *Yes* | *Yes* | No |
| ODFE OSS v1.x to 1.12 | *Yes* | *Yes* | No |
| ODFE 1.13 | *Yes* | *Yes* | No |
| OpenSearch 1.0 | [Yes via version setting](https://github.com/opensearch-project/OpenSearch/issues/693) | [Yes via version setting](https://github.com/opensearch-project/OpenSearch/issues/693) | No |
| Logstash OSS 7.x to 7.11.x | *Yes* | *Yes* | *Yes* |
| Logstash OSS 7.12.x\* | *Yes* | *Yes* | *Yes* |
| Logstash 7.13.x with OpenSearch output plugin | *Yes* | *Yes* | *Yes* |

\* Most recent version compatible with Elasticsearch OSS.

\*\* Beats OSS includes all Apache 2.0 Beats agents (i.e., Filebeat, Metricbeat, Auditbeat, Heartbeat, Winlogbeat, Packetbeat).
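The interim version setting is still being finalized in the linked issue, so the exact key name below is an assumption, not a confirmed API; conceptually, the override in `opensearch.yml` would look something like this:

```yml
# Hypothetical setting name -- see opensearch-project/OpenSearch#693 for the
# actual implementation. When enabled, the cluster reports version 7.10.2 to
# clients that gate their behavior on a specific Elasticsearch version.
compatibility.override_main_response_version: true
```

After setting it and restarting the node, agents that refuse to connect to unknown versions (such as newer Beats releases) should see a version they accept.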
|
@ -460,7 +460,7 @@ GET _plugins/_alerting/<node-id>/stats/<metric>
|
|||
"failed": 0
|
||||
},
|
||||
"cluster_name": "475300751431:alerting65-dont-delete",
|
||||
"opensearch.scheduled_jobs.enabled": true,
|
||||
"plugins.scheduled_jobs.enabled": true,
|
||||
"scheduled_job_index_exists": true,
|
||||
"scheduled_job_index_status": "green",
|
||||
"nodes_on_schedule": 9,
|
||||
|
|
|
@ -78,8 +78,8 @@ You can enter individual email addresses or an email group in the **Recipients**
|
|||
If your email provider requires SSL or TLS, you must authenticate each sender account before you can send an email. Enter these credentials in the OpenSearch keystore using the CLI. Run the following commands (in your OpenSearch directory) to enter your username and password. The `<sender_name>` is the name you entered for **Sender** earlier.
|
||||
|
||||
```bash
|
||||
./bin/opensearch-keystore add opendistro.alerting.destination.email.<sender_name>.username
|
||||
./bin/opensearch-keystore add opendistro.alerting.destination.email.<sender_name>.password
|
||||
./bin/opensearch-keystore add plugins.alerting.destination.email.<sender_name>.username
|
||||
./bin/opensearch-keystore add plugins.alerting.destination.email.<sender_name>.password
|
||||
```
|
||||
|
||||
**Note**: Keystore settings are node-specific. You must run these commands on each node.
|
||||
|
|
|
@ -44,7 +44,7 @@ Next, enable the following setting:
|
|||
PUT _cluster/settings
|
||||
{
|
||||
"transient": {
|
||||
"opendistro.alerting.filter_by_backend_roles": "true"
|
||||
"plugins.alerting.filter_by_backend_roles": "true"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
layout: default
|
||||
title: Index Rollups
|
||||
title: Index rollups
|
||||
nav_order: 35
|
||||
parent: Index management
|
||||
has_children: true
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
layout: default
|
||||
title: Index Rollups API
|
||||
parent: Index Rollups
|
||||
title: Index rollups API
|
||||
parent: Index rollups
|
||||
grand_parent: Index management
|
||||
redirect_from: /docs/ism/rollup-api/
|
||||
nav_order: 9
|
||||
|
|
|
@ -0,0 +1,156 @@
|
|||
---
|
||||
layout: default
|
||||
title: Index transforms
|
||||
nav_order: 40
|
||||
parent: Index management
|
||||
has_children: true
|
||||
has_toc: false
|
||||
---
|
||||
|
||||
# Index transforms
|
||||
|
||||
Whereas index rollup jobs let you reduce data granularity by rolling up old data into condensed indices, transform jobs let you create a different, summarized view of your data centered around certain fields, so you can visualize or analyze the data in different ways.
|
||||
|
||||
For example, suppose that you have airline data that’s scattered across multiple fields and categories, and you want to view a summary of the data that’s organized by airline, quarter, and then price. You can use a transform job to create a new, summarized index that’s organized by those specific categories.
|
||||
|
||||
You can use transform jobs in two ways:
|
||||
|
||||
1. Use the OpenSearch Dashboards UI to specify the index you want to transform and any optional data filters you want to use to filter the original index. Then select the fields you want to transform and the aggregations to use in the transformation. Finally, define a schedule for your job to follow.
|
||||
2. Use the transforms API to specify all the details about your job: the index you want to transform, target groups you want the transformed index to have, any aggregations you want to use to group columns, and a schedule for your job to follow.
|
||||
|
||||
OpenSearch Dashboards provides a detailed summary of the jobs you created and their relevant information, such as associated indices and job statuses. You can review and edit your job’s details and selections before creation, and even preview a transformed index’s data as you’re choosing which fields to transform. Alternatively, you can use the REST API to create transform jobs and preview transform job results, but you must know all of the necessary settings and parameters to submit them as part of the HTTP request body. Submitting your transform job configurations as JSON scripts offers more portability, allowing you to share and replicate your transform jobs, which is harder to do using OpenSearch Dashboards.
|
||||
|
||||
Your use cases will help you decide which method to use to create transform jobs.
|
||||
|
||||
## Create a transform job
|
||||
|
||||
If you don't have any data in your cluster, you can use the sample flight data within OpenSearch Dashboards to try out transform jobs. Otherwise, after launching OpenSearch Dashboards, choose **Index Management**. Select **Transform Jobs**, and choose **Create Transform Job**.
|
||||
|
||||
### Step 1: Choose indices
|
||||
|
||||
1. In the **Job name and description** section, specify a name and an optional description for your job.
|
||||
2. In the **Indices** section, select the source and target index. You can either select an existing target index or create a new one by entering a name for your new index. If you want to transform just a subset of your source index, choose **Add Data Filter**, and use the OpenSearch query DSL to specify a subset of your source index. For more information about the OpenSearch query DSL, see [query DSL](../../opensearch/query-dsl/).
|
||||
3. Choose **Next**.
|
||||
|
||||
### Step 2: Select fields to transform
|
||||
|
||||
After specifying the indices, you can select the fields you want to use in your transform job, as well as whether to use groupings or aggregations.
|
||||
|
||||
You can use groupings to place your data into separate buckets in your transformed index. For example, if you want to group all of the airport destinations within the sample flight data, you can group the `DestAirportID` field into a target field named `DestAirportID_terms`, and you can find the grouped airport IDs in your transformed index after the transform job finishes.
|
||||
|
||||
On the other hand, aggregations let you perform simple calculations. For example, you can include an aggregation in your transform job to define a new field of `sum_of_total_ticket_price` that calculates the sum of all airplane ticket prices, and then analyze the newly summarized data within your transformed index.
|
||||
|
||||
1. In the data table, select the fields you want to transform and expand the drop-down menu within the column header to choose the grouping or aggregation you want to use.
|
||||
|
||||
Currently, transform jobs support `histogram`, `date_histogram`, and `terms` groupings. For more information about groupings, see [Bucket Aggregations](../../opensearch/bucket-agg/). For aggregations, you can select from `sum`, `avg`, `max`, `min`, `value_count`, `percentiles`, and `scripted_metric`. For more information about aggregations, see [Metric Aggregations](../../opensearch/metric-agg/).
|
||||
|
||||
2. Repeat step 1 for any other fields that you want to transform.
|
||||
3. After selecting the fields that you want to transform and verifying the transformation, choose **Next**.
|
||||
|
||||
### Step 3: Specify a schedule
|
||||
|
||||
You can configure transform jobs to run once or multiple times on a schedule. Transform jobs are enabled by default.
|
||||
|
||||
1. For **transformation execution frequency**, select **Define by fixed interval** and specify a **transform interval**.
|
||||
2. Under **Advanced**, specify an optional amount for **Pages per execution**. A larger number means more data is processed in each search request, but also uses more memory and causes higher latency. Exceeding allowed memory limits can cause exceptions and errors to occur.
|
||||
3. Choose **Next**.
|
||||
|
||||
### Step 4: Review and confirm details
|
||||
|
||||
After confirming that your transform job’s details are correct, choose **Create Transform Job**. If you want to edit any part of the job, choose **Edit** next to the section you want to change and make the necessary changes. You can’t change aggregations or groupings after creating a job.
|
||||
|
||||
### Step 5: Search through the transformed index
|
||||
|
||||
Once the transform job finishes, you can use the `_search` API operation to search the target index.
|
||||
|
||||
```json
|
||||
GET <target_index>/_search
|
||||
```
|
||||
|
||||
For example, after running a transform job that transforms the flight data based on a `DestAirportID` field, you can run the following request, which returns all documents in which `DestAirportID_terms` has a value of `SFO`.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
GET finished_flight_job/_search
|
||||
{
|
||||
"query": {
|
||||
"match": {
|
||||
"DestAirportID_terms" : "SFO"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"took" : 3,
|
||||
"timed_out" : false,
|
||||
"_shards" : {
|
||||
"total" : 5,
|
||||
"successful" : 5,
|
||||
"skipped" : 0,
|
||||
"failed" : 0
|
||||
},
|
||||
"hits" : {
|
||||
"total" : {
|
||||
"value" : 4,
|
||||
"relation" : "eq"
|
||||
},
|
||||
"max_score" : 3.845883,
|
||||
"hits" : [
|
||||
{
|
||||
"_index" : "finished_flight_job",
|
||||
"_type" : "_doc",
|
||||
"_id" : "dSNKGb8U3OJOmC4RqVCi1Q",
|
||||
"_score" : 3.845883,
|
||||
"_source" : {
|
||||
"transform._id" : "sample_flight_job",
|
||||
"transform._doc_count" : 14,
|
||||
"Carrier_terms" : "Dashboards Airlines",
|
||||
"DestAirportID_terms" : "SFO"
|
||||
}
|
||||
},
|
||||
{
|
||||
"_index" : "finished_flight_job",
|
||||
"_type" : "_doc",
|
||||
"_id" : "_D7oqOy7drx9E-MG96U5RA",
|
||||
"_score" : 3.845883,
|
||||
"_source" : {
|
||||
"transform._id" : "sample_flight_job",
|
||||
"transform._doc_count" : 14,
|
||||
"Carrier_terms" : "Logstash Airways",
|
||||
"DestAirportID_terms" : "SFO"
|
||||
}
|
||||
},
|
||||
{
|
||||
"_index" : "finished_flight_job",
|
||||
"_type" : "_doc",
|
||||
"_id" : "YuZ8tOt1OsBA54e84WuAEw",
|
||||
"_score" : 3.6988301,
|
||||
"_source" : {
|
||||
"transform._id" : "sample_flight_job",
|
||||
"transform._doc_count" : 11,
|
||||
"Carrier_terms" : "ES-Air",
|
||||
"DestAirportID_terms" : "SFO"
|
||||
}
|
||||
},
|
||||
{
|
||||
"_index" : "finished_flight_job",
|
||||
"_type" : "_doc",
|
||||
"_id" : "W_-e7bVmH6eu8veJeK8ZxQ",
|
||||
"_score" : 3.6988301,
|
||||
"_source" : {
|
||||
"transform._id" : "sample_flight_job",
|
||||
"transform._doc_count" : 10,
|
||||
"Carrier_terms" : "JetBeats",
|
||||
"DestAirportID_terms" : "SFO"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
|
||||
```
|
|
@ -0,0 +1,714 @@
|
|||
---
|
||||
layout: default
|
||||
title: Transforms APIs
|
||||
nav_order: 45
|
||||
parent: Index transforms
|
||||
grand_parent: Index management
|
||||
has_toc: true
|
||||
---
|
||||
|
||||
# Transforms APIs
|
||||
|
||||
Aside from using OpenSearch Dashboards, you can also use the REST API to create, start, stop, and perform other operations on transform jobs.
|
||||
|
||||
#### Table of contents
|
||||
- TOC
|
||||
{:toc}
|
||||
|
||||
## Create a transform job
|
||||
|
||||
Creates a transform job.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
PUT _plugins/_transform/<transform_id>
|
||||
|
||||
{
|
||||
"transform": {
|
||||
"enabled": true,
|
||||
"schedule": {
|
||||
"interval": {
|
||||
"period": 1,
|
||||
"unit": "Minutes",
|
||||
"start_time": 1602100553
|
||||
}
|
||||
},
|
||||
"description": "Sample transform job",
|
||||
"source_index": "sample_index",
|
||||
"target_index": "sample_target",
|
||||
"data_selection_query": {
|
||||
"match_all": {}
|
||||
},
|
||||
"page_size": 1,
|
||||
"groups": [
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "customer_gender",
|
||||
"target_field": "gender"
|
||||
}
|
||||
},
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "day_of_week",
|
||||
"target_field": "day"
|
||||
}
|
||||
}
|
||||
],
|
||||
"aggregations": {
|
||||
"quantity": {
|
||||
"sum": {
|
||||
"field": "total_quantity"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"_id": "sample",
|
||||
"_version": 7,
|
||||
"_seq_no": 13,
|
||||
"_primary_term": 1,
|
||||
"transform": {
|
||||
"transform_id": "sample",
|
||||
"schema_version": 7,
|
||||
"schedule": {
|
||||
"interval": {
|
||||
"start_time": 1621467964243,
|
||||
"period": 1,
|
||||
"unit": "Minutes"
|
||||
}
|
||||
},
|
||||
"metadata_id": null,
|
||||
"updated_at": 1621467964243,
|
||||
"enabled": true,
|
||||
"enabled_at": 1621467964243,
|
||||
"description": "Sample transform job",
|
||||
"source_index": "sample_index",
|
||||
"data_selection_query": {
|
||||
"match_all": {
|
||||
"boost": 1.0
|
||||
}
|
||||
},
|
||||
"target_index": "sample_target",
|
||||
"roles": [],
|
||||
"page_size": 1,
|
||||
"groups": [
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "customer_gender",
|
||||
"target_field": "gender"
|
||||
}
|
||||
},
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "day_of_week",
|
||||
"target_field": "day"
|
||||
}
|
||||
}
|
||||
],
|
||||
"aggregations": {
|
||||
"quantity": {
|
||||
"sum": {
|
||||
"field": "total_quantity"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
You can specify the following options in the HTTP request body:
|
||||
|
||||
Option | Data Type | Description | Required
|
||||
:--- | :--- | :--- | :---
|
||||
enabled | Boolean | If true, the transform job is enabled at creation. | No
|
||||
schedule | JSON | The schedule the transform job runs on. | Yes
|
||||
start_time | Integer | The Unix epoch time of the transform job's start time. | Yes
|
||||
description | String | Describes the transform job. | No
|
||||
metadata_id | String | Any metadata to be associated with the transform job. | No
|
||||
source_index | String | The source index whose data to transform. | Yes
|
||||
target_index | String | The target index the newly transformed data is added into. You can create a new index or update an existing one. | Yes
|
||||
data_selection_query | JSON | The query DSL to use to filter a subset of the source index for the transform job. See [query DSL](../../../opensearch/query-dsl) for more information. | Yes
|
||||
page_size | Integer | The number of fields to transform at a time. A higher number means better performance but requires more memory and can cause higher latency. (Default: 1) | Yes
|
||||
groups | Array | Specifies the grouping(s) to use in the transform job. Supported groups are `terms`, `histogram`, and `date_histogram`. For more information, see [Bucket Aggregations](../../../opensearch/bucket-agg). | Yes if not using aggregations
|
||||
source_field | String | The field(s) to transform | Yes
|
||||
aggregations | JSON | The aggregations to use in the transform job. Supported aggregations are: `sum`, `max`, `min`, `value_count`, `avg`, `scripted_metric`, and `percentiles`. For more information, see [Metric Aggregations](../../../opensearch/metric-agg). | Yes if not using groups
|
||||
|
||||
## Update a transform job
|
||||
|
||||
Updates a transform job if `transform_id` already exists.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
PUT _plugins/_transform/<transform_id>
|
||||
|
||||
{
|
||||
"transform": {
|
||||
"enabled": true,
|
||||
"schedule": {
|
||||
"interval": {
|
||||
"period": 1,
|
||||
"unit": "Minutes",
|
||||
"start_time": 1602100553
|
||||
}
|
||||
},
|
||||
"description": "Sample transform job",
|
||||
"source_index": "sample_index",
|
||||
"target_index": "sample_target",
|
||||
"data_selection_query": {
|
||||
"match_all": {}
|
||||
},
|
||||
"page_size": 1,
|
||||
"groups": [
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "customer_gender",
|
||||
"target_field": "gender"
|
||||
}
|
||||
},
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "day_of_week",
|
||||
"target_field": "day"
|
||||
}
|
||||
}
|
||||
],
|
||||
"aggregations": {
|
||||
"quantity": {
|
||||
"sum": {
|
||||
"field": "total_quantity"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"_id": "sample",
|
||||
"_version": 2,
|
||||
"_seq_no": 14,
|
||||
"_primary_term": 1,
|
||||
"transform": {
|
||||
"transform_id": "sample",
|
||||
"schema_version": 7,
|
||||
"schedule": {
|
||||
"interval": {
|
||||
"start_time": 1602100553,
|
||||
"period": 1,
|
||||
"unit": "Minutes"
|
||||
}
|
||||
},
|
||||
"metadata_id": null,
|
||||
"updated_at": 1621889843874,
|
||||
"enabled": true,
|
||||
"enabled_at": 1621889843874,
|
||||
"description": "Sample transform job",
|
||||
"source_index": "sample_index",
|
||||
"data_selection_query": {
|
||||
"match_all": {
|
||||
"boost": 1.0
|
||||
}
|
||||
},
|
||||
"target_index": "sample_target",
|
||||
"roles": [],
|
||||
"page_size": 1,
|
||||
"groups": [
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "customer_gender",
|
||||
"target_field": "gender"
|
||||
}
|
||||
},
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "day_of_week",
|
||||
"target_field": "day"
|
||||
}
|
||||
}
|
||||
],
|
||||
"aggregations": {
|
||||
"quantity": {
|
||||
"sum": {
|
||||
"field": "total_quantity"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The `Update` operation supports the following URL parameters:
|
||||
|
||||
Parameter | Description | Required
|
||||
:---| :--- | :---
|
||||
`if_seq_no` | Only perform the transform operation if the last operation that changed the transform job has the specified sequence number. | No
|
||||
`if_primary_term` | Only perform the transform operation if the last operation that changed the transform job has the specified primary term. | No
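
For example (the job ID and numeric values here are hypothetical), to update a job only if it hasn't changed since you last retrieved it, pass the `_seq_no` and `_primary_term` returned by the earlier `GET` request:

```json
PUT _plugins/_transform/sample?if_seq_no=13&if_primary_term=1
```

If another update has modified the job in the meantime, the request fails with a version conflict instead of silently overwriting the newer configuration.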
|
||||
|
||||
## Get a transform job's details
|
||||
|
||||
Returns a transform job's details.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
GET _plugins/_transform/<transform_id>
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"_id": "sample",
|
||||
"_version": 7,
|
||||
"_seq_no": 13,
|
||||
"_primary_term": 1,
|
||||
"transform": {
|
||||
"transform_id": "sample",
|
||||
"schema_version": 7,
|
||||
"schedule": {
|
||||
"interval": {
|
||||
"start_time": 1621467964243,
|
||||
"period": 1,
|
||||
"unit": "Minutes"
|
||||
}
|
||||
},
|
||||
"metadata_id": null,
|
||||
"updated_at": 1621467964243,
|
||||
"enabled": true,
|
||||
"enabled_at": 1621467964243,
|
||||
"description": "Sample transform job",
|
||||
"source_index": "sample_index",
|
||||
"data_selection_query": {
|
||||
"match_all": {
|
||||
"boost": 1.0
|
||||
}
|
||||
},
|
||||
"target_index": "sample_target",
|
||||
"roles": [],
|
||||
"page_size": 1,
|
||||
"groups": [
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "customer_gender",
|
||||
"target_field": "gender"
|
||||
}
|
||||
},
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "day_of_week",
|
||||
"target_field": "day"
|
||||
}
|
||||
}
|
||||
],
|
||||
"aggregations": {
|
||||
"quantity": {
|
||||
"sum": {
|
||||
"field": "total_quantity"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
You can also get details of all transform jobs by omitting `transform_id`.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
GET _plugins/_transform/
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"total_transforms": 1,
|
||||
"transforms": [
|
||||
{
|
||||
"_id": "sample",
|
||||
"_seq_no": 13,
|
||||
"_primary_term": 1,
|
||||
"transform": {
|
||||
"transform_id": "sample",
|
||||
"schema_version": 7,
|
||||
"schedule": {
|
||||
"interval": {
|
||||
"start_time": 1621467964243,
|
||||
"period": 1,
|
||||
"unit": "Minutes"
|
||||
}
|
||||
},
|
||||
"metadata_id": null,
|
||||
"updated_at": 1621467964243,
|
||||
"enabled": true,
|
||||
"enabled_at": 1621467964243,
|
||||
"description": "Sample transform job",
|
||||
"source_index": "sample_index",
|
||||
"data_selection_query": {
|
||||
"match_all": {
|
||||
"boost": 1.0
|
||||
}
|
||||
},
|
||||
"target_index": "sample_target",
|
||||
"roles": [],
|
||||
"page_size": 1,
|
||||
"groups": [
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "customer_gender",
|
||||
"target_field": "gender"
|
||||
}
|
||||
},
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "day_of_week",
|
||||
"target_field": "day"
|
||||
}
|
||||
}
|
||||
],
|
||||
"aggregations": {
|
||||
"quantity": {
|
||||
"sum": {
|
||||
"field": "total_quantity"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
You can specify these options as the `GET` API operation’s URL parameters to filter results:
|
||||
|
||||
Parameter | Description | Required
|
||||
:--- | :--- | :---
|
||||
from | The starting index to search from. (Default: 0) | No
|
||||
size | The number of results to return. (Default: 10) | No
|
||||
search | The search term to use to filter results. | No
|
||||
sortField | The field to sort results with. | No
|
||||
sortDirection | Specifies the direction to sort results in. Can be `ASC` or `DESC`. (Default: ASC) | No
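
As an illustration (the `sortField` value here is an assumption, not taken from the original text), you could combine these parameters to list jobs in descending order by ID:

```json
GET _plugins/_transform?sortField=_id&sortDirection=DESC
```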
|
||||
|
||||
For example, this request returns two results starting from the eighth transform job.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
GET _plugins/_transform?size=2&from=8
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"total_transforms": 18,
|
||||
"transforms": [
|
||||
{
|
||||
"_id": "sample8",
|
||||
"_seq_no": 93,
|
||||
"_primary_term": 1,
|
||||
"transform": {
|
||||
"transform_id": "sample8",
|
||||
"schema_version": 7,
|
||||
"schedule": {
|
||||
"interval": {
|
||||
"start_time": 1622063596812,
|
||||
"period": 1,
|
||||
"unit": "Minutes"
|
||||
}
|
||||
},
|
||||
"metadata_id": "y4hFAB2ZURQ2dzY7BAMxWA",
|
||||
"updated_at": 1622063657233,
|
||||
"enabled": false,
|
||||
"enabled_at": null,
|
||||
"description": "Sample transform job",
|
||||
"source_index": "sample_index3",
|
||||
"data_selection_query": {
|
||||
"match_all": {
|
||||
"boost": 1.0
|
||||
}
|
||||
},
|
||||
"target_index": "sample_target3",
|
||||
"roles": [],
|
||||
"page_size": 1,
|
||||
"groups": [
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "customer_gender",
|
||||
"target_field": "gender"
|
||||
}
|
||||
},
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "day_of_week",
|
||||
"target_field": "day"
|
||||
}
|
||||
}
|
||||
],
|
||||
"aggregations": {
|
||||
"quantity": {
|
||||
"sum": {
|
||||
"field": "total_quantity"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"_id": "sample9",
|
||||
"_seq_no": 98,
|
||||
"_primary_term": 1,
|
||||
"transform": {
|
||||
"transform_id": "sample9",
|
||||
"schema_version": 7,
|
||||
"schedule": {
|
||||
"interval": {
|
||||
"start_time": 1622063598065,
|
||||
"period": 1,
|
||||
"unit": "Minutes"
|
||||
}
|
||||
},
|
||||
"metadata_id": "x8tCIiYMTE3veSbIJkit5A",
|
||||
"updated_at": 1622063658388,
|
||||
"enabled": false,
|
||||
"enabled_at": null,
|
||||
"description": "Sample transform job",
|
||||
"source_index": "sample_index4",
|
||||
"data_selection_query": {
|
||||
"match_all": {
|
||||
"boost": 1.0
|
||||
}
|
||||
},
|
||||
"target_index": "sample_target4",
|
||||
"roles": [],
|
||||
"page_size": 1,
|
||||
"groups": [
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "customer_gender",
|
||||
"target_field": "gender"
|
||||
}
|
||||
},
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "day_of_week",
|
||||
"target_field": "day"
|
||||
}
|
||||
}
|
||||
],
|
||||
"aggregations": {
|
||||
"quantity": {
|
||||
"sum": {
|
||||
"field": "total_quantity"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Start a transform job
|
||||
|
||||
Transform jobs created using the API are automatically enabled, but if you ever need to re-enable a job, you can use the `_start` API operation.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
POST _plugins/_transform/<transform_id>/_start
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"acknowledged": true
|
||||
}
|
||||
```
|
||||
|
||||
## Stop a transform job
|
||||
|
||||
Stops/disables a transform job.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
POST _plugins/_transform/<transform_id>/_stop
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"acknowledged": true
|
||||
}
|
||||
```
|
||||
|
||||
## Get the status of a transform job
|
||||
|
||||
Returns the status and metadata of a transform job.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
GET _plugins/_transform/<transform_id>/_explain
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"sample": {
|
||||
"metadata_id": "PzmjweME5xbgkenl9UpsYw",
|
||||
"transform_metadata": {
|
||||
"transform_id": "sample",
|
||||
"last_updated_at": 1621883525873,
|
||||
"status": "finished",
|
||||
"failure_reason": "null",
|
||||
"stats": {
|
||||
"pages_processed": 0,
|
||||
"documents_processed": 0,
|
||||
"documents_indexed": 0,
|
||||
"index_time_in_millis": 0,
|
||||
"search_time_in_millis": 0
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Preview a transform job's results
|
||||
|
||||
Returns a preview of what a transformed index would look like.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
POST _plugins/_transform/_preview
|
||||
|
||||
{
|
||||
"transform": {
|
||||
"enabled": false,
|
||||
"schedule": {
|
||||
"interval": {
|
||||
"period": 1,
|
||||
"unit": "Minutes",
|
||||
"start_time": 1602100553
|
||||
}
|
||||
},
|
||||
"description": "test transform",
|
||||
"source_index": "sample_index",
|
||||
"target_index": "sample_target",
|
||||
"data_selection_query": {
|
||||
"match_all": {}
|
||||
},
|
||||
"page_size": 10,
|
||||
"groups": [
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "customer_gender",
|
||||
"target_field": "gender"
|
||||
}
|
||||
},
|
||||
{
|
||||
"terms": {
|
||||
"source_field": "day_of_week",
|
||||
"target_field": "day"
|
||||
}
|
||||
}
|
||||
],
|
||||
"aggregations": {
|
||||
"quantity": {
|
||||
"sum": {
|
||||
"field": "total_quantity"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"documents" : [
|
||||
{
|
||||
"quantity" : 862.0,
|
||||
"gender" : "FEMALE",
|
||||
"day" : "Friday"
|
||||
},
|
||||
{
|
||||
"quantity" : 682.0,
|
||||
"gender" : "FEMALE",
|
||||
"day" : "Monday"
|
||||
},
|
||||
{
|
||||
"quantity" : 772.0,
|
||||
"gender" : "FEMALE",
|
||||
"day" : "Saturday"
|
||||
},
|
||||
{
|
||||
"quantity" : 669.0,
|
||||
"gender" : "FEMALE",
|
||||
"day" : "Sunday"
|
||||
},
|
||||
{
|
||||
"quantity" : 887.0,
|
||||
"gender" : "FEMALE",
|
||||
"day" : "Thursday"
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Delete a transform job
|
||||
|
||||
Deletes a transform job. This operation does not delete the source or target indices.
|
||||
|
||||
**Sample Request**
|
||||
|
||||
```json
|
||||
DELETE _plugins/_transform/<transform_id>
|
||||
```
|
||||
|
||||
**Sample Response**
|
||||
|
||||
```json
|
||||
{
|
||||
"took": 205,
|
||||
"errors": false,
|
||||
"items": [
|
||||
{
|
||||
"delete": {
|
||||
"_index": ".opensearch-ism-config",
|
||||
"_type": "_doc",
|
||||
"_id": "sample",
|
||||
"_version": 4,
|
||||
"result": "deleted",
|
||||
"forced_refresh": true,
|
||||
"_shards": {
|
||||
"total": 2,
|
||||
"successful": 1,
|
||||
"failed": 0
|
||||
},
|
||||
"_seq_no": 6,
|
||||
"_primary_term": 1,
|
||||
"status": 200
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```

@@ -452,13 +452,12 @@ GET _plugins/_ism/explain/index_1
```json
{
  "index_1": {
-    "index.opendistro.index_state_management.policy_id": "policy_1"
+    "index.plugins.index_state_management.policy_id": "policy_1"
  }
}
```

-The `opendistro.index_state_management.policy_id` setting is deprecated starting from version 1.13.0.
-We retain this field in the response API for consistency.
+The `opendistro.index_state_management.policy_id` setting is deprecated starting from ODFE version 1.13.0. We retain this field in the response API for consistency.

---

@@ -1,7 +1,7 @@
---
layout: default
title: Refresh Search Analyzer
-nav_order: 40
+nav_order: 50
parent: Index management
has_children: false
redirect_from: /docs/ism/refresh-analyzer/
@@ -29,11 +29,25 @@ If you don't want to use the all-in-one installation options, you can install th
  </tr>
</thead>
<tbody>
+  <tr>
+    <td>1.0.0-rc1</td>
+    <td>
+    <pre>alertingDashboards 1.0.0.0-rc1
+anomalyDetectionDashboards 1.0.0.0-rc1
+ganttChartDashboards 1.0.0.0-rc1
+indexManagementDashboards 1.0.0.0-rc1
+notebooksDashboards 1.0.0.0-rc1
+queryWorkbenchDashboards 1.0.0.0-rc1
+reportsDashboards 1.0.0.0-rc1
+securityDashboards 1.0.0.0-rc1
+traceAnalyticsDashboards 1.0.0.0-rc1
+</pre>
+    </td>
+  </tr>
  <tr>
    <td>1.0.0-beta1</td>
    <td>
-    <pre>
-alertingDashboards 1.0.0.0-beta1
+    <pre>alertingDashboards 1.0.0.0-beta1
anomalyDetectionDashboards 1.0.0.0-beta1
ganttChartDashboards 1.0.0.0-beta1
indexManagementDashboards 1.0.0.0-beta1

@@ -60,6 +74,7 @@ traceAnalyticsDashboards 1.0.0.0-beta1

Navigate to the OpenSearch Dashboards home directory (likely `/usr/share/opensearch-dashboards`) and run the install command for each plugin.

+{% comment %}

#### Security OpenSearch Dashboards

@@ -146,6 +161,7 @@ sudo bin/opensearch-dashboards-plugin install https://d3g5vo6xdbdb9a.cloudfront.

This plugin adds a new Gantt chart visualization.

+{% endcomment %}

## List installed plugins

@@ -16,13 +16,13 @@ OpenSearch can perform aggregations on massive datasets in milliseconds. Compare

## Aggregations on text fields

-By default, OpenSearch doesn't support aggregations on a text field.
-Because text fields are tokenized, an aggregation on a text field has to reverse the tokenization process back to its original string and then formulate an aggregation based on that. Such an operation consumes significant memory and degrades cluster performance.
+By default, OpenSearch doesn't support aggregations on a text field. Because text fields are tokenized, an aggregation on a text field has to reverse the tokenization process back to its original string and then formulate an aggregation based on that. This kind of operation consumes significant memory and degrades cluster performance.

While you can enable aggregations on text fields by setting the `fielddata` parameter to `true` in the mapping, the aggregations are still based on the tokenized words and not on the raw text.

We recommend keeping a raw version of the text field as a `keyword` field that you can aggregate on.
-In this case, you can perform aggregations on the `title.raw` field, instead of the `title` field:
+In this case, you can perform aggregations on the `title.raw` field, instead of on the `title` field:

```json
PUT movies
@@ -61,15 +61,13 @@ GET _search

If you're only interested in the aggregation result and not in the results of the query, set `size` to 0.

-In the `aggs` property (you can use `aggregations` if you want), you can define any number of aggregations.
-Each aggregation is defined by its name and one of the types of aggregations that OpenSearch supports.
+In the `aggs` property (you can use `aggregations` if you want), you can define any number of aggregations. Each aggregation is defined by its name and one of the types of aggregations that OpenSearch supports.

-The name of the aggregation helps you to distinguish between different aggregations in the response.
-The `AGG_TYPE` property is where you specify the type of aggregation.
+The name of the aggregation helps you to distinguish between different aggregations in the response. The `AGG_TYPE` property is where you specify the type of aggregation.
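To make the shape concrete, here's a small sketch that mimics what a `terms` bucket aggregation with a nested `avg` metric computes (in memory, over invented documents; OpenSearch does this at scale across shards):

```python
from collections import defaultdict

# Invented documents standing in for an index.
docs = [
    {"genre": "Comedy", "rating": 7.5},
    {"genre": "Comedy", "rating": 6.5},
    {"genre": "Drama", "rating": 8.0},
]

# A terms aggregation bucketizes documents by field value; a nested
# avg metric aggregation then runs inside each bucket.
buckets = defaultdict(list)
for doc in docs:
    buckets[doc["genre"]].append(doc["rating"])

avg_rating = {genre: sum(r) / len(r) for genre, r in buckets.items()}
print(avg_rating)
```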

## Sample aggregation

-This section uses the OpenSearch Dashboards sample e-commerce data and web log data. To add the sample data, log in to OpenSearch Dashboards, choose **Home** and **Try our sample data**. For **Sample eCommerce orders** and **Sample web logs**, choose **Add data**.
+This section uses the OpenSearch Dashboards sample ecommerce data and web log data. To add the sample data, log in to OpenSearch Dashboards, choose **Home**, and then choose **Try our sample data**. For **Sample eCommerce orders** and **Sample web logs**, choose **Add data**.

### avg

@@ -129,7 +127,7 @@ There are three main types of aggregations:

## Nested aggregations

-Aggregations within aggregations are called nested or sub aggregations.
+Aggregations within aggregations are called nested or subaggregations.

Metric aggregations produce simple results and can't contain nested aggregations.

@@ -112,26 +112,26 @@ networks:
Then make your changes to `opensearch.yml`. For a full list of settings, see [Security](../../../security/configuration/). This example adds (extremely) verbose audit logging:

```yml
-opensearch_security.ssl.transport.pemcert_filepath: node.pem
-opensearch_security.ssl.transport.pemkey_filepath: node-key.pem
-opensearch_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
-opensearch_security.ssl.transport.enforce_hostname_verification: false
-opensearch_security.ssl.http.enabled: true
-opensearch_security.ssl.http.pemcert_filepath: node.pem
-opensearch_security.ssl.http.pemkey_filepath: node-key.pem
-opensearch_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
-opensearch_security.allow_default_init_securityindex: true
-opensearch_security.authcz.admin_dn:
+plugins.security.ssl.transport.pemcert_filepath: node.pem
+plugins.security.ssl.transport.pemkey_filepath: node-key.pem
+plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
+plugins.security.ssl.transport.enforce_hostname_verification: false
+plugins.security.ssl.http.enabled: true
+plugins.security.ssl.http.pemcert_filepath: node.pem
+plugins.security.ssl.http.pemkey_filepath: node-key.pem
+plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
+plugins.security.allow_default_init_securityindex: true
+plugins.security.authcz.admin_dn:
  - CN=A,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA
-opensearch_security.nodes_dn:
+plugins.security.nodes_dn:
  - 'CN=N,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
-opensearch_security.audit.type: internal_opensearch
-opensearch_security.enable_snapshot_restore_privilege: true
-opensearch_security.check_snapshot_restore_write_privileges: true
-opensearch_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
+plugins.security.audit.type: internal_opensearch
+plugins.security.enable_snapshot_restore_privilege: true
+plugins.security.check_snapshot_restore_write_privileges: true
+plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
cluster.routing.allocation.disk.threshold_enabled: false
-opensearch_security.audit.config.disabled_rest_categories: NONE
-opensearch_security.audit.config.disabled_transport_categories: NONE
+plugins.security.audit.config.disabled_rest_categories: NONE
+plugins.security.audit.config.disabled_transport_categories: NONE
```

Use this same override process to specify new [authentication settings](../../../security/configuration/configuration/) in `/usr/share/opensearch/plugins/opensearch-security/securityconfig/config.yml`, as well as new default [internal users, roles, mappings, action groups, and tenants](../../../security/configuration/yaml/).
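As the diff above shows, the migration from the legacy `opensearch_security.*` keys to `plugins.security.*` is a mechanical prefix rename. A hypothetical helper (not an official tool) that rewrites an existing `opensearch.yml`:

```python
LEGACY_PREFIX = "opensearch_security."
NEW_PREFIX = "plugins.security."

def migrate_line(line):
    """Rewrite a single opensearch.yml line from the legacy
    opensearch_security.* prefix to plugins.security.*; leave
    everything else (comments, other settings, indented values) alone."""
    if line.startswith(LEGACY_PREFIX):
        return NEW_PREFIX + line[len(LEGACY_PREFIX):]
    return line

config = """\
opensearch_security.ssl.http.enabled: true
cluster.routing.allocation.disk.threshold_enabled: false
opensearch_security.restapi.roles_enabled: ["all_access"]
"""

migrated = "\n".join(migrate_line(l) for l in config.splitlines())
print(migrated)
```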

@@ -166,13 +166,13 @@ volumes:
Remember that the certificates you specify in your Docker Compose file must be the same as the certificates listed in your custom `opensearch.yml` file. At a minimum, you should replace the root, admin, and node certificates with your own. For more information about adding and using certificates, see [Configure TLS certificates](../security/configuration/tls.md).

```yml
-opensearch_security.ssl.transport.pemcert_filepath: new-node-cert.pem
-opensearch_security.ssl.transport.pemkey_filepath: new-node-cert-key.pem
-opensearch_security.ssl.transport.pemtrustedcas_filepath: new-root-ca.pem
-opensearch_security.ssl.http.pemcert_filepath: new-node-cert.pem
-opensearch_security.ssl.http.pemkey_filepath: new-node-cert-key.pem
-opensearch_security.ssl.http.pemtrustedcas_filepath: new-root-ca.pem
-opensearch_security.authcz.admin_dn:
+plugins.security.ssl.transport.pemcert_filepath: new-node-cert.pem
+plugins.security.ssl.transport.pemkey_filepath: new-node-cert-key.pem
+plugins.security.ssl.transport.pemtrustedcas_filepath: new-root-ca.pem
+plugins.security.ssl.http.pemcert_filepath: new-node-cert.pem
+plugins.security.ssl.http.pemkey_filepath: new-node-cert-key.pem
+plugins.security.ssl.http.pemtrustedcas_filepath: new-root-ca.pem
+plugins.security.authcz.admin_dn:
  - CN=admin,OU=SSL,O=Test,L=Test,C=DE
```


@@ -192,25 +192,25 @@ You can also configure `docker-compose.yml` and `opensearch.yml` [to take your o
1. Enable the Performance Analyzer plugin:

   ```bash
-   curl -XPOST localhost:9200/_opensearch/_performanceanalyzer/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}'
+   curl -XPOST localhost:9200/_plugins/_performanceanalyzer/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}'
   ```

   If you receive the `curl: (52) Empty reply from server` error, you are likely protecting your cluster with the security plugin and you need to provide credentials. Modify the following command to use your username and password:

   ```bash
-   curl -XPOST https://localhost:9200/_opensearch/_performanceanalyzer/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:admin' -k
+   curl -XPOST https://localhost:9200/_plugins/_performanceanalyzer/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:admin' -k
   ```

1. Enable the Root Cause Analyzer (RCA) framework:

   ```bash
-   curl -XPOST localhost:9200/_opensearch/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}'
+   curl -XPOST localhost:9200/_plugins/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}'
   ```

   Similar to step 1, if you run into `curl: (52) Empty reply from server`, run the following command to enable RCA:

   ```bash
-   curl -XPOST https://localhost:9200/_opensearch/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:admin' -k
+   curl -XPOST https://localhost:9200/_plugins/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:admin' -k
   ```

1. By default, Performance Analyzer's endpoints are not accessible from outside the Docker container.
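The two toggles above differ only in scheme, path suffix, and credentials, so if you script them it helps to generate the variants from one place. A sketch (standard library only; the URL and flags mirror the curl examples above, and the helper itself is hypothetical):

```python
import json

def pa_config_command(enable, secure=False, creds=None):
    """Build the curl command that toggles Performance Analyzer.
    `secure` switches to https and skips certificate validation (-k),
    mirroring the credentialed example above. Illustrative only."""
    scheme = "https" if secure else "http"
    cmd = (
        f"curl -XPOST {scheme}://localhost:9200"
        "/_plugins/_performanceanalyzer/cluster/config"
        " -H 'Content-Type: application/json'"
        f" -d '{json.dumps({'enabled': enable})}'"
    )
    if secure and creds:
        cmd += f" -u '{creds}' -k"
    return cmd

print(pa_config_command(True, secure=True, creds="admin:admin"))
```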

@@ -299,7 +299,7 @@ docker build --tag=opensearch-custom-plugin .
docker run -p 9200:9200 -p 9600:9600 -v /usr/share/opensearch/data opensearch-custom-plugin
```

-You can also use a `Dockerfile` to pass your own certificates for use with the [Security](../../../security/) plugin, similar to the `-v` argument in [Configure OpenSearch](#configure-opensearch):
+You can also use a `Dockerfile` to pass your own certificates for use with the [security](../../../security/) plugin, similar to the `-v` argument in [Configure OpenSearch](#configure-opensearch):

```
FROM opensearchproject/opensearch:{{site.opensearch_version}}

@@ -313,11 +313,11 @@ Alternately, you might want to remove a plugin. This `Dockerfile` removes the se

```
FROM opensearchproject/opensearch:{{site.opensearch_version}}
-RUN /usr/share/opensearch/bin/opensearch-plugin remove opensearch_security
+RUN /usr/share/opensearch/bin/opensearch-plugin remove opensearch-security
COPY --chown=opensearch:opensearch opensearch.yml /usr/share/opensearch/config/
```

-In this case, `opensearch.yml` is a "vanilla" version of the file with no OpenSearch entries. It might look like this:
+In this case, `opensearch.yml` is a "vanilla" version of the file with no plugin entries. It might look like this:

```yml
cluster.name: "docker-cluster"

@@ -30,6 +30,23 @@ If you don't want to use the all-in-one OpenSearch installation options, you can
  </tr>
</thead>
<tbody>
+  <tr>
+    <td>1.0.0-rc1</td>
+    <td>
+    <pre>opensearch-alerting 1.0.0.0-rc1
+opensearch-anomaly-detection 1.0.0.0-rc1
+opensearch-asynchronous-search 1.0.0.0-rc1
+opensearch-index-management 1.0.0.0-rc1
+opensearch-job-scheduler 1.0.0.0-rc1
+opensearch-knn 1.0.0.0-rc1
+opensearch-notebooks 1.0.0.0-rc1
+opensearch-performance-analyzer 1.0.0.0-rc1
+opensearch-reports-scheduler 1.0.0.0-rc1
+opensearch-security 1.0.0.0-rc1
+opensearch-sql 1.0.0.0-rc1
+</pre>
+    </td>
+  </tr>
  <tr>
    <td>1.0.0-beta1</td>
    <td>

@@ -65,7 +82,7 @@ Then you can specify the version that you need:
sudo yum install opensearch-oss-6.7.1
```

{% endcomment %}


## Install plugins

@@ -227,9 +244,9 @@ Performance Analyzer requires some manual configuration after installing the plu
1. Send a test request:

   ```bash
-   curl -XGET "localhost:9600/_opensearch/_performanceanalyzer/metrics?metrics=Latency,CPU_Utilization&agg=avg,max&dim=ShardID&nodes=all"
+   curl -XGET "localhost:9600/_plugins/_performanceanalyzer/metrics?metrics=Latency,CPU_Utilization&agg=avg,max&dim=ShardID&nodes=all"
   ```

{% endcomment %}

## List installed plugins


@@ -118,25 +118,25 @@ In a tarball installation, Performance Analyzer collects data when it is enabled
1. In a separate window, enable the Performance Analyzer plugin:

   ```bash
-   curl -XPOST localhost:9200/_opensearch/_performanceanalyzer/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}'
+   curl -XPOST localhost:9200/_plugins/_performanceanalyzer/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}'
   ```

   If you receive the `curl: (52) Empty reply from server` error, you are likely protecting your cluster with the security plugin and you need to provide credentials. Modify the following command to use your username and password:

   ```bash
-   curl -XPOST https://localhost:9200/_opensearch/_performanceanalyzer/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:admin' -k
+   curl -XPOST https://localhost:9200/_plugins/_performanceanalyzer/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:admin' -k
   ```

1. Finally, enable the Root Cause Analyzer (RCA) framework:

   ```bash
-   curl -XPOST localhost:9200/_opensearch/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}'
+   curl -XPOST localhost:9200/_plugins/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}'
   ```

   Similar to step 4, if you run into `curl: (52) Empty reply from server`, run the following command to enable RCA:

   ```bash
-   curl -XPOST https://localhost:9200/_opensearch/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:admin' -k
+   curl -XPOST https://localhost:9200/_plugins/_performanceanalyzer/rca/cluster/config -H 'Content-Type: application/json' -d '{"enabled": true}' -u 'admin:admin' -k
   ```

{% comment %}


@@ -0,0 +1,72 @@
---
layout: default
title: Cluster health
parent: REST API reference
grand_parent: OpenSearch
nav_order: 45
---

# Cluster health

The most basic cluster health request returns a simple status of the health of your cluster. OpenSearch expresses cluster health in three colors: green, yellow, and red. A green status means all primary shards and their replicas are allocated to nodes. A yellow status means all primary shards are allocated to nodes, but some replicas aren't. A red status means at least one primary shard is not allocated to any node.

To get the status of a specific index, provide the index name.

## Example

This request waits 50 seconds for the cluster to reach the yellow status or better:

```
GET /_cluster/health?wait_for_status=yellow&timeout=50s
```

If the cluster health becomes yellow or green before 50 seconds elapse, the request returns a response immediately. Otherwise, it returns a response as soon as the timeout expires.

## Path and HTTP methods

```
GET /_cluster/health
GET /_cluster/health/<index>
```

## URL parameters

All cluster health parameters are optional.

Parameter | Type | Description
:--- | :--- | :---
expand_wildcards | enum | Expands wildcard expressions to concrete indices. Combine multiple values with commas. Supported values are `all`, `open`, `closed`, `hidden`, and `none`. Default is `open`.
level | enum | The level of detail for returned health information. Supported values are `cluster`, `indices`, and `shards`. Default is `cluster`.
local | boolean | Whether to return information from the local node only instead of from the master node. Default is false.
master_timeout | time | The amount of time to wait for a connection to the master node. Default is 30 seconds.
timeout | time | The amount of time to wait for a response. If the timeout expires, the request fails. Default is 30 seconds.
wait_for_active_shards | string | Wait until the specified number of shards is active before returning a response. `all` for all shards. Default is `0`.
wait_for_events | enum | Wait until all currently queued events with the given priority are processed. Supported values are `immediate`, `urgent`, `high`, `normal`, `low`, and `languid`.
wait_for_no_relocating_shards | boolean | Whether to wait until there are no relocating shards in the cluster. Default is false.
wait_for_no_initializing_shards | boolean | Whether to wait until there are no initializing shards in the cluster. Default is false.
wait_for_status | enum | Wait until the cluster is in a specific state or better. Supported values are `green`, `yellow`, and `red`.

<!-- wait_for_nodes | string | Wait until the specified number of nodes is available. Also supports operators <=, >=, <, and >
# Not working properly when tested -->

## Response

```json
{
  "cluster_name" : "opensearch-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 6,
  "active_shards" : 12,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
```
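A client typically branches on `status` and `timed_out`. A small sketch that summarizes a response like the one above (plain dict handling, no client library assumed):

```python
import json

# A trimmed version of the sample response above.
response = json.loads("""
{
  "cluster_name": "opensearch-cluster",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 2,
  "active_shards": 12,
  "unassigned_shards": 0,
  "active_shards_percent_as_number": 100.0
}
""")

def health_summary(resp):
    """Collapse a cluster health response into a one-line summary.
    A timed-out wait is flagged because the reported status is then
    only a snapshot, not the state you waited for."""
    note = " (timed out waiting for wait_for_status)" if resp["timed_out"] else ""
    return (
        f"{resp['cluster_name']}: {resp['status']}, "
        f"{resp['active_shards_percent_as_number']:.0f}% shards active{note}"
    )

print(health_summary(response))
```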

@@ -10,7 +10,7 @@ nav_order: 1
Performance Analyzer uses a single HTTP method and URI for most requests:

```
-GET <endpoint>:9600/_opensearch/_performanceanalyzer/metrics
+GET <endpoint>:9600/_plugins/_performanceanalyzer/metrics
```

Note the use of port 9600. Provide parameters for metrics, aggregations, dimensions, and nodes (optional):

@@ -25,7 +25,7 @@ For a full list of metrics, see [Metrics reference](../reference/). Performance
#### Sample request

```
-GET localhost:9600/_opensearch/_performanceanalyzer/metrics?metrics=Latency,CPU_Utilization&agg=avg,max&dim=ShardID&nodes=all
+GET localhost:9600/_plugins/_performanceanalyzer/metrics?metrics=Latency,CPU_Utilization&agg=avg,max&dim=ShardID&nodes=all
```

@@ -104,7 +104,7 @@ Performance Analyzer has one additional URI that returns the unit for each metri
#### Sample request

```
-GET localhost:9600/_opensearch/_performanceanalyzer/metrics/units
+GET localhost:9600/_plugins/_performanceanalyzer/metrics/units
```


@@ -25,25 +25,25 @@ npm install -g @aws/opensearch-perftop
The basic syntax is:

```bash
-./perf-top-<operating_system> --dashboard <dashboard>.json --endpoint <endpoint>
+./opensearch-perf-top-<operating_system> --dashboard <dashboard>.json --endpoint <endpoint>
```

If you're using npm, the syntax is similar:

```bash
-perf-top --dashboard <dashboard> --endpoint <endpoint>
+opensearch-perf-top --dashboard <dashboard> --endpoint <endpoint>
```

If you're running PerfTop from a node (i.e. locally), specify port 9600:

```bash
-./perf-top-linux --dashboard dashboards/<dashboard>.json --endpoint localhost:9600
+./opensearch-perf-top-linux --dashboard dashboards/<dashboard>.json --endpoint localhost:9600
```

Otherwise, just specify the OpenSearch endpoint:

```bash
-./perf-top-macos --dashboard dashboards/<dashboard>.json --endpoint my-cluster.my-domain.com
+./opensearch-perf-top-macos --dashboard dashboards/<dashboard>.json --endpoint my-cluster.my-domain.com
```

PerfTop has four pre-built dashboards in the `dashboards` directory, but you can also [create your own](dashboards/).

@@ -83,10 +83,10 @@ mount -o remount /dev/shm

### Security

-Performance Analyzer supports encryption in transit for requests. It currently does *not* support client or server authentication for requests. To enable encryption in transit, edit `performance-analyzer.properties` in your `$ES_HOME` directory:
+Performance Analyzer supports encryption in transit for requests. It currently does *not* support client or server authentication for requests. To enable encryption in transit, edit `performance-analyzer.properties` in your `$OPENSEARCH_HOME` directory:

```bash
-vi $ES_HOME/plugins/opensearch_performance_analyzer/pa_config/performance-analyzer.properties
+vi $OPENSEARCH_HOME/plugins/opensearch-performance-analyzer/pa_config/performance-analyzer.properties
```

Change the following lines to configure encryption in transit. Note that `certificate-file-path` must be a certificate for the server, not a root CA:


@@ -12,10 +12,10 @@ nav_order: 1

```
# Request all available RCAs
-GET localhost:9600/_opensearch/_performanceanalyzer/rca
+GET localhost:9600/_plugins/_performanceanalyzer/rca

# Request a specific RCA
-GET localhost:9600/_opensearch/_performanceanalyzer/rca?name=HighHeapUsageClusterRca
+GET localhost:9600/_plugins/_performanceanalyzer/rca?name=HighHeapUsageClusterRca
```

@@ -8,4 +8,4 @@ nav_order: 3

# RCA reference

-You can find a reference of available RCAs and their purposes on [GitHub](https://github.com/opensearch-project/performance-analyzer-rca/tree/master/docs).
+You can find a reference of available RCAs and their purposes on [GitHub](https://github.com/opensearch-project/performance-analyzer-rca/tree/main/docs).


@@ -46,7 +46,7 @@ search source=accounts;
```

| account_number | firstname | address | balance | gender | city | employer | state | age | email | lastname |
-:--- | :--- |
+:--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
| 1 | Amber | 880 Holmes Lane | 39225 | M | Brogan | Pyrami | IL | 32 | amberduke@pyrami.com | Duke
| 6 | Hattie | 671 Bristol Street | 5686 | M | Dante | Netagy | TN | 36 | hattiebond@netagy.com | Bond
| 13 | Nanette | 789 Madison Street | 32838 | F | Nogal | Quility | VA | 28 | null | Bates

@@ -61,7 +61,7 @@ search source=accounts account_number=1 or gender="F";
```

| account_number | firstname | address | balance | gender | city | employer | state | age | email | lastname |
-:--- | :--- |
+:--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
| 1 | Amber | 880 Holmes Lane | 39225 | M | Brogan | Pyrami | IL | 32 | amberduke@pyrami.com | Duke |
| 13 | Nanette | 789 Madison Street | 32838 | F | Nogal | Quility | VA | 28 | null | Bates |


@@ -513,13 +513,11 @@ Use the `head` command to return the first N number of results in a specified se
### Syntax

```sql
-head [keeplast = (true | false)] [while "("<boolean-expression>")"] [N]
+head [N]
```

Field | Description | Required | Default
:--- | :--- | :--- | :---
-`keeplast` | Use along with the `while` argument to check if the last result in the result set is retained. The last result is what caused the `while` condition to evaluate to false or NULL. Set `keeplast` to true to retain the last result and false to discard it. | No | True
-`while` | An expression that evaluates to either true or false. You cannot use statistical functions in this expression. | No | False
`N` | Specify the number of results to return. | No | 10
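With the `keeplast` and `while` options removed, the surviving `head [N]` semantics amount to a prefix take over the piped results. A sketch over invented rows:

```python
def head(results, n=10):
    """Return the first n results, mirroring `... | head n` (default 10)."""
    return results[:n]

rows = [
    {"firstname": "Amber", "age": 32},
    {"firstname": "Hattie", "age": 36},
    {"firstname": "Nanette", "age": 28},
]

print(head(rows, 2))
```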

*Example 1*: Get the first 10 results

@@ -549,31 +547,6 @@ search source=accounts | fields firstname, age | head 2;
| Amber | 32
| Hattie | 36

-*Example 3*: Get the first N results that match a while condition
-
-To get the first 3 results from all accounts with age less than 30:
-
-```sql
-search source=accounts | fields firstname, age | sort age | head while(age < 30) 3;
-```
-
-| firstname | age
-:--- | :--- |
-| Nanette | 28
-| Amber | 32
-
-*Example 4*: Get the first N results with a while condition with the last result that failed the condition
-
-To get the first 3 results from all accounts with age less than 30 and include the last failed condition:
-
-```sql
-search source=accounts | fields firstname, age | sort age | head keeplast=false while(age < 30) 3;
-```
-
-| firstname | age
-:--- | :--- |
-| Nanette | 28

## rare

Use the `rare` command to find the least common values of all fields in a field list.


@@ -10,13 +10,13 @@ nav_order: 1
To send a query request to the PPL plugin, use the HTTP POST request.
We recommend a POST request because it doesn't have any length limit and it allows you to pass other parameters to the plugin for other functionality.

-Use the explain endpoint for query translation and troubleshooting.
+Use the `_explain` endpoint for query translation and troubleshooting.

## Request Format

-To use the PPL plugin with your own applications, send requests to `_opensearch/_ppl`, with your query in the request body:
+To use the PPL plugin with your own applications, send requests to `_plugins/_ppl`, with your query in the request body:

```json
-curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_ppl \
+curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_ppl \
... -d '{"query" : "source=accounts | fields firstname, lastname"}'
```
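The request body is plain JSON with a single `query` field. If you're scripting requests, building the body with a JSON serializer avoids hand-quoting mistakes in the PPL string (illustrative; uses only the standard library):

```python
import json

# A PPL query containing double quotes of its own.
ppl = 'source=accounts | where gender = "F" | fields firstname, lastname'

# Serialize rather than hand-quote: the inner double quotes in the PPL
# string are escaped correctly for the POST body sent to _plugins/_ppl.
body = json.dumps({"query": ppl})
print(body)
```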

@@ -48,11 +48,11 @@ search source=accounts

#### Sample Response

-| id | firstname | lastname |
-:--- | :--- | :--- |
-| 0 | Amber | Duke
-| 1 | Hattie | Bond
-| 2 | Nanette | Bates
-| 3 | Dale | Adams
+firstname | lastname |
+:--- | :--- |
+Amber | Duke
+Hattie | Bond
+Nanette | Bates
+Dale | Adams

![PPL query workbench](../images/ppl.png)


@@ -14,7 +14,7 @@ The PPL plugin provides responses in JDBC format. The JDBC format is widely used
The body of the HTTP POST request can take a few additional fields with the PPL query:

```json
-curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_ppl \
+curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_ppl \
... -d '{"query" : "source=accounts | fields firstname, lastname"}'
```

@@ -58,7 +58,7 @@ The following example shows a normal response where the schema includes a field
If an error occurs, the error message and its cause are returned instead:

```json
-curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_ppl \
+curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_ppl \
... -d '{"query" : "source=unknown | fields firstname, lastname"}'
{
  "error": {


@@ -15,7 +15,7 @@ You can update these settings like any other cluster setting:
PUT _cluster/settings
{
  "transient": {
-    "opensearch": {
+    "plugins": {
      "ppl": {
        "enabled": "false"
      }

@@ -24,12 +24,26 @@ PUT _cluster/settings
  }
}
```

-Requests to `_opensearch/_ppl` include index names in the request body, so they have the same access policy considerations as the `bulk`, `mget`, and `msearch` operations. If you set the `rest.action.multi.allow_explicit_index` parameter to `false`, the PPL plugin is disabled.
+Similarly, you can update the settings by sending a request to the plugin setting endpoint `_plugins/_query/settings`:
+
+```json
+PUT _plugins/_query/settings
+{
+  "transient": {
+    "plugins": {
+      "ppl": {
+        "enabled": "false"
+      }
+    }
+  }
+}
+```
+
+Requests to `_plugins/_ppl` include index names in the request body, so they have the same access policy considerations as the `bulk`, `mget`, and `msearch` operations. If you set the `rest.action.multi.allow_explicit_index` parameter to `false`, the PPL plugin is disabled.

You can specify the settings shown in the following table:

Setting | Description | Default
:--- | :--- | :---
-`opensearch.ppl.enabled` | Change to `false` to disable the plugin. | True
-`opensearch.ppl.query.memory_limit` | Set heap memory usage limit. If a query crosses this limit, it's terminated. | 85%
-`opensearch.query.size_limit` | Set the maximum number of results that you want to see. This impacts the accuracy of aggregation operations. For example, if you have 1000 documents in an index, by default, only 200 documents are extracted from the index for aggregation. | 200
+`plugins.ppl.enabled` | Change to `false` to disable the PPL component. | True
+`plugins.query.memory_limit` | Set the heap memory usage limit. If a query crosses this limit, it's terminated. | 85%
+`plugins.query.size_limit` | Set the maximum number of results that you want to see. This impacts the accuracy of aggregation operations. For example, if you have 1000 documents in an index, by default, only 200 documents are extracted from the index for aggregation. | 200
|
||||
|
|
|
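A minimal shell sketch of the new-style settings request above; the endpoint and payload mirror the diff, and the cluster host is an assumption:

```shell
# Build the transient settings payload for the renamed endpoint.
SETTINGS_BODY='{"transient": {"plugins": {"ppl": {"enabled": "false"}}}}'

# The real call (commented out so the snippet runs offline):
# curl -H 'Content-Type: application/json' -X PUT \
#   "localhost:9200/_plugins/_query/settings" -d "$SETTINGS_BODY"

# Confirm the payload parses and carries the expected key path.
echo "$SETTINGS_BODY" | python3 -c 'import json, sys; print(json.load(sys.stdin)["transient"]["plugins"]["ppl"]["enabled"])'  # prints "false"
```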
@@ -24,13 +24,13 @@ The security plugin REST API lets you programmatically create and manage users,

Just like OpenSearch permissions, you control access to the security plugin REST API using roles. Specify roles in `opensearch.yml`:

```yml
-opensearch_security.restapi.roles_enabled: ["<role>", ...]
+plugins.security.restapi.roles_enabled: ["<role>", ...]
```

These roles can now access all APIs. To prevent access to certain APIs:

```yml
-opensearch_security.restapi.endpoints_disabled.<role>.<endpoint>: ["<method>", ...]
+plugins.security.restapi.endpoints_disabled.<role>.<endpoint>: ["<method>", ...]
```

Possible values for `endpoint` are:

@@ -55,15 +55,15 @@ Possible values for `method` are:

For example, the following configuration grants three roles access to the REST API, but then prevents `test-role` from making PUT, POST, DELETE, or PATCH requests to `_opensearch/_security/api/roles` or `_opensearch/_security/api/internalusers`:

```yml
-opensearch_security.restapi.roles_enabled: ["all_access", "security_rest_api_access", "test-role"]
-opensearch_security.restapi.endpoints_disabled.test-role.ROLES: ["PUT", "POST", "DELETE", "PATCH"]
-opensearch_security.restapi.endpoints_disabled.test-role.INTERNALUSERS: ["PUT", "POST", "DELETE", "PATCH"]
+plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access", "test-role"]
+plugins.security.restapi.endpoints_disabled.test-role.ROLES: ["PUT", "POST", "DELETE", "PATCH"]
+plugins.security.restapi.endpoints_disabled.test-role.INTERNALUSERS: ["PUT", "POST", "DELETE", "PATCH"]
```

To use the PUT and PATCH methods for the [configuration APIs](#configuration), add the following line to `opensearch.yml`:

```yml
-opensearch_security.unsupported.restapi.allow_securityconfig_modification: true
+plugins.security.unsupported.restapi.allow_securityconfig_modification: true
```
@@ -32,12 +32,12 @@ Field masking works alongside field-level security on the same per-role, per-index

You set the salt (a random string used to hash your data) in `opensearch.yml`:

```yml
-opensearch_security.compliance.salt: abcdefghijklmnopqrstuvqxyz1234567890
+plugins.security.compliance.salt: abcdefghijklmnopqrstuvqxyz1234567890
```

Property | Description
:--- | :---
-`opensearch_security.compliance.salt` | The salt to use when generating the hash value. Must be at least 32 characters. Only ASCII characters are allowed. Optional.
+`plugins.security.compliance.salt` | The salt to use when generating the hash value. Must be at least 32 characters. Only ASCII characters are allowed. Optional.

Setting the salt is optional, but we highly recommend it.
@@ -20,7 +20,7 @@ Impersonation can occur on either the REST interface or at the transport layer.

To allow one user to impersonate another, add the following to `opensearch.yml`:

```yml
-opensearch_security.authcz.rest_impersonation_user:
+plugins.security.authcz.rest_impersonation_user:
  <AUTHENTICATED_USER>:
    - <IMPERSONATED_USER_1>
    - <IMPERSONATED_USER_2>

@@ -34,7 +34,7 @@ The impersonated user field supports wildcards. Setting it to `*` allows `AUTHEN

In a similar fashion, add the following to enable transport layer impersonation:

```yml
-opensearch_security.authcz.impersonation_dn:
+plugins.security.authcz.impersonation_dn:
  "CN=spock,OU=client,O=client,L=Test,C=DE":
    - worf
```
@@ -48,21 +48,21 @@ Setting | Description

opensearch.username: kibanaserver
opensearch.password: kibanaserver
opensearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
-opensearch_security.multitenancy.enabled: true
-opensearch_security.multitenancy.tenants.enable_global: true
-opensearch_security.multitenancy.tenants.enable_private: true
-opensearch_security.multitenancy.tenants.preferred: ["Private", "Global"]
-opensearch_security.multitenancy.enable_filter: false
+plugins.security.multitenancy.enabled: true
+plugins.security.multitenancy.tenants.enable_global: true
+plugins.security.multitenancy.tenants.enable_private: true
+plugins.security.multitenancy.tenants.preferred: ["Private", "Global"]
+plugins.security.multitenancy.enable_filter: false
```

Setting | Description
:--- | :---
`opensearch.requestHeadersWhitelist` | OpenSearch Dashboards requires that you whitelist all HTTP headers that it passes to OpenSearch. Multi-tenancy uses a specific header, `securitytenant`, that must be present with the standard `Authorization` header. If the `securitytenant` header is not whitelisted, OpenSearch Dashboards starts with a red status.
-`opensearch_security.multitenancy.enabled` | Enables or disables multi-tenancy in OpenSearch Dashboards. Default is true.
-`opensearch_security.multitenancy.tenants.enable_global` | Enables or disables the global tenant. Default is true.
-`opensearch_security.multitenancy.tenants.enable_private` | Enables or disables the private tenant. Default is true.
-`opensearch_security.multitenancy.tenants.preferred` | Lets you change ordering in the **Tenants** tab of OpenSearch Dashboards. By default, the list starts with global and private (if enabled) and then proceeds alphabetically. You can add tenants here to move them to the top of the list.
-`opensearch_security.multitenancy.enable_filter` | If you have many tenants, you can add a search bar to the top of the list. Default is false.
+`plugins.security.multitenancy.enabled` | Enables or disables multi-tenancy in OpenSearch Dashboards. Default is true.
+`plugins.security.multitenancy.tenants.enable_global` | Enables or disables the global tenant. Default is true.
+`plugins.security.multitenancy.tenants.enable_private` | Enables or disables the private tenant. Default is true.
+`plugins.security.multitenancy.tenants.preferred` | Lets you change ordering in the **Tenants** tab of OpenSearch Dashboards. By default, the list starts with global and private (if enabled) and then proceeds alphabetically. You can add tenants here to move them to the top of the list.
+`plugins.security.multitenancy.enable_filter` | If you have many tenants, you can add a search bar to the top of the list. Default is false.

## Add tenants
@@ -110,13 +110,13 @@ Role | Description

`anomaly_full_access` | Grants full permissions to all anomaly detection actions.
`anomaly_read_access` | Grants permissions to view detectors, but not create, modify, or delete detectors.
`all_access` | Grants full access to the cluster: all cluster-wide operations, write to all indices, write to all tenants.
-`kibana_read_only` | A special role that prevents users from making changes to visualizations, dashboards, and other OpenSearch Dashboards objects. See `opensearch_security.readonly_mode.roles` in `opensearch_dashboards.yml`. Pair with the `kibana_user` role.
+`kibana_read_only` | A special role that prevents users from making changes to visualizations, dashboards, and other OpenSearch Dashboards objects. See `plugins.security.readonly_mode.roles` in `opensearch_dashboards.yml`. Pair with the `kibana_user` role.
`kibana_user` | Grants permissions to use OpenSearch Dashboards: cluster-wide searches, index monitoring, and write to various OpenSearch Dashboards indices.
`logstash` | Grants permissions for Logstash to interact with the cluster: cluster-wide searches, cluster monitoring, and write to the various Logstash indices.
`manage_snapshots` | Grants permissions to manage snapshot repositories, take snapshots, and restore snapshots.
`readall` | Grants permissions for cluster-wide searches like `msearch` and search permissions for all indices.
`readall_and_monitor` | Same as `readall`, but with added cluster monitoring permissions.
-`security_rest_api_access` | A special role that allows access to the REST API. See `opensearch_security.restapi.roles_enabled` in `opensearch.yml` and [Access control for the API](../api/#access-control-for-the-api).
+`security_rest_api_access` | A special role that allows access to the REST API. See `plugins.security.restapi.roles_enabled` in `opensearch.yml` and [Access control for the API](../api/#access-control-for-the-api).
`reports_read_access` | Grants permissions to generate on-demand reports, download existing reports, and view report definitions, but not to create report definitions.
`reports_instances_read_access` | Grants permissions to generate on-demand reports and download existing reports, but not to view or create report definitions.
`reports_full_access` | Grants full permissions to reports.
@@ -18,7 +18,7 @@ The following attributes are logged for all event categories, independent of the

Name | Description
:--- | :---
`audit_format_version` | The audit log message format version.
-`audit_category` | The audit log category, one of FAILED_LOGIN, MISSING_PRIVILEGES, BAD_HEADERS, SSL_EXCEPTION, opensearch_SECURITY_INDEX_ATTEMPT, AUTHENTICATED or GRANTED_PRIVILEGES.
+`audit_category` | The audit log category. FAILED_LOGIN, MISSING_PRIVILEGES, BAD_HEADERS, SSL_EXCEPTION, OPENSEARCH_SECURITY_INDEX_ATTEMPT, AUTHENTICATED, or GRANTED_PRIVILEGES.
`audit_node_id` | The ID of the node where the event was generated.
`audit_node_name` | The name of the node where the event was generated.
`audit_node_host_address` | The host address of the node where the event was generated.
@@ -16,7 +16,7 @@ To enable audit logging:

1. Add the following line to `opensearch.yml` on each node:

```yml
-opensearch_security.audit.type: internal_opensearch
+plugins.security.audit.type: internal_opensearch
```

This setting stores audit logs on the current cluster. For other storage options, see [Audit Log Storage Types](storage-types/).

@@ -57,22 +57,22 @@ These default log settings work well for most use cases, but you can change sett

To exclude categories, set:

```yml
-opensearch_security.audit.config.disabled_rest_categories: <disabled categories>
-opensearch_security.audit.config.disabled_transport_categories: <disabled categories>
+plugins.security.audit.config.disabled_rest_categories: <disabled categories>
+plugins.security.audit.config.disabled_transport_categories: <disabled categories>
```

For example:

```yml
-opensearch_security.audit.config.disabled_rest_categories: AUTHENTICATED, opensearch_SECURITY_INDEX_ATTEMPT
-opensearch_security.audit.config.disabled_transport_categories: GRANTED_PRIVILEGES
+plugins.security.audit.config.disabled_rest_categories: AUTHENTICATED, opensearch_SECURITY_INDEX_ATTEMPT
+plugins.security.audit.config.disabled_transport_categories: GRANTED_PRIVILEGES
```

If you want to log events in all categories, use `NONE`:

```yml
-opensearch_security.audit.config.disabled_rest_categories: NONE
-opensearch_security.audit.config.disabled_transport_categories: NONE
+plugins.security.audit.config.disabled_rest_categories: NONE
+plugins.security.audit.config.disabled_transport_categories: NONE
```

@@ -81,8 +81,8 @@ opensearch_security.audit.config.disabled_transport_categories: NONE

By default, the security plugin logs events on both REST and the transport layer. You can disable either type:

```yml
-opensearch_security.audit.enable_rest: false
-opensearch_security.audit.enable_transport: false
+plugins.security.audit.enable_rest: false
+plugins.security.audit.enable_transport: false
```

@@ -91,7 +91,7 @@ opensearch_security.audit.enable_transport: false

By default, the security plugin includes the body of the request (if available) for both REST and the transport layer. If you do not want or need the request body, you can disable it:

```yml
-opensearch_security.audit.log_request_body: false
+plugins.security.audit.log_request_body: false
```

@@ -113,10 +113,10 @@ audit_trace_resolved_indices: [

You can disable this feature by setting:

```yml
-opensearch_security.audit.resolve_indices: false
+plugins.security.audit.resolve_indices: false
```

-Disabling this feature only takes effect if `opensearch_security.audit.log_request_body` is also set to `false`.
+Disabling this feature only takes effect if `plugins.security.audit.log_request_body` is also set to `false`.
{: .note }

@@ -127,7 +127,7 @@ Bulk requests can contain many indexing operations. By default, the security plu

The security plugin can be configured to log each indexing operation as a separate event:

```yml
-opensearch_security.audit.resolve_bulk_requests: true
+plugins.security.audit.resolve_bulk_requests: true
```

This change can create a massive number of events in the audit logs, so we don't recommend enabling this setting if you make heavy use of the `_bulk` API.

@@ -138,7 +138,7 @@ This change can create a massive number of events in the audit logs, so we don't

You can exclude certain requests from being logged entirely by configuring actions (for transport requests), HTTP request paths (REST), or both:

```yml
-opensearch_security.audit.ignore_requests: ["indices:data/read/*", "SearchRequest"]
+plugins.security.audit.ignore_requests: ["indices:data/read/*", "SearchRequest"]
```

@@ -147,7 +147,7 @@ opensearch_security.audit.ignore_requests: ["indices:data/read/*", "SearchReques

By default, the security plugin logs events from all users, but excludes the internal OpenSearch Dashboards server user `kibanaserver`. You can exclude other users:

```yml
-opensearch_security.audit.ignore_users:
+plugins.security.audit.ignore_users:
  - kibanaserver
  - admin
```

@@ -155,7 +155,7 @@ opensearch_security.audit.ignore_users:

If requests from all users should be logged, use `NONE`:

```yml
-opensearch_security.audit.ignore_users: NONE
+plugins.security.audit.ignore_users: NONE
```

@@ -164,13 +164,13 @@ opensearch_security.audit.ignore_users: NONE

By default, the security plugin stores audit events in a daily rolling index named `auditlog-YYYY.MM.dd`. You can configure the name of the index in `opensearch.yml`:

```yml
-opensearch_security.audit.config.index: myauditlogindex
+plugins.security.audit.config.index: myauditlogindex
```

Use a date pattern in the index name to configure daily, weekly, or monthly rolling indices:

```yml
-opensearch_security.audit.config.index: "'auditlog-'YYYY.MM.dd"
+plugins.security.audit.config.index: "'auditlog-'YYYY.MM.dd"
```

For a reference on the date pattern format, see the [Joda DateTimeFormat documentation](http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html).

@@ -181,11 +181,11 @@ For a reference on the date pattern format, see the [Joda DateTimeFormat documen

The security plugin logs events asynchronously, which keeps the performance impact on your cluster minimal. The plugin uses a fixed thread pool to log events. You can define the number of threads in the pool in `opensearch.yml`:

```yml
-opensearch_security.audit.threadpool.size: <integer>
+plugins.security.audit.threadpool.size: <integer>
```

The default setting is `10`. Setting this value to `0` disables the thread pool, which means the plugin logs events synchronously. To set the maximum queue length per thread:

```yml
-opensearch_security.audit.threadpool.max_queue_len: 100000
+plugins.security.audit.threadpool.max_queue_len: 100000
```
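The date pattern above rolls the audit index daily. The same naming can be sketched offline with date(1); the `-u` flag and strftime pattern are this example's choices, not plugin behavior:

```shell
# Compute today's audit index name, mirroring the Joda pattern
# "'auditlog-'YYYY.MM.dd" (e.g. auditlog-2021.06.01).
AUDIT_INDEX="auditlog-$(date -u +%Y.%m.%d)"
echo "$AUDIT_INDEX"
```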
@@ -21,7 +21,7 @@ log4j | Writes the events to a Log4j logger. You can use any Log4j [appender](ht

You configure the output location in `opensearch.yml`:

```
-opensearch_security.audit.type: <debug|internal_opensearch|external_opensearch|webhook|log4j>
+plugins.security.audit.type: <debug|internal_opensearch|external_opensearch|webhook|log4j>
```

`external_opensearch`, `webhook`, and `log4j` all have additional configuration options. Details follow.

@@ -32,16 +32,16 @@ opensearch_security.audit.type: <debug|internal_opensearch|external_opensearch|w

The `external_opensearch` storage type requires one or more OpenSearch endpoints with a host/IP address and port. Optionally, provide the index name and a document type.

```yml
-opensearch_security.audit.type: external_opensearch
-opensearch_security.audit.config.http_endpoints: [<endpoints>]
-opensearch_security.audit.config.index: <indexname>
-opensearch_security.audit.config.type: _doc
+plugins.security.audit.type: external_opensearch
+plugins.security.audit.config.http_endpoints: [<endpoints>]
+plugins.security.audit.config.index: <indexname>
+plugins.security.audit.config.type: _doc
```

-The security plugin uses the OpenSearch REST API to send events, just like any other indexing request. For `opensearch_security.audit.config.http_endpoints`, use a comma-separated list of hosts/IP addresses and the REST port (default 9200).
+The security plugin uses the OpenSearch REST API to send events, just like any other indexing request. For `plugins.security.audit.config.http_endpoints`, use a comma-separated list of hosts/IP addresses and the REST port (default 9200).

```
-opensearch_security.audit.config.http_endpoints: [192.168.178.1:9200,192.168.178.2:9200]
+plugins.security.audit.config.http_endpoints: [192.168.178.1:9200,192.168.178.2:9200]
```

If you use `external_opensearch` and the remote cluster also uses the security plugin, you must supply some additional parameters for authentication. These parameters depend on which authentication type you configured for the remote cluster.

@@ -51,16 +51,16 @@ If you use `external_opensearch` and the remote cluster also uses the security p

Name | Data Type | Description
:--- | :--- | :---
-`opensearch_security.audit.config.enable_ssl` | Boolean | If you enabled SSL/TLS on the receiving cluster, set to true. The default is false.
-`opensearch_security.audit.config.verify_hostnames` | Boolean | Whether to verify the hostname of the SSL/TLS certificate of the receiving cluster. Default is true.
-`opensearch_security.audit.config.pemtrustedcas_filepath` | String | The trusted root certificate of the external OpenSearch cluster, relative to the `config` directory.
-`opensearch_security.audit.config.pemtrustedcas_content` | String | Instead of specifying the path (`opensearch_security.audit.config.pemtrustedcas_filepath`), you can configure the Base64-encoded certificate content directly.
-`opensearch_security.audit.config.enable_ssl_client_auth` | Boolean | Whether to enable SSL/TLS client authentication. If you set this to true, the audit log module sends the node's certificate along with the request. The receiving cluster can use this certificate to verify the identity of the caller.
-`opensearch_security.audit.config.pemcert_filepath` | String | The path to the TLS certificate to send to the external OpenSearch cluster, relative to the `config` directory.
-`opensearch_security.audit.config.pemcert_content` | String | Instead of specifying the path (`opensearch_security.audit.config.pemcert_filepath`), you can configure the Base64-encoded certificate content directly.
-`opensearch_security.audit.config.pemkey_filepath` | String | The path to the private key of the TLS certificate to send to the external OpenSearch cluster, relative to the `config` directory.
-`opensearch_security.audit.config.pemkey_content` | String | Instead of specifying the path (`opensearch_security.audit.config.pemkey_filepath`), you can configure the Base64-encoded certificate content directly.
-`opensearch_security.audit.config.pemkey_password` | String | The password of the private key.
+`plugins.security.audit.config.enable_ssl` | Boolean | If you enabled SSL/TLS on the receiving cluster, set to true. The default is false.
+`plugins.security.audit.config.verify_hostnames` | Boolean | Whether to verify the hostname of the SSL/TLS certificate of the receiving cluster. Default is true.
+`plugins.security.audit.config.pemtrustedcas_filepath` | String | The trusted root certificate of the external OpenSearch cluster, relative to the `config` directory.
+`plugins.security.audit.config.pemtrustedcas_content` | String | Instead of specifying the path (`plugins.security.audit.config.pemtrustedcas_filepath`), you can configure the Base64-encoded certificate content directly.
+`plugins.security.audit.config.enable_ssl_client_auth` | Boolean | Whether to enable SSL/TLS client authentication. If you set this to true, the audit log module sends the node's certificate along with the request. The receiving cluster can use this certificate to verify the identity of the caller.
+`plugins.security.audit.config.pemcert_filepath` | String | The path to the TLS certificate to send to the external OpenSearch cluster, relative to the `config` directory.
+`plugins.security.audit.config.pemcert_content` | String | Instead of specifying the path (`plugins.security.audit.config.pemcert_filepath`), you can configure the Base64-encoded certificate content directly.
+`plugins.security.audit.config.pemkey_filepath` | String | The path to the private key of the TLS certificate to send to the external OpenSearch cluster, relative to the `config` directory.
+`plugins.security.audit.config.pemkey_content` | String | Instead of specifying the path (`plugins.security.audit.config.pemkey_filepath`), you can configure the Base64-encoded certificate content directly.
+`plugins.security.audit.config.pemkey_password` | String | The password of the private key.

### Basic auth settings

@@ -68,8 +68,8 @@ Name | Data Type | Description

If you enabled HTTP basic authentication on the receiving cluster, use these settings to specify the username and password:

```yml
-opensearch_security.audit.config.username: <username>
-opensearch_security.audit.config.password: <password>
+plugins.security.audit.config.username: <username>
+plugins.security.audit.config.password: <password>
```

@@ -79,11 +79,11 @@ Use the following keys to configure the `webhook` storage type.

Name | Data Type | Description
:--- | :--- | :---
-`opensearch_security.audit.config.webhook.url` | String | The HTTP or HTTPS URL to send the logs to.
-`opensearch_security.audit.config.webhook.ssl.verify` | Boolean | If true, the TLS certificate provided by the endpoint (if any) will be verified. If set to false, no verification is performed. You can disable this check if you use self-signed certificates.
-`opensearch_security.audit.config.webhook.ssl.pemtrustedcas_filepath` | String | The path to the trusted certificate against which the webhook's TLS certificate is validated.
-`opensearch_security.audit.config.webhook.ssl.pemtrustedcas_content` | String | Same as `opensearch_security.audit.config.webhook.ssl.pemtrustedcas_filepath`, but you can configure the Base64-encoded certificate content directly.
-`opensearch_security.audit.config.webhook.format` | String | The format in which the audit log message is logged; one of `URL_PARAMETER_GET`, `URL_PARAMETER_POST`, `TEXT`, `JSON`, `SLACK`. See [Formats](#formats).
+`plugins.security.audit.config.webhook.url` | String | The HTTP or HTTPS URL to send the logs to.
+`plugins.security.audit.config.webhook.ssl.verify` | Boolean | If true, the TLS certificate provided by the endpoint (if any) will be verified. If set to false, no verification is performed. You can disable this check if you use self-signed certificates.
+`plugins.security.audit.config.webhook.ssl.pemtrustedcas_filepath` | String | The path to the trusted certificate against which the webhook's TLS certificate is validated.
+`plugins.security.audit.config.webhook.ssl.pemtrustedcas_content` | String | Same as `plugins.security.audit.config.webhook.ssl.pemtrustedcas_filepath`, but you can configure the Base64-encoded certificate content directly.
+`plugins.security.audit.config.webhook.format` | String | The format in which the audit log message is logged; one of `URL_PARAMETER_GET`, `URL_PARAMETER_POST`, `TEXT`, `JSON`, `SLACK`. See [Formats](#formats).

### Formats

@@ -102,8 +102,8 @@ Format | Description

The `log4j` storage type lets you specify the name of the logger and log level.

```yml
-opensearch_security.audit.config.log4j.logger_name: audit
-opensearch_security.audit.config.log4j.level: INFO
+plugins.security.audit.config.log4j.logger_name: audit
+plugins.security.audit.config.log4j.level: INFO
```

By default, the security plugin uses the logger name `audit` and logs the events on `INFO` level. Audit events are stored in JSON format.
@@ -19,7 +19,7 @@ Another benefit of client certificate authentication is you can use it along wit

To enable client certificate authentication, you must first set `clientauth_mode` in `opensearch.yml` to either `OPTIONAL` or `REQUIRE`:

```yml
-opensearch_security.ssl.http.clientauth_mode: OPTIONAL
+plugins.security.ssl.http.clientauth_mode: OPTIONAL
```

Next, enable client certificate authentication in the `client_auth_domain` section of `config.yml`.
@@ -151,15 +151,15 @@ Due to the nature of Kerberos, you must define some settings in `opensearch.yml`

In `opensearch.yml`, define the following:

```yml
-opensearch_security.kerberos.krb5_filepath: '/etc/krb5.conf'
-opensearch_security.kerberos.acceptor_keytab_filepath: 'eskeytab.tab'
+plugins.security.kerberos.krb5_filepath: '/etc/krb5.conf'
+plugins.security.kerberos.acceptor_keytab_filepath: 'eskeytab.tab'
```

-`opensearch_security.kerberos.krb5_filepath` defines the path to your Kerberos configuration file. This file contains various settings regarding your Kerberos installation, for example, the realm names, hostnames, and ports of the Kerberos key distribution center (KDC).
+`plugins.security.kerberos.krb5_filepath` defines the path to your Kerberos configuration file. This file contains various settings regarding your Kerberos installation, for example, the realm names, hostnames, and ports of the Kerberos key distribution center (KDC).

-`opensearch_security.kerberos.acceptor_keytab_filepath` defines the path to the keytab file, which contains the principal that the security plugin uses to issue requests against Kerberos.
+`plugins.security.kerberos.acceptor_keytab_filepath` defines the path to the keytab file, which contains the principal that the security plugin uses to issue requests against Kerberos.

-`opensearch_security.kerberos.acceptor_principal: 'HTTP/localhost'` defines the principal that the security plugin uses to issue requests against Kerberos. This value must be present in the keytab file.
+`plugins.security.kerberos.acceptor_principal: 'HTTP/localhost'` defines the principal that the security plugin uses to issue requests against Kerberos. This value must be present in the keytab file.

Due to security restrictions, the keytab file must be placed in `config` or a subdirectory, and the path in `opensearch.yml` must be relative, not absolute.
{: .warning }
@@ -273,7 +273,7 @@ eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJsb2dnZWRJbkFzIjoiYWRtaW4iLCJpYXQiOjE0MjI

### Configure JSON web tokens

-If JSON web tokens are the only authentication method that you use, disable the user cache by setting `opensearch_security.cache.ttl_minutes: 0`.
+If JSON web tokens are the only authentication method that you use, disable the user cache by setting `plugins.security.cache.ttl_minutes: 0`.
{: .warning }

Set up an authentication domain and choose `jwt` as the HTTP authentication type. Because the tokens already contain all required information to verify the request, `challenge` must be set to `false` and `authentication_backend` to `noop`.
@@ -11,10 +11,10 @@ nav_order: 99

You might want to temporarily disable the security plugin to make testing or internal usage more straightforward. To disable the plugin, add the following line in `opensearch.yml`:

```yml
-opensearch_security.disabled: true
+plugins.security.disabled: true
```

-A more permanent option is to remove the security plugin entirely. Delete the `plugins/opensearch-security` folder on all nodes, and delete the `opensearch_security` configuration entries from `opensearch.yml`.
+A more permanent option is to remove the security plugin entirely. Delete the `plugins/opensearch-security` folder on all nodes, and delete the `plugins.security.*` configuration entries from `opensearch.yml`.

To perform these steps on the Docker image, see [Customize the Docker image](../../../opensearch/install/docker/#customize-the-docker-image).
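After a rename like the one above, a hypothetical migration check could grep `opensearch.yml` for leftover legacy keys. The temp file, sample content, and key prefixes are this sketch's assumptions:

```shell
# Write a sample config to a temp file so the check runs offline.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
plugins.security.disabled: true
EOF

# Fail loudly if any legacy opensearch_security.* keys remain.
if grep -q '^opensearch_security\.' "$CONF"; then
  echo "legacy keys found"
else
  echo "no legacy keys"
fi
```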
@@ -89,7 +89,7 @@ Just like the root certificate, use the `-days` option to specify an expiration
 Follow the steps in [Generate an admin certificate](#generate-an-admin-certificate) with new file names to generate a new certificate for each node and as many client certificates as you need. Each certificate should use its own private key.
 
-If you generate node certificates and have `opensearch_security.ssl.transport.enforce_hostname_verification` set to `true` (default), be sure to specify a common name (CN) for the certificate that matches the hostname of the intended node. If you want to use the same node certificate on all nodes (not recommended), set hostname verification to `false`. For more information, see [Configure TLS certificates](../tls/#advanced-hostname-verification-and-dns-lookup).
+If you generate node certificates and have `plugins.security.ssl.transport.enforce_hostname_verification` set to `true` (default), be sure to specify a common name (CN) for the certificate that matches the hostname of the intended node. If you want to use the same node certificate on all nodes (not recommended), set hostname verification to `false`. For more information, see [Configure TLS certificates](../tls/#advanced-hostname-verification-and-dns-lookup).
 
 ### Sample script
@@ -134,9 +134,9 @@ openssl req -new -key node-key.pem -subj "/C=CA/ST=ONTARIO/L=TORONTO/O=ORG/OU=UN
 If you created admin and node certificates, you must specify their distinguished names (DNs) in `opensearch.yml` on all nodes:
 
 ```yml
-opensearch_security.authcz.admin_dn:
+plugins.security.authcz.admin_dn:
   - 'CN=ADMIN,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
-opensearch_security.nodes_dn:
+plugins.security.nodes_dn:
   - 'CN=node1.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
   - 'CN=node2.example.com,OU=UNIT,O=ORG,L=TORONTO,ST=ONTARIO,C=CA'
 ```
@@ -154,8 +154,8 @@ Name | Description
 By default, the security plugin validates the TLS certificate of the LDAP servers against the root CA configured in `opensearch.yml`, either as a PEM certificate or a truststore:
 
 ```
-opensearch_security.ssl.transport.pemtrustedcas_filepath: ...
-opensearch_security.ssl.http.truststore_filepath: ...
+plugins.security.ssl.transport.pemtrustedcas_filepath: ...
+plugins.security.ssl.http.truststore_filepath: ...
 ```
 
 If your server uses a certificate signed by a different CA, import this CA into your truststore or add it to your trusted CA file on each node.
@@ -244,7 +244,7 @@ Name | Description
 Activate OpenID Connect by adding the following to `opensearch_dashboards.yml`:
 
 ```
-opensearch_security.auth.type: "openid"
+plugins.security.auth.type: "openid"
 ```
@@ -266,29 +266,29 @@ OpenID Connect providers usually publish their configuration in JSON format unde
 Name | Description
 :--- | :---
-`opensearch_security.openid.connect_url` | The URL where the IdP publishes the OpenID metadata. Required.
-`opensearch_security.openid.client_id` | The ID of the OpenID Connect client configured in your IdP. Required.
-`opensearch_security.openid.client_secret` | The client secret of the OpenID Connect client configured in your IdP. Required.
-`opensearch_security.openid.scope` | The [scope of the identity token](https://auth0.com/docs/scopes/current) issued by the IdP. Optional. Default is `openid profile email address phone`.
-`opensearch_security.openid.header` | HTTP header name of the JWT token. Optional. Default is `Authorization`.
-`opensearch_security.openid.logout_url` | The logout URL of your IdP. Optional. Only necessary if your IdP does not publish the logout URL in its metadata.
-`opensearch_security.openid.base_redirect_url` | The base of the redirect URL that will be sent to your IdP. Optional. Only necessary when OpenSearch Dashboards is behind a reverse proxy, in which case it should be different than `server.host` and `server.port` in `opensearch_dashboards.yml`.
+`plugins.security.openid.connect_url` | The URL where the IdP publishes the OpenID metadata. Required.
+`plugins.security.openid.client_id` | The ID of the OpenID Connect client configured in your IdP. Required.
+`plugins.security.openid.client_secret` | The client secret of the OpenID Connect client configured in your IdP. Required.
+`plugins.security.openid.scope` | The [scope of the identity token](https://auth0.com/docs/scopes/current) issued by the IdP. Optional. Default is `openid profile email address phone`.
+`plugins.security.openid.header` | HTTP header name of the JWT token. Optional. Default is `Authorization`.
+`plugins.security.openid.logout_url` | The logout URL of your IdP. Optional. Only necessary if your IdP does not publish the logout URL in its metadata.
+`plugins.security.openid.base_redirect_url` | The base of the redirect URL that will be sent to your IdP. Optional. Only necessary when OpenSearch Dashboards is behind a reverse proxy, in which case it should be different than `server.host` and `server.port` in `opensearch_dashboards.yml`.
 
 
 ### Configuration example
 
 ```yml
 # Enable OpenID authentication
-opensearch_security.auth.type: "openid"
+plugins.security.auth.type: "openid"
 
 # The IdP metadata endpoint
-opensearch_security.openid.connect_url: "http://keycloak.example.com:8080/auth/realms/master/.well-known/openid-configuration"
+plugins.security.openid.connect_url: "http://keycloak.example.com:8080/auth/realms/master/.well-known/openid-configuration"
 
 # The ID of the OpenID Connect client in your IdP
-opensearch_security.openid.client_id: "opensearch-dashboards-sso"
+plugins.security.openid.client_id: "opensearch-dashboards-sso"
 
 # The client secret of the OpenID Connect client
-opensearch_security.openid.client_secret: "a59c51f5-f052-4740-a3b0-e14ba355b520"
+plugins.security.openid.client_secret: "a59c51f5-f052-4740-a3b0-e14ba355b520"
 
 # Use HTTPS instead of HTTP
 opensearch.url: "https://<hostname>.com:<http port>"
@@ -202,7 +202,7 @@ opensearch.requestHeadersWhitelist: ["securitytenant","Authorization","x-forward
 You must also enable the authentication type in `opensearch_dashboards.yml`:
 
 ```yml
-opensearch_security.auth.type: "proxy"
-opensearch_security.proxycache.user_header: "x-proxy-user"
-opensearch_security.proxycache.roles_header: "x-proxy-roles"
+plugins.security.auth.type: "proxy"
+plugins.security.proxycache.user_header: "x-proxy-user"
+plugins.security.proxycache.roles_header: "x-proxy-roles"
 ```
@@ -302,7 +302,7 @@ authc:
 Because most of the SAML-specific configuration is done in the security plugin, just activate SAML in your `opensearch_dashboards.yml` by adding the following:
 
 ```
-opensearch_security.auth.type: "saml"
+plugins.security.auth.type: "saml"
 ```
 
 In addition, the OpenSearch Dashboards endpoint for validating the SAML assertions must be whitelisted:
@@ -20,7 +20,7 @@ After the `.opensearch_security` index is initialized, you can use OpenSearch Da
 You can configure all certificates that should have admin privileges in `opensearch.yml` by specifying respective distinguished names (DNs). If you use the demo certificates, for example, you can use the `kirk` certificate:
 
 ```yml
-opensearch_security.authcz.admin_dn:
+plugins.security.authcz.admin_dn:
   - CN=kirk,OU=client,O=client,L=test,C=DE
 ```
@@ -13,8 +13,8 @@ By default, OpenSearch has a protected system index, `.opensearch_security`, whi
 You can add additional system indices in `opensearch.yml`. In addition to automatically creating `.opensearch_security`, the demo configuration adds several indices for the various OpenSearch plugins that integrate with the security plugin:
 
 ```yml
-opendistro_security.system_indices.enabled: true
-opendistro_security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opendistro-asynchronous-search-response*"]
+plugins.security.system_indices.enabled: true
+plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opendistro-asynchronous-search-response*"]
 ```
 
 To access these indices, you must authenticate with an [admin certificate](../tls/#configure-admin-certificates):
@@ -23,4 +23,4 @@ To access these indices, you must authenticate with an [admin certificate](../tl
 curl -k --cert ./kirk.pem --key ./kirk-key.pem -XGET 'https://localhost:9200/.opensearch_security/_search'
 ```
 
-The alternative is to remove indices from the `opensearch_security.system_indices.indices` list on each node and restart OpenSearch.
+The alternative is to remove indices from the `plugins.security.system_indices.indices` list on each node and restart OpenSearch.
@@ -23,20 +23,20 @@ The following tables contain the settings you can use to configure the location
 Name | Description
 :--- | :---
-`opensearch_security.ssl.transport.pemkey_filepath` | Path to the certificate's key file (PKCS \#8), which must be under the `config` directory, specified using a relative path. Required.
-`opensearch_security.ssl.transport.pemkey_password` | Key password. Omit this setting if the key has no password. Optional.
-`opensearch_security.ssl.transport.pemcert_filepath` | Path to the X.509 node certificate chain (PEM format), which must be under the `config` directory, specified using a relative path. Required.
-`opensearch_security.ssl.transport.pemtrustedcas_filepath` | Path to the root CAs (PEM format), which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.transport.pemkey_filepath` | Path to the certificate's key file (PKCS \#8), which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.transport.pemkey_password` | Key password. Omit this setting if the key has no password. Optional.
+`plugins.security.ssl.transport.pemcert_filepath` | Path to the X.509 node certificate chain (PEM format), which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.transport.pemtrustedcas_filepath` | Path to the root CAs (PEM format), which must be under the `config` directory, specified using a relative path. Required.
 
 
 ### REST layer TLS
 
 Name | Description
 :--- | :---
-`opensearch_security.ssl.http.pemkey_filepath` | Path to the certificate's key file (PKCS \#8), which must be under the `config` directory, specified using a relative path. Required.
-`opensearch_security.ssl.http.pemkey_password` | Key password. Omit this setting if the key has no password. Optional.
-`opensearch_security.ssl.http.pemcert_filepath` | Path to the X.509 node certificate chain (PEM format), which must be under the `config` directory, specified using a relative path. Required.
-`opensearch_security.ssl.http.pemtrustedcas_filepath` | Path to the root CAs (PEM format), which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.http.pemkey_filepath` | Path to the certificate's key file (PKCS \#8), which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.http.pemkey_password` | Key password. Omit this setting if the key has no password. Optional.
+`plugins.security.ssl.http.pemcert_filepath` | Path to the X.509 node certificate chain (PEM format), which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.http.pemtrustedcas_filepath` | Path to the root CAs (PEM format), which must be under the `config` directory, specified using a relative path. Required.
 
 
 ## Keystore and truststore files
@@ -50,29 +50,29 @@ The following settings configure the location and password of your keystore and
 Name | Description
 :--- | :---
-`opensearch_security.ssl.transport.keystore_type` | The type of the keystore file, JKS or PKCS12/PFX. Optional. Default is JKS.
-`opensearch_security.ssl.transport.keystore_filepath` | Path to the keystore file, which must be under the `config` directory, specified using a relative path. Required.
-`opensearch_security.ssl.transport.keystore_alias: my_alias` | Alias name. Optional. Default is the first alias.
-`opensearch_security.ssl.transport.keystore_password` | Keystore password. Default is `changeit`.
-`opensearch_security.ssl.transport.truststore_type` | The type of the truststore file, JKS or PKCS12/PFX. Default is JKS.
-`opensearch_security.ssl.transport.truststore_filepath` | Path to the truststore file, which must be under the `config` directory, specified using a relative path. Required.
-`opensearch_security.ssl.transport.truststore_alias` | Alias name. Optional. Default is all certificates.
-`opensearch_security.ssl.transport.truststore_password` | Truststore password. Default is `changeit`.
+`plugins.security.ssl.transport.keystore_type` | The type of the keystore file, JKS or PKCS12/PFX. Optional. Default is JKS.
+`plugins.security.ssl.transport.keystore_filepath` | Path to the keystore file, which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.transport.keystore_alias: my_alias` | Alias name. Optional. Default is the first alias.
+`plugins.security.ssl.transport.keystore_password` | Keystore password. Default is `changeit`.
+`plugins.security.ssl.transport.truststore_type` | The type of the truststore file, JKS or PKCS12/PFX. Default is JKS.
+`plugins.security.ssl.transport.truststore_filepath` | Path to the truststore file, which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.transport.truststore_alias` | Alias name. Optional. Default is all certificates.
+`plugins.security.ssl.transport.truststore_password` | Truststore password. Default is `changeit`.
 
 
 ### REST layer TLS
 
 Name | Description
 :--- | :---
-`opensearch_security.ssl.http.enabled` | Whether to enable TLS on the REST layer. If enabled, only HTTPS is allowed. Optional. Default is false.
-`opensearch_security.ssl.http.keystore_type` | The type of the keystore file, JKS or PKCS12/PFX. Optional. Default is JKS.
-`opensearch_security.ssl.http.keystore_filepath` | Path to the keystore file, which must be under the `config` directory, specified using a relative path. Required.
-`opensearch_security.ssl.http.keystore_alias` | Alias name. Optional. Default is the first alias.
-`opensearch_security.ssl.http.keystore_password` | Keystore password. Default is `changeit`.
-`opensearch_security.ssl.http.truststore_type` | The type of the truststore file, JKS or PKCS12/PFX. Default is JKS.
-`opensearch_security.ssl.http.truststore_filepath` | Path to the truststore file, which must be under the `config` directory, specified using a relative path. Required.
-`opensearch_security.ssl.http.truststore_alias` | Alias name. Optional. Default is all certificates.
-`opensearch_security.ssl.http.truststore_password` | Truststore password. Default is `changeit`.
+`plugins.security.ssl.http.enabled` | Whether to enable TLS on the REST layer. If enabled, only HTTPS is allowed. Optional. Default is false.
+`plugins.security.ssl.http.keystore_type` | The type of the keystore file, JKS or PKCS12/PFX. Optional. Default is JKS.
+`plugins.security.ssl.http.keystore_filepath` | Path to the keystore file, which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.http.keystore_alias` | Alias name. Optional. Default is the first alias.
+`plugins.security.ssl.http.keystore_password` | Keystore password. Default is `changeit`.
+`plugins.security.ssl.http.truststore_type` | The type of the truststore file, JKS or PKCS12/PFX. Default is JKS.
+`plugins.security.ssl.http.truststore_filepath` | Path to the truststore file, which must be under the `config` directory, specified using a relative path. Required.
+`plugins.security.ssl.http.truststore_alias` | Alias name. Optional. Default is all certificates.
+`plugins.security.ssl.http.truststore_password` | Truststore password. Default is `changeit`.
 
 
 ## Configure node certificates
@@ -80,7 +80,7 @@ Name | Description
 The security plugin needs to identify inter-cluster requests (i.e. requests between the nodes). The simplest way of configuring node certificates is to list the Distinguished Names (DNs) of these certificates in `opensearch.yml`. All DNs must be included in `opensearch.yml` on all nodes. The security plugin supports wildcards and regular expressions:
 
 ```yml
-opensearch_security.nodes_dn:
+plugins.security.nodes_dn:
   - 'CN=node.other.com,OU=SSL,O=Test,L=Test,C=DE'
   - 'CN=*.example.com,OU=SSL,O=Test,L=Test,C=DE'
   - 'CN=elk-devcluster*'
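The wildcard matching of `nodes_dn` entries shown above can be sketched in Python. This is a hypothetical illustration, not plugin code; the helper name `is_trusted_node` is invented, and the real plugin additionally supports regular expressions:

```python
from fnmatch import fnmatch

# Patterns from the nodes_dn example above. Wildcard entries are matched
# against the DN string of a node's certificate.
NODES_DN = [
    "CN=node.other.com,OU=SSL,O=Test,L=Test,C=DE",
    "CN=*.example.com,OU=SSL,O=Test,L=Test,C=DE",
    "CN=elk-devcluster*",
]

def is_trusted_node(cert_dn: str) -> bool:
    """Return True if the certificate DN matches any configured pattern."""
    return any(fnmatch(cert_dn, pattern) for pattern in NODES_DN)

print(is_trusted_node("CN=node1.example.com,OU=SSL,O=Test,L=Test,C=DE"))  # True
print(is_trusted_node("CN=rogue,OU=Evil,O=Hack,C=XX"))                    # False
```

Because matching is string-based, the DN in the certificate must serialize with the same attribute order and spacing as the configured pattern.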
@@ -95,7 +95,7 @@ If your node certificates have an Object ID (OID) identifier in the SAN section,
 Admin certificates are regular client certificates that have elevated rights to perform administrative tasks. You need an admin certificate to change the security plugin configuration using `plugins/opensearch-security/tools/securityadmin.sh` or the REST API. Admin certificates are configured in `opensearch.yml` by stating their DN(s):
 
 ```yml
-opensearch_security.authcz.admin_dn:
+plugins.security.authcz.admin_dn:
   - CN=admin,OU=SSL,O=Test,L=Test,C=DE
 ```
@@ -112,8 +112,8 @@ If OpenSSL is enabled, but for one reason or another the installation does not w
 Name | Description
 :--- | :---
-`opensearch_security.ssl.transport.enable_openssl_if_available` | Enable OpenSSL on the transport layer if available. Optional. Default is true.
-`opensearch_security.ssl.http.enable_openssl_if_available` | Enable OpenSSL on the REST layer if available. Optional. Default is true.
+`plugins.security.ssl.transport.enable_openssl_if_available` | Enable OpenSSL on the transport layer if available. Optional. Default is true.
+`plugins.security.ssl.http.enable_openssl_if_available` | Enable OpenSSL on the REST layer if available. Optional. Default is true.
 
 
 {% comment %}
@@ -144,8 +144,8 @@ In addition, when `resolve_hostnames` is enabled, the security plugin resolves t
 Name | Description
 :--- | :---
-`opensearch_security.ssl.transport.enforce_hostname_verification` | Whether to verify hostnames on the transport layer. Optional. Default is true.
-`opensearch_security.ssl.transport.resolve_hostname` | Whether to resolve hostnames against DNS on the transport layer. Optional. Default is true. Only works if hostname verification is also enabled.
+`plugins.security.ssl.transport.enforce_hostname_verification` | Whether to verify hostnames on the transport layer. Optional. Default is true.
+`plugins.security.ssl.transport.resolve_hostname` | Whether to resolve hostnames against DNS on the transport layer. Optional. Default is true. Only works if hostname verification is also enabled.
 
 
 ## (Advanced) Client authentication
@@ -168,7 +168,7 @@ You can configure the client authentication mode by using the following setting:
 Name | Description
 :--- | :---
-opensearch_security.ssl.http.clientauth_mode | The TLS client authentication mode to use. Can be one of `NONE`, `OPTIONAL` (default) or `REQUIRE`. Optional.
+plugins.security.ssl.http.clientauth_mode | The TLS client authentication mode to use. Can be one of `NONE`, `OPTIONAL` (default) or `REQUIRE`. Optional.
 
 
 ## (Advanced) Enabled ciphers and protocols
@@ -179,18 +179,18 @@ If this setting is not enabled, the ciphers and TLS versions are negotiated betw
 Name | Data Type | Description
 :--- | :--- | :---
-`opensearch_security.ssl.http.enabled_ciphers` | Array | Enabled TLS cipher suites for the REST layer. Only Java format is supported.
-`opensearch_security.ssl.http.enabled_protocols` | Array | Enabled TLS protocols for the REST layer. Only Java format is supported.
-`opensearch_security.ssl.transport.enabled_ciphers` | Array | Enabled TLS cipher suites for the transport layer. Only Java format is supported.
-`opensearch_security.ssl.transport.enabled_protocols` | Array | Enabled TLS protocols for the transport layer. Only Java format is supported.
+`plugins.security.ssl.http.enabled_ciphers` | Array | Enabled TLS cipher suites for the REST layer. Only Java format is supported.
+`plugins.security.ssl.http.enabled_protocols` | Array | Enabled TLS protocols for the REST layer. Only Java format is supported.
+`plugins.security.ssl.transport.enabled_ciphers` | Array | Enabled TLS cipher suites for the transport layer. Only Java format is supported.
+`plugins.security.ssl.transport.enabled_protocols` | Array | Enabled TLS protocols for the transport layer. Only Java format is supported.
 
 ### Example settings
 
 ```yml
-opensearch_security.ssl.http.enabled_ciphers:
+plugins.security.ssl.http.enabled_ciphers:
   - "TLS_DHE_RSA_WITH_AES_256_CBC_SHA"
   - "TLS_DHE_DSS_WITH_AES_128_CBC_SHA256"
-opensearch_security.ssl.http.enabled_protocols:
+plugins.security.ssl.http.enabled_protocols:
   - "TLSv1.1"
   - "TLSv1.2"
 ```
@@ -198,7 +198,7 @@ opensearch_security.ssl.http.enabled_protocols:
 Because it is insecure, the security plugin disables `TLSv1` by default. If you need to use `TLSv1` and accept the risks, you can still enable it:
 
 ```yml
-opensearch_security.ssl.http.enabled_protocols:
+plugins.security.ssl.http.enabled_protocols:
   - "TLSv1"
   - "TLSv1.1"
   - "TLSv1.2"
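On the client side, the same protocol restriction can be expressed with Python's standard `ssl` module. This is a hypothetical client-side sketch, not part of the security plugin, mirroring the plugin's default of refusing `TLSv1` handshakes:

```python
import ssl

# A client context that refuses TLSv1 and TLSv1.1 handshakes, matching
# a server whose enabled_protocols list starts at TLSv1.2.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version)
```

A handshake against a server that only offers `TLSv1` would then fail on the client rather than silently negotiating an insecure protocol.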
@@ -81,12 +81,12 @@ SELECT *
 FROM accounts
 ```
 
-| id | account_number | firstname | gender | city | balance | employer | state | email | address | lastname | age
-:--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
-0 | 1 | Amber | M | Brogan | 39225 | Pyrami | IL | amberduke@pyrami.com | 880 Holmes Lane | Duke | 32
-1 | 16 | Hattie | M | Dante | 5686 | Netagy | TN | hattiebond@netagy.com | 671 Bristol Street | Bond | 36
-2 | 13 | Nanette | F | Nogal | 32838 | Quility | VA | nanettebates@quility.com | 789 Madison Street | Bates | 28
-3 | 18 | Dale | M | Orick | 4180 | | MD | daleadams@boink.com | 467 Hutchinson Court | Adams | 33
+| account_number | firstname | gender | city | balance | employer | state | email | address | lastname | age
+| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
+| 1 | Amber | M | Brogan | 39225 | Pyrami | IL | amberduke@pyrami.com | 880 Holmes Lane | Duke | 32
+| 16 | Hattie | M | Dante | 5686 | Netagy | TN | hattiebond@netagy.com | 671 Bristol Street | Bond | 36
+| 13 | Nanette | F | Nogal | 32838 | Quility | VA | nanettebates@quility.com | 789 Madison Street | Bates | 28
+| 18 | Dale | M | Orick | 4180 | | MD | daleadams@boink.com | 467 Hutchinson Court | Adams | 33
 
 *Example 2*: Use field name(s) to retrieve only specific fields:
@@ -95,12 +95,12 @@ SELECT firstname, lastname
 FROM accounts
 ```
 
-| id | firstname | lastname
-:--- | :--- | :---
-0 | Amber | Duke
-1 | Hattie | Bond
-2 | Nanette | Bates
-3 | Dale | Adams
+| firstname | lastname
+| :--- | :---
+| Amber | Duke
+| Hattie | Bond
+| Nanette | Bates
+| Dale | Adams
 
 *Example 3*: Use field aliases instead of field names. Field aliases are used to make field names more readable:
@@ -109,12 +109,12 @@ SELECT account_number AS num
 FROM accounts
 ```
 
-| id | num
-:--- | :---
-0 | 1
-1 | 6
-2 | 13
-3 | 18
+| num
+:---
+| 1
+| 6
+| 13
+| 18
 
 *Example 4*: Use the `DISTINCT` clause to get back only unique field values. You can specify one or more field names:
@@ -123,12 +123,12 @@ SELECT DISTINCT age
 FROM accounts
 ```
 
-| id | age
-:--- | :---
-0 | 28
-1 | 32
-2 | 33
-3 | 36
+| age
+:---
+| 28
+| 32
+| 33
+| 36
 
 ## From
|
@ -156,12 +156,12 @@ SELECT account_number, acc.age
|
|||
FROM accounts acc
|
||||
```
|
||||
|
||||
| id | account_number | age
|
||||
:--- | :--- | :---
|
||||
0 | 1 | 32
|
||||
1 | 6 | 36
|
||||
2 | 13 | 28
|
||||
3 | 18 | 33
|
||||
| account_number | age
|
||||
| :--- | :---
|
||||
| 1 | 32
|
||||
| 6 | 36
|
||||
| 13 | 28
|
||||
| 18 | 33
|
||||
|
||||
*Example 2*: Use index patterns to query indices that match a specific pattern:
|
||||
|
||||
|
@@ -170,12 +170,12 @@ SELECT account_number
 FROM account*
 ```
 
-| id | account_number
-:--- | :---
-0 | 1
-1 | 6
-2 | 13
-3 | 18
+| account_number
+:---
+| 1
+| 6
+| 13
+| 18
 
 ## Where
|
@ -205,11 +205,11 @@ FROM accounts
|
|||
WHERE account_number = 1
|
||||
```
|
||||
|
||||
| id | account_number
|
||||
:--- | :---
|
||||
0 | 1
|
||||
| account_number
|
||||
| :---
|
||||
| 1
|
||||
|
||||
*Example 2*: OpenSearch allows for flexible schema so documents in an index may have different fields. Use `IS NULL` or `IS NOT NULL` to retrieve only missing fields or existing fields. We do not differentiate between missing fields and fields explicitly set to `NULL`:
|
||||
*Example 2*: OpenSearch allows for flexible schema, so documents in an index may have different fields. Use `IS NULL` or `IS NOT NULL` to retrieve only missing fields or existing fields. We do not differentiate between missing fields and fields explicitly set to `NULL`:
|
||||
|
||||
```sql
|
||||
SELECT account_number, employer
|
||||
|
@@ -217,9 +217,9 @@ FROM accounts
 WHERE employer IS NULL
 ```
 
-| id | account_number | employer
-:--- | :--- | :---
-0 | 18 |
+| account_number | employer
+| :--- | :---
+| 18 |
 
 *Example 3*: Deletes a document that satisfies the predicates in the `WHERE` clause:
@@ -307,12 +307,12 @@ FROM accounts
 ORDER BY account_number DESC
 ```
 
-| id | account_number
-:--- | :---
-0 | 18
-1 | 13
-2 | 6
-3 | 1
+| account_number
+| :---
+| 18
+| 13
+| 6
+| 1
 
 *Example 2*: Specify if documents with missing fields are to be put at the beginning or at the end of the results. The default behavior of OpenSearch is to return nulls or missing fields at the end. To push them before non-nulls, use the `IS NOT NULL` operator:
@@ -322,12 +322,12 @@ FROM accounts
 ORDER BY employer IS NOT NULL
 ```
 
-| id | employer
-:--- | :---
-0 |
-1 | Netagy
-2 | Pyrami
-3 | Quility
+| employer
+| :---
+|
+| Netagy
+| Pyrami
+| Quility
 
 ## Limit
|
@ -341,9 +341,9 @@ FROM accounts
|
|||
ORDER BY account_number LIMIT 1
|
||||
```
|
||||
|
||||
| id | account_number
|
||||
:--- | :---
|
||||
0 | 1
|
||||
| account_number
|
||||
| :---
|
||||
| 1
|
||||
|
||||
*Example 2*: If you pass in two arguments, the first is mapped to the `from` parameter and the second to the `size` parameter in OpenSearch. You can use this for simple pagination for small indices, as it's inefficient for large indices.
|
||||
Use `ORDER BY` to ensure the same order between pages:
|
||||
|
@@ -354,6 +354,6 @@ FROM accounts
 ORDER BY account_number LIMIT 1, 1
 ```
 
-| id | account_number
-:--- | :---
-0 | 6
+| account_number
+| :---
+| 6
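The two-argument `LIMIT` described above maps onto the `from` and `size` parameters of an OpenSearch search request. A minimal sketch of that mapping (the helper name `limit_to_search_params` is hypothetical, not part of the SQL plugin):

```python
def limit_to_search_params(*args):
    """Translate SQL LIMIT arguments to OpenSearch from/size parameters."""
    if len(args) == 1:          # LIMIT size
        return {"from": 0, "size": args[0]}
    if len(args) == 2:          # LIMIT offset, size
        return {"from": args[0], "size": args[1]}
    raise ValueError("LIMIT takes one or two arguments")

print(limit_to_search_params(1))     # {'from': 0, 'size': 1}
print(limit_to_search_params(1, 1))  # {'from': 1, 'size': 1}
```

Because `from` skips documents server-side on every request, this style of pagination gets more expensive as the offset grows, which is why it suits only small indices.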
@@ -142,7 +142,7 @@ The `explain` output is complicated, because a `JOIN` clause is associated with
 Result set:
 
 | a.account_number | a.firstname | a.lastname | e.id | e.name
-:--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
+:--- | :--- | :--- | :--- | :---
 6 | Hattie | Bond | 6 | Jane Smith
 
 ### Example 2: Cross join
@@ -167,7 +167,7 @@ JOIN employees_nested e
 Result set:
 
 | a.account_number | a.firstname | a.lastname | e.id | e.name
-:--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
+:--- | :--- | :--- | :--- | :---
 1 | Amber | Duke | 3 | Bob Smith
 1 | Amber | Duke | 4 | Susan Smith
 1 | Amber | Duke | 6 | Jane Smith
@@ -199,7 +199,7 @@ LEFT JOIN employees_nested e
 Result set:
 
 | a.account_number | a.firstname | a.lastname | e.id | e.name
-:--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
+:--- | :--- | :--- | :--- | :---
 1 | Amber | Duke | null | null
 6 | Hattie | Bond | 6 | Jane Smith
 13 | Nanette | Bates | null | null
@@ -350,7 +350,7 @@ Explain:
 Result set:
 
 | a1.firstname | a1.lastname | a1.balance
-:--- | :--- | :--- | :--- | :--- | :---
+:--- | :--- | :---
 Amber | Duke | 39225
 Nanette | Bates | 32838
@@ -79,7 +79,7 @@ timestamp | `yyyy-MM-dd hh:mm:ss[.fraction]` | `0001-01-01 00:00:01.000000` UTC
 The `interval` type represents a temporal duration or a period.
 
 | Type | Syntax
-:--- | :--- | :---
+:--- | :---
 interval | `INTERVAL expr unit`
 
 The `expr` unit is any expression that eventually iterates to a quantity value. It represents a unit for interpreting the quantity, including `MICROSECOND`, `SECOND`, `MINUTE`, `HOUR`, `DAY`, `WEEK`, `MONTH`, `QUARTER`, and `YEAR`. The `INTERVAL` keyword and the unit specifier are not case sensitive.
@@ -11,6 +11,19 @@ nav_order: 12
 The `DELETE` statement deletes documents that satisfy the predicates in the `WHERE` clause.
+If you don't specify the `WHERE` clause, all documents are deleted.
+
+### Setting
+
+The `DELETE` statement is disabled by default. To enable the `DELETE` functionality in SQL, you need to update the configuration by sending the following request:
+
+```json
+PUT _plugins/_query/settings
+{
+  "transient": {
+    "plugins.sql.delete.enabled": "true"
+  }
+}
+```
 
 ### Syntax
 
 Rule `singleDeleteStatement`:
@@ -26,7 +26,7 @@ You can send HTTP GET request with your query embedded in URL parameter.
 SQL query:
 
 ```console
->> curl -H 'Content-Type: application/json' -X GET localhost:9200/_opensearch/_sql?sql=SELECT * FROM accounts
+>> curl -H 'Content-Type: application/json' -X GET localhost:9200/_plugins/_sql?sql=SELECT * FROM accounts
 ```
 
 ## POST
@@ -40,7 +40,7 @@ You can also send HTTP POST request with your query in request body.
SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql -d '{
  "query" : "SELECT * FROM accounts"
}'
```

@@ -59,7 +59,7 @@ directly.
Explain query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql/_explain -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql/_explain -d '{
  "query" : "SELECT firstname, lastname FROM accounts WHERE age > 20"
}'
```

@@ -119,7 +119,7 @@ The `fetch_size` parameter is only supported for the JDBC response format.
SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql -d '{
  "fetch_size" : 5,
  "query" : "SELECT firstname, lastname FROM accounts WHERE age > 20 ORDER BY state ASC"
}'

@@ -171,7 +171,7 @@ Result set:
To fetch subsequent pages, use the `cursor` from the last response:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql -d '{
  "cursor": "d:eyJhIjp7fSwicyI6IkRYRjFaWEo1UVc1a1JtVjBZMmdCQUFBQUFBQUFBQU1XZWpkdFRFRkZUMlpTZEZkeFdsWnJkRlZoYnpaeVVRPT0iLCJjIjpbeyJuYW1lIjoiZmlyc3RuYW1lIiwidHlwZSI6InRleHQifSx7Im5hbWUiOiJsYXN0bmFtZSIsInR5cGUiOiJ0ZXh0In1dLCJmIjo1LCJpIjoiYWNjb3VudHMiLCJsIjo5NTF9"
}'
```

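The paging contract implied by these examples — an initial request with `fetch_size`, then follow-up requests that send back each returned `cursor` until the server stops returning one — can be sketched in Python. The `post_sql` callable below is a hypothetical stand-in for POSTing to `_plugins/_sql`, and the fake server only simulates the paging behavior, so the loop logic is self-contained:

```python
def fetch_all_rows(post_sql, query, fetch_size=5):
    """Collect every page of a paginated SQL response.

    `post_sql` is any callable taking a request body (dict) and
    returning a parsed response (dict) -- a stand-in for POSTing
    to `_plugins/_sql`.
    """
    response = post_sql({"fetch_size": fetch_size, "query": query})
    rows = list(response["datarows"])
    # Keep sending back the returned cursor until the server omits it;
    # the cursor context is cleared automatically on the last page.
    while "cursor" in response:
        response = post_sql({"cursor": response["cursor"]})
        rows.extend(response["datarows"])
    return rows

def make_fake_server(all_rows, page_size=2):
    # Simulates the paging contract: each response carries a cursor
    # until the result set is exhausted.
    def post_sql(body):
        start = int(body["cursor"]) if "cursor" in body else 0
        resp = {"datarows": all_rows[start:start + page_size]}
        if start + page_size < len(all_rows):
            resp["cursor"] = str(start + page_size)
        return resp
    return post_sql

rows = [["Abbey"], ["Amber"], ["Hattie"], ["Nanette"], ["Dale"]]
assert fetch_all_rows(make_fake_server(rows), "SELECT firstname FROM accounts") == rows
```

Against a real cluster, `post_sql` would issue the HTTP request shown above and the loop would be unchanged.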
@@ -182,7 +182,7 @@ The `datarows` can have more than the `fetch_size` number of records in case the

```json
{
  "cursor": "d:eyJhIjp7fSwicyI6IkRYRjFaWEo1UVc1a1JtVjBZMmdCQUFBQUFBQUFBQU1XZWpkdFRFRkZUMlpTZEZkeFdsWnJkRlZoYnpaeVVRPT0iLCJjIjpbeyJuYW1lIjoiZmlyc3RuYW1lIiwidHlwZSI6InRleHQifSx7Im5hbWUiOiJsYXN0bmFtZSIsInR5cGUiOiJ0ZXh0In1dLCJmIjo1LCJpIjoiYWNjb3VudHMabcde12345",
  "cursor": "d:eyJhIjp7fSwicyI6IkRYRjFaWEo1UVc1a1JtVjBZMmdCQUFBQUFBQUFBQU1XZWpkdFRFRkZUMlpTZEZkeFdsWnJkRlZoYnpaeVVRPT0iLCJjIjpbeyJuYW1lIjoiZmlyc3RuYW1lIiwidHlwZSI6InRleHQifSx7Im5hbWUiOiJsYXN0bmFtZSIsInR5cGUiOiJ0ZXh0In1dLCJmIjo1LCJpIjoiYWNjb3VudHMabcde12345",
  "datarows": [
    [
      "Abbey",

@@ -209,10 +209,10 @@ The `datarows` can have more than the `fetch_size` number of records in case the
```

The `cursor` context is automatically cleared on the last page.
To explicitly clear cursor context, use the `_opensearch/_sql/close endpoint` operation.
To explicitly clear the cursor context, use the `_plugins/_sql/close` endpoint:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql/close -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql/close -d '{
  "cursor": "d:eyJhIjp7fSwicyI6IkRYRjFaWEo1UVc1a1JtVjBZMmdCQUFBQUFBQUFBQU1XZWpkdFRFRkZUMlpTZEZkeFdsWnJkRlZoYnpaeVVRPT0iLCJjIjpbeyJuYW1lIjoiZmlyc3RuYW1lIiwidHlwZSI6InRleHQifSx7Im5hbWUiOiJsYXN0bmFtZSIsInR5cGUiOiJ0ZXh0In1dLCJmIjo1LCJpIjoiYWNjb3VudHMiLCJsIjo5NTF9"
}'
```

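The cursor value is opaque to clients, but the truncated examples above suggest a `d:`-prefixed, base64-encoded context blob. The exact layout is an engine implementation detail and may change; the sketch below round-trips a hypothetical payload only to illustrate why clients should treat the cursor as opaque and pass it back verbatim:

```python
import base64
import json

def encode_cursor(context: dict) -> str:
    # Hypothetical layout: "d:" prefix followed by base64-encoded JSON.
    blob = base64.b64encode(json.dumps(context).encode()).decode()
    return "d:" + blob

def decode_cursor(cursor: str) -> dict:
    # Clients normally never decode cursors; shown only for illustration.
    assert cursor.startswith("d:")
    return json.loads(base64.b64decode(cursor[2:]))

# A made-up context: fetch size, index name, and a total-hits count.
ctx = {"f": 5, "i": "accounts", "l": 951}
assert decode_cursor(encode_cursor(ctx)) == ctx
```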
@@ -13,17 +13,17 @@ OpenSearch SQL lets you write queries in SQL rather than the [OpenSearch query d

## Workbench

The easiest way to get familiar with the SQL plugin is to use **SQL Workbench** in OpenSearch Dashboards to test various queries. To learn more, see [Workbench](workbench/).
The easiest way to get familiar with the SQL plugin is to use **Query Workbench** in OpenSearch Dashboards to test various queries. To learn more, see [Workbench](workbench/).

![OpenSearch Dashboards SQL UI plugin](../images/sql.png)


## REST API

To use the SQL plugin with your own applications, send requests to `_opensearch/_sql`:
To use the SQL plugin with your own applications, send requests to `_plugins/_sql`:

```json
POST _opensearch/_sql
POST _plugins/_sql
{
  "query": "SELECT * FROM my-index LIMIT 50"
}

@@ -40,12 +40,12 @@ Column | Field
You can query multiple indices by listing them or using wildcards:

```json
POST _opensearch/_sql
POST _plugins/_sql
{
  "query": "SELECT * FROM my-index1,myindex2,myindex3 LIMIT 50"
}

POST _opensearch/_sql
POST _plugins/_sql
{
  "query": "SELECT * FROM my-index* LIMIT 50"
}

@@ -54,13 +54,13 @@ POST _opensearch/_sql
For a sample [curl](https://curl.haxx.se/) command, try:

```bash
curl -XPOST https://localhost:9200/_opensearch/_sql -u 'admin:admin' -k -H 'Content-Type: application/json' -d '{"query": "SELECT * FROM opensearch_dashboards_sample_data_flights LIMIT 10"}'
curl -XPOST https://localhost:9200/_plugins/_sql -u 'admin:admin' -k -H 'Content-Type: application/json' -d '{"query": "SELECT * FROM opensearch_dashboards_sample_data_flights LIMIT 10"}'
```

By default, queries return data in JDBC format, but you can also return data in standard OpenSearch JSON, CSV, or raw formats:

```json
POST _opensearch/_sql?format=json|csv|raw
POST _plugins/_sql?format=json|csv|raw
{
  "query": "SELECT * FROM my-index LIMIT 50"
}

@@ -84,7 +84,7 @@ The pagination query enables you to get back paginated responses.
Currently, pagination only supports basic queries. For example, the following query returns the data with a cursor ID.

```json
POST _opensearch/_sql/
POST _plugins/_sql/
{
  "fetch_size" : 5,
  "query" : "SELECT OriginCountry, DestCountry FROM opensearch_dashboards_sample_data_flights ORDER BY OriginCountry ASC"

@@ -33,7 +33,7 @@ The meaning of fields in the response is as follows:
SQL query:

```console
>> curl -H 'Content-Type: application/json' -X GET localhost:9200/_opensearch/_sql/stats
>> curl -H 'Content-Type: application/json' -X GET localhost:9200/_plugins/_sql/stats
```

Result set:

@@ -28,7 +28,7 @@ OpenSearch DSL directly.
SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql -d '{
  "query" : "SELECT firstname, lastname, balance FROM accounts",
  "filter" : {
    "range" : {

@@ -88,7 +88,7 @@ in prepared SQL query.
SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql -d '{
  "query": "SELECT * FROM accounts WHERE age = ?",
  "parameters": [{
    "type": "integer",

@@ -124,13 +124,96 @@ Explain:
}
}
}

```
## JDBC Format

### Description

By default, the plugin returns results in the JDBC standard format. This format is provided for the JDBC driver and clients that need both the schema and the result set well formatted.

### Example 1

Here is an example of a normal response, where `schema` includes each field name and type and `datarows` includes the result set.

SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql -d '{
  "query" : "SELECT firstname, lastname, age FROM accounts ORDER BY age LIMIT 2"
}'
```

Result set:

```json
{
  "schema": [{
      "name": "firstname",
      "type": "text"
    },
    {
      "name": "lastname",
      "type": "text"
    },
    {
      "name": "age",
      "type": "long"
    }
  ],
  "total": 4,
  "datarows": [
    [
      "Nanette",
      "Bates",
      28
    ],
    [
      "Amber",
      "Duke",
      32
    ]
  ],
  "size": 2,
  "status": 200
}
```

### Example 2

If an error occurs, the error message and its cause are returned instead.

SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql -d '{
  "query" : "SELECT unknown FROM accounts"
}'
```

Result set:

```json
{
  "error": {
    "reason": "Invalid SQL query",
    "details": "Field [unknown] cannot be found or used here.",
    "type": "SemanticAnalysisException"
  },
  "status": 400
}
```

## OpenSearch DSL

### Description

By default the plugin returns original response from OpenSearch in
The `json` format returns the original response from OpenSearch in
JSON. Because this is the native response from OpenSearch, extra
effort is needed to parse and interpret it.

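A client consuming the JDBC format typically zips the `schema` column names against each row of `datarows`. A minimal sketch, assuming the response shape shown in the JDBC example above:

```python
def jdbc_rows_to_dicts(response: dict) -> list:
    # Pair each datarow with the column names declared in the schema.
    names = [col["name"] for col in response["schema"]]
    return [dict(zip(names, row)) for row in response["datarows"]]

# The example response from the documentation, abbreviated.
response = {
    "schema": [
        {"name": "firstname", "type": "text"},
        {"name": "lastname", "type": "text"},
        {"name": "age", "type": "long"},
    ],
    "datarows": [["Nanette", "Bates", 28], ["Amber", "Duke", 32]],
}
assert jdbc_rows_to_dicts(response)[0] == {
    "firstname": "Nanette", "lastname": "Bates", "age": 28,
}
```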
@@ -139,7 +222,7 @@ efforts are needed to parse and interpret it.
SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql?format=json -d '{
  "query" : "SELECT firstname, lastname, age FROM accounts ORDER BY age LIMIT 2"
}'
```

@@ -195,88 +278,6 @@ Result set:
}
```

## JDBC Format

### Description

JDBC format is provided for JDBC driver and client side that needs both
schema and result set well formatted.

### Example 1

Here is an example for normal response. The
`schema` includes field name and its type
and `datarows` includes the result set.

SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql?format=jdbc -d '{
  "query" : "SELECT firstname, lastname, age FROM accounts ORDER BY age LIMIT 2"
}'
```

Result set:

```json
{
  "schema": [{
      "name": "firstname",
      "type": "text"
    },
    {
      "name": "lastname",
      "type": "text"
    },
    {
      "name": "age",
      "type": "long"
    }
  ],
  "total": 4,
  "datarows": [
    [
      "Nanette",
      "Bates",
      28
    ],
    [
      "Amber",
      "Duke",
      32
    ]
  ],
  "size": 2,
  "status": 200
}
```

### Example 2

If any error occurred, error message and the cause will be returned
instead.

SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql?format=jdbc -d '{
  "query" : "SELECT unknown FROM accounts"
}'
```

Result set:

```json
{
  "error": {
    "reason": "Invalid SQL query",
    "details": "Field [unknown] cannot be found or used here.",
    "type": "SemanticAnalysisException"
  },
  "status": 400
}
```

## CSV Format

### Description

@@ -288,7 +289,7 @@ You can also use CSV format to download result set as CSV.
SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql?format=csv -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql?format=csv -d '{
  "query" : "SELECT firstname, lastname, age FROM accounts ORDER BY age"
}'
```

@@ -315,7 +316,7 @@ line tool for post processing.
SQL query:

```console
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_opensearch/_sql?format=raw -d '{
>> curl -H 'Content-Type: application/json' -X POST localhost:9200/_plugins/_sql?format=raw -d '{
  "query" : "SELECT firstname, lastname, age FROM accounts ORDER BY age"
}'
```

@@ -15,19 +15,25 @@ You can update these settings like any other cluster setting:
PUT _cluster/settings
{
  "transient" : {
    "opendistro.sql.enabled" : false
    "plugins.sql.enabled" : false
  }
}
```

Similarly, you can update the settings by sending a request to the plugin settings endpoint `_plugins/_query/settings`:
```json
PUT _plugins/_query/settings
{
  "transient" : {
    "plugins.sql.enabled" : false
  }
}
```

Setting | Default | Description
:--- | :--- | :---
`opendistro.sql.enabled` | True | Change to `false` to disable the plugin.
`opendistro.sql.query.slowlog` | 2 seconds | Configure the time limit (in seconds) for slow queries. The plugin logs slow queries as `Slow query: elapsed=xxx (ms)` in `opensearch.log`.
`opendistro.sql.query.analysis.enabled` | True | Enables or disables the query analyzer. Changing this setting to `false` lets you bypass strict syntactic and semantic analysis.
`opendistro.sql.query.analysis.semantic.suggestion` | False | If enabled, the query analyzer suggests correct field names for quick fixes.
`opendistro.sql.query.analysis.semantic.threshold` | 200 | Because query analysis needs to build semantic context in memory, indices with a large number of fields are skipped. You can update this setting to apply analysis to smaller or larger indices as needed.
`opendistro.sql.query.response.format` | JDBC | Sets the default response format for queries. The supported formats are JDBC, JSON, CSV, raw, and table.
`opendistro.sql.cursor.enabled` | False | You can enable or disable pagination for all queries that are supported.
`opendistro.sql.cursor.fetch_size` | 1,000 | You can set the default `fetch_size` for all queries that are supported by pagination. An explicit `fetch_size` passed in request overrides this value.
`opendistro.sql.cursor.keep_alive` | 1 minute | This value configures how long the cursor context is kept open. Cursor contexts are resource heavy, so we recommend a low value.
`plugins.sql.enabled` | True | Change to `false` to disable the plugin.
`plugins.sql.slowlog` | 2 seconds | Configure the time limit (in seconds) for slow queries. The plugin logs slow queries as `Slow query: elapsed=xxx (ms)` in `opensearch.log`.
`plugins.sql.cursor.keep_alive` | 1 minute | This value configures how long the cursor context is kept open. Cursor contexts are resource heavy, so we recommend a low value.
`plugins.query.memory_limit` | 85% | This setting configures the heap memory usage limit for the circuit breaker of the query engine.
`plugins.query.size_limit` | 200 | Sets the default number of results that the query engine fetches from OpenSearch.
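Both endpoints accept the same `transient`/`persistent` envelope around a setting key. A small helper that builds such a body (a hypothetical convenience, not part of any OpenSearch client library) makes the structure explicit:

```python
import json

def sql_setting_body(key: str, value, persistent: bool = False) -> str:
    # Wrap a single setting in the transient/persistent envelope used by
    # both PUT _cluster/settings and PUT _plugins/_query/settings.
    scope = "persistent" if persistent else "transient"
    return json.dumps({scope: {key: value}}, indent=2)

body = sql_setting_body("plugins.sql.enabled", False)
assert json.loads(body) == {"transient": {"plugins.sql.enabled": False}}
```

Transient settings reset on a full cluster restart; pass `persistent=True` to build a body that survives restarts.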
@@ -113,7 +113,7 @@ ORDER BY _score


| account_number | address | score
:--- | :---
:--- | :--- | :---
1 | 880 Holmes Lane | 0.5
6 | 671 Bristol Street | 100
13 | 789 Madison Street | 100

@@ -12,7 +12,7 @@ The SQL plugin is stateless, so troubleshooting is mostly focused on why a parti
The most common error is the dreaded null pointer exception, which can occur during parsing errors or when using the wrong HTTP method (POST vs. GET and vice versa). The POST method and HTTP request body offer the most consistent results:

```json
POST _opensearch/_sql
POST _plugins/_sql
{
  "query": "SELECT * FROM my-index WHERE ['name.firstname']='saanvi' LIMIT 5"
}

@@ -23,7 +23,7 @@ If a query isn't behaving the way you expect, use the `_explain` API to see the
#### Sample request

```json
POST _opensearch/_sql/_explain
POST _plugins/_sql/_explain
{
  "query": "SELECT * FROM my-index LIMIT 50"
}

@@ -39,37 +39,6 @@ POST _opensearch/_sql/_explain
}
```

## Syntax analysis exception

You might receive the following error if the plugin can't parse your query:

```json
{
  "reason": "Invalid SQL query",
  "details": "Failed to parse query due to offending symbol [:] at: 'SELECT * FROM xxx WHERE xxx:' <--- HERE...
More details: Expecting tokens in {<EOF>, 'AND', 'BETWEEN', 'GROUP', 'HAVING', 'IN', 'IS', 'LIKE', 'LIMIT',
'NOT', 'OR', 'ORDER', 'REGEXP', '*', '/', '%', '+', '-', 'DIV', 'MOD', '=', '>', '<', '!',
'|', '&', '^', '.', DOT_ID}",
  "type": "SyntaxAnalysisException"
}
```

To resolve this error:

1. Check if your syntax follows the [MySQL grammar](https://dev.mysql.com/doc/refman/8.0/en/).
2. If your syntax is correct, disable strict query analysis:

```json
PUT _cluster/settings
{
  "persistent" : {
    "opendistro.sql.query.analysis.enabled" : false
  }
}
```

3. Run the query again to see if it works.

## Index mapping verification exception

If you see the following verification exception:

@@ -1,14 +1,14 @@
---
layout: default
title: Workbench
title: Query Workbench
parent: SQL
nav_order: 1
---


# Workbench
# Query Workbench

Use the SQL workbench to easily run on-demand SQL queries, translate SQL into its REST equivalent, and view and save results as text, JSON, JDBC, or CSV.
Use the Query Workbench to easily run on-demand SQL queries, translate SQL into its REST equivalent, and view and save results as text, JSON, JDBC, or CSV.


## Quick start

@@ -38,9 +38,9 @@ To list all your indices:
SHOW TABLES LIKE %
```

| id | TABLE_NAME
:--- | :---
0 | accounts
| TABLE_NAME
| :---
| accounts


### Read data

@@ -53,9 +53,9 @@ FROM accounts
WHERE _id = 1
```

| id | account_number | firstname | gender | city | balance | employer | state | email | address | lastname | age
:--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
0 | 1 | Amber | M | Brogan | 39225 | Pyrami | IL | amberduke@pyrami.com | 880 Holmes Lane | Duke | 32
| account_number | firstname | gender | city | balance | employer | state | email | address | lastname | age
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :---
| 1 | Amber | M | Brogan | 39225 | Pyrami | IL | amberduke@pyrami.com | 880 Holmes Lane | Duke | 32


### Delete data

@@ -68,6 +68,6 @@ FROM accounts
WHERE _id = 0
```

| id | deleted_rows
:--- | :---
0 | 1
| deleted_rows
| :---
| 1

@@ -24,8 +24,8 @@ This page includes troubleshooting steps for using OpenID Connect with the secur
To help troubleshoot OpenID Connect, set the log level to `debug` on OpenSearch. Add the following lines in `config/log4j2.properties` and restart the node:

```
logger.opensearch_security.name = com.amazon.dlic.auth.http.jwt
logger.opensearch_security.level = trace
logger.plugins.security.name = com.amazon.dlic.auth.http.jwt
logger.plugins.security.level = trace
```

This setting prints a lot of helpful information to your log file. If this information isn't sufficient, you can also set the log level to `trace`.

@@ -36,7 +36,7 @@ This setting prints a lot of helpful information to your log file. If this infor
This error indicates that the security plugin can't reach the metadata endpoint of your IdP. In `opensearch_dashboards.yml`, check the following setting:

```
opensearch_security.openid.connect_url: "http://keycloak.example.com:8080/auth/realms/master/.well-known/openid-configuration"
plugins.security.openid.connect_url: "http://keycloak.example.com:8080/auth/realms/master/.well-known/openid-configuration"
```

If this error occurs on OpenSearch, check the following setting in `config.yml`:

@@ -60,9 +60,9 @@ This indicates that one or more of the OpenSearch Dashboards configuration setti
Check `opensearch_dashboards.yml` and make sure you have set the following minimal configuration:

```yml
opensearch_security.openid.connect_url: "..."
opensearch_security.openid.client_id: "..."
opensearch_security.openid.client_secret: "..."
plugins.security.openid.connect_url: "..."
plugins.security.openid.client_id: "..."
plugins.security.openid.client_secret: "..."
```


@@ -81,7 +81,7 @@ Please delete all cached browser data, or try again in a private browser window.
To trade the access token for an identity token, most IdPs require you to provide a client secret. Check if the client secret in `opensearch_dashboards.yml` matches the client secret of your IdP configuration:

```
opensearch_security.openid.client_secret: "..."
plugins.security.openid.client_secret: "..."
```


@@ -49,7 +49,7 @@ The security plugin uses the [string representation of Distinguished Names (RFC1
If parts of your DN contain special characters (e.g. a comma), make sure to escape them in your configuration:

```yml
opensearch_security.nodes_dn:
plugins.security.nodes_dn:
  - 'CN=node-0.example.com,OU=SSL,O=My\, Test,L=Test,C=DE'
```

@@ -58,14 +58,14 @@ You can have whitespace within a field, but not between fields.
#### Bad configuration

```yml
opensearch_security.nodes_dn:
plugins.security.nodes_dn:
  - 'CN=node-0.example.com, OU=SSL,O=My\, Test, L=Test, C=DE'
```

#### Good configuration

```yml
opensearch_security.nodes_dn:
plugins.security.nodes_dn:
  - 'CN=node-0.example.com,OU=SSL,O=My\, Test,L=Test,C=DE'
```

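The escaping rule shown in the good configuration — a backslash before any comma inside a field, and no whitespace between fields — can be expressed as a small helper. This is an illustrative sketch only, not part of the security plugin or any client library:

```python
def build_dn(rdns):
    """Join (key, value) pairs into an RFC 1779-style DN string.

    `rdns` is a list of tuples like [("CN", "node-0.example.com"), ...].
    """
    parts = []
    for key, value in rdns:
        escaped = value.replace(",", "\\,")  # escape commas within a field
        parts.append(f"{key}={escaped}")
    return ",".join(parts)  # no whitespace between fields

dn = build_dn([("CN", "node-0.example.com"), ("OU", "SSL"),
               ("O", "My, Test"), ("L", "Test"), ("C", "DE")])
assert dn == r"CN=node-0.example.com,OU=SSL,O=My\, Test,L=Test,C=DE"
```

A full implementation would also need to escape other RFC 1779 special characters (`+`, `;`, quotes); only commas are handled here because that is the case the example above demonstrates.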
@@ -197,7 +197,7 @@ ExtendedKeyUsages [
The security plugin disables TLS version 1.0 by default; it is outdated, insecure, and vulnerable. If you need to use `TLSv1` and accept the risks, you can enable it in `opensearch.yml`:

```yml
opensearch_security.ssl.http.enabled_protocols:
plugins.security.ssl.http.enabled_protocols:
  - "TLSv1"
  - "TLSv1.1"
  - "TLSv1.2"

@@ -9,4 +9,5 @@ permalink: /version-history/

OpenSearch version | Release highlights | Release date
:--- | :--- | :---
[1.0.0-rc1](https://github.com/opensearch-project/opensearch-build/tree/main/release-notes/opensearch-release-notes-1.0.0-rc1.md) | First release candidate. | 7 June 2021
[1.0.0-beta1](https://github.com/opensearch-project/opensearch-build/tree/main/release-notes/opensearch-release-notes-1.0.0-beta1.md) | Initial beta release. Refactors plugins to work with OpenSearch. | 13 May 2021
