diff --git a/README.md b/README.md index 175730e0..a9bd14e2 100644 --- a/README.md +++ b/README.md @@ -196,17 +196,17 @@ If you're making major changes to the documentation and need to see the rendered ## New releases 1. Branch. -1. Change the `opensearch_version` and `opensearch_major_version` variables in `_config.yml`. +1. Change the `opensearch_version`, `opensearch_major_minor_version`, and `lucene_version` variables in `_config.yml`. 1. Start up a new cluster using the updated Docker Compose file in `docs/install/docker.md`. 1. Update the version table in `version-history.md`. - Use `curl -XGET https://localhost:9200 -u admin:admin -k` to verify the OpenSearch version. + Use `curl -XGET https://localhost:9200 -u admin:admin -k` to verify the OpenSearch and Lucene versions. -1. Update the plugin compatibility table in `docs/install/plugin.md`. +1. Update the plugin compatibility table in `_opensearch/install/plugin.md`. Use `curl -XGET https://localhost:9200/_cat/plugins -u admin:admin -k` to get the correct version strings. -1. Update the plugin compatibility table in `docs/opensearch-dashboards/plugins.md`. +1. Update the plugin compatibility table in `_dashboards/install/plugins.md`. Use `docker ps` to find the ID for the OpenSearch Dashboards node. Then use `docker exec -it <container-id> /bin/bash` to get shell access. Finally, run `./bin/opensearch-dashboards-plugin list` to get the plugins and version strings. diff --git a/_clients/agents-and-ingestion-tools/index.md b/_clients/agents-and-ingestion-tools/index.md index 7b9ca7fb..04adfb1e 100644 --- a/_clients/agents-and-ingestion-tools/index.md +++ b/_clients/agents-and-ingestion-tools/index.md @@ -27,12 +27,18 @@ PUT _cluster/settings } ``` +[Just like any other setting]({{site.url}}{{site.baseurl}}/opensearch/configuration/), the alternative is to add the following line to `opensearch.yml` on each node and then restart the node: + +```yml +compatibility.override_main_response_version: true +``` + ## Downloads You can download the OpenSearch output plugin for Logstash from [OpenSearch downloads](https://opensearch.org/downloads.html). The Logstash output plugin is compatible with OpenSearch and Elasticsearch OSS (7.10.2 or lower). -These versions of Beats offer the best compatibility with OpenSearch. For more information, see the [compatibility matrices](#compatibility-matrices). +These are the latest versions of Beats OSS with OpenSearch compatibility. For more information, see the [compatibility matrices](#compatibility-matrices). - [Filebeat OSS 7.12.1](https://www.elastic.co/downloads/past-releases/filebeat-oss-7-12-1) - [Metricbeat OSS 7.12.1](https://www.elastic.co/downloads/past-releases/metricbeat-oss-7-12-1) @@ -41,7 +47,7 @@ These versions of Beats offer the best compatibility with OpenSearch. For more i - [Winlogbeat OSS 7.12.1](https://www.elastic.co/downloads/past-releases/winlogbeat-oss-7-12-1) - [Auditbeat OSS 7.12.1](https://elastic.co/downloads/past-releases/auditbeat-oss-7-12-1) -Some users report compatibility issues with ingest pipelines on these versions of Beats. If you use ingest pipelines with OpenSearch, consider using the 7.10.2 versions of Beats OSS instead. +Some users report compatibility issues with ingest pipelines on these versions of Beats. If you use ingest pipelines with OpenSearch, consider using the 7.10.2 versions of Beats instead. {: .note }
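If you apply the version-override setting from a script rather than `curl`, the same request works from any HTTP client. The following is a minimal sketch using Python's `requests` package; the host, credentials, TLS settings, and the choice of `persistent` over `transient` are assumptions for the Docker quickstart defaults, not requirements.

```python
# Sketch: enable the main-response version override so Beats OSS and
# Logstash OSS version checks see a 7.10.2-compatible response.
# Assumes the quickstart defaults (https://localhost:9200, admin:admin,
# self-signed certificates); never disable verification in production.
import requests

response = requests.put(
    "https://localhost:9200/_cluster/settings",
    json={
        "persistent": {
            "compatibility.override_main_response_version": True
        }
    },
    auth=("admin", "admin"),
    verify=False,  # demo self-signed certificates only
)
print(response.json())
```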
diff --git a/_clients/go.md b/_clients/go.md new file mode 100644 index 00000000..75ee300e --- /dev/null +++ b/_clients/go.md @@ -0,0 +1,145 @@ +--- +layout: default +title: Go client +nav_order: 80 +--- + +# Go client + +The OpenSearch Go client lets you connect your Go application with the data in your OpenSearch cluster. + + +## Setup + +If you're creating a new project: + +```bash +go mod init +``` + +To add the client to your project, install it like any other module: + +```bash +go get github.com/opensearch-project/opensearch-go +``` + +## Sample code + +This sample code creates a client, adds an index with non-default settings, inserts a document, searches for the document, deletes the document, and finally deletes the index: + +```go +package main +import ( + "os" + "context" + "crypto/tls" + "fmt" + opensearch "github.com/opensearch-project/opensearch-go" + opensearchapi "github.com/opensearch-project/opensearch-go/opensearchapi" + "net/http" + "strings" +) +const IndexName = "go-test-index1" +func main() { + // Initialize the client with SSL/TLS enabled. + client, err := opensearch.NewClient(opensearch.Config{ + Transport: &http.Transport{ + TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, + }, + Addresses: []string{"https://localhost:9200"}, + Username: "admin", // For testing only. Don't store credentials in code. + Password: "admin", + }) + if err != nil { + fmt.Println("cannot initialize", err) + os.Exit(1) + } + + // Print OpenSearch version information on console. + fmt.Println(client.Info()) + + // Define index mapping. JSON requires double-quoted keys and strings. + mapping := strings.NewReader(`{ + "settings": { + "index": { + "number_of_shards": 4 + } + } + }`) + + // Create an index with non-default settings, and execute the request. + createIndex := opensearchapi.IndicesCreateRequest{ + Index: IndexName, + Body: mapping, + } + createIndexResponse, err := createIndex.Do(context.Background(), client) + if err != nil { + fmt.Println("failed to create index ", err) + os.Exit(1) + } + fmt.Println("creating index", createIndexResponse) + + // Add a document to the index. + document := strings.NewReader(`{ + "title": "Moneyball", + "director": "Bennett Miller", + "year": "2011" + }`) + + docId := "1" + req := opensearchapi.IndexRequest{ + Index: IndexName, + DocumentID: docId, + Body: document, + } + insertResponse, err := req.Do(context.Background(), client) + if err != nil { + fmt.Println("failed to insert document ", err) + os.Exit(1) + } + fmt.Println(insertResponse) + + // Search for the document. + content := strings.NewReader(`{ + "size": 5, + "query": { + "multi_match": { + "query": "miller", + "fields": ["title^2", "director"] + } + } + }`) + + search := opensearchapi.SearchRequest{ + Body: content, + } + + searchResponse, err := search.Do(context.Background(), client) + if err != nil { + fmt.Println("failed to search document ", err) + os.Exit(1) + } + fmt.Println(searchResponse) + + // Delete the document. + delete := opensearchapi.DeleteRequest{ + Index: IndexName, + DocumentID: docId, + } + + deleteResponse, err := delete.Do(context.Background(), client) + if err != nil { + fmt.Println("failed to delete document ", err) + os.Exit(1) + } + fmt.Println("deleting document") + fmt.Println(deleteResponse) + + // Delete previously created index.
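+ // (Editor's note on the request type below: IndicesDeleteRequest takes a + // []string for its Index field, so one request can delete several indices.)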
+ deleteIndex := opensearchapi.IndicesDeleteRequest{ + Index: []string{IndexName}, + } + + deleteIndexResponse, err := deleteIndex.Do(context.Background(), client) + if err != nil { + fmt.Println("failed to delete index ", err) + os.Exit(1) + } + fmt.Println("deleting index", deleteIndexResponse) +} +``` diff --git a/_clients/grafana.md b/_clients/grafana.md new file mode 100644 index 00000000..97e35de4 --- /dev/null +++ b/_clients/grafana.md @@ -0,0 +1,10 @@ +--- +layout: default +title: Grafana +nav_order: 150 +has_children: false +--- + +# Grafana support + +Grafana has a data source plugin that lets you explore and visualize your OpenSearch data. For information on getting started with the plugin, see the [Grafana overview page](https://grafana.com/grafana/plugins/grafana-opensearch-datasource/). diff --git a/_clients/index.md b/_clients/index.md index bdf2bf05..2f3513dd 100644 --- a/_clients/index.md +++ b/_clients/index.md @@ -9,6 +9,20 @@ redirect_from: # OpenSearch client compatibility +OpenSearch provides clients for several popular programming languages, with more coming. In general, clients are compatible with clusters running the same major version of OpenSearch (`major.minor.patch`). + +For example, a 1.0.0 client works with an OpenSearch 1.1.0 cluster, but might not support any non-breaking API changes in OpenSearch 1.1.0. A 1.2.0 client works with the same cluster, but might allow you to pass unsupported options in certain functions. We recommend using the same version for both, but if your tests pass after a cluster upgrade, you don't necessarily need to upgrade your clients immediately. + +{% comment %} +* [OpenSearch Java client]({{site.url}}{{site.baseurl}}/clients/java/) +{% endcomment %} +* [OpenSearch Python client]({{site.url}}{{site.baseurl}}/clients/python/) +* [OpenSearch JavaScript (Node.js) client]({{site.url}}{{site.baseurl}}/clients/javascript/) +* [OpenSearch Go client]({{site.url}}{{site.baseurl}}/clients/go/) + + +## Legacy clients + Most clients that work with Elasticsearch OSS 7.10.2 *should* work with OpenSearch, but the latest versions of those clients might include license or version checks that artificially break compatibility. This page includes recommendations around which versions of those clients to use for best compatibility with OpenSearch. Client | Recommended version @@ -18,7 +32,7 @@ Client | Recommended version [Python Elasticsearch client](https://pypi.org/project/elasticsearch/7.13.4/) | 7.13.4 [Elasticsearch Node.js client](https://www.npmjs.com/package/@elastic/elasticsearch/v/7.13.0) | 7.13.0 -Clients exist for a wide variety of languages, so if you test a client and verify that it works, please [submit a PR](https://github.com/opensearch-project/documentation-website/pulls) and add it to this table. +If you test a legacy client and verify that it works, please [submit a PR](https://github.com/opensearch-project/documentation-website/pulls) and add it to this table. {% comment %} diff --git a/_clients/javascript.md b/_clients/javascript.md new file mode 100644 index 00000000..c670e4b8 --- /dev/null +++ b/_clients/javascript.md @@ -0,0 +1,141 @@ +--- +layout: default +title: JavaScript client +nav_order: 90 +--- + +# JavaScript client + +The OpenSearch JavaScript client provides a safer and easier way to interact with your OpenSearch cluster. Rather than using OpenSearch from the browser and potentially exposing your data to the public, you can build an OpenSearch client that takes care of sending requests to your cluster. 
+ +The client contains a library of APIs that let you perform different operations on your cluster and return a standard response body. The example here demonstrates some basic operations like creating an index, adding documents, and searching your data. + +## Setup + +To add the client to your project, install it from [npm](https://www.npmjs.com): + +```bash +npm install @opensearch-project/opensearch +``` + +To install a specific major version of the client, run the following command: + +```bash +npm install @opensearch-project/opensearch@<version> +``` + +If you prefer to add the client manually or just want to examine the source code, see [opensearch-js](https://github.com/opensearch-project/opensearch-js) on GitHub. + +Then require the client: + +```javascript +const { Client } = require("@opensearch-project/opensearch"); +``` + +## Sample code + +```javascript +"use strict"; + +var host = "localhost"; +var protocol = "https"; +var port = 9200; +var auth = "admin:admin"; // For testing only. Don't store credentials in code. +var ca_certs_path = "/full/path/to/root-ca.pem"; + +// Optional client certificates if you don't want to use HTTP basic authentication. +// var client_cert_path = '/full/path/to/client.pem' +// var client_key_path = '/full/path/to/client-key.pem' + +// Create a client with SSL/TLS enabled. +var { Client } = require("@opensearch-project/opensearch"); +var fs = require("fs"); +var client = new Client({ + node: protocol + "://" + auth + "@" + host + ":" + port, + ssl: { + ca: fs.readFileSync(ca_certs_path), + // You can turn off certificate verification (rejectUnauthorized: false) if you're using self-signed certificates with a hostname mismatch. + // cert: fs.readFileSync(client_cert_path), + // key: fs.readFileSync(client_key_path) + }, +}); + +async function search() { + // Create an index with non-default settings. + var index_name = "books"; + var settings = { + settings: { + index: { + number_of_shards: 4, + number_of_replicas: 3, + }, + }, + }; + + var response = await client.indices.create({ + index: index_name, + body: settings, + }); + + console.log("Creating index:"); + console.log(response.body); + + // Add a document to the index. + var document = { + title: "The Outsider", + author: "Stephen King", + year: "2018", + genre: "Crime fiction", + }; + + var id = "1"; + + var response = await client.index({ + id: id, + index: index_name, + body: document, + refresh: true, + }); + + console.log("Adding document:"); + console.log(response.body); + + // Search for the document. + var query = { + query: { + match: { + title: { + query: "The Outsider", + }, + }, + }, + }; + + var response = await client.search({ + index: index_name, + body: query, + }); + + console.log("Search results:"); + console.log(response.body.hits); + + // Delete the document. + var response = await client.delete({ + index: index_name, + id: id, + }); + + console.log("Deleting document:"); + console.log(response.body); + + // Delete the index.
+ var response = await client.indices.delete({ + index: index_name, + }); + + console.log("Deleting index:"); + console.log(response.body); +} + +search().catch(console.log); +``` diff --git a/_clients/logstash/index.md b/_clients/logstash/index.md index f84d5244..d8f3ec2d 100644 --- a/_clients/logstash/index.md +++ b/_clients/logstash/index.md @@ -57,6 +57,9 @@ The OpenSearch Logstash plugin has two installation options at this time: Linux Make sure you have [Java Development Kit (JDK)](https://www.oracle.com/java/technologies/javase-downloads.html) version 8 or 11 installed. +If you're migrating from an existing Logstash installation, you can install the [OpenSearch output plugin](https://rubygems.org/gems/logstash-output-opensearch/) manually and [update pipeline.conf](https://opensearch.org/docs/latest/clients/logstash/ship-to-opensearch/). We include this plugin by default in our tarball and Docker downloads. +{: .note } + ### Tarball 1. Download the Logstash tarball from [OpenSearch downloads](https://opensearch.org/downloads.html). diff --git a/_clients/python.md b/_clients/python.md new file mode 100644 index 00000000..10a856a2 --- /dev/null +++ b/_clients/python.md @@ -0,0 +1,128 @@ +--- +layout: default +title: Python client +nav_order: 70 +--- + +# Python client + +The OpenSearch Python client provides a more natural syntax for interacting with your cluster. Rather than sending HTTP requests to a given URL, you can create an OpenSearch client for your cluster and call the client's built-in functions. + +{% comment %} +`opensearch-py` is the lower-level of the two Python clients. If you want a general client for assorted operations, it's a great choice. If you want a higher-level client strictly for indexing and search operations, consider [opensearch-dsl-py]({{site.url}}{{site.baseurl}}/clients/python-dsl/). +{% endcomment %} + + +## Setup + +To add the client to your project, install it using [pip](https://pip.pypa.io/): + +```bash +pip install opensearch-py +``` + +Then import it like any other module: + +```python +from opensearchpy import OpenSearch +``` + +If you prefer to add the client manually or just want to examine the source code, see [opensearch-py on GitHub](https://github.com/opensearch-project/opensearch-py). + + +## Sample code + +```python +from opensearchpy import OpenSearch + +host = 'localhost' +port = 9200 +auth = ('admin', 'admin') # For testing only. Don't store credentials in code. +ca_certs_path = '/full/path/to/root-ca.pem' # Provide a CA bundle if you use intermediate CAs with your root CA. + +# Optional client certificates if you don't want to use HTTP basic authentication. +# client_cert_path = '/full/path/to/client.pem' +# client_key_path = '/full/path/to/client-key.pem' + +# Create the client with SSL/TLS enabled, but hostname verification disabled. +client = OpenSearch( + hosts = [{'host': host, 'port': port}], + http_compress = True, # enables gzip compression for request bodies + http_auth = auth, + # client_cert = client_cert_path, + # client_key = client_key_path, + use_ssl = True, + verify_certs = True, + ssl_assert_hostname = False, + ssl_show_warn = False, + ca_certs = ca_certs_path +) + +# Create an index with non-default settings. +index_name = 'python-test-index' +index_body = { + 'settings': { + 'index': { + 'number_of_shards': 4 + } + } +} + +response = client.indices.create(index_name, body=index_body) +print('\nCreating index:') +print(response) + +# Add a document to the index. 
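+# (Editor's note on the call below: refresh = True makes the new document +# searchable immediately, which is convenient in a demo like this one but +# expensive if done on every request in a bulk workload.)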
+document = { + 'title': 'Moneyball', + 'director': 'Bennett Miller', + 'year': '2011' +} +id = '1' + +response = client.index( + index = index_name, + body = document, + id = id, + refresh = True +) + +print('\nAdding document:') +print(response) + +# Search for the document. +q = 'miller' +query = { + 'size': 5, + 'query': { + 'multi_match': { + 'query': q, + 'fields': ['title^2', 'director'] + } + } +} + +response = client.search( + body = query, + index = index_name +) +print('\nSearch results:') +print(response) + +# Delete the document. +response = client.delete( + index = index_name, + id = id +) + +print('\nDeleting document:') +print(response) + +# Delete the index. +response = client.indices.delete( + index = index_name +) + +print('\nDeleting index:') +print(response) +``` diff --git a/_config.yml b/_config.yml index 0b56bce3..63d4cb50 100644 --- a/_config.yml +++ b/_config.yml @@ -1,12 +1,13 @@ title: OpenSearch documentation description: >- # this means to ignore newlines until "baseurl:" Documentation for OpenSearch, the Apache 2.0 search, analytics, and visualization suite with advanced security, alerting, SQL support, automated index management, deep performance analysis, and more. -baseurl: "/docs" # the subpath of your site, e.g. /blog +baseurl: "/docs/latest" # the subpath of your site, e.g. /blog url: "https://opensearch.org" # the base hostname & protocol for your site, e.g. http://example.com permalink: /:path/ -opensearch_version: 1.0.0 -opensearch_major_minor_version: 1.0 +opensearch_version: 1.1.0 +opensearch_major_minor_version: 1.1 +lucene_version: 8_9_0 # Build settings markdown: kramdown @@ -44,6 +45,9 @@ collections: im-plugin: permalink: /:collection/:path/ output: true + replication-plugin: + permalink: /:collection/:path/ + output: true monitoring-plugins: permalink: /:collection/:path/ output: true @@ -80,6 +84,9 @@ just_the_docs: im-plugin: name: Index management plugin nav_fold: true + replication-plugin: + name: Replication plugin + nav_fold: true monitoring-plugins: name: Monitoring plugins nav_fold: true diff --git a/_dashboards/dql.md b/_dashboards/dql.md new file mode 100644 index 00000000..3e71145f --- /dev/null +++ b/_dashboards/dql.md @@ -0,0 +1,142 @@ +--- +layout: default +title: Dashboards query language +nav_order: 99 +--- + +# Dashboards Query Language + +Similar to the [Query DSL]({{site.url}}{{site.baseurl}}/opensearch/query-dsl/index) that lets you use the HTTP request body to search for data, you can use the Dashboards Query Language (DQL) in OpenSearch Dashboards to search for data and visualizations. + +For example, if you want to see all visualizations of visits to a host based in the US, enter `geo.dest:US` into the search field, and Dashboards refreshes to display all related data. + +Just like the query DSL, DQL has a handful of query types, so use whichever best fits your use case. + +This section uses the OpenSearch Dashboards sample web log data. To add sample data in Dashboards, log in to OpenSearch Dashboards, choose **Home**, **Add sample data**, and then **Add data**. + +--- + +#### Table of contents +1. TOC +{:toc} + +--- + +## Terms query + +The most basic query is to just specify the term you're searching for. + +``` +host:www.example.com +``` + +To access an object's nested field, list the complete path to the field separated by periods.
For example, to retrieve the `lat` field in the `coordinates` object: + +``` +coordinates.lat:43.7102 +``` + +DQL also supports leading and trailing wildcards, so you can search for any terms that match your pattern. + +``` +host.keyword:*.example.com/* +``` + +To check if a field exists or has any data, use a wildcard to see if Dashboards returns any results. + +``` +host.keyword:* +``` + +## Boolean query + +To mix and match, or even combine, multiple queries for more refined results, you can use the boolean operators `and`, `or`, and `not`. DQL is not case sensitive, so `AND` and `and` are the same. + +``` +host.keyword:www.example.com and response.keyword:200 +``` + +The following example demonstrates how to use multiple operators in one query. + +``` +geo.dest:US or response.keyword:200 and host.keyword:www.example.com +``` + +Remember that boolean operators follow the logical precedence order of `not`, `and`, and `or`, so if you have an expression like the previous example, `response.keyword:200 and host.keyword:www.example.com` gets evaluated first, and then Dashboards uses that result to compare with `geo.dest:US`. + +To avoid confusion, we recommend using parentheses to dictate the order you want to evaluate in. If you want to evaluate `geo.dest:US or response.keyword:200` first, your expression becomes: + +``` +(geo.dest:US or response.keyword:200) and host.keyword:www.example.com +``` + +## Date and range queries + +DQL also supports inequality operators for numeric fields. + +``` +bytes >= 15 and memory < 15 +``` + +Similarly, you can use the same operators to find dates before or after a specified point. `>` indicates a search for a date after your specified date, and `<` returns dates before. + +``` +@timestamp > "2020-12-14T09:35:33" +``` + +## Nested field query + +If you have a document with nested fields, you have to specify which parts of the document you want to retrieve. + +Suppose that you have the following document: + +```json +{ + "superheroes":[ + { + "hero-name": "Superman", + "real-identity": "Clark Kent", + "age": 28 + }, + { + "hero-name": "Batman", + "real-identity": "Bruce Wayne", + "age": 26 + }, + { + "hero-name": "Flash", + "real-identity": "Barry Allen", + "age": 28 + }, + { + "hero-name": "Robin", + "real-identity": "Dick Grayson", + "age": 15 + } + ] +} +``` + +The following example demonstrates how to use DQL to retrieve a specific field. + +``` +superheroes: {hero-name: Superman} +``` + +If you want to retrieve multiple objects from your document, just specify all of the fields you want to retrieve. + +``` +superheroes: {hero-name: Superman} and superheroes: {hero-name: Batman} +``` + +The previous boolean and range queries still work, so you can submit a more refined query. + +``` +superheroes: {hero-name: Superman and age < 50} +``` + +If your document has an object nested within another object, you can still retrieve data by specifying all of the levels. + +``` +justice-league.superheroes: {hero-name:Superman} +``` diff --git a/_dashboards/index.md b/_dashboards/index.md index d4ac0e23..df5a9516 100644 --- a/_dashboards/index.md +++ b/_dashboards/index.md @@ -5,9 +5,12 @@ nav_order: 1 has_children: false has_toc: false redirect_from: + - /docs/opensearch-dashboards/ - /dashboards/ --- +{%- comment -%}The `/docs/opensearch-dashboards/` redirect is specifically to support the UI links in OpenSearch Dashboards 1.0.0.{%- endcomment -%} + # OpenSearch Dashboards OpenSearch Dashboards is the default visualization tool for data in OpenSearch. It also serves as a user interface for many of the OpenSearch plugins, including security, alerting, Index State Management, SQL, and more.
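As a companion to the DQL precedence discussion above, here is a rough sketch of how the parenthesized expression maps onto the query DSL when sent through the Python client. The field names come from the sample web log data; the index name is an assumption (verify it in your own cluster), and the translation is illustrative rather than an exact equivalence.

```python
# Sketch: a query DSL approximation of the DQL expression
# (geo.dest:US or response.keyword:200) and host.keyword:www.example.com
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),  # testing only
    use_ssl=True,
    verify_certs=False,
    ssl_show_warn=False,
)

query = {
    "query": {
        "bool": {
            "must": [
                {   # the parenthesized `or` group, evaluated as one clause
                    "bool": {
                        "should": [
                            {"match": {"geo.dest": "US"}},
                            {"match": {"response.keyword": "200"}},
                        ],
                        "minimum_should_match": 1,
                    }
                },
                {"match": {"host.keyword": "www.example.com"}},
            ]
        }
    }
}

# Assumed index name for the Dashboards sample web logs.
print(client.search(index="opensearch_dashboards_sample_data_logs", body=query))
```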
diff --git a/_dashboards/install/helm.md b/_dashboards/install/helm.md index 25936f1d..4d2e0c83 100644 --- a/_dashboards/install/helm.md +++ b/_dashboards/install/helm.md @@ -20,7 +20,7 @@ Resource | Description The specification in the default Helm chart supports many standard use cases and setups. You can modify the default chart to configure your desired specifications and set Transport Layer Security (TLS) and role-based access control (RBAC). For information about the default configuration, steps to configure security, and configurable parameters, see the -[README](https://github.com/opensearch-project/opensearch-devops/blob/main/Helm/README.md). +[README](https://github.com/opensearch-project/helm-charts/tree/main/charts). The instructions here assume you have a Kubernetes cluster with Helm preinstalled. See the [Kubernetes documentation](https://kubernetes.io/docs/setup/) for steps to configure a Kubernetes cluster and the [Helm documentation](https://helm.sh/docs/intro/install/) to install Helm. {: .note } diff --git a/_dashboards/install/plugins.md b/_dashboards/install/plugins.md index 805423c9..e0fc9d29 100644 --- a/_dashboards/install/plugins.md +++ b/_dashboards/install/plugins.md @@ -28,6 +28,36 @@ If you don't want to use the all-in-one installation options, you can install th
+  <tr>
+    <td>1.1.0</td>
+    <td>
+      <pre>
alertingDashboards          1.1.0.0
+anomalyDetectionDashboards  1.1.0.0
+ganttChartDashboards        1.1.0.0
+indexManagementDashboards   1.1.0.0
+notebooksDashboards         1.1.0.0
+queryWorkbenchDashboards    1.1.0.0
+reportsDashboards           1.1.0.0
+securityDashboards          1.1.0.0
+traceAnalyticsDashboards    1.1.0.0
+</pre>
+    </td>
+  </tr>
+  <tr>
+    <td>1.0.1</td>
+    <td>
+      <pre>
alertingDashboards          1.0.0.0
+anomalyDetectionDashboards  1.0.0.0
+ganttChartDashboards        1.0.0.0
+indexManagementDashboards   1.0.1.0
+notebooksDashboards         1.0.0.0
+queryWorkbenchDashboards    1.0.0.0
+reportsDashboards           1.0.1.0
+securityDashboards          1.0.1.0
+traceAnalyticsDashboards    1.0.0.0
+</pre>
+    </td>
+  </tr>
   <tr>
     <td>1.0.0</td>
     <td>
@@ -40,36 +70,6 @@ queryWorkbenchDashboards    1.0.0.0
 reportsDashboards           1.0.0.0
 securityDashboards          1.0.0.0
 traceAnalyticsDashboards    1.0.0.0
-</pre>
-    </td>
-  </tr>
-  <tr>
-    <td>1.0.0-rc1</td>
-    <td>
-      <pre>
alertingDashboards          1.0.0.0-rc1
-anomalyDetectionDashboards  1.0.0.0-rc1
-ganttChartDashboards        1.0.0.0-rc1
-indexManagementDashboards   1.0.0.0-rc1
-notebooksDashboards         1.0.0.0-rc1
-queryWorkbenchDashboards    1.0.0.0-rc1
-reportsDashboards           1.0.0.0-rc1
-securityDashboards          1.0.0.0-rc1
-traceAnalyticsDashboards    1.0.0.0-rc1
-</pre>
-    </td>
-  </tr>
-  <tr>
-    <td>1.0.0-beta1</td>
-    <td>
-      <pre>
alertingDashboards          1.0.0.0-beta1
-anomalyDetectionDashboards  1.0.0.0-beta1
-ganttChartDashboards        1.0.0.0-beta1
-indexManagementDashboards   1.0.0.0-beta1
-notebooksDashboards         1.0.0.0-beta1
-queryWorkbenchDashboards    1.0.0.0-beta1
-reportsDashboards           1.0.0.0-beta1
-securityDashboards          1.0.0.0-beta1
-traceAnalyticsDashboards    1.0.0.0-beta1
 </pre>
     </td>
   </tr>
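As an aside for anyone maintaining compatibility tables like the one above: the OpenSearch-side plugin version strings can also be collected with a short script. This sketch is simply the scripted form of the `curl -XGET https://localhost:9200/_cat/plugins -u admin:admin -k` command from the release checklist earlier in this diff, and it assumes the Docker quickstart defaults.

```python
# Sketch: list installed OpenSearch plugins and their version strings.
# Assumes admin:admin and self-signed demo certificates; adjust for real use.
import requests

response = requests.get(
    "https://localhost:9200/_cat/plugins?v",
    auth=("admin", "admin"),
    verify=False,  # demo self-signed certificates only
)
print(response.text)
```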
diff --git a/_dashboards/install/tar.md b/_dashboards/install/tar.md index 1c7e6933..026f23f7 100644 --- a/_dashboards/install/tar.md +++ b/_dashboards/install/tar.md @@ -14,9 +14,10 @@ nav_order: 30 ```bash # x64 tar -zxf opensearch-dashboards-{{site.opensearch_version}}-linux-x64.tar.gz - cd opensearch-dashboards{% comment %}# ARM64 + cd opensearch-dashboards + # ARM64 tar -zxf opensearch-dashboards-{{site.opensearch_version}}-linux-arm64.tar.gz - cd opensearch-dashboards{% endcomment %} + cd opensearch-dashboards ``` 1. If desired, modify `config/opensearch_dashboards.yml`. @@ -26,5 +27,3 @@ nav_order: 30 ```bash ./bin/opensearch-dashboards ``` - -1. See the [OpenSearch Dashboards documentation]({{site.url}}{{site.baseurl}}/dashboards/index/). diff --git a/_dashboards/maptiles.md b/_dashboards/maptiles.md index 1bbf27f6..f7a43046 100644 --- a/_dashboards/maptiles.md +++ b/_dashboards/maptiles.md @@ -2,8 +2,12 @@ layout: default title: WMS map server nav_order: 5 +redirect_from: + - /docs/opensearch-dashboards/maptiles/ --- +{%- comment -%}The `/docs/opensearch-dashboards/maptiles/` redirect is specifically to support the UI links in OpenSearch Dashboards 1.0.0.{%- endcomment -%} + # Configure WMS map server OpenSearch Dashboards includes default map tiles, but if you need more specialized maps, you can configure OpenSearch Dashboards to use a WMS map server: diff --git a/_data/_alert.yml b/_data/_alert.yml deleted file mode 100644 index 9c4d6ee6..00000000 --- a/_data/_alert.yml +++ /dev/null @@ -1 +0,0 @@ -message: "🔥 [OpenSearch 1.0 released on July 12th! Get it now!](/downloads.html)" \ No newline at end of file diff --git a/_data/alert.yml b/_data/alert.yml new file mode 100644 index 00000000..ecfc87f2 --- /dev/null +++ b/_data/alert.yml @@ -0,0 +1 @@ +message: "🌡️ [OpenSearch 1.1.0 arrived October 5 with cross-cluster replication, bucket-level alerting, and much, much more. Grab it here!](/downloads.html)" diff --git a/_data/versions.json b/_data/versions.json new file mode 100644 index 00000000..5fe13f29 --- /dev/null +++ b/_data/versions.json @@ -0,0 +1,6 @@ +{ + "current": "1.1", + "past": [ + "1.0" + ] +} \ No newline at end of file diff --git a/_external_links/developer-guide.md b/_external_links/developer-guide.md new file mode 100644 index 00000000..5f07b6ae --- /dev/null +++ b/_external_links/developer-guide.md @@ -0,0 +1,7 @@ +--- +layout: default +title: Dashboards developer guide +nav_order: 2 +permalink: /dashboards-developer-guide/ +redirect_to: https://github.com/opensearch-project/OpenSearch-Dashboards/blob/main/DEVELOPER_GUIDE.md +--- diff --git a/_im-plugin/index-rollups/rollup-api.md b/_im-plugin/index-rollups/rollup-api.md index 06df2e7a..7aa878d3 100644 --- a/_im-plugin/index-rollups/rollup-api.md +++ b/_im-plugin/index-rollups/rollup-api.md @@ -90,36 +90,36 @@ You can specify the following options. Options | Description | Type | Required :--- | :--- |:--- |:--- | -`source_index` | The name of the detector. | `string` | Yes -`target_index` | Specify the target index that the rolled up data is ingested into. You could either create a new target index or use an existing index. The target index cannot be a combination of raw and rolled up data. | `string` | Yes -`schedule` | Schedule of the index rollup job which can be an interval or a cron expression. | `object` | Yes -`schedule.interval` | Specify the frequency of execution of the rollup job. | `object` | No -`schedule.interval.start_time` | Start time of the interval. 
| `timestamp` | Yes -`schedule.interval.period` | Define the interval period. | `string` | Yes -`schedule.interval.unit` | Specify the time unit of the interval. | `string` | Yes -`schedule.interval.cron` | Optionally, specify a cron expression to define therollup frequency. | `list` | No -`schedule.interval.cron.expression` | Specify a Unix cron expression. | `string` | Yes -`schedule.interval.cron.timezone` | Specify timezones as defined by the IANA Time Zone Database. Defaults to UTC. | `string` | No -`description` | Optionally, describe the rollup job. | `string` | No -`enabled` | When true, the index rollup job is scheduled. Default is true. | `boolean` | Yes -`continuous` | Specify whether or not the index rollup job continuously rolls up data forever or just executes over the current data set once and stops. Default is false. | `boolean` | Yes -`error_notification` | Set up a Mustache message template sent for error notifications. For example, if an index rollup job fails, the system sends a message to a Slack channel. | `object` | No -`page_size` | Specify the number of buckets to paginate through at a time while rolling up. | `number` | Yes -`delay` | Specify time value to delay execution of the index rollup job. | `time_unit` | No -`dimensions` | Specify aggregations to create dimensions for the roll up time window. | `object` | Yes -`dimensions.date_histogram` | Specify either fixed_interval or calendar_interval, but not both. Either one limits what you can query in the target index. | `object` | No -`dimensions.date_histogram.fixed_interval` | Specify the fixed interval for aggregations in milliseconds, seconds, minutes, hours, or days. | `string` | No -`dimensions.date_histogram.calendar_interval` | Specify the calendar interval for aggregations in minutes, hours, days, weeks, months, quarters, or years. | `string` | No -`dimensions.date_histogram.field` | Specify the date field used in date histogram aggregation. | `string` | No -`dimensions.date_histogram.timezone` | Specify the timezones as defined by the IANA Time Zone Database. The default is UTC. | `string` | No -`dimensions.terms` | Specify the term aggregations that you want to roll up. | `object` | No -`dimensions.terms.fields` | Specify terms aggregation for compatible fields. | `object` | No -`dimensions.histogram` | Specify the histogram aggregations that you want to roll up. | `object` | No -`dimensions.histogram.field` | Add a field for histogram aggregations. | `string` | Yes -`dimensions.histogram.interval` | Specify the histogram aggregation interval for the field. | `long` | Yes -`dimensions.metrics` | Specify a list of objects that represent the fields and metrics that you want to calculate. | `nested object` | No -`dimensions.metrics.field` | Specify the field that you want to perform metric aggregations on. | `string` | No -`dimensions.metrics.field.metrics` | Specify the metric aggregations you want to calculate for the field. | `multiple strings` | No +`source_index` | The name of the source index whose data gets rolled up. | String | Yes +`target_index` | Specify the target index that the rolled up data is ingested into. You could either create a new target index or use an existing index. The target index cannot be a combination of raw and rolled up data. | String | Yes +`schedule` | Schedule of the index rollup job, which can be an interval or a cron expression. | Object | Yes +`schedule.interval` | Specify the frequency of execution of the rollup job. | Object | No +`schedule.interval.start_time` | Start time of the interval.
| Timestamp | Yes +`schedule.interval.period` | Define the interval period. | String | Yes +`schedule.interval.unit` | Specify the time unit of the interval. | String | Yes +`schedule.interval.cron` | Optionally, specify a cron expression to define the rollup frequency. | List | No +`schedule.interval.cron.expression` | Specify a Unix cron expression. | String | Yes +`schedule.interval.cron.timezone` | Specify timezones as defined by the IANA Time Zone Database. Defaults to UTC. | String | No +`description` | Optionally, describe the rollup job. | String | No +`enabled` | When true, the index rollup job is scheduled. Default is true. | Boolean | Yes +`continuous` | Specify whether or not the index rollup job continuously rolls up data forever or just executes over the current data set once and stops. Default is false. | Boolean | Yes +`error_notification` | Set up a Mustache message template sent for error notifications. For example, if an index rollup job fails, the system sends a message to a Slack channel. | Object | No +`page_size` | Specify the number of buckets to paginate through at a time while rolling up. | Number | Yes +`delay` | The number of milliseconds to delay execution of the index rollup job. | Long | No +`dimensions` | Specify aggregations to create dimensions for the roll up time window. | Object | Yes +`dimensions.date_histogram` | Specify either fixed_interval or calendar_interval, but not both. Either one limits what you can query in the target index. | Object | No +`dimensions.date_histogram.fixed_interval` | Specify the fixed interval for aggregations in milliseconds, seconds, minutes, hours, or days. | String | No +`dimensions.date_histogram.calendar_interval` | Specify the calendar interval for aggregations in minutes, hours, days, weeks, months, quarters, or years. | String | No +`dimensions.date_histogram.field` | Specify the date field used in date histogram aggregation. | String | No +`dimensions.date_histogram.timezone` | Specify the timezones as defined by the IANA Time Zone Database. The default is UTC. | String | No +`dimensions.terms` | Specify the term aggregations that you want to roll up. | Object | No +`dimensions.terms.fields` | Specify terms aggregation for compatible fields. | Object | No +`dimensions.histogram` | Specify the histogram aggregations that you want to roll up. | Object | No +`dimensions.histogram.field` | Add a field for histogram aggregations. | String | Yes +`dimensions.histogram.interval` | Specify the histogram aggregation interval for the field. | Long | Yes +`dimensions.metrics` | Specify a list of objects that represent the fields and metrics that you want to calculate. | Nested object | No +`dimensions.metrics.field` | Specify the field that you want to perform metric aggregations on. | String | No +`dimensions.metrics.field.metrics` | Specify the metric aggregations you want to calculate for the field. | Multiple strings | No
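Putting the options above together, the following is a hedged sketch of a create request against `PUT _plugins/_rollup/jobs/<job_id>`, sent through the Python client's low-level transport. The index names, field names, and schedule are placeholder examples rather than values from this page, and the body shape is an assumption based on the rollup plugin's request format; verify it against the sample request in the full page.

```python
# Sketch: create a rollup job. nyc-taxi-data, rollup-nyc-taxi-data, and the
# field names below are illustrative placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),  # testing only
    use_ssl=True,
    verify_certs=False,
    ssl_show_warn=False,
)

job = {
    "rollup": {
        "enabled": True,
        "continuous": False,
        "description": "Example rollup job",
        "source_index": "nyc-taxi-data",
        "target_index": "rollup-nyc-taxi-data",
        "page_size": 200,
        "schedule": {
            "interval": {"start_time": 1602100553, "period": 1, "unit": "Days"}
        },
        "dimensions": [
            {
                "date_histogram": {
                    "source_field": "tpep_pickup_datetime",
                    "fixed_interval": "1h",
                    "timezone": "America/Los_Angeles",
                }
            },
            {"terms": {"source_field": "PULocationID"}},
        ],
        "metrics": [
            {
                "source_field": "passenger_count",
                "metrics": [{"avg": {}}, {"sum": {}}],
            }
        ],
    }
}

print(client.transport.perform_request(
    "PUT", "/_plugins/_rollup/jobs/example_rollup_job", body=job))
```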
 #### Sample response diff --git a/_im-plugin/index-transforms/index.md b/_im-plugin/index-transforms/index.md index 6814170c..38d886ef 100644 --- a/_im-plugin/index-transforms/index.md +++ b/_im-plugin/index-transforms/index.md @@ -29,7 +29,7 @@ If you don't have any data in your cluster, you can use the sample flight data w ### Step 1: Choose indices 1. In the **Job name and description** section, specify a name and an optional description for your job. -2. In the **Indices** section, select the source and target index. You can either select an existing target index or create a new one by entering a name for your new index. If you want to transform just a subset of your source index, choose **Add Data Filter**, and use the OpenSearch query DSL to specify a subset of your source index. For more information about the OpenSearch query DSL, see [query DSL]({{site.url}}{{site.baseurl}}/opensearch/query-dsl/). +2. In the **Indices** section, select the source and target index. You can either select an existing target index or create a new one by entering a name for your new index. If you want to transform just a subset of your source index, choose **Edit data filter**, and use the OpenSearch query DSL to specify a subset of your source index. For more information about the OpenSearch query DSL, see [query DSL]({{site.url}}{{site.baseurl}}/opensearch/query-dsl/). 3. Choose **Next**. ### Step 2: Select fields to transform diff --git a/_im-plugin/ism/api.md b/_im-plugin/ism/api.md index af139695..3f7e0d1c 100644 --- a/_im-plugin/ism/api.md +++ b/_im-plugin/ism/api.md @@ -2,7 +2,7 @@ layout: default title: ISM API parent: Index State Management -nav_order: 5 +nav_order: 20 --- # ISM API diff --git a/_im-plugin/ism/index.md b/_im-plugin/ism/index.md index 7a2a3da3..4202b849 100644 --- a/_im-plugin/ism/index.md +++ b/_im-plugin/ism/index.md @@ -31,14 +31,21 @@ To get started, choose **Index Management** in OpenSearch Dashboards. A policy is a set of rules that describes how an index should be managed. For information about creating a policy, see [Policies]({{site.url}}{{site.baseurl}}/im-plugin/ism/policies/). +You can use the JSON editor or visual editor to create policies. Compared to the JSON editor, the visual editor offers a more structured way of defining policies by separating the process into creating error notifications, defining ISM templates, and adding states. We recommend using the visual editor if you want to see predefined fields, such as which actions you can assign to a state or under what conditions a state can transition into a destination state. + +#### JSON editor + 1. Choose the **Index Policies** tab. 2. Choose **Create policy**. -3. In the **Name policy** section, enter a policy ID. -4. In the **Define policy** section, enter your policy. -5. Choose **Create**. +3. Choose **JSON editor**. +4. In the **Name policy** section, enter a policy ID. +5. In the **Define policy** section, enter your policy. +6. Choose **Create**. -After you create a policy, your next step is to attach this policy to an index or indices. -You can set up an `ism_template` in the policy so when you create an index that matches the ISM template pattern, the index will have this policy attached to it: +After you create a policy, your next step is to attach it to an index or indices. +You can set up an `ism_template` in the policy so when an index that matches the ISM template pattern is created, the plugin automatically attaches the policy to the index. + +The following example demonstrates how to create a policy that automatically gets attached to all indices whose names start with `index_name-`. ```json PUT _plugins/_ism/policies/policy_id @@ -55,6 +62,8 @@ PUT _plugins/_ism/policies/policy_id } ``` +If you have more than one template that matches an index pattern, ISM uses the priority value to determine which template to apply. + For an example ISM template policy, see [Sample policy with ISM template]({{site.url}}{{site.baseurl}}/im-plugin/ism/policies#sample-policy-with-ism-template).
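A quick way to confirm that template-based attachment worked is to create an index that matches the pattern and then call the explain API, as sketched below with the Python client. The index name follows the `index_name-*` pattern from the example above; the connection settings are quickstart-default assumptions.

```python
# Sketch: create a matching index, then ask ISM which policy got attached.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),  # testing only
    use_ssl=True,
    verify_certs=False,
    ssl_show_warn=False,
)

client.indices.create(index="index_name-000001")

# The explain response should include the policy ID once the template matches.
print(client.transport.perform_request(
    "GET", "/_plugins/_ism/explain/index_name-000001"))
```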
Older versions of the plugin include the `policy_id` in an index template, so when an index is created that matches the index template pattern, the index will have the policy attached to it: @@ -89,6 +98,7 @@ Make sure that the alias that you enter already exists. For more information abo After you attach a policy to an index, ISM creates a job that runs every 5 minutes by default to perform policy actions, check conditions, and transition the index into different states. To change the default time interval for this job, see [Settings]({{site.url}}{{site.baseurl}}/im-plugin/ism/settings/). +ISM does not run jobs if the cluster state is red. ### Step 3: Manage indices diff --git a/_im-plugin/ism/policies.md b/_im-plugin/ism/policies.md index e6bfa983..cc09eab1 100644 --- a/_im-plugin/ism/policies.md +++ b/_im-plugin/ism/policies.md @@ -347,7 +347,7 @@ Parameter | Description | Type | Required | Default ### allocation -Allocate the index to a node with a specific attribute. +Allocate the index to a node with a specific attribute set [like this]({{site.url}}{{site.baseurl}}/opensearch/cluster/#advanced-step-7-set-up-a-hot-warm-architecture). For example, setting `require` to `warm` moves your data only to "warm" nodes. The `allocation` operation has the following parameters: @@ -363,7 +363,7 @@ Parameter | Description | Type | Required "actions": [ { "allocation": { - "require": { "box_type": "warm" } + "require": { "temp": "warm" } } } ] @@ -558,9 +558,11 @@ The following sample template policy is for a rollover use case. PUT _index_template/ism_rollover { "index_patterns": ["log*"], - "settings": { + "template": { + "settings": { "plugins.index_state_management.rollover_alias": "log" - } + } + } } ``` @@ -586,6 +588,12 @@ The following sample template policy is for a rollover use case. } ``` +5. Verify if the policy is attached to the `log-000001` index: + + ```json + GET _plugins/_ism/explain/log-000001?pretty + ``` + ## Example policy The following example policy implements a `hot`, `warm`, and `delete` workflow. You can use this policy as a template to prioritize resources to your indices based on their levels of activity. diff --git a/_im-plugin/refresh-analyzer/index.md b/_im-plugin/refresh-analyzer/index.md index d9beb9bb..641d3484 100644 --- a/_im-plugin/refresh-analyzer/index.md +++ b/_im-plugin/refresh-analyzer/index.md @@ -1,7 +1,7 @@ --- layout: default title: Refresh search analyzer -nav_order: 40 +nav_order: 50 has_children: false redirect_from: /im-plugin/refresh-analyzer/ has_toc: false diff --git a/_im-plugin/security.md b/_im-plugin/security.md new file mode 100644 index 00000000..d5d48ac6 --- /dev/null +++ b/_im-plugin/security.md @@ -0,0 +1,41 @@ +--- +layout: default +title: Index management security +nav_order: 40 +has_children: false +--- + +# Index management security + +Using the security plugin with index management lets you limit non-admin users to certain actions. For example, you might want to set up your security such that a group of users can only read ISM policies, while others can create, delete, or change policies. + +All index management data are protected as system indices, and only a super admin or an admin with a Transport Layer Security (TLS) certificate can access system indices. For more information, see [System indices]({{site.url}}{{site.baseurl}}/security-plugin/configuration/system-indices). + +## Basic permissions + +The security plugin comes with one role that offers full access to index management: `index_management_full_access`. 
For a description of the role's permissions, see [Predefined roles]({{site.url}}{{site.baseurl}}/security-plugin/access-control/users-roles#predefined-roles). + +With security enabled, users not only need the correct index management permissions, but they also need permissions to execute actions on the indices involved. For example, if a user wants to use the REST API to attach a policy that executes a rollup job to an index named `system-logs`, they would need the permissions to attach a policy and execute a rollup job, as well as access to `system-logs`. + +Finally, with the exceptions of Create Policy, Get Policy, and Delete Policy, users also need the `indices:admin/opensearch/ism/managedindex` permission to execute [ISM APIs]({{site.url}}{{site.baseurl}}/im-plugin/ism/api). + +## (Advanced) Limit access by backend role + +You can use backend roles to configure fine-grained access to index management policies and actions. For example, users of different departments in an organization might view different policies depending on what roles and permissions they are assigned. + +First, ensure your users have the appropriate [backend roles]({{site.url}}{{site.baseurl}}/security-plugin/access-control/index/). Backend roles usually come from an [LDAP server]({{site.url}}{{site.baseurl}}/security-plugin/configuration/ldap/) or [SAML provider]({{site.url}}{{site.baseurl}}/security-plugin/configuration/saml/). However, if you use the internal user database, you can use the REST API to [add them manually]({{site.url}}{{site.baseurl}}/security-plugin/access-control/api#create-user). + +Use the REST API to enable the following setting: + +```json +PUT _cluster/settings +{ + "transient": { + "plugins.index_management.filter_by_backend_roles": "true" + } +} +``` + +With security enabled, only users who share at least one backend role can see and execute the policies and actions relevant to their roles. + +For example, consider a scenario with three users: `John` and `Jill`, who have the backend role `helpdesk_staff`, and `Jane`, who has the backend role `phone_operator`. `John` wants to create a policy that performs a rollup job on an index named `airline_data`, so `John` would need a backend role that has permissions to access that index, create relevant policies, and execute relevant actions, and `Jill` would be able to access the same index, policy, and job. However, `Jane` cannot access or edit those resources or actions. diff --git a/_includes/head_custom.html b/_includes/head_custom.html index 91ee8a17..1a18be03 100755 --- a/_includes/head_custom.html +++ b/_includes/head_custom.html @@ -6,3 +6,9 @@ {% endif %} + +{% if jekyll.environment == "development" %} + +{% else %} + +{% endif %} diff --git a/_layouts/default.html b/_layouts/default.html index d433719d..c5408662 100755 --- a/_layouts/default.html +++ b/_layouts/default.html @@ -57,6 +57,10 @@ layout: table_wrappers
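Closing the loop on the index management security section above: the backend-role filter is a single cluster settings call if you apply it from code. The following sketch uses the Python client and assumes admin credentials and the quickstart defaults; it is simply the scripted form of the `PUT _cluster/settings` example shown in that section.

```python
# Sketch: enable backend-role filtering for index management policies.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),  # testing only
    use_ssl=True,
    verify_certs=False,
    ssl_show_warn=False,
)

print(client.cluster.put_settings(body={
    "transient": {
        "plugins.index_management.filter_by_backend_roles": "true"
    }
}))
```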