Adds an initial, limited implementation of geo features to SQL. This implementation is based on the [OpenGIS® Implementation Standard for Geographic information - Simple feature access](http://www.opengeospatial.org/standards/sfs), which is the current standard for GIS system implementation. This effort concentrates on the SQL option, AKA ISO 19125-2.
Queries that are supported as a result of this initial implementation:
Metadata commands
- `DESCRIBE table` - returns the correct column type `GEOMETRY` for geo shapes and geo points.
- `SHOW FUNCTIONS` - returns a list that includes supported `ST_` functions
- `SYS TYPES` and `SYS COLUMNS` display the correct types `GEO_SHAPE` and `GEO_POINT` for geo shapes and geo points, respectively.
Returning geoshapes and geopoints from Elasticsearch
- `SELECT geom FROM table` - returns geoshapes and geopoints as libs/geo objects over JDBC or as WKT strings in the console.
- `SELECT ST_AsWKT(geom) FROM table;` and `SELECT ST_AsText(geom) FROM table;` - return geoshapes and geopoints in their WKT representation.
Using geopoints in Elasticsearch
- The following functions will be supported for geopoints in queries, sorting and aggregations: `ST_GeomFromText`, `ST_X`, `ST_Y`, `ST_Z`, `ST_GeometryType`, and `ST_Distance`. In most cases, when used in queries, sorting and aggregations, these functions are translated into scripts. These functions can be used in the SELECT clause for both geopoints and geoshapes (see the sketch after this list).
- `SELECT * FROM table WHERE ST_Distance(ST_GeomFromText('POINT(1 2)'), point) < 10;` - returns all records for which `point` is located within 10m of `POINT(1 2)`. In this case the WHERE clause is translated into a range query.
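A rough sketch of how these functions can be combined (the `city` table and its `name`/`location` columns are illustrative assumptions, not part of this change):
```
-- Illustrative only: table and column names are assumptions.
SELECT
  name,
  ST_X(location) AS lon,
  ST_Y(location) AS lat,
  ST_Distance(location, ST_GeomFromText('POINT(1 2)')) AS dist
FROM city
ORDER BY ST_Distance(location, ST_GeomFromText('POINT(1 2)'))
LIMIT 10;
```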
Limitations:
Geoshapes cannot be used in queries, sorting and aggregations as part of this initial effort. In order to fully take advantage of geoshapes we would need to have access to geoshape doc values, which is coming in #37206. `ST_Z` cannot be used on geopoints in queries, sorting and aggregations since we don't store altitude in geo_point doc values.
Relates to #29872
Backport of #42031
* [ML] adding pivot.size option for setting paging size
* Changing field name to address PR comments
* fixing ctor usage
* adjust hlrc for field name change
This commit slightly reworks the recommendations in the docs about setting the
heap size:
* the "rules of thumb" are actually instructions that should be followed
* the reason for setting `Xmx` to no more than 50% of the available RAM is more subtle than just
leaving space for the filesystem cache
* it is normal to see Elasticsearch using more memory than `Xmx`
* replace `cutoff` and `limit` with `threshold` since all three terms are used
interchangeably
* since we recommend setting `Xmx` equal to `Xms`, avoid talking about setting
`Xmx` in isolation (see the sketch below)
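For reference, a minimal sketch of what this looks like in `jvm.options` (the 4g value is an arbitrary assumption, not a recommendation from this change):
```
# Set the minimum and maximum heap size to the same value, and keep it well
# below the machine's total RAM so the filesystem cache still has room.
-Xms4g
-Xmx4g
```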
Relates #41954
This processor uses the Lucene HTMLStripCharFilter class to remove HTML
entities from a field. It complements the existing char filter, making it
possible to store the stripped version as well.
Note that the character filter replaces tags with a newline, so the
processed output will look slightly different from the incoming HTML
with regard to newlines.
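A sketch of how the processor might be used in an ingest pipeline (the pipeline name and field names are assumptions for illustration):
```
PUT _ingest/pipeline/strip_html_example
{
  "description": "Strip HTML from body_html and keep the result in body_text",
  "processors": [
    {
      "html_strip": {
        "field": "body_html",
        "target_field": "body_text"
      }
    }
  ]
}
```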
The `bulk` threadpool is now called `write`, but `bulk` is still
used in some examples. This commit fixes that.
Also, the only way `threadpool.bulk.write: 30` is a valid increase in the size
of this threadpool is if you have 29 processors, which is an odd number of
processors to have. This commit removes the "more threads" bit.
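For reference, a hedged sketch of how the renamed threadpool is configured today in `elasticsearch.yml` (the values shown are arbitrary assumptions, not recommendations from this change):
```
# The threadpool is configured under `thread_pool.write`, not `thread_pool.bulk`.
thread_pool.write.size: 30
thread_pool.write.queue_size: 1000
```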
In cases where node names and transport addresses can be muddled, it is unclear
that `cluster.initial_master_nodes: master-a:9300` means to look for a node
called `master-a:9300` rather than a node called `master-a` with transport port
`9300`. This commit adds docs to that effect.
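A minimal sketch of the unambiguous form (node names are assumptions): each entry is matched against a node name as a whole and is not split into a host and a transport port:
```
cluster.initial_master_nodes:
  - master-a
  - master-b
```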
Today Elasticsearch accepts, but silently ignores, port ranges in the
`discovery.seed_hosts` setting:
```
discovery.seed_hosts: 10.1.2.3:9300-9400
```
Silently ignoring part of a setting like this is trappy. With this change we
reject seed host addresses of this form.
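For contrast, a sketch of the form that remains accepted (addresses are illustrative): each seed host names at most a single port:
```
discovery.seed_hosts:
  - 10.1.2.3:9300
  - 10.1.2.4
```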
Closes #40786
Backport of #41404
The settings listed under the "Default values for TLS/SSL settings"
heading are not actual settings; rather, they are common suffixes that
are used for settings that exist in a variety of contexts.
This commit changes the way they are presented to reduce this
confusion.
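For example, a sketch of the same suffix appearing under two different contexts (the values are placeholders):
```
xpack.security.transport.ssl.key: transport.key
xpack.security.http.ssl.key: http.key
```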
Backport of: #41779
The CircuitBreaker was introduced as a means of preventing a
`StackOverflowError` during the build of the AST by the parser.
The ANTLR4 grammar causes weird behaviour for a Parser Listener:
the `enterEveryRule()` method is often called with a different parsing
context than the respective `exitEveryRule()`. This makes it difficult
to keep track of the tree's depth, and a custom Map was used as an
attempt at matching the contexts as they are encountered during `enter`
and during `exit` of the rules.
This approach had 2 important drawbacks:
1. It's hard to maintain this custom Map as the grammar changes.
2. The CircuitBreaker could often lead to false positives, which caused
valid queries to return an exception and prevented them from executing.
So, this completely removes the CircuitBreaker, which is replaced by
simple handling of the `StackOverflowError`.
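A rough sketch of what the simpler handling looks like (class, field and method names are assumptions based on this description, not a verbatim copy of the change):
```
// Hypothetical sketch: instead of tracking tree depth with a circuit breaker,
// the parser invocation is wrapped so that a StackOverflowError raised while
// building the AST surfaces as an ordinary parsing failure.
private LogicalPlan invokeParser(String sql, SqlBaseParser parser) {
    try {
        return astBuilder.visit(parser.singleStatement());
    } catch (StackOverflowError e) {
        throw new ParsingException(
            "SQL statement is too large, causing stack overflow when generating the parsing tree");
    }
}
```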
Fixes: #41471
(cherry picked from commit 1559a8e2dbd729138b52e89b7e80264c9f4ad1e7)
The `path_match` and `path_unmatch` parameters in dynamic templates match on
object fields in addition to leaf fields. This is not obvious and can cause
surprising errors when a template is meant for a leaf field, but there are
object fields that match. This PR adds a note to the docs to describe the
current behavior.
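As an illustration of the behaviour described in the note (the index, template and field names are assumptions), a template like this matches object fields too:
```
PUT my-index
{
  "mappings": {
    "dynamic_templates": [
      {
        "names_as_keywords": {
          "path_match": "name.*",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ]
  }
}
```
If a document later introduces an object field such as `name.details`, the template still matches it, and applying the `keyword` mapping to an object produces the kind of surprising error the note describes.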
We received some feedback that it is not completely clear why `_doc` is present
in the typeless document APIs:
> _"The new index APIs are PUT {index}/_doc/{id} in case of explicit ids and POST
{index}/_doc for auto-generated ids."_ Isn't this contradicting? Specifying
*types in requests is deprecated*, but we are supposed to still mention *_doc*
in write requests?
This PR updates the 'removal of types' documentation to try to clarify that
`_doc` now represents the endpoint name, as opposed to a type.
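For example (the index name is an assumption), both typeless forms keep `_doc` purely as the endpoint name:
```
PUT my-index/_doc/1
{ "message": "indexed with an explicit id" }

POST my-index/_doc
{ "message": "indexed with an auto-generated id" }
```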
Add a TIP on how to use CASE to achieve custom bucketing
with GROUP BY.
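A sketch of the kind of query the TIP describes (table and column names are illustrative assumptions):
```
SELECT
  CASE WHEN salary < 50000  THEN 'low'
       WHEN salary < 100000 THEN 'medium'
       ELSE 'high'
  END AS salary_range,
  COUNT(*) AS cnt
FROM emp
GROUP BY CASE WHEN salary < 50000  THEN 'low'
              WHEN salary < 100000 THEN 'medium'
              ELSE 'high'
         END;
```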
Follows: #41349
(cherry picked from commit eb5f5d45533c5f81e57dd0221d902a73ec400098)
As negative scores will now cause an error, and it is easy to
accidentally produce negative scores with some of the built-in modifiers
(especially `ln` and `log`), this adjusts the documentation to more
strongly recommend the use of `ln1p` and `log1p` instead.
Also corrects some awkward formatting on the note sections following the
table.
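For illustration (the index and field names are assumptions), the safer modifiers look like this in a `function_score` query; `log1p` takes the logarithm of `1 + value`, so it stays non-negative for any non-negative field value:
```
GET my-index/_search
{
  "query": {
    "function_score": {
      "field_value_factor": {
        "field": "likes",
        "modifier": "log1p",
        "factor": 1.2
      }
    }
  }
}
```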
Today's `docker-compose` docs are missing the `discovery.seed_hosts` config on
one of the nodes. With today's configuration the cluster can still form the
first time it is started, because `cluster.initial_master_nodes` requires both
nodes to bootstrap the cluster, which ensures that each discovers the other.
However, if `es02` is elected master, it will remove `es01` from the voting
configuration, and when restarted it will form a cluster on its own without
needing to do any discovery. Meanwhile `es01` doesn't know how to find `es02`
after a restart, so it will be unable to join this cluster.
This commit fixes this by adding the missing configuration.
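A sketch of the resulting configuration (service names and the discovery-related environment entries are assumptions based on this description; images, ports and other settings are elided):
```
services:
  es01:
    environment:
      - node.name=es01
      - discovery.seed_hosts=es02
      - cluster.initial_master_nodes=es01,es02
  es02:
    environment:
      - node.name=es02
      - discovery.seed_hosts=es01
      - cluster.initial_master_nodes=es01,es02
```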
Relates #41394, which fixes a different `docker-compose.yml` in the same way.