Commit Graph

1352 Commits

Author SHA1 Message Date
David Pilato 6c7a44ccd9 Fix test in mapper attachments plugin 2016-04-29 15:02:04 +02:00
David Pilato 2636703afa Merge branch 'master' into pr/attachments-add-test-forced-values 2016-04-29 14:55:42 +02:00
David Pilato 6ef81c5dcd S3 repositories credentials should be filtered
When working on #18008 I noticed while reading the code that we no longer filter `repositories.s3.access_key` and `repositories.s3.secret_key`.

Also fixed a typo in a REST test
2016-04-27 14:11:17 +02:00
Alexander Reelsen f71eb0b888 Version: Set version to 5.0.0-alpha2 2016-04-26 09:30:26 +02:00
Xu Zhang 3e4b470f83 Fix icu IndexScope setting 2016-04-22 15:03:02 -07:00
Ryan Ernst d12a4bb51d Merge pull request #17933 from rjernst/camelcase4
Remove camelCase support
2016-04-22 13:46:43 -07:00
xuzha cd527c5b92 Add support for customizing the rule file in ICU tokenizer
Lucene allows creating an ICUTokenizer with a special config argument that
enables customizing the rule-based break iterator by providing custom rule
files.

This commit enables this feature. Users can provide a list of RBBI rule
files to the ICU tokenizer.

closes #13146
2016-04-22 12:39:20 -07:00
Ryan Ernst 55388590c1 Remove camelCase support
Now that the current uses of magical camelCase support have been
deprecated, we can remove these in master (sans remaining issues like
BulkRequest). This change removes camel case support from ParseField,
query types, analysis, and settings lookup.

see #8988
2016-04-22 09:18:10 -07:00
Martijn van Groningen c5ad2e2865 Changed indexed scripts to be stored in the cluster state instead of the `.scripts` index.
Also added a max script size soft limit for stored scripts.

Closes #16651
2016-04-22 13:42:55 +02:00
Martijn van Groningen dd2184ab25 ingest: Streamline option naming for several processors:
* `rename` processor, renamed `to` to `target_field`
* `date` processor, renamed `match_field` to `field` and renamed `match_formats` to `formats`
* `geoip` processor, renamed `source_field` to `field` and renamed `fields` to `properties`
* `attachment` processor, renamed `source_field` to `field` and renamed `fields` to `properties`

Closes #17835
2016-04-21 13:40:43 +02:00
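As an illustration only, a minimal pipeline sketch using the renamed options above; the field names (`old_name`, `new_name`, `timestamp`, `ip`) are hypothetical:

```json
{
  "description": "sketch: renamed ingest processor options (hypothetical fields)",
  "processors": [
    { "rename": { "field": "old_name", "target_field": "new_name" } },
    { "date": { "field": "timestamp", "formats": ["ISO8601"] } },
    { "geoip": { "field": "ip", "properties": ["country_name", "city_name"] } }
  ]
}
```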
Jun Ohtani 9eb242a5fe Analyze API : Rename filters/token_filters/char_filter to filter/token_filter/char_filter
Closes #15189
2016-04-21 18:05:11 +09:00
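A hedged sketch of an `_analyze` request body using the renamed singular parameters; the analyzer components and sample text are arbitrary:

```json
{
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "char_filter": ["html_strip"],
  "text": "Some <b>sample</b> TEXT"
}
```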
Ryan Ernst 523b071836 Internal: Remove XContentBuilderString
This was previously used by xcontentbuilder to support camelCase.
However, it is no longer used, and can be replaced with just String.
2016-04-18 14:32:18 -07:00
Nik Everett ff9b28d806 Deprecate remaining readXYZ|writeXYZ methods 2016-04-18 16:19:45 -04:00
Adrien Grand d84c643f58 Use the new points API to index numeric fields. #17746
This makes all numeric fields including `date`, `ip` and `token_count` use
points instead of the inverted index as a lookup structure. This is expected
to perform worse for exact queries, but faster for range queries. It also
requires less storage.

Notes about how the change works:
 - Numeric mappers have been split into a legacy version that is essentially
   the current mapper, and a new version that uses points, eg.
   LegacyDateFieldMapper and DateFieldMapper.
 - Since new and old fields have the same names, the decision about which one
   to use is made based on the index creation version.
 - If you try to force using a legacy field on a new index or a field that uses
   points on an old index, you will get an exception.
 - IP addresses now support IPv6 via Lucene's InetAddressPoint and store them
   in SORTED_SET doc values using the same encoding (fixed length of 16 bytes
   and sortable).
 - The internal MappedFieldType that is stored by the new mappers does not have
   any of the points-related properties set. Instead, it keeps setting the index
   options when parsing the `index` property of mappings and does
   `if (fieldType.indexOptions() != IndexOptions.NONE) { // add point field }`
   when parsing documents.

Known issues that won't fix:
 - You can't use numeric fields in significant terms aggregations anymore since
   this requires document frequencies, which points do not record.
 - Term queries on numeric fields will now return constant scores instead of
   giving better scores to the rare values.

Known issues that we could work around (in follow-up PRs, this one is too large
already):
 - Range queries on `ip` addresses only work if both the lower and upper bounds
   are inclusive (exclusive bounds are not exposed in Lucene). We could either
   decide to implement it, or drop range support entirely and tell users to
   query subnets using the CIDR notation instead.
 - Since IP addresses now use a different representation for doc values,
   aggregations will fail when running a terms aggregation on an ip field on a
   list of indices that contains both pre-5.0 and 5.0 indices.
 - The ip range aggregation does not work on the new ip field. We need to either
   implement range aggs for SORTED_SET doc values or drop support for ip ranges
   and tell users to use filters instead. #17700

Closes #16751
Closes #17007
Closes #11513
2016-04-14 17:56:23 +02:00
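As a hedged illustration of the inclusive-bounds `ip` range queries mentioned above, a search request body against a hypothetical `client_ip` field:

```json
{
  "query": {
    "range": {
      "client_ip": {
        "gte": "192.168.0.0",
        "lte": "192.168.255.255"
      }
    }
  }
}
```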
Yannick Welsch 80cf9fc761 Add EC2 discovery tests to check permissions of AWS Java SDK (#17677) 2016-04-13 10:01:49 +02:00
Adrien Grand 3bf6f4076c Do not set analyzers on numeric fields.
When it comes to query parsing, either a field is tokenized and values go
through analysis with its search_analyzer, or it is not tokenized and the
raw string should be passed to termQuery(). Since numeric fields are not
tokenized but also declare a search analyzer, values would currently go through
analysis twice...
2016-04-12 17:47:29 +02:00
Adrien Grand 013acf9179 Remove MappedFieldType.value. #17557
This commit removes `MappedFieldType.value` and simplifies
`MappedFieldType.valueforSearch`. `valueforSearch` was used to post-process
values that come from stored fields (eg. to convert a long back to a string
representation of a date in the case of a date field) and also values that
are extracted from the source but only in the case of GET calls: it would
not be called when performing source filtering on search requests.

`valueforSearch` is now only called for stored fields, since values that are
extracted from the source should already be formatted as expected.
2016-04-12 09:12:56 +02:00
Adrien Grand 496c7fbd84 Upgrade Lucene 6 Release
* upgrades numerics to new Point format
* updates geo api changes
  * adds GeoPointDistanceRangeQuery as XGeoPointDistanceRangeQuery
  * cuts over to ES GeoHashUtils
2016-04-11 16:50:04 -05:00
Ryan Ernst 31ca8fa411 Merge branch 'master' into placeholder 2016-04-11 13:44:59 -07:00
Yannick Welsch b08d453a0a Fix EC2 Discovery settings (#17651)
Fixes two bugs introduced by the settings refactoring in #16602
2016-04-11 16:17:55 +02:00
Alexander Reelsen da19ddf3e6 Ingest Attachment: Allow to prevent base64 conversions by using raw bytes (#16601)
CBOR is natively supported in Elasticsearch and allows for byte arrays.
This means that by using CBOR the user can prevent base64 conversions
for the data being sent back and forth.

This PR adds support for extracting data from a byte array in addition to
a string. This also required adding a ByteArrayValueSource class.
2016-04-11 14:14:56 +02:00
Adrien Grand 42526ac28e Remove Settings.settingsBuilder.
We have both `Settings.settingsBuilder` and `Settings.builder` that do exactly
the same thing, so we should keep only one. I kept `Settings.builder` since it
is my preference and also the one we use in examples of the Java API.
2016-04-08 18:10:02 +02:00
David Pilato c6b1beb083 Add a test for forced values in mapper-attachments plugin
This PR just adds a new test where we check that forcing a value in the JSON document actually works as expected:

```json
{
     "file": {
        "_content": "BASE64"
        "_name": "12-240.pdf",
        "_language": "en",
        "_content_type": "pdf"
    }
}
```

Note that we don't support forcing all values. So sending:

```json
{
     "file": {
        "_content": "BASE64"
        "_name": "12-240.pdf",
        "_title": "12-240.pdf",
        "_keywords": "Div42 Src580 LGE Mechtech",
        "_language": "en",
        "_content_type": "pdf"
    }
}
```

Will have absolutely no effect on fields `title` and `keywords`.

Note that when `_language` is set, it only works if `index.mapping.attachment.detect_language` is set to `true`.

Related to https://discuss.elastic.co/t/mapper-attachments/46615/4
2016-04-08 10:07:21 +02:00
Chris Earle d97d5ebb8b Remove hostname from NetworkAddress.format
This removes the inconsistent output of IP addresses. The format was parsing-unfriendly and made it hard
to reason about API responses, such as those from _nodes.

With this change in place, it will never print the hostname as part of the default format, which has the
added benefit that it can be used consistently for URIs, which was not the case when the hostname might
appear at the front with "hostname/ip:port".
2016-04-07 17:27:59 -04:00
javanna b9f9b2e3ee Merge branch 'master' into enhancement/discovery_node_one_getter 2016-03-30 17:22:40 +02:00
javanna f8b5d1f5b0 Remove DiscoveryNodes#masterNodeId in favour of existing DiscoveryNodes#getMasterNodeId 2016-03-30 15:28:06 +02:00
Adrien Grand 068c788ec8 Disable fielddata on text fields by defaults. #17386
`text` fields will have fielddata disabled by default. Fielddata can still be
enabled on an existing index by setting `fielddata=true` in the mappings.
2016-03-30 14:35:32 +02:00
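As a hedged sketch of re-enabling fielddata under the new default, a create-index body with a hypothetical type and field (5.x mappings still use a type name):

```json
{
  "mappings": {
    "my_type": {
      "properties": {
        "body": {
          "type": "text",
          "fielddata": true
        }
      }
    }
  }
}
```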
javanna 8fc9dbbb99 Merge branch 'master' into enhancement/remove_node_client_setting 2016-03-29 14:27:04 +02:00
Clinton Gormley 579d976e90 The source parameter should not be defined in the delete-by-query REST spec 2016-03-29 11:45:20 +02:00
javanna 93ce36a198 separated attributes from node roles in DiscoveryNode
Node roles are now serialized as well; they are no longer part of the node attributes. DiscoveryNodeService takes care of dividing settings into attributes and roles. DiscoveryNode now always requires attributes and roles to be passed in separately.
2016-03-25 20:14:27 +01:00
Jason Tedor 7f0134e725 Revert "Merge pull request #16843 from xuzha/s3-encryption"
This reverts commit 37a183d9ed, reversing
changes made to 08903f1ed8.
2016-03-24 17:11:02 -04:00
Xu Zhang 38923b89c2 Update Format, add new settings into the setting test 2016-03-24 12:16:57 -07:00
Ryan Ernst 3adaf09675 Settings: Cleanup placeholder replacement
This change moves placeholder replacement to a pkg private class for
settings. It also adds a null check when calling replacement, as
settings objects can still contain null values, because we only prohibit
nulls on file loading. Finally, this cleans up file and stream loading a
bit to not have unnecessary exception wrapping.
2016-03-24 11:54:05 -07:00
Xu Zhang 7499e3aa4a Update and rebase the init implementation.
Also removes the MD5 checks from our side; the AWS S3 Java SDK already does the
check.
2016-03-24 11:21:40 -07:00
Nicolas Trésegnie ea78fd6560 Add client-side encryption
The Java Cryptography Extension (JCE) has to be installed to use this feature.
2016-03-24 11:13:37 -07:00
David Pilato 4b1ae331f0 Update after review 2016-03-23 17:32:51 +01:00
David Pilato e907b7c11e Check that S3 setting `buffer_size` is always lower than `chunk_size`
We can do a better job of checking `buffer_size` and `chunk_size` for S3 repositories.
For example, we know that:

* `buffer_size` should be more than `5mb`
* `chunk_size` should be no more than `5tb`
* `buffer_size` should be lower than `chunk_size`

Otherwise, setting `buffer_size` is useless.

For the record:

`chunk_size` is a Snapshot setting whatever the implementation is.
`buffer_size` is an S3 implementation setting.

Let's say that you are snapshotting a 500mb file. If you set `chunk_size` to `200mb`, then the snapshot service will call the S3 repository to snapshot 3 files with the following sizes:

* `200mb`
* `200mb`
* `100mb`

If you set `buffer_size` to `100mb` (AWS maximum size recommendation), the first file of `200mb` will be uploaded to S3 using the multipart feature in 2 chunks and the workflow is basically the following:

* create the multipart request and get back an `id` from AWS S3 platform
* upload part1: `100mb`
* upload part2: `100mb`
* "commit" the full upload using the `id`.

Closes #17244.
2016-03-23 10:39:54 +01:00
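As a hedged illustration of the sizes discussed above, a repository-creation request body for the S3 repository type; the bucket name is hypothetical:

```json
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket",
    "chunk_size": "200mb",
    "buffer_size": "100mb"
  }
}
```

With these values, a 500mb snapshot file would be stored as 200mb/200mb/100mb pieces, each 200mb piece uploaded as a two-part multipart upload, matching the workflow described above.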
Simon Willnauer 1988b8b387 [TEST] Reuse EsTestCase#createAnalysisService in KuromojiAnalysisTests 2016-03-22 13:45:20 +01:00
Jun Ohtani a9a0f262af Analysis Kuromoji: Add nbest option and NumberFilter
Add nbest_cost and nbest_examples parameters to KuromojiTokenizerFactory
Add KuromojiNumberFilterFactory
2016-03-22 20:09:56 +09:00
Ryan Ernst f71f0d6010 Revert "Build: Switch to maven-publish plugin"
This reverts commit a90a2b34fc.
2016-03-18 17:22:25 -07:00
Ryan Ernst 6af4c43c4f Merge pull request #17128 from rjernst/maven_publish
Build: Switch to maven-publish plugin
2016-03-17 11:53:50 -07:00
Simon Willnauer e91a141233 Prevent index level setting from being configured on a node level
Today we allow setting all kinds of index-level settings on the node level, which
is error prone and difficult to get right in a consistent manner.
For instance, if some analyzers are set up in a yaml config file, some nodes might
not have these analyzers and index creation then fails.

Nevertheless, this change allows some selected settings to be specified on a node level,
for instance:
 * `index.codec`, which is used in a hot/cold node architecture and whose value is really per node or per index
 * `index.store.fs.fs_lock`, which is also dependent on the filesystem a node uses

All other index-level settings must be specified on the index level. For existing clusters the index must be closed
and all settings must be updated via the API on each of the indices.

Closes #16799
2016-03-17 14:42:18 +01:00
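A hedged sketch of specifying an index-level setting at index-creation time, as the change above requires for most settings; `index.codec` is used purely for illustration (it is one of the few settings still allowed on the node level), and the codec value is an assumption:

```json
{
  "settings": {
    "index.codec": "best_compression"
  }
}
```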
Ryan Ernst a90a2b34fc Build: Switch to maven-publish plugin
The build currently uses the old maven support in gradle. This commit
switches to use the newer maven-publish plugin. This will allow future
changes, for example, easily publishing to artifactory.

An additional part of this change makes publishing of build-tools part
of the normal publishing, instead of requiring a separate upload step
from within buildSrc. That also sets us up for a follow up to enable
precommit checks on the buildSrc code itself.
2016-03-15 19:16:37 -07:00
Jason Tedor 618441aea3 Merge pull request #17088 from jasontedor/simplify-bootstrap-settings
Bootstrap does not set system properties
2016-03-15 19:25:16 -04:00
Jason Tedor 66ba044ec5 Use setting in integration test cluster config 2016-03-15 17:45:17 -04:00
Yannick Welsch f5e6db4090 Remove System.out.println and Throwable.printStackTrace from tests 2016-03-15 15:40:37 +01:00
Yannick Welsch d14ae5f8b6 Remove Python and Javascript Benchmark classes 2016-03-15 15:02:50 +01:00
David Pilato 84c862b825 Merge remote-tracking branch 'origin/master' 2016-03-15 09:25:26 +01:00
David Pilato a3bf57d116 Upgrade azure SDK to 0.9.3
We are currently using Azure SDK 0.9.0.

Azure's latest release is now 0.9.3 (released in February 2016).

Artifacts are on [maven central](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.microsoft.azure%22%20AND%20(a%3Aazure-serviceruntime%20OR%20a%3Aazure-servicebus%20OR%20a%3Aazure-svc-*))

Change log:

## 2016.2.18 Version 0.9.3

* Fix enum bugs in azure-svc-mgmt-websites

## 2016.1.26 Version 0.9.2

* Fix HTTP Proxy for Apache HTTP Client in Service Clients
* Key Vault: Fix KeyVaultKey to not attempt to load RSA Private Key

## 2016.1.8 Version 0.9.1

* Support HTTP Proxy
* Fix token expiration issue #557
* Service Bus: Add missing attributes: partitionKey, viaPartitionKey
* Traffic Manager: Update API version, add MinChildEndpoints for NestedEndpoints
* Media: Add support for Widevine (DRM) dynamic encryption

Closes #17042.
2016-03-15 09:18:34 +01:00
Simon Willnauer 345e988bbc Merge pull request #17072 from s1monw/add_backwards_rest_tests
Add infrastructure to run REST tests on a multi-version cluster

This change adds the infrastructure to run the REST tests on a multi-node
cluster that uses 2 different minor versions of elasticsearch. It doesn't implement
any dedicated BWC tests but rather leverages the existing REST tests.

Since we don't have a real version to test against, the tests use the current version
until the first minor / RC is released to ensure the infrastructure works.

Given the number of problems this change has already found, I think it's worth having this run with our test suite by default. The structure of this infra will likely change over time, but for now it's a step in the right direction. We will likely want to split it up into integTests and integBwcTests etc. so each plugin can have its own BWC tests, but that's left for future refactoring.
2016-03-15 09:17:43 +01:00
Areek Zillur c3078f4d65 adapt tests to use index uuid as folder name 2016-03-14 23:24:24 -04:00
Simon Willnauer 554bf2c282 [TEST] Test that all processors are available 2016-03-14 22:35:25 +01:00
Simon Willnauer 6f28c173e2 [TEST] Test that all processors are available 2016-03-14 21:42:37 +01:00
Adrien Grand 5596e31068 Upgrade to lucene-6.0.0-f0aa4fc. #17075 2016-03-14 07:58:52 +01:00
Jason Tedor 8a05c2a2be Bootstrap does not set system properties
Today, certain bootstrap properties are set and read via system
properties. This action-at-distance way of managing these properties is
rather confusing, and completely unnecessary. But another problem exists
with setting these as system properties. Namely, these system properties
are interpreted as Elasticsearch settings, not all of which are
registered. This leads to Elasticsearch failing to startup if any of
these special properties are set. Instead, these properties should be
kept as local as possible, and passed around as method parameters where
needed. This eliminates the action-at-distance way of handling these
properties, and eliminates the need to register these non-setting
properties. This commit does exactly that.

Additionally, today we use the "-D" command line flag to set the
properties, but this is confusing because "-D" is a special flag to the
JVM for setting system properties. This creates confusion because some
"-D" properties should be passed via arguments to the JVM (so via
ES_JAVA_OPTS), and some should be passed as arguments to
Elasticsearch. This commit changes the "-D" flag for Elasticsearch
settings to "-E".
2016-03-13 20:09:15 -04:00
David Pilato 9acb0bb28c Merge branch 'master' into pr/16598-register-filter-settings
# Conflicts:
#	core/src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java
#	core/src/main/java/org/elasticsearch/common/settings/IndexScopedSettings.java
#	core/src/main/java/org/elasticsearch/common/settings/Setting.java
2016-03-13 14:52:10 +01:00
Ryan Ernst 591fb8f028 Merge branch 'master' into cli-parsing 2016-03-11 10:45:05 -08:00
Yannick Welsch 04e55ecf6b Make logging message String constant to allow static checks 2016-03-11 10:30:59 +01:00
Yannick Welsch 718876a941 Fix wrong placeholder usage in logging statements 2016-03-11 10:30:59 +01:00
Ryan Ernst 42a6869bb1 Merge pull request #17059 from elastic/fix/16864-attachment-doctypes
Fix attachments plugins with docx
2016-03-10 17:27:02 -08:00
Ryan Ernst 2f3efc3fe1 Add doc and docx rest test to mapper attachment along with
getClassLoader permission
2016-03-10 13:28:19 -08:00
Ryan Ernst 51d87d94dc Add getClassLoader perm for tika in ingest 2016-03-10 11:17:25 -08:00
thefourtheye 304cbbbf31 fix redundant stack in comments 2016-03-11 00:31:38 +05:30
David Pilato 6deabac8e8 Can not extract text from Office documents (`.docx` extension)
Add REST test for:

* `.doc`
* `.docx`

The later fails with:

```
==> Test Info: seed=DB93397128B876D4; jvm=1; suite=1
Suite: org.elasticsearch.ingest.attachment.IngestAttachmentRestIT
  2> REPRODUCE WITH: gradle :plugins:ingest-attachment:integTest -Dtests.seed=DB93397128B876D4 -Dtests.class=org.elasticsearch.ingest.attachment.IngestAttachmentRestIT -Dtests.method="test {yaml=ingest_attachment/30_files_supported/Test ingest attachment processor with .docx file}" -Des.logger.level=WARN -Dtests.security.manager=true -Dtests.locale=bg -Dtests.timezone=Europe/Athens
FAILURE 4.53s | IngestAttachmentRestIT.test {yaml=ingest_attachment/30_files_supported/Test ingest attachment processor with .docx file} <<< FAILURES!
   > Throwable #1: java.lang.AssertionError: expected [2xx] status code but api [index] returned [400 Bad Request] [{"error":{"root_cause":[{"type":"parse_exception","reason":"Error parsing document in field [field1]"}],"type":"parse_exception","reason":"Error parsing document in field [field1]","caused_by":{"type":"tika_exception","reason":"Unexpected RuntimeException from org.apache.tika.parser.microsoft.ooxml.OOXMLParser@7f85baa5","caused_by":{"type":"illegal_state_exception","reason":"access denied (\"java.lang.RuntimePermission\" \"getClassLoader\")","caused_by":{"type":"access_control_exception","reason":"access denied (\"java.lang.RuntimePermission\" \"getClassLoader\")"}}}},"status":400}]
   >    at __randomizedtesting.SeedInfo.seed([DB93397128B876D4:53C706AB86441B2C]:0)
   >    at org.elasticsearch.test.rest.section.DoSection.execute(DoSection.java:107)
   >    at org.elasticsearch.test.rest.ESRestTestCase.test(ESRestTestCase.java:395)
   >    at java.lang.Thread.run(Thread.java:745)
```

Related to #16864
2016-03-10 10:57:59 +01:00
Simon Willnauer 7a53a396e4 Remove Unneeded @Inject annotations 2016-03-09 12:10:47 +01:00
Ryan Ernst 80ae2b0002 Fix more licenses 2016-03-09 00:10:59 -08:00
Ryan Ernst 1dafead2eb Fix precommit 2016-03-08 22:55:24 -08:00
Ryan Ernst fdce9d7c4d Merge branch 'master' into cli-parsing 2016-03-08 14:18:20 -08:00
Ryan Ernst e5c852f767 Convert bootstrapcli parser to jopt-simple 2016-03-08 13:39:37 -08:00
Simon Willnauer a40587b377 Merge pull request #16982 from s1monw/remove_old_version
Remove old and unsupported version constants

All versions <= 2.0 are not supported anymore. This commit removes all
uses of these versions.
2016-03-07 16:03:14 +01:00
Martijn van Groningen 82d01e4315 Added ingest info to node info API, which contains a list of available processors.
Internally, the put pipeline API uses this information from the node info API to validate that all processors specified in a pipeline exist on all nodes in the cluster.
2016-03-07 14:44:50 +01:00
Simon Willnauer fdfb0e56f6 Remove bw compat from size mapper 2016-03-07 12:48:02 +01:00
Simon Willnauer f96900013c Remove bw compat from murmur3 mapper 2016-03-07 12:44:53 +01:00
Robert Muir 54018a5d37 upgrade to lucene 6.0.0-snapshot-bea235f
Closes #16964

Squashed commit of the following:

commit a23f9d2d29220991aa498214530753d7a5a148c6
Merge: eec9c4e 0b0a251
Author: Robert Muir <rmuir@apache.org>
Date:   Mon Mar 7 04:12:02 2016 -0500

    Merge branch 'master' into lucene6

commit eec9c4e5cd11e9c3e0b426f04894bb2a6dae4f21
Merge: bc67205 675d940
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 13:45:00 2016 -0500

    Merge branch 'master' into lucene6

commit bc67205bdfe1526eae277ab7856fc050ecbdb7b2
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 09:56:31 2016 -0500

    fix test bug

commit a60723b007ff12d97b1810cef473bd7b553a0327
Author: Simon Willnauer <simonw@apache.org>
Date:   Fri Mar 4 15:35:35 2016 +0100

    Fix SimpleValidateQueryIT to put braces around boosted terms

commit ae3a49d7ba7ced448d2a5262e5d8ec98671a9090
Author: Simon Willnauer <simonw@apache.org>
Date:   Fri Mar 4 15:27:25 2016 +0100

    fix multimatchquery

commit ae23fdb88a8f6d3fb7ba60fd1aaf3fd72d899aa5
Author: Simon Willnauer <simonw@apache.org>
Date:   Fri Mar 4 15:20:49 2016 +0100

    Rewrite DecayFunctionScoreIT to be independent of the similarity used

    This test relied a lot on the term scoring and compared scores
    that are dependent on the similarity. This commit changes the base query
    to be a predictable constant score query.

commit 366c2d518c35d31251033f1b6f6a93f6e2ae327d
Author: Simon Willnauer <simonw@apache.org>
Date:   Fri Mar 4 14:06:14 2016 +0100

    Fix scoring in tests due to changes to idf calculation.

    Lucene 6 uses a different default similarity as well as a different
    way to calculate IDF. In contrast to older versions, lucene 6 uses docCount per field
    to calculate the IDF, not the # of docs in the index, to overcome the sparse-field
    cases.

commit dac99fd64ac2fa71b8d8d106fe68825e574c49f8
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 08:21:57 2016 -0500

    don't hardcode expected termquery score

commit 6e9f340ba49ab10eed512df86d52a121aa775b0f
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 08:04:45 2016 -0500

    suppress deprecation warning until migrated to points

commit 3ac8908424b3fdad44a90a4f7bdb3eff7efd077d
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 07:21:43 2016 -0500

    Remove invalid test: all commits have IDs, and it's illegal to do this.

commit c12976288124ad1a26467e7e848fb810548e7eab
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 07:06:14 2016 -0500

    don't test with unsupported back compat

commit 18bbfe76128570bc70883bf91ff4c44c82d27817
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 07:02:18 2016 -0500

    remove now invalid lucene 4 backcompat test

commit 7e730e572886f0ef2d3faba712e4256216ff01ec
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 06:58:52 2016 -0500

    remove now invalid lucene 4 backwards test

commit 244d2ab6868ba5ac9e0bcde3c2833743751a25ec
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 06:47:23 2016 -0500

    use 6.0 codec

commit 5f64d4a431a6fdaa1234adca23f154c2a1de8284
Author: Robert Muir <rmuir@apache.org>
Date:   Fri Mar 4 06:43:08 2016 -0500

    compile, javadocs, forbidden-apis, etc

commit 1f273cd62a7fe9ca8f8944acbbfc5cbdd3d81ccb
Merge: cd33921 29e3443
Author: Simon Willnauer <simonw@apache.org>
Date:   Fri Mar 4 10:45:29 2016 +0100

    Merge branch 'master' into lucene6

commit cd33921ac742ef9fb351012eff35f3c7dbda7264
Author: Robert Muir <rmuir@apache.org>
Date:   Thu Mar 3 23:58:37 2016 -0500

    fix hunspell dictionary loading

commit c7fdbd837b01f7defe9cb1c24e2ec65604b0dc96
Merge: 4d4190f d8948ba
Author: Robert Muir <rmuir@apache.org>
Date:   Thu Mar 3 23:41:53 2016 -0500

    Merge branch 'master' into lucene6

commit 4d4190fd82601aaafac6b8254ccb3edf218faa34
Author: Robert Muir <rmuir@apache.org>
Date:   Thu Mar 3 23:39:14 2016 -0500

    remove nocommit

commit 77ca69e288b1a41aa9595c921ed166c272a00ea8
Author: Robert Muir <rmuir@apache.org>
Date:   Thu Mar 3 23:38:24 2016 -0500

    clean up numericutils vs legacynumericutils

commit a466d696fbaad04b647ffbc0857a9439b583d0bf
Author: Robert Muir <rmuir@apache.org>
Date:   Thu Mar 3 23:32:43 2016 -0500

    upgrade spatial4j

commit 5412c747a8cfe638bacedbc8233163cb75cc3dc5
Author: Robert Muir <rmuir@apache.org>
Date:   Thu Mar 3 23:19:28 2016 -0500

    move to 6.0.0-snapshot-8eada27

commit b32bfe924626b87e540692375ece09e7c2edb189
Author: Adrien Grand <jpountz@gmail.com>
Date:   Thu Mar 3 11:30:09 2016 +0100

    Fix some test compile errors.

commit 6ccde35e9840b03c68d1a2cd47c7923a06edf64a
Author: Adrien Grand <jpountz@gmail.com>
Date:   Thu Mar 3 11:25:51 2016 +0100

    Current Lucene version is 6.0.0.

commit f62e1015d931b4cc04c778298a8fa1ba65e97ad9
Author: Adrien Grand <jpountz@gmail.com>
Date:   Thu Mar 3 11:20:48 2016 +0100

    Fix compile errors in NGramTokenFilterFactory.

commit 6837c6eabf96075f743649da9b9b52dd39611c58
Author: Adrien Grand <jpountz@gmail.com>
Date:   Thu Mar 3 10:50:59 2016 +0100

    Fix the edge ngram tokenizer/filter.

commit ccd7f070de5efcdfbeb34b9555c65c4990bf1ba6
Author: Adrien Grand <jpountz@gmail.com>
Date:   Thu Mar 3 10:42:44 2016 +0100

    The missing value is now accessible through a getter.

commit bd3b77f9b28e5b05daa3d49683a9922a6baf2963
Author: Adrien Grand <jpountz@gmail.com>
Date:   Thu Mar 3 10:41:51 2016 +0100

    Remove IndexCacheableQuery.

commit 05f3091c347aeae80eeb16349ac51d2b53cf86f7
Author: Adrien Grand <jpountz@gmail.com>
Date:   Thu Mar 3 10:39:43 2016 +0100

    Fix compilation of function_score queries.

commit 81cda79a2431ac78f56b0cc5a5765387f662d801
Author: Adrien Grand <jpountz@gmail.com>
Date:   Thu Mar 3 10:35:02 2016 +0100

    Fix compile errors in BlendedTermQuery.

commit 70994ce8dd1eca0b995870974a38e20f26f96a7b
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Mar 2 23:33:03 2016 -0500

    add bug ID

commit 29d4f1a71f36f646b5a6060bed3db019564a279d
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Mar 2 21:02:32 2016 -0500

    easy .store changes

commit 5e1a1e6fd665fa455e88d3a8987362fad5f44bb1
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Mar 2 20:47:24 2016 -0500

    cleanups mostly around boosting

commit 333a669ec6c305ada5645d13ed1da0e19ec1d053
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Mar 2 20:27:56 2016 -0500

    more simple fixes

commit bd5cd98a1e089c866b6b4a5e159400b110140ce6
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Mar 2 19:49:38 2016 -0500

    more easy fixes and removal of ancient cruft

commit a68f419ee47da5f9c9ce5b372f01d707e902474c
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Mar 2 19:35:02 2016 -0500

    cutover numerics

commit 4ca5dc1fa47dd5892db00899032133318fff3116
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Mar 2 18:34:18 2016 -0500

    fix some constants

commit 88710a17817086e477c6c021ec346d0534b7fb88
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Mar 2 18:14:25 2016 -0500

    Add spatial-extras jar as a core dependency

commit c8cd6726583e5ce3f546ed355d4eca037164a30d
Author: Robert Muir <rmuir@apache.org>
Date:   Wed Mar 2 18:03:33 2016 -0500

    update to lucene 6 jars
2016-03-07 04:12:23 -05:00
David Pilato e35032950e Merge branch 'master' into pr/16598-register-filter-settings 2016-03-05 11:37:03 +01:00
David Pilato 2bb3846d1f Update after review:
* remove `ClusterScope`
* rename `ClusterSettings` to `NodeSettings`
* rename `SettingsProperty` to `Property`
2016-03-04 16:53:24 +01:00
David Pilato 76719341dc Fix after merge 2016-03-04 13:24:39 +01:00
David Pilato c11cf3bf1f Merge branch 'master' into pr/16598-register-filter-settings
# Conflicts:
#	core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java
#	core/src/main/java/org/elasticsearch/common/settings/Setting.java
#	core/src/test/java/org/elasticsearch/common/settings/SettingTests.java
2016-03-04 12:23:10 +01:00
David Pilato f97ce3c728 Deprecate mapper-attachments plugin
See #16910
2016-03-04 11:49:12 +01:00
Simon Willnauer 5008694ba1 Remove support for legacy checksums
Elasticsearch 5.0 doesn't support indices with legacy checksums anymore.
The last time we wrote legacy checksums was in 1.3.0, which was already based
on lucene 4.9, which means that all files have CRC32 checksums.
All indices that Elasticsearch can read today must be written with
lucene version >= 4.8 anyway, so we can drop this layer of backwards
compatibility entirely.

Since we are close to upgrading to Lucene 6.0, we should get rid of this
in a more contained change than the lucene upgrade.
2016-03-03 22:58:18 +01:00
Lee Hinman 6adbbff97c Fix organization rename in all files in project
Basically a query-replace of "https://github.com/elasticsearch/" with "https://github.com/elastic/"
2016-03-03 12:04:13 -07:00
Adrien Grand eef19be072 Deprecate string in favor of text/keyword. #16877
This commit removes the ability to use string fields on indices created on or
after 5.0. Dynamic mappings now generate text fields by default for strings
but there are plans to also add a sub keyword field (in a future PR).

Most of the changes in this commit are just about replacing string with
keyword or text. Some tests have been removed because they existed because of
corner cases of string mappings like setting ignore-above on a text field or
enabling term vectors on a keyword field which are now impossible.

The plan is to remove strings entirely in 6.0.
2016-03-03 10:20:56 +01:00
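As a hedged sketch of the replacement field types on a 5.0 index, a mapping with hypothetical type and field names:

```json
{
  "mappings": {
    "my_type": {
      "properties": {
        "title": { "type": "text" },
        "status": { "type": "keyword" }
      }
    }
  }
}
```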
Daniel Mitterdorfer f70e5aca50 Merge remote-tracking branch 'danielmitterdorfer/simplify-azure-settings' 2016-03-03 10:02:35 +01:00
Daniel Mitterdorfer 52acf0e6e1 Use new settings infra to parse AzureStorageSettings
With this commit we simplify the parsing logic in AzureStorageSettings
by leveraging the new settings infrastructure.

Closes #16363
2016-03-03 10:01:14 +01:00
David Pilato 3f71c1d6a5 Replace `s -> s` by `Function.identity()` 2016-03-02 10:12:40 +01:00
David Pilato e4d9e46508 Fix merge with master 2016-03-02 09:55:09 +01:00
David Pilato 5fbf1b95dc Merge branch 'master' into pr/16598-register-filter-settings
# Conflicts:
#	core/src/main/java/org/elasticsearch/common/logging/ESLoggerFactory.java
#	core/src/main/java/org/elasticsearch/discovery/DiscoveryService.java
#	core/src/main/java/org/elasticsearch/discovery/DiscoverySettings.java
#	core/src/main/java/org/elasticsearch/http/HttpTransportSettings.java
#	plugins/repository-azure/src/main/java/org/elasticsearch/cloud/azure/storage/AzureStorageService.java
2016-03-02 09:43:53 +01:00
Jason Tedor aa8ee74c6c Bump Elasticsearch version to 5.0.0-SNAPSHOT
This commit bumps the Elasticsearch version to 5.0.0-SNAPSHOT in line
with the alignment of versions across the stack.

Closes #16862
2016-03-01 17:03:47 -05:00
Simon Willnauer 80e5c0acf8 Merge pull request #16860 from s1monw/issues/16485
Add setFactory permission to GceDiscoveryPlugin

This commit adds a missing permission and a simple test that
ensures we discover other nodes via a mock http endpoint.

Closes #16485
2016-03-01 08:55:43 +01:00
Nik Everett 95cc3e38fc Check test naming conventions on all modules
The big win here is catching tests that are incorrectly named and will
be skipped by gradle, providing a false sense of security.

The whole thing takes about 10 seconds on my Macbook Air, not counting
compiling the test classes, which seems worth it. Because this runs as
a gradle task with proper UP-TO-DATE handling it can be skipped if the
tests haven't been changed which should save some time.

I chose to keep this in test:framework rather than a new subproject of
buildSrc because it relies on ESIntegTestCase and doesn't introduce any
additional dependencies.
2016-02-29 16:31:49 -05:00
Simon Willnauer 948ee3ee3f Move keystore creation to gradle - this prevents committing a keystore to the source repo 2016-02-29 22:01:39 +01:00
Boaz Leskes 195b43d66e Remove DiscoveryService and reduce guice to just Discovery #16821
DiscoveryService was a bridge into the discovery universe. This is unneeded and we can just access discovery directly or do things in a different way.

One of those different ways is not having a dedicated discovery implementation for each of our discovery plugins but rather reusing ZenDiscovery.

UnicastHostProviders are now classified by discovery type, removing unneeded checks on plugins.

Closes #16821
2016-02-29 20:23:38 +01:00
Simon Willnauer ecca717339 Add setFactory permission to GceDiscoveryPlugin
This commit adds a missing permission and a simple test that
ensures we discover other nodes via a mock http endpoint.

Closes #16485
2016-02-29 15:48:16 +01:00
David Pilato 7a42014909 Upgrade Azure Storage client to 4.0.0
We are using `2.0.0` today but the Azure team now recommends:

```xml
<dependency>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-storage</artifactId>
    <version>4.0.0</version>
</dependency>
```

This new version fixes the timeout issues we have seen with azure storage, although #15080 adds timeout support.
Azure storage client 2.0.0 was not passing this value correctly when calling Azure services.

Note that the timeout is a server-side timeout and not a client-side timeout.
It means that a timeout will only be raised when:

* the upload of the blob is complete
* the azure service is not able to process the blob (and store it) within the given time range

In which case it will raise an exception which elasticsearch can deal with:

```
java.io.IOException
    at __randomizedtesting.SeedInfo.seed([91BC11AEF16E073F:6886FA5308FCE4D8]:0)
    at com.microsoft.azure.storage.core.Utility.initIOException(Utility.java:643)
    at com.microsoft.azure.storage.blob.BlobOutputStream.writeBlock(BlobOutputStream.java:444)
    at com.microsoft.azure.storage.blob.BlobOutputStream.access$000(BlobOutputStream.java:53)
    at com.microsoft.azure.storage.blob.BlobOutputStream$1.call(BlobOutputStream.java:388)
    at com.microsoft.azure.storage.blob.BlobOutputStream$1.call(BlobOutputStream.java:385)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: com.microsoft.azure.storage.StorageException: Operation could not be completed within the specified time.
    at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:89)
    at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:305)
    at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:175)
    at com.microsoft.azure.storage.blob.CloudBlockBlob.uploadBlockInternal(CloudBlockBlob.java:1006)
    at com.microsoft.azure.storage.blob.CloudBlockBlob.uploadBlock(CloudBlockBlob.java:978)
    at com.microsoft.azure.storage.blob.BlobOutputStream.writeBlock(BlobOutputStream.java:438)
    ... 9 more
```

The following code was used to test this against Azure platform:

```java
public void testDumb() throws URISyntaxException, StorageException, IOException, InvalidKeyException {
    String connectionString = "MY-AZURE-STRING";

    CloudStorageAccount storageAccount = CloudStorageAccount.parse(connectionString);
    CloudBlobClient client = storageAccount.createCloudBlobClient();
    client.getDefaultRequestOptions().setTimeoutIntervalInMs(1000);
    CloudBlobContainer container = client.getContainerReference("dumb");
    container.createIfNotExists();
    CloudBlockBlob blob = container.getBlockBlobReference("blob");

    File sourceFile = File.createTempFile("sourceFile", ".tmp");

    try {
        int fileSize = 10000000;

        byte[] buffer = new byte[fileSize];
        Random random = new Random();
        random.nextBytes(buffer);

        logger.info("Generate local file");
        FileOutputStream fos = new FileOutputStream(sourceFile);
        fos.write(buffer);
        fos.close();
        logger.info("End generate local file");

        FileInputStream fis = new FileInputStream(sourceFile);

        logger.info("Start uploading");
        blob.upload(fis, fileSize);
        logger.info("End uploading");

    }
    finally {
        if (sourceFile.exists()) {
            sourceFile.delete();
        }
    }
}
```

With 2.0.0, the above code was not raising any exception. With 4.0.0, the exception is now thrown correctly.

The default timeout is 5 minutes. See https://github.com/Azure/azure-storage-java/blob/master/microsoft-azure-storage/src/com/microsoft/azure/storage/core/Utility.java#L352-L375

Closes #12567.

Release notes from 2.0.0:

 * Removed deprecated table AtomPub support.
 * Removed deprecated constructors which take service clients in favor of constructors which take credentials.
 * Added support for "Add" permissions on Blob SAS.
 * Added support for "Create" permissions on Blob and File SAS.
 * Added support for IP Restricted SAS and Protocol SAS.
 * Added support for Account SAS to all services.
 * Added support for Minute and Hour Metrics to FileServiceProperties and added support for File Metrics to CloudAnalyticsClient.
 * Removed deprecated startCopyFromBlob() on CloudBlob. Use startCopy() instead.
 * Removed deprecated Credentials and StorageKey classes. Please use the appropriate methods on StorageCredentialsAccountAndKey instead.

 * Fixed a bug in table where a select on a non-existent field resulted in a null reference exception if the corresponding field in the TableEntity was not nullable.
 * Fixed a bug in table where JsonParser was automatically closing the response stream before it was completely drained causing socket exhaustion.
 * Fixed a bug in StorageCredentialsAccountAndKey.updateKey(String) which prevented valid keys from being set.
 * Added CloudBlobContainer.listBlobs(final String, final boolean) method.
 * Fixed a bug in blob where using AccessConditions on block blob uploads larger than 64MB done with the upload* methods or block blob uploads done with openOutputStream would fail if the blob did not already exist.
 * Added support for setting a proxy per request. Proxy can be set on an OperationContext instance and will be used when that instance is passed to the request method.

 * Added support for SAS to the Azure File service.
 * Added support for Append Blob.
 * Added support for Access Control Lists (ACL) to File Shares.
 * Added support for getting and setting of CORS rules to File service.
 * Added support for ShareStats to File Shares.
 * Added support for copying an Azure File to another Azure File or a Block Blob asynchronously, and aborting Azure File copy operations asynchronously.
 * Added support for copying a Blob to an Azure File asynchronously.
 * Added support for setting a maximum quota property on a File Share.
 * Removed deprecated AuthenticationScheme and its getter and setter. In the future only SharedKey will be used.
 * Removed deprecated getter/setters for all request option properties on the service clients. Please use the default request options getter/setters instead.
 * Removed getSubDirectoryReference() for blob directories and file directories. Use getDirectoryReference() instead.
 * Removed getEntityClass() in TableQuery. Please use getClazzType() instead.
 * Added client-side verification for lease duration and break periods.
 * Deprecated the setters in table for timestamp as this property is only modifiable by the service.
 * Deprecated startCopyFromBlob() on CloudBlob. Use startCopy() instead.
 * Deprecated the Credentials and StorageKey classes. Please use the appropriate methods on StorageCredentialsAccountAndKey instead.
 * Deprecated constructors which take service clients in favor of constructors which take credentials.
 * Fixed a bug where the DateBackwardCompatibility flag was not applied if set on the CloudTableClient default request options.
 * Changed library behavior to retry all exceptions thrown when parsing a response object.
 * Changed behavior to stop removing query parameters passed in with the resource URI if that URI contains a SAS token. Some query parameters such as comp, restype, snapshot and api-version will still be removed.
 * Added support for logging StringToSign to SharedKey and SAS.
 * **Added a connect timeout to prevent hangs when establishing the network connection.**
 * **Made performance enhancements to the BlobOutputStream class.**

 * Fixed a bug where maximum execution time was ignored for file, queue, and table services.
 * **Changed the socket timeout to be set to the service side timeout plus 5 minutes when maximum execution time is not set.**
 * **Changed the socket timeout to default to 5 minutes rather than infinite when neither service side timeout or maximum execution time are set.**
 * Fixed a bug where MD5 was calculated for commitBlockList even though UseTransactionalMD5 was set to false.
 * Fixed a bug where selecting fields that did not exist returned an error rather than an EntityProperty with a null value.
 * Fixed a bug where table entities with a single quote in their partition or row key could be inserted but not operated on in any other way.

 * Fixed a bug for all listing API's where next() would sometimes throw an exception if hasNext() had not been called even if there were more elements to iterate on.
 * Added sequence number to the blob properties. This is populated for page blobs.
 * Creating a page blob sets its length property.
 * Added support for page blob sequence numbers and sequence number access conditions.
 * Fixed a bug in abort copy where the lease access condition was not sent to the service.
 * Fixed an issue in startCopyFromBlob where if the URI of the source blob contained certain non-ASCII characters they would not be encoded appropriately. This would result in Authorization failures.
 * Fixed a small performance issue in XML serialization.
 * Fixed a bug in BlobOutputStream and FileOutputStream where flush added data to a request pool rather than immediately committing it to the Azure service.
 * Refactored to remove the blob, queue, and file package dependency on table in the error handling code.
 * Added additional client-side logging for REST requests, responses, and errors.

Closes #15976.
2016-02-29 15:00:34 +01:00
Martijn van Groningen c7b626c615 updated SHAs 2016-02-28 13:20:39 +01:00
David Pilato d77daf3861 Use a SettingsProperty.Dynamic for dynamic properties 2016-02-28 11:06:45 +01:00
David Pilato 31b5e0888f Use a SettingsProperty enumSet
Instead of modifying methods each time we need to add a new behavior for settings, we can simply pass `SettingsProperty... properties`.

`SettingsProperty` could then be defined as:

```
public enum SettingsProperty {
  Filtered,
  Dynamic,
  ClusterScope,
  NodeScope,
  IndexScope
 // HereGoesYours;
}
```

The setting code then becomes much more flexible.

TODO: Note that we need to validate the SettingsProperty values added to a Setting, as some of them might be mutually exclusive.
2016-02-28 00:48:04 +01:00
Sylwester Lachiewicz 3af735c69e Update MaxMind geoip2 version to 2.6
Update to align with #16801 jackson 2.7.1
2016-02-27 12:57:24 +01:00
Nik Everett ba5be0332d Remove optional logger wrappers
Removes all our logger wrappers except the wrapper for log4j1.2. If you
depend on Elasticsearch's jar in your application you'll need to declare
log4j 1.2 and/or some bridge to your favorite logger.

We did this to simplify our builds and code. No more commons-logging-like
log implementation sniffing. No more optional dependency hacks in gradle.

We might one day want to use j.u.l instead of log4j. If we do want that
we can recover its wrapper by studying this commit. We didn't go directly
to j.u.l in this commit because that is a bigger change. Our logging
configuration is based on log4j1.2 and people are used to it. So it'd
be a much more fraught breaking change to do that conversion.
2016-02-26 16:41:07 -05:00
Jason Tedor d94e391e71 Use System#lineSeparator and not system property
This commit removes a use of the system property "line.separator" and
replaces it with a dedicated method that provides the same value.

Closes #16776
2016-02-25 12:22:57 -05:00