- made sure that clear_scroll all gets converted to the corresponding shield cluster action in both the action filter and the transport filter (previously this happened only in the action filter): introduced the ShieldActionMapper, which converts action names based on an incoming request and its action name (will be useful for the analyze api too; see the sketch after this list)
- made sure that potential clear_scroll all errors contain the shield action name rather than the original es core one
- made it clearer that the only indices actions known not to be indices requests are scroll-related ones, which we assert on and grant. Everything else gets denied.
- made it clearer that the only indices request whose indices might end up resolved to an empty set is the analyze request, as its index is optional
- simplified the permissions check in Permission.Group by asserting that the index argument is not null
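For illustration, the mapping could look roughly like this (a sketch: the class shape and request accessors are assumptions; the action name is the `cluster:scroll/clear/_all` privilege that appears further down in these notes):

```java
import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.transport.TransportRequest;

// Sketch: derive the effective (shield) action name from the request contents
public class ShieldActionMapper {

    static final String CLEAR_SCROLL_ALL_ACTION = "cluster:scroll/clear/_all";

    public String action(String action, TransportRequest request) {
        if (request instanceof ClearScrollRequest) {
            ClearScrollRequest clearScroll = (ClearScrollRequest) request;
            // clearing *all* scroll contexts is a cluster level concern, so it
            // maps to the dedicated shield cluster action instead of the es core one
            if (clearScroll.getScrollIds().contains("_all")) {
                return CLEAR_SCROLL_ALL_ACTION;
            }
        }
        return action;
    }
}
```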
Original commit: elastic/x-pack-elasticsearch@7c01159b03
All three files are auto-loaded by shield when modified. The behaviour we agreed on is that a parse failure in any of these files does not prevent the node from starting. Instead we skip the records we failed to parse, as if they don't exist. This is how `roles.yml` is handled today, and this commit makes sure that `users`, `users_roles` and `role_mapping.yml` are aligned with this behaviour.
Also, the same behaviour applies when a file is modified at runtime, so it's consistent with node startup.
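For illustration, the per-record leniency amounts to something like this (a sketch under an assumed `user:hash` per-line format; class and helper names are hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;

class UsersFileParser {
    private static final Logger logger = Logger.getLogger("users-file");

    // A record that fails to parse is logged and skipped; the file as a
    // whole still loads and the node still starts.
    static Map<String, char[]> parse(Path path) throws IOException {
        Map<String, char[]> users = new HashMap<>();
        for (String line : Files.readAllLines(path)) {
            try {
                int sep = line.indexOf(':');
                if (sep <= 0) {
                    throw new IllegalArgumentException("expected [user:hash], got [" + line + "]");
                }
                users.put(line.substring(0, sep), line.substring(sep + 1).toCharArray());
            } catch (Exception e) {
                // skip the record as if it doesn't exist
                logger.warning("skipping invalid entry in [" + path + "]: " + e.getMessage());
            }
        }
        return users;
    }
}
```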
This commit also adds a lot of missing tests for both `LdapGroupToRoleMapper` and `ActiveDirectoryGroupToRoleMapper` classes.
Original commit: elastic/x-pack-elasticsearch@7fdd6bb5cc
Some requests are created locally by elasticsearch and are therefore not associated with a remote address (we only associate a remote address with a request that arrives remotely via the transport layer). An example of such a request is the periodic nodes info that elasticsearch collects. Requests that originate from the REST layer also create their transport requests locally.
This commit takes this behaviour into account and makes sure that we'll always log the host in the audit logs. We do that in the following way:
- `host` is replaced by two attributes: `origin_type` and `origin_address`. `origin_type` can be either `rest`, `transport` or `local_node`. `origin_address` holds the host address of the origin
- when no remote address is associated with the request, it's safe to assume it was created locally. We'll then output `origin_type=[local_node] origin_address=[<the localhost address>]`
- when a rest request gets in, we'll copy its remote address into the context of the request (the context of the rest request is copied to the context of the transport request)
- in the audit logs, we'll inspect the transport request and look for a `rest_host` in its context. If we find it, we'll log the entry under the `origin_type=[rest], origin_address=[<the remote rest address>]` attributes. This way, the origin of the request won't get "lost" and we'll still differentiate between transport hosts and rest hosts.
- if the request holds a remote address, it can only have come from the transport layer, so we'll output `origin_type=[transport] origin_address=[<remote address>]` (the three cases are sketched below)
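Putting the three cases together, the resolution boils down to something like this (a sketch; the `rest_host` context key comes from the description above, the exact message/context API is an assumption):

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.TransportMessage;

// Sketch: derive origin_type/origin_address for an audit log entry
static String origin(TransportMessage<?> message) {
    InetSocketAddress restAddress = message.getFromContext("rest_host");
    if (restAddress != null) {
        // the address was copied from the rest request into the transport
        // request context, so the rest origin isn't "lost"
        return "origin_type=[rest], origin_address=[" + restAddress.getAddress().getHostAddress() + "]";
    }
    TransportAddress remoteAddress = message.remoteAddress();
    if (remoteAddress != null) {
        // only requests arriving over the transport layer carry a remote address
        return "origin_type=[transport], origin_address=[" + remoteAddress + "]";
    }
    // no remote address: the request was created locally on this node
    return "origin_type=[local_node], origin_address=[" + InetAddress.getLoopbackAddress().getHostAddress() + "]";
}
```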
While at it, also changed the format of the log entries:
- lowercased the whole message (e.g. `ANONYMOUS_ACCESS` became `[anonymous_access]`) for consistency's sake
- introduced layer categorization for every entry to indicate whether it's `[transport]`, `[rest]` or `[ip_filter]` related. I reckon this will make it easier to parse the logs if one wishes to do so.
Fixes elastic/elasticsearch#550
Original commit: elastic/x-pack-elasticsearch@b84f0c5548
This commit moves the es core dependency to 1.4.2, which becomes the minimum version required from now on.
Changes were made accordingly to this decision, since we can now break backwards compatibility and assume es core >= 1.4.2.
Closes elastic/elasticsearch#562
Original commit: elastic/x-pack-elasticsearch@484b4a2528
This is to keep consistent with es core.
Also, where applicable, rephrased log messages to make them clearer.
Original commit: elastic/x-pack-elasticsearch@fae3188b17
Today we require that the `roles.yml` file be valid yaml and that all the role definitions in it be valid as well. If we can't fully parse this file, we simply throw an exception and ignore its content. After a long discussion, we decided that it would be much better to parse whatever we can out of this file and load the valid roles. The invalid roles will be skipped and immediately removed from the system.
This commit changes the way we parse `roles.yml`. We first break it down into mini single-role yml constructs and then parse each separately from the others. This way, failing to parse one role won't impact the others.
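Conceptually, the new parsing looks like this (a sketch; `Role` and `parseRole` stand in for the real single-role parsing):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

class RolesFileParser {
    private static final Logger logger = Logger.getLogger("roles-file");

    static List<Role> parseRoles(String rolesYml) {
        List<Role> roles = new ArrayList<>();
        // a new role definition starts at a non-indented line; everything
        // indented underneath belongs to the same role
        for (String roleYml : rolesYml.split("(?m)^(?=\\S)")) {
            if (roleYml.trim().isEmpty()) {
                continue;
            }
            try {
                roles.add(parseRole(roleYml)); // hypothetical single-role parser
            } catch (Exception e) {
                // the invalid role is skipped; the others still load
                logger.warning("skipping invalid role definition: " + e.getMessage());
            }
        }
        return roles;
    }

    static Role parseRole(String roleYml) {
        throw new UnsupportedOperationException("elided in this sketch");
    }
}

class Role { /* elided */ }
```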
Fixes elastic/elasticsearch#313
Original commit: elastic/x-pack-elasticsearch@31e3624594
Meaning, all loggers are now settings-aware, so all shield logs are now consistent with the rest of elasticsearch and follow the elasticsearch configuration and output format (printing out the node name by default).
Also:
- Changed the audit log to **not** be based on the elasticsearch settings as it needs to define its own format.
- Added the node name as a prefix to the audit logs by default (can be disabled by setting `shield.audit.logfile.prefix.node_name` to `false`)
- As part of this change, realms are now created with a `RealmConfig`. This construct holds the realm settings and the environment, and serves as a logger factory for all realm constructs (see the sketch after this list).
- The only exceptions to the logs are the ssl socket factories; the loggers there are only used for tests by calling `clear`. This behaviour will change in the future such that `clear` will be removed, and then there'll be no need for loggers in there.
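For illustration, the shape of such a construct might be (a sketch; field and method names are assumptions):

```java
import org.elasticsearch.common.logging.ESLogger;
import org.elasticsearch.common.logging.Loggers;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;

// Sketch of the RealmConfig construct: one object carrying everything a
// realm needs, including a settings-aware logger factory.
public class RealmConfig {
    private final Settings settings;        // realm-specific settings
    private final Settings globalSettings;  // node level settings
    private final Environment env;

    public RealmConfig(Settings settings, Settings globalSettings, Environment env) {
        this.settings = settings;
        this.globalSettings = globalSettings;
        this.env = env;
    }

    public Settings settings() { return settings; }
    public Environment env() { return env; }

    // loggers created here pick up the node's logging configuration, so
    // realm logs are formatted like the rest of elasticsearch
    public ESLogger logger(Class<?> clazz) {
        return Loggers.getLogger(clazz, globalSettings);
    }
}
```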
Fixes elastic/elasticsearch#446
Original commit: elastic/x-pack-elasticsearch@7a1058a54e
The setting was mistyped as 'smpt' when it should have been 'smtp', but
it is better to change it to 'email' to be consistent with the other settings.
Original commit: elastic/x-pack-elasticsearch@0e610d89b5
Both the LDAP and AD realms cache users. If the groups of a user change on the LDAP side, these changes will not be visible in shield until the relevant cached users are evicted from the cache. This poses a problem, especially when downgrading users in terms of their permissions (e.g. after downgrading them on LDAP, they keep their higher privileges until they're evicted from the cache). The default cache timeout today is 1 hour. For this reason, a new API is introduced which enables administrators to force cache evictions.
- Changed the default cache timeout to 20 minutes
- `ClearRealmCacheAction` was introduced (along with the relevant request and response constructs). This is a cluster action (see the sketch after this list)
- the corresponding rest action was introduced as well, under the `_shield/realm/{realm}/cache/clear` URI (where `{realm}` enables clearing a specific realm, or all realms when passing `_all`)
- With the introduction of an action, the `ActionModule` is no longer a node-only module - it's bound on both the node and the transport client.
- Added a new Cluster permission - `manage_shield`
- Also cleaned up the `Permission` and `AuthorizationService` classes
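Sketched out, the clearing itself could be as simple as this (a hypothetical `CachingRealm` hook; the real constructs differ):

```java
import java.util.Arrays;
import java.util.List;

// Sketch: each caching realm exposes an eviction hook that the
// ClearRealmCacheAction handler invokes on every node
interface CachingRealm {
    String name();
    void expireAll(); // evict all cached users
}

class RealmCacheCleaner {
    static void clearCaches(Iterable<CachingRealm> realms, String... realmNames) {
        List<String> requested = Arrays.asList(realmNames);
        boolean all = requested.isEmpty() || requested.contains("_all");
        for (CachingRealm realm : realms) {
            if (all || requested.contains(realm.name())) {
                realm.expireAll();
            }
        }
    }
}
```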
Original commit: elastic/x-pack-elasticsearch@c59e244435
This checkin ensures that all objects allocated by jndi requests are freed up. It does this by wrapping NamingEnumerations in a ClosableNamingEnumeration that is placed in a try-with-resources block.
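The wrapper is essentially this (a sketch; the actual class may differ in detail):

```java
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;

// NamingEnumeration already declares close(), but does not extend
// AutoCloseable; this wrapper bridges the two so it can sit in a
// try-with-resources block and free jndi resources deterministically.
class ClosableNamingEnumeration<T> implements NamingEnumeration<T>, AutoCloseable {
    private final NamingEnumeration<T> delegate;

    ClosableNamingEnumeration(NamingEnumeration<T> delegate) {
        this.delegate = delegate;
    }

    @Override public boolean hasMore() throws NamingException { return delegate.hasMore(); }
    @Override public T next() throws NamingException { return delegate.next(); }
    @Override public boolean hasMoreElements() { return delegate.hasMoreElements(); }
    @Override public T nextElement() { return delegate.nextElement(); }
    @Override public void close() throws NamingException { delegate.close(); }
}
```

Usage then becomes `try (ClosableNamingEnumeration<SearchResult> results = new ClosableNamingEnumeration<>(context.search(base, filter, controls))) { ... }`, which guarantees `close()` runs even when an exception is thrown mid-iteration.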
Original commit: elastic/x-pack-elasticsearch@8bed9585bd
A range is provided for the client profile and the test assumes that the first
port in the range is the port that the transport is bound to, which is not always
true. This change makes the test use the actual port that the client profile is
bound to.
Closes elastic/elasticsearch#531
Original commit: elastic/x-pack-elasticsearch@05962702ed
Hostname verification was previously randomized on a per node level, when it really
should have been a cluster level setting. This change makes hostname verification
randomization a cluster level setting.
Original commit: elastic/x-pack-elasticsearch@2a7da8aaf1
Since the system privilege also mapped to cluster/index monitoring actions, the access granted on those was only logged at `TRACE` level. This commit makes sure that these actions will be treated like any other action, and only keeps the *internal* system calls under `TRACE`.
Fixes elastic/elasticsearch#554
Original commit: elastic/x-pack-elasticsearch@ffb719f547
Adds a functional test to check for cluster level privileges
Closes elasticsearch/elasticsearch-shield-qa#11
Original commit: elastic/x-pack-elasticsearch@dd79614c24
We have two ways of resolving wildcards in shield:
1) expanding them to the matching authorized indices for the current user, which is used for every request that implements `IndicesRequest.Replaceable`, giving wildcards a different meaning in the context of shield, and replacing the resolved names in the request on the coordinating node.
2) resolving them as es core would by default: we do this only for `IndicesAliasesRequest`, since it's the only request that supports wildcards but doesn't allow its indices to be replaced. This is done on every node that processes the request; no replacement in the request takes place.
Shard/node level requests are a bit of a special case though, since they could potentially contain wildcards. They hold the original indices and indices options, thus they effectively support wildcards; but given that wildcards always get replaced on the coordinating node even before shard/node level requests get created, we are sure they will never contain wildcards. Hence we should never even try to explode their wildcards, since they can't contain any.
We should make the above distinctions clearer in code (sketched after this list) by:
1) having an assert that verifies the IndicesAliasesRequest special case
2) making sure that we explode wildcards as core would only for IndicesAliasesRequest, not touching shard/node level requests
3) adding an assert that verifies that shard/node level requests never contain wildcards
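In code, the three points look roughly like this (a sketch; `explodeWildcards` stands in for the es core style resolution):

```java
import java.util.Arrays;
import java.util.List;
import org.elasticsearch.action.IndicesRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
import org.elasticsearch.cluster.metadata.MetaData;

class WildcardHandling {

    static List<String> resolveIndices(IndicesRequest request, MetaData metaData) {
        if (request instanceof IndicesAliasesRequest) {
            // the one request that supports wildcards but can't have its
            // indices replaced: resolve them as es core would, on every node
            return explodeWildcards((IndicesAliasesRequest) request, metaData);
        }
        // shard/node level requests are created after wildcards were already
        // replaced on the coordinating node, hence they must not contain any
        assert !containsWildcards(request.indices()) : "shard/node level requests should never contain wildcards";
        return Arrays.asList(request.indices());
    }

    static boolean containsWildcards(String[] indices) {
        for (String index : indices) {
            if ("_all".equals(index) || index.contains("*")) {
                return true;
            }
        }
        return false;
    }

    static List<String> explodeWildcards(IndicesAliasesRequest request, MetaData metaData) {
        throw new UnsupportedOperationException("elided in this sketch");
    }
}
```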
Also, the process of going over the indices using MetaData#convertFromWildcards (what option 2 does) has one side effect besides wildcards resolution: it causes unnecessary exceptions in shield, exceptions that would be thrown by core anyway when needed, after the authorization process. This happens because we reuse code taken from es core that does wildcards resolution plus indices validation at once (even if there were no wildcards among the indices).
In general, all of the user requests that support wildcards (based on their indices options) should have their indices replaced on the coordinating node (the only exception being IndicesAliasesRequest, see elastic/elasticsearch#112), using shield specific code. Their subsequent (internal) shard level requests will never contain wildcards. That's why there is no need to go over all of the indices when there are no wildcards, which would cause some needless validation to happen as well.
Side note: the additional validation step caused tribe node failures with requests against indices belonging to multiple tribes (the exact purpose of the tribe node). Each tribe complained because it didn't have all of the indices in its own cluster state, which is perfectly fine (think of `tribe1` that holds `index1` and `tribe2` that holds `index2`, when searching against both indices from a tribe node). Although this commit makes sure that we don't throw any index missing exception for indices that are not available, all of the tribes will still need to authorize the action on all of the indices (`tribe1` requires privileges for `index2` and so does `tribe2` for `index1`), otherwise the shard level requests will get rejected.
Closes elastic/elasticsearch#541
Original commit: elastic/x-pack-elasticsearch@dd81ec0177
As es core requests don't implement `toString` at the moment, we can't just render them as they are. Instead, for transport messages we'll only render the class name, and for rest requests we'll render the content if there is one (a rest request without content will be rendered as an empty string).
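For illustration (a sketch; the exact rest request accessors vary by es version):

```java
import org.elasticsearch.rest.RestRequest;
import org.elasticsearch.transport.TransportMessage;

class AuditRendering {

    // transport messages: the class name is the only stable representation
    static String printMessage(TransportMessage<?> message) {
        return message.getClass().getSimpleName();
    }

    // rest requests: render the content when there is one, else empty string
    static String printRequest(RestRequest request) {
        return request.hasContent() ? request.content().toUtf8() : "";
    }
}
```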
Original commit: elastic/x-pack-elasticsearch@fb14b41a28
The SignatureService tried to access the system key file in the constructor, which could lead to endless loops. This PR moves the service into an AbstractLifecycleComponent to keep the constructor dumb.
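The resulting shape, roughly (a sketch; the key handling is elided):

```java
import javax.crypto.SecretKey;
import org.elasticsearch.common.component.AbstractLifecycleComponent;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.settings.Settings;

public class SignatureService extends AbstractLifecycleComponent<SignatureService> {

    private volatile SecretKey systemKey;

    @Inject
    public SignatureService(Settings settings) {
        super(settings); // the constructor stays dumb: no file access here
    }

    @Override
    protected void doStart() {
        // the system key file is read once the component actually starts,
        // breaking the constructor-time loop
        this.systemKey = readSystemKey(settings);
    }

    @Override
    protected void doStop() {}

    @Override
    protected void doClose() {}

    private SecretKey readSystemKey(Settings settings) {
        throw new UnsupportedOperationException("elided in this sketch");
    }
}
```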
Relates elastic/elasticsearch#517
Original commit: elastic/x-pack-elasticsearch@b1e5bfe98c
For LDAP hostname verification, we use the "default" SSLContext, which is cached in a map
and re-used. If a secure connection is established then the session is cached for use later. In
the tests, we sometimes run a test that connects without hostname verification, and an SSL session
is cached. Then when the hostname verification test runs, it uses the cached session and does
not perform hostname verification causing the test to fail. This fix changes the test to always
use a new SSLContext for each test.
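The fix amounts to building a fresh context per test (plain JSSE; key/trust manager setup elided):

```java
import java.security.SecureRandom;
import javax.net.ssl.KeyManager;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;

// A brand new SSLContext starts with an empty session cache, so no session
// cached by an earlier test can be resumed and skip hostname verification.
static SSLContext freshContext(KeyManager[] keyManagers, TrustManager[] trustManagers) throws Exception {
    SSLContext context = SSLContext.getInstance("TLS");
    context.init(keyManagers, trustManagers, new SecureRandom());
    return context;
}
```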
Closes elastic/elasticsearch#521
Original commit: elastic/x-pack-elasticsearch@46ffed34bb
When generating the system key, the permissions are set to owner read/write only in order to protect the system key. This only works if the underlying filesystem supports posix permissions.
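The relevant guard looks roughly like this (a sketch using the standard JDK file attribute API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFileAttributeView;
import java.nio.file.attribute.PosixFilePermissions;

// Restrict the generated key file to owner read/write, but only when the
// filesystem actually supports posix permissions.
static void protectKeyFile(Path keyFile) throws IOException {
    PosixFileAttributeView view = Files.getFileAttributeView(keyFile, PosixFileAttributeView.class);
    if (view != null) {
        view.setPermissions(PosixFilePermissions.fromString("rw-------"));
    }
    // on non-posix filesystems (e.g. default Windows ACLs) the file is left as-is
}
```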
Closes elastic/elasticsearch#516
Original commit: elastic/x-pack-elasticsearch@32d6e1d745
SSLEngine will throw various SSLExceptions when the application initiates a write prior
to the handshake being completed. The NettySecuredTransport marks a channel as ready
for use once it is connected, even though the handshake has not completed. A handler
has been added that performs the handshake and queues writes until the handshake has
completed. Additionally, fix SslMultiPortTests to always connect to the proper client
profile port.
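Independent of the netty specifics, the new handler follows a queue-until-handshake pattern, roughly (a generic sketch, not the actual handler):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: writes issued before the handshake completes are queued and
// flushed in order once the handshake signals completion.
final class HandshakeGate {
    private final Queue<Runnable> pending = new ArrayDeque<>();
    private boolean handshakeDone = false;

    synchronized void write(Runnable writeOp) {
        if (handshakeDone) {
            writeOp.run();
        } else {
            pending.add(writeOp); // too early: hold the write back
        }
    }

    synchronized void handshakeCompleted() {
        handshakeDone = true;
        Runnable op;
        while ((op = pending.poll()) != null) {
            op.run(); // flush queued writes in submission order
        }
    }
}
```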
Closes elastic/elasticsearch#390. Closes elastic/elasticsearch#393. Closes elastic/elasticsearch#394. Closes elastic/elasticsearch#395. Closes elastic/elasticsearch#414
Original commit: elastic/x-pack-elasticsearch@1bb3218373
Because elasticsearch core does not offer a way to retrieve the currently open search contexts across the cluster, there is no way to check whether a user is allowed to close a context when `_all` is specified.
This commit introduces a new cluster privilege called `cluster:scroll/clear/_all`, which allows clearing all scroll requests.
Closes elastic/elasticsearch#502
Original commit: elastic/x-pack-elasticsearch@5f5ce5de36
SSL and TLS do not require hostname verification, but without it they are susceptible
to man-in-the-middle attacks. This adds support for hostname verification for
transport client connections and for ldaps connections.
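For reference, the standard JSSE way to get this effect on a socket looks like this (a sketch; shield's actual wiring differs):

```java
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;

// Turn on endpoint identification: the peer's certificate must match the
// hostname we intended to connect to, defeating man-in-the-middle attacks.
static void enableHostnameVerification(SSLSocket socket) {
    SSLParameters params = socket.getSSLParameters();
    // "HTTPS" applies RFC 2818 style hostname matching during the handshake
    params.setEndpointIdentificationAlgorithm("HTTPS");
    socket.setSSLParameters(params);
}
```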
Closes elastic/elasticsearch#489
Original commit: elastic/x-pack-elasticsearch@c9380f0319
This extends the connect timeout on windows to give it enough time to complete. It moves the ldap read timeout test to openldap and active directory.
We now have three configurable timeouts. The timeout tests on active directory only work for TCP connect and TCP read, but not for LDAP search.
Original commit: elastic/x-pack-elasticsearch@ff97396f60
Because this leads to endless loops when starting elasticsearch, some components have been refactored into AbstractLifecycleComponents so that the exception throwing logic can be executed in the `doStart()` method.
Closes elastic/elasticsearch#505
Original commit: elastic/x-pack-elasticsearch@75d1fd358a
As no test has been marked with the @Network annotation, the test should not
try to connect to example.com (which needs to be resolved and thus requires an
internet connection). We can simply bind a local socket and run into the 1ms
timeout there.
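The pattern, sketched with a read timeout (an illustration of the idea, not the actual test):

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public void testTimeoutFiresLocally() throws Exception {
    // everything stays on loopback: no DNS lookup, no internet connection
    try (ServerSocket listener = new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
         Socket socket = new Socket()) {
        socket.connect(new InetSocketAddress(InetAddress.getLoopbackAddress(), listener.getLocalPort()), 1000);
        socket.setSoTimeout(1); // the 1ms timeout under test
        socket.getInputStream().read(); // nobody ever writes, so this must time out
        throw new AssertionError("expected a timeout");
    } catch (SocketTimeoutException expected) {
        // the 1ms timeout fired locally and deterministically
    }
}
```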
Original commit: elastic/x-pack-elasticsearch@2c2da90607
In order to be more flexible, this clean-up commit splits the
TransportService into a client one and a server one. As part of this
we can safely remove the slightly misused TransportFilters class.
Renamed shield.type from server to node, so we can differentiate between node2node and node2client communication.
Original commit: elastic/x-pack-elasticsearch@a3a2f9bf38