This paves the way for making this a dynamic setting in the future, which could
also be shared across the cluster instead of having
to use (and distribute) files.
Another change is that the order of `deny` and `allow` rules no longer matters:
`allow` always wins over `deny`.
The last change is that `all` is now `_all`, in order to align with the
rest of Elasticsearch.
Documentation has been updated accordingly.
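To illustrate the precedence rule, here is a minimal sketch (a hypothetical class, not the actual Shield IP filter) in which `allow` entries are always checked before `deny` entries, so declaration order cannot matter:

```java
import java.net.InetAddress;
import java.util.List;

// Hypothetical illustration: allow rules are evaluated before deny rules,
// so the order in which the two lists were declared never matters.
class IpFilterRulesSketch {
    private final List<InetAddress> allow;
    private final List<InetAddress> deny;

    IpFilterRulesSketch(List<InetAddress> allow, List<InetAddress> deny) {
        this.allow = allow;
        this.deny = deny;
    }

    boolean accepts(InetAddress remote) {
        if (allow.contains(remote)) {
            return true;  // allow always wins over deny
        }
        if (deny.contains(remote)) {
            return false;
        }
        return true;      // matched by neither list: accepted by default
    }
}
```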
Original commit: elastic/x-pack-elasticsearch@daa0b18343
Nuked the security filter and separated the different filters into their own constructs:
- Added a shield action package & module that is responsible for binding the shield action filter (and will later hold all shield actions); see the module sketch below
- Added a shield rest package & module that is responsible for binding the shield rest filter and registering all the rest actions
- Moved the client & server transport filters to the transport package
General cleanup:
- Code formatting
- Moved `ShieldPlugin` to the top level package `org.elasticsearch.shield`
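As a rough illustration of the per-concern modules, a minimal Guice module sketch (class names are stand-ins, not the actual Shield sources):

```java
import com.google.inject.AbstractModule;

// Illustrative only: a module dedicated to action-related bindings,
// mirroring the split into separate shield action / rest / transport modules.
public class ShieldActionModuleSketch extends AbstractModule {

    // stand-in for the real action filter class bound by the module
    public static class ShieldActionFilter {}

    @Override
    protected void configure() {
        // bind the filter eagerly as a singleton so one instance
        // intercepts every action executed on the node
        bind(ShieldActionFilter.class).asEagerSingleton();
    }
}
```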
Original commit: elastic/x-pack-elasticsearch@d652041860
Now, on first successful authentication, we put the user in the message header so it'll be sent with any subsequent cluster-internal requests (e.g. shard-level search), avoiding re-authentication on every node in the cluster. We can do that now because, with multi-binding transport, we can guarantee isolation of internal cluster traffic from client communication. While it is generally safe to transmit, the user header that is sent between the nodes is still signed using the `system_key` as yet another security layer.
As part of this change, also added/changed:
- A new audit log entry: anonymous access for REST requests.
- Changed how the system user is assumed. Previously, the system user was assumed on the receiving node when no user was associated with the request. Now the system user is assumed on the sending node, meaning that when a node sends a system-originated request, that request initially isn't associated with any user; Shield picks those requests up, attaches the system user to them, and sends the user along with the request. This has two advantages: 1) it's safer to assume the system user locally, where the request originates; 2) it prevents nodes without Shield from connecting to nodes with Shield. (Currently the attached users are signed using the system key for safety, though this behaviour can be disabled in the settings.)
- The system realm is now removed (it's no longer needed, as the system user itself is serialized and attached to the requests)
- Fixed some bugs in the tests
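A rough sketch of the signed user header idea, assuming an HMAC over the serialized user with the shared `system_key` (header name, class, and serialization format here are hypothetical, not the actual Shield implementation):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;

// Hypothetical sketch: the sending node serializes the authenticated user,
// signs it with the shared system key, and attaches it as a request header;
// receiving nodes verify the signature instead of re-authenticating.
class UserHeaderSignerSketch {
    private final SecretKeySpec systemKey;

    UserHeaderSignerSketch(byte[] systemKeyBytes) {
        this.systemKey = new SecretKeySpec(systemKeyBytes, "HmacSHA256");
    }

    // returns "<base64 user>|<base64 hmac>", suitable as a header value
    String sign(String serializedUser) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(systemKey);
        byte[] userBytes = serializedUser.getBytes(StandardCharsets.UTF_8);
        byte[] signature = mac.doFinal(userBytes);
        return Base64.getEncoder().encodeToString(userBytes)
                + "|" + Base64.getEncoder().encodeToString(signature);
    }

    // hypothetical header name, not the one Shield actually uses
    void attach(Map<String, String> requestHeaders, String serializedUser) throws Exception {
        requestHeaders.put("_signed_user", sign(serializedUser));
    }
}
```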
Closes elastic/elasticsearch#215
Original commit: elastic/x-pack-elasticsearch@3172f5d126
Incorporate Feedback:
- verify the signature of signed licenses whenever they are read from cluster state (see the verification sketch below)
- encrypt trial licenses with a default pass phrase when storing them
- moved toSignature & fromSignature to License
Make LicenseManager a utility class
Refactor:
- renamed LicenseManager to LicenseVerifier
- LicensesMetaData now holds a list of license objects (for signed licenses) and a set of encoded strings (trial licenses)
- minor test cleanup
Incorporate feedback
Incorporated feedback
Switch to a stronger secret key generation algorithm; clean up build files & LicensesMetaData
Cosmetic changes to LicenseSigner
Incorporate LicensesMetaData feedback
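As a sketch of the verification step referenced above, assuming an RSA-signed license blob (key handling and the license wire format of the real LicenseVerifier differ):

```java
import java.security.PublicKey;
import java.security.Signature;

// Hypothetical sketch: verify a license's detached signature against the
// publisher's public key every time the license is read from cluster state.
final class LicenseSignatureCheckSketch {

    static boolean verify(byte[] licenseBytes, byte[] signatureBytes, PublicKey publicKey) {
        try {
            Signature signature = Signature.getInstance("SHA512withRSA");
            signature.initVerify(publicKey);
            signature.update(licenseBytes);
            return signature.verify(signatureBytes);
        } catch (Exception e) {
            return false; // any failure is treated as an invalid license
        }
    }
}
```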
Original commit: elastic/x-pack-elasticsearch@0510091d2d
In order to support different certificates per port, we needed to adapt
the startup logic.
Different profiles can also now be applied to the N2NAuthenticator, so that
each profile can allow/deny different hosts.
In addition, some minor refactorings have been done:
* Group keystore/truststore settings instead of using underscores
* Change to transport profile settings instead of using specific shield ones
Documentation has been updated as well
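A minimal sketch of the per-profile idea: each transport profile resolves to its own SSLContext so different ports can present different certificates (class and method names are illustrative, not the actual Shield code):

```java
import javax.net.ssl.SSLContext;
import java.util.Map;

// Hypothetical sketch: each transport profile (and therefore each port)
// resolves to its own SSLContext, built from that profile's keystore settings.
class ProfileSslContextsSketch {
    private final Map<String, SSLContext> contextsByProfile;
    private final SSLContext defaultContext;

    ProfileSslContextsSketch(Map<String, SSLContext> contextsByProfile, SSLContext defaultContext) {
        this.contextsByProfile = contextsByProfile;
        this.defaultContext = defaultContext;
    }

    SSLContext forProfile(String profileName) {
        // fall back to the default transport profile's context when the
        // profile does not override keystore/truststore settings
        return contextsByProfile.getOrDefault(profileName, defaultContext);
    }
}
```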
Closes elastic/elasticsearch#290
Original commit: elastic/x-pack-elasticsearch@ad1ab974ea
There is a circular dependency in core 1.4.0 that causes plugins to fail depending on their constructor injection. Injecting ClusterService into InternalAuthorizationService triggers this problem; it is solved for now by replacing the dependency with a Provider. The original bug is already fixed in core: https://github.com/elasticsearch/elasticsearch/pull/8415 .
The problem manifested when enabling a tribe node that also had Shield installed.
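A minimal sketch of the workaround, assuming Guice-style constructor injection (class names abbreviated; the stub interface stands in for the real ClusterService):

```java
import com.google.inject.Inject;
import com.google.inject.Provider;

// Illustrative only: injecting a Provider instead of the service itself
// defers resolution to first use and breaks the constructor-time cycle.
public class AuthorizationServiceSketch {

    // stand-in for org.elasticsearch.cluster.ClusterService
    public interface ClusterService {}

    private final Provider<ClusterService> clusterServiceProvider;

    @Inject
    public AuthorizationServiceSketch(Provider<ClusterService> clusterServiceProvider) {
        this.clusterServiceProvider = clusterServiceProvider;
    }

    public void authorize() {
        // the real ClusterService is only resolved here, at call time
        ClusterService clusterService = clusterServiceProvider.get();
        // ... authorization logic that consults the cluster state ...
    }
}
```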
Closes elastic/elasticsearch#363
Original commit: elastic/x-pack-elasticsearch@ac339ef247
This commit adds throttling support for alerts.
If an alert is added with the throttle_state NOT_TRIGGERED, the alert can be ACKed.
If an alert is ACKed, no further actions will be performed until the alert stops triggering.
If an alert is added with a throttle_period (a TimeValue), its actions will only be triggered at least that TimeValue apart in time.
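A rough sketch of the two throttling checks described above; class and field names are illustrative, not the actual alerting code:

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of the two throttle checks: an ACKed alert stays silent
// until it stops triggering, and a throttle_period enforces a minimum gap
// between two executions of the alert's actions.
class AlertThrottlerSketch {
    private final Duration throttlePeriod; // null when no period is configured

    AlertThrottlerSketch(Duration throttlePeriod) {
        this.throttlePeriod = throttlePeriod;
    }

    boolean shouldExecuteActions(boolean acked, Instant lastExecution, Instant now) {
        if (acked) {
            return false; // ACKed: suppress actions until the alert stops triggering
        }
        if (throttlePeriod != null && lastExecution != null
                && Duration.between(lastExecution, now).compareTo(throttlePeriod) < 0) {
            return false; // still inside the throttle period
        }
        return true;
    }
}
```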
Original commit: elastic/x-pack-elasticsearch@65dfda7d1a
This prevents the test framework from complaining about threads lingering around after the test cluster has been shut down.
Original commit: elastic/x-pack-elasticsearch@315be3f376
Our two transport impls depend on the SSLService at this point. Although we bind the SSLService only if ssl is enabled, it gets loaded anyway as it's a required dependency for the transports. We need to declare the dependency nullable and bind a null service manually when ssl is off.
Also resolved a couple of compiler warnings in SSLService and renamed some of its variables for better readability.
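A sketch of the Guice side of this, binding a null SSLService when ssl is disabled (module and class names are illustrative, not the actual Shield sources):

```java
import com.google.inject.AbstractModule;
import com.google.inject.util.Providers;

// Illustrative only: when ssl is disabled, bind a null SSLService so the
// transports (which must mark the dependency @Nullable) can still be built.
public class SslModuleSketch extends AbstractModule {

    // stand-in for the real SSLService
    public static class SSLService {}

    private final boolean sslEnabled;

    public SslModuleSketch(boolean sslEnabled) {
        this.sslEnabled = sslEnabled;
    }

    @Override
    protected void configure() {
        if (sslEnabled) {
            bind(SSLService.class).asEagerSingleton();
        } else {
            // binding an explicit null only works if the injection points are @Nullable
            bind(SSLService.class).toProvider(Providers.of((SSLService) null));
        }
    }
}
```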
Closes elastic/elasticsearch#359
Original commit: elastic/x-pack-elasticsearch@2c99b2052e
This helps us prevent endless re-loading when a node steps down as master while we are in the process of starting the alert store and action manager.
Original commit: elastic/x-pack-elasticsearch@e18c8215a9