The filter and the view use different conventions for field names, i.e. the
connection view uses the `sessionID` field name while the connection filter
uses the `SESSION_ID` field name. This commit replaces the field names used
by the filter with the field names used by the view while preserving backward
compatibility.
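For illustration (the filter JSON layout and the listConnections call shown
here are assumptions about the management API, not taken from the commit), a
filter now uses the view's camel-case field name:

```java
// Assumed filter layout (the console may add sorting options to this JSON);
// "sessionID" matches the view's field name, replacing "SESSION_ID".
String filter = "{\"field\":\"sessionID\",\"operation\":\"EQUALS\",\"value\":\"" + sessionId + "\"}";

// The old upper-case form (e.g. "SESSION_ID") is still accepted for
// backward compatibility.
String connectionsJson = serverControl.listConnections(filter, 1, 50);
```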
The provider of an SSL key/trust store is different from that store's
type. However, the broker currently doesn't differentiate between the two and
uses the provider for both. Changing this *may* break existing users who are
setting the provider, but I don't see any way to avoid
that. This is a bug that needs to be fixed in order to support use-cases
like PKCS#11.
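For context, here is a minimal JCA sketch of the distinction (class, variable,
and method names are illustrative, not the broker's actual code):

```java
import java.io.FileInputStream;
import java.security.KeyStore;

// The *type* names the store format (e.g. "JKS", "PKCS12", "PKCS11"), while
// the *provider* names the JCA provider that implements it (e.g. "SunJSSE",
// "SunPKCS11"). Treating the provider as the type breaks combinations where
// the two differ, such as PKCS#11 stores.
public final class KeyStoreLoader {
   public static KeyStore load(String path, String password,
                               String type, String provider) throws Exception {
      KeyStore store = (provider == null)
            ? KeyStore.getInstance(type)
            : KeyStore.getInstance(type, provider);
      try (FileInputStream in = new FileInputStream(path)) {
         store.load(in, password == null ? null : password.toCharArray());
      }
      return store;
   }
}
```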
Change summary:
- Added documentation.
- Consolidated several 2-way SSL test classes into a single
parameterized test class (see the sketch after this list). All these
classes were essentially the same except for a few key test parameters.
Consolidating them avoided having to update the same code in multiple places.
- Expanded tests to include different providers & types.
- Regenerated all SSL artifacts to allow tests to pass with new
constraints.
- Improved logging for when SSL handler initialization fails.
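Roughly, the consolidated test has this shape (class, parameter, and method
names below are illustrative, not the actual test code):

```java
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

// One class iterates over the store type/provider combinations that
// previously lived in separate, near-identical test classes.
@RunWith(Parameterized.class)
public class TwoWaySSLTestSketch {

   @Parameterized.Parameters(name = "storeType={0}, storeProvider={1}")
   public static Collection<Object[]> parameters() {
      return Arrays.asList(new Object[][] {
         {"JKS", null},
         {"PKCS12", null},
         {"PKCS11", "SunPKCS11"}
      });
   }

   private final String storeType;
   private final String storeProvider;

   public TwoWaySSLTestSketch(String storeType, String storeProvider) {
      this.storeType = storeType;
      this.storeProvider = storeProvider;
   }

   @Test
   public void testTwoWaySSL() {
      // configure acceptor/connector with storeType & storeProvider, then
      // verify a mutually-authenticated connection can be established
   }
}
```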
When performing concurrent user admin actions (e.g. resetUser, addUser,
removeUser on ActiveMQServerControl) while using the PropertiesLoginModule
with reload=true, the underlying user and role properties files can get
corrupted.
This commit fixes the issue via the following changes:
- Add synchronization to the management commands
- Add concurrency controls to underlying file access (a rough sketch
follows this list)
- Change CLI user commands to use remote methods instead of modifying
the files directly. This avoids potential concurrent changes. This
change forced me to modify the names of some of the commands'
parameters to disambiguate them from connection-related parameters.
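A minimal sketch of the kind of file-access guard described above (class and
method names are illustrative, not the actual Artemis code):

```java
import java.util.concurrent.locks.ReentrantLock;

// Every mutation of the user/role properties files goes through one lock so
// that concurrent management operations can't interleave partial writes.
public class PropertiesStore {
   private final ReentrantLock lock = new ReentrantLock();

   public void updateUser(String user, String password) {
      lock.lock();
      try {
         // read the properties file, apply the change, write it back
      } finally {
         lock.unlock();
      }
   }
}
```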
Both authentication and authorization will hit the underlying security
repository (e.g. files, LDAP, etc.). For example, creating a JMS
connection and a consumer will result in 2 hits with the *same*
authentication request. This can cause unwanted (and unnecessary)
resource utilization, especially in the case of a networked configuration
like LDAP.
There is already a rudimentary cache for authorization, but it is
cleared *totally* every 10 seconds by default (controlled via the
security-invalidation-interval setting), and it must be populated
initially, which still results in duplicate auth requests.
This commit optimizes authentication and authorization via the following
changes:
- Replace our home-grown cache with Google Guava's cache (see the sketch
after this list). This provides simple caching with both time-based and
size-based LRU eviction. See more at
https://github.com/google/guava/wiki/CachesExplained. I also thought
about using Caffeine, but we already have a dependency on Guava and the
cache implementations look to be negligibly different for this use-case.
- Add caching for authentication. Both successful and unsuccessful
authentication attempts will be cached to spare the underlying security
repository as much as possible. Authenticated Subjects will be cached
and re-used whenever possible.
- Authorization will use Subjects cached during authentication. If the
required Subject is not in the cache it will be fetched from the
underlying security repo.
- Caching can be disabled by setting the security-invalidation-interval
to 0.
- Cache sizes are configurable.
- Management operations exist to inspect cache sizes at runtime.
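A rough sketch of the Guava-based cache described above (class and member
names are illustrative, not the broker's actual security manager):

```java
import java.util.concurrent.TimeUnit;
import javax.security.auth.Subject;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Entries expire after the invalidation interval and the cache is bounded,
// giving size-based LRU eviction on top of the time-based one.
public class AuthenticationCacheSketch {

   private final Cache<String, Subject> authenticationCache;

   public AuthenticationCacheSketch(long invalidationIntervalMillis, long cacheSize) {
      this.authenticationCache = CacheBuilder.newBuilder()
            .maximumSize(cacheSize)
            .expireAfterWrite(invalidationIntervalMillis, TimeUnit.MILLISECONDS)
            .build();
   }

   public Subject getSubject(String user, String password) {
      Subject subject = authenticationCache.getIfPresent(user);
      if (subject == null) {
         subject = authenticate(user, password); // hits files/LDAP/etc.
         authenticationCache.put(user, subject);
      }
      return subject;
   }

   private Subject authenticate(String user, String password) {
      // placeholder for the underlying security repository lookup
      return new Subject();
   }
}
```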
In certain cases with shared-store HA a broker's activation can fail but
the broker will still be holding the journal lock. This results in a
"zombie" broker which can't actually service clients and prevents the
backup from activating.
This commit adds an ActivationFailureListener to catch activation
failures and stop the broker completely.
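Conceptually the fix looks like the sketch below; note that the registration
method and callback signature shown here are assumptions, not verified against
the Artemis API:

```java
// Sketch only: ActivationFailureListener comes from the commit text, but the
// registration method and callback shape below are assumptions.
server.registerActivationFailureListener(exception -> {
   // Activation failed while the journal lock is still held; stop the broker
   // completely so the backup can acquire the lock and activate.
   try {
      server.stop();
   } catch (Exception e) {
      // log and give up; the broker is already unusable
   }
});
```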
The calculation used by
ActiveMQServerControlImpl.getDiskStoreUsagePercentage() is incorrect. It
uses disk space info with global-max-size, which is for address memory.
Also, the existing getDiskStoreUsage() method *already* returns a
percentage of total disk store usage, so this method seems redundant.
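For illustration (class, method, and path names are hypothetical), a disk-usage
percentage should be computed against the disk's total space rather than
against global-max-size:

```java
import java.io.File;

// A disk-usage percentage is used disk space over *total* disk space; the
// buggy calculation divided disk usage by global-max-size, which is an
// address-memory limit, producing a meaningless ratio.
public class DiskUsageSketch {
   public static double diskUsagePercentage(File storeDir) {
      long total = storeDir.getTotalSpace();
      long used = total - storeDir.getUsableSpace();
      return 100.0 * used / total;
   }
}
```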
Add the command `check` to the Command Line utility. This command exposes some
checks for nodes and queues, using the management API for most of them.
The checks are implemented to be modular, so each user can compose their own
health check, e.g. to produce to and consume from a queue the command is
`artemis check queue --name TEST --produce 1 --consume 1`.
It is intentional to compare brokerURL == DEFAULT_BROKER_URL here, so I added
a @SuppressWarnings annotation to clear the false positive. I also added a
comment explaining why this is intentional.
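A sketch of the pattern (the suppression key and surrounding declarations are
assumptions, not the actual code):

```java
// The suppression key below is a guess at the analyzer's check name; the
// identity comparison itself is deliberate.
@SuppressWarnings("StringEquality")
private boolean usingDefaultBrokerUrl() {
   // Intentional ==: we only want to know whether brokerURL still refers to
   // the shared DEFAULT_BROKER_URL constant, not whether the values match.
   return brokerURL == DEFAULT_BROKER_URL;
}
```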