Add krb5sslloginmodule that will populate a userPrincipal, which can then be mapped to roles independently.
Generalised the callback handlers to take a connection and pull certs or the peer principal depending on the callback. This bubbled up into an API change in the security store and security manager.
If replication blocked anything on the journal, the processing from clients would be blocked and nothing would work.
As part of this fix I am using an executor on ServerSessionPacketHandler, which will also scale better as the reader from Netty is freed immediately.
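The shape of that change, as a minimal sketch (only ServerSessionPacketHandler is a real name here; the rest is illustrative, not the actual Artemis code):

```java
import java.util.concurrent.Executor;

// Hand each packet off to a per-handler executor so the Netty reader thread
// returns immediately instead of blocking on journal or replication work.
public class ServerSessionPacketHandlerSketch {

   private final Executor packetExecutor;

   public ServerSessionPacketHandlerSketch(Executor packetExecutor) {
      this.packetExecutor = packetExecutor;
   }

   public void handlePacket(Object packet) {
      // ordering is preserved as long as the executor is serial per session
      packetExecutor.execute(() -> processPacket(packet));
   }

   private void processPacket(Object packet) {
      // the actual session logic (journal writes, replication waits) runs here,
      // off the Netty event-loop thread
   }
}
```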
Core client with the Netty connector and acceptor doing Kerberos: jaas.doAs around the SSLEngine init such that the SSL handshake can do Kerberos ticket generation and validation.
The Kerberos-authenticated user is then validated with the security manager before being populated into the message userId.
The feature is enabled with the kerb5Config property. When lowercase, the value is the principal; with a leading uppercase character it is the login.config entry to use.
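A rough sketch of the doAs wrapping on the client side; the login.config entry name, host and port are illustrative, not the actual connector code:

```java
import java.security.PrivilegedExceptionAction;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;

static SSLEngine kerberosSslEngine() throws Exception {
   LoginContext login = new LoginContext("core-client");   // illustrative login.config entry
   login.login();
   Subject subject = login.getSubject();

   SSLContext sslContext = SSLContext.getDefault();
   // the engine is created (and later handshakes) with the Kerberos credentials of
   // the JAAS subject, so the TLS handshake can obtain and validate tickets
   return Subject.doAs(subject, (PrivilegedExceptionAction<SSLEngine>) () -> {
      SSLEngine engine = sslContext.createSSLEngine("broker.example.org", 61616);
      engine.setUseClientMode(true);
      return engine;
   });
}
```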
The MAPPED journal refactoring includes:
- simplified lifecycle and logic (e.g. fixed file size with a single mmap memory region)
- support for the TimedBuffer to coalesce msyncs (via the Decorator pattern)
- TLAB pooling of direct ByteBuffers, like the NIO journal
- removal of old benchmarks and benchmark dependencies
When a large message is replicated to the backup, a pendingID is generated when the large message is finished. This pendingID is generated by a BatchingIDGenerator at the backup.
It is possible that a pendingID generated at the backup is a duplicate of an ID generated at the live server.
This can cause a problem when a large message with a messageID that is the same as another large message's pendingID is replicated and stored in the backup's journal, and then a deleteRecord for the pendingID is appended. If the backup becomes live and loads the journal, it will drop the large message add record because there is a deleteRecord with the same ID (even though it is the pendingID of another message).
As a result the client expecting this large message will never receive it.
So in summary, the root cause is that the pendingIDs for large messages are generated at the backup while the backup is not live.
The solution is that instead of the backup generating the pendingID, they are all generated in advance at the live server and replicated to the backup wherever needed.
The ID generator at the backup only works once the backup becomes live (when it is properly initialized from the journal).
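A purely illustrative sketch of that ordering, with hypothetical names throughout: because the live server owns the ID sequence, a pendingID can never collide with a messageID it has already handed out.

```java
import java.util.concurrent.atomic.AtomicLong;

public class PendingIdSketch {

   private final AtomicLong liveSequence = new AtomicLong();   // single source of IDs: the live server

   long nextId() {
      return liveSequence.incrementAndGet();
   }

   void largeMessageFinished(long largeMessageId, ReplicationChannel channel) {
      long pendingRecordId = nextId();          // allocated in advance on the live server
      channel.replicatePendingRecord(largeMessageId, pendingRecordId);
   }

   interface ReplicationChannel {               // stand-in for the real replication path
      void replicatePendingRecord(long largeMessageId, long pendingRecordId);
   }
}
```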
This method name would clash with ServiceComponent.
Since the real meaning of this method is just to fail over, I've renamed it to avoid the clash with my next commit.
(I've done this in a separate commit as you may need to redo this commit from scratch in other branches, instead of dealing with lots of conflicts on cherry-pick.)
When a large message is being diverted, a new copy of the original
message is created and replicated (if there is a backup) to the backup.
In LargeServerMessageImpl.copy(long) it reuses a byte array to copy the message body. It is possible that one block of data is read into the byte array before the previous block has been replicated, causing the replicated bytes to be corrupted.
If we make a copy of the byte array before replication, the corruption of data is avoided.
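A minimal sketch of the fix, with illustrative names rather than the actual LargeServerMessageImpl code:

```java
import java.util.Arrays;

// Replicate a copy of the chunk so the shared read buffer can be refilled
// without corrupting bytes that are still queued for replication.
static void copyChunk(byte[] readBuffer, int bytesRead, Replicator replicator) {
   byte[] chunk = Arrays.copyOf(readBuffer, bytesRead);   // snapshot before reuse
   replicator.replicate(chunk);
   // readBuffer can now be safely reused for the next read
}

interface Replicator {
   void replicate(byte[] chunk);
}
```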
Before the sending of messages to server 0 begins, the test should wait until the consumer is registered at RemoteQueueBindingImpl on server 0. Otherwise some messages may not be rebalanced to server 1.
The number of files doesn't really matter, as long as the data is valid.
This type of assertion limits the implementation; it is a mocking test with too much intrusion into the implementation. Hence I'm removing these clauses, which would eventually fail.
Add extra configuration to address-settings to be able to
control / enable address/queue deletion by pattern,
rather than a global toggle.
Add support in the reload logic to remove addresses and/or queues if the address matches an address setting where this is enabled.
When a broken packet arrives at the client side it causes a decoding error. Currently Artemis doesn't handle this properly. It should catch such errors and disconnect the underlying connection, logging a proper warning message.
Use ActiveMQDestination for subscription naming, fixing and aligning queue naming in the process.
The change is behind a configuration toggle to avoid causing breaking changes for users not expecting it.
In a cluster, if a node is shut down (or crashes) while a message is being routed to a remote binding, an internal property may be added to the message and persisted. The name of the property looks like _AMQ_ROUTE_TOsf.my-cluster*.
If the node starts back up, it will load and reroute this message, and if it goes to a local consumer, this property won't get removed and will reach the client.
The fix is to remove this internal property before it
is sent to any client.
Update ActiveMQConnection to change/align behaviour for additional methods:
- getMetaData
- stop
Adding tests to avoid regression.
This aligns with the qpid-jms and openwire clients.
Adds headers AMQ_SCHEDULED_DELAY and AMQ_SCHEDULED_TIME to the STOMP protocol handling to allow for delayed and scheduled delivery of a message. AMQ_SCHEDULED_DELAY brings forward the same option from the 5.x broker, and AMQ_SCHEDULED_TIME adds a fixed time-of-delivery alternative to match that of AMQP and others.
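For illustration, two SEND frames using the new headers; the destination and values are made up, and the assumption that the delay is in milliseconds and the time is an absolute epoch-millis timestamp should be checked against the docs:

```java
// Deliver roughly 60 seconds after the broker accepts the frame.
String delayed =
   "SEND\n" +
   "destination:/queue/jobs\n" +
   "AMQ_SCHEDULED_DELAY:60000\n" +
   "\n" +
   "run nightly report\u0000";

// Deliver at a fixed point in time (here: one hour from now).
String scheduled =
   "SEND\n" +
   "destination:/queue/jobs\n" +
   "AMQ_SCHEDULED_TIME:" + (System.currentTimeMillis() + 3_600_000) + "\n" +
   "\n" +
   "run nightly report\u0000";
```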
Add a test case to prove the issue, and then ensure it works post-fix.
Apply changes to the createQueueName logic to handle the global setting better and fix the behaviour.
Create queues so the names are the same as the behaviour with the core client.
Add topic and queue cache maps in Session.
Add configuration to enable or disable the cache, defaulting to false, which keeps the existing behaviour as the default.
Need to configure the WS Handshaker in the test client's netty transport
with the same value given to the proton connection via setMaxFrameSize
so that incoming frames larger than the default 65535 over WS don't
trigger netty to fail the connection.
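The relevant Netty call in the test client, sketched with an illustrative URI, subprotocol and frame size:

```java
import java.net.URI;
import io.netty.handler.codec.http.DefaultHttpHeaders;
import io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker;
import io.netty.handler.codec.http.websocketx.WebSocketClientHandshakerFactory;
import io.netty.handler.codec.http.websocketx.WebSocketVersion;

int maxFrameSize = 131072;   // same value passed to the proton connection via setMaxFrameSize
WebSocketClientHandshaker handshaker = WebSocketClientHandshakerFactory.newHandshaker(
   URI.create("ws://localhost:5672"),   // illustrative broker WS endpoint
   WebSocketVersion.V13,
   "amqp",                              // AMQP-over-WebSocket subprotocol
   false,                               // no extensions
   new DefaultHttpHeaders(),
   maxFrameSize);                       // replaces the 65535 default payload limit
```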
Building on the ARTEMIS-905 JCTools ConcurrentMap replacement first proposed (but currently parked) by @franz1981, replace the collections with primitive-key concurrent collections to avoid autoboxing.
The goal of this is to reduce/remove autoboxing on the hot path.
We are just adding JCTools to the broker (it should not be in client dependencies).
Likewise, this targets specific use cases with specific implementations rather than a blanket replace-all.
Using the collections from Bookkeeper reduces outside-TLAB allocation on resizing compared to JCTools, which occurred frequently during testing.
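A small illustration of the boxing this removes; the values are made up, and the Bookkeeper class name and API shown are assumptions to be checked against the actual dependency:

```java
import java.util.concurrent.ConcurrentHashMap;
import org.apache.bookkeeper.util.collections.ConcurrentLongHashMap;   // assumed package/name

Object consumer = new Object();

ConcurrentHashMap<Long, Object> boxed = new ConcurrentHashMap<>();
boxed.put(42L, consumer);               // boxes the long key on every hot-path call

ConcurrentLongHashMap<Object> primitive = new ConcurrentLongHashMap<>();
primitive.put(42L, consumer);           // primitive long key, no allocation for the key
```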
Port fixes to the AMQP test client recently made in the 5.x version.
Fixes some thread safety issues in the Transport. Ensures more
timely shutdown of the Connection executor. Uses a dynamic Proxy
to generate Read-Only Proton wrappers instead of the hand crafted
versions. Adds additional logging for test data.
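A rough sketch of the dynamic-Proxy idea using only standard java.lang.reflect (not the actual test-client code):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Wrap a live Proton object in a proxy that forwards getters and rejects mutators,
// replacing the hand-crafted read-only wrapper classes.
@SuppressWarnings("unchecked")
static <T> T readOnly(Class<T> type, T target) {
   InvocationHandler handler = (proxy, method, args) -> {
      if (method.getName().startsWith("set")) {
         throw new UnsupportedOperationException("read-only view: " + method.getName());
      }
      return method.invoke(target, args);
   };
   return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[]{type}, handler);
}
```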
- NIO/ASYNCIO new TimedBuffer with an adaptive batch-window heuristic
- NIO/ASYNCIO improved TimedBuffer write monitoring with lightweight concurrent performance counters
- NIO/ASYNCIO journal/paging operations benefit from fewer buffer copies
- NIO/ASYNCIO any buffer copy is always performed as a raw batch copy using SIMD intrinsics (System::arraycopy) or memcpy under the hood
- NIO improved buffer clearing using SIMD intrinsics (Arrays::fill) and/or memset
- NIO journal operations perform TLAB allocation pooling (off heap) by default, retaining only the last max-sized buffer
- NIO improved file copy operations using zero-copy FileChannel::transferTo (see the sketch after this list)
- NIO improved zeroing using a pooled single OS-page buffer to clean the file, plus pwrite (on Linux)
- NIO deterministic release of unpooled direct buffers to avoid OOM errors due to slow GC
- Exposed the OS page size value via the Env class
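As a reference for the zero-copy item above, a small sketch using the standard FileChannel::transferTo API (not the Artemis implementation):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

static void zeroCopy(Path from, Path to) throws IOException {
   try (FileChannel src = FileChannel.open(from, StandardOpenOption.READ);
        FileChannel dst = FileChannel.open(to, StandardOpenOption.WRITE, StandardOpenOption.CREATE)) {
      long position = 0;
      long size = src.size();
      while (position < size) {
         // transferTo may copy fewer bytes than requested, hence the loop
         position += src.transferTo(position, size - position, dst);
      }
   }
}
```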
Move classes and methods to their correct locations to avoid cyclic dependencies between packages and classes.
ARTEMIS-904 Remove cyclic dependencies from artemis-cli
Added a wait-for-activation option to shared-store master HA policies.
This option is enabled by default to ensure unchanged server startup behavior.
If this option is enabled, ActiveMQServer.start() with a shared-store master server will not return
before the server has been activated.
If this option is disabled, start() will return after a background activation thread has been started.
The caller can use waitForActivation() to wait until the server is activated, or just check the current activation status.
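A usage sketch with the option disabled; the timeout-based waitForActivation signature is assumed from the description above:

```java
import java.util.concurrent.TimeUnit;
import org.apache.activemq.artemis.core.server.ActiveMQServer;

static void startInBackground(ActiveMQServer server) throws Exception {
   server.start();   // returns once the background activation thread has been started
   if (!server.waitForActivation(30, TimeUnit.SECONDS)) {
      // not active yet: check the current activation status or keep waiting
   }
}
```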
Adds a test that validates that messages that are either lacking a
header or are set to be non-durable are not persisted and are not
recovered on broker restart.
Adding a new ActiveMQServerPlugin interface to support adding custom behavior to the broker at certain events such as connection or session creation.
https://issues.apache.org/jira/browse/ARTEMIS-898
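A sketch of what such a plugin could look like; the interface name comes from this change, while the callback signatures shown are assumptions:

```java
import org.apache.activemq.artemis.core.server.ServerSession;
import org.apache.activemq.artemis.core.server.plugin.ActiveMQServerPlugin;
import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection;

public class AuditPlugin implements ActiveMQServerPlugin {

   // assumed hook name: invoked after a remoting connection is created
   public void afterCreateConnection(RemotingConnection connection) {
      // e.g. count or log every new connection
   }

   // assumed hook name: invoked after a server session is created
   public void afterCreateSession(ServerSession session) {
      // e.g. tag sessions for auditing
   }
}
```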
When creating some AMQP resources (senders, receivers, etc.) the broker can return an error of 'failed' instead of the security error that is expected in these cases. In the case of a receiver being created and a security error happening, the broker fails to send back a response, causing the client to hang waiting for an attach response.
Refactor the AMQP test suite grouping tests into more logical unit
tests and adding additional coverage in many areas. Adds some negative
validation tests to cover features that were only partially tested.
Brings in tests from ActiveMQ 5.x that were not yet ported to Artemis to increase coverage and test scenarios previously seen to have issues in the 5.x broker.
Improve tests that were failing sporadically due to not waiting for
broker stats to be updated after async calls were made.
Instead of going directly into backup mode within the shared-store
live activation, we just change the HA-policy to slave and return
to the caller - ActiveMQServerImpl.internalStart().
The caller will then handle the backup activation as usual
in a separate thread, such that EmbeddedJMS.start() can return.
Also added a related integration test.
sendMessage() may throw an ActiveMQException, which causes a CNFE at the management client. Also, it should check whether the headers in the message are null (to prevent an NPE).
Broker should support fully qualified queue names (FQQN) as well as bare queue names. This means that when clients access a queue they have two equivalent ways to do so: one is by queue name and the other is by FQQN (i.e. address::qname). Currently only receiving is supported.
On link attach we currently default our SenderSettleMode to MIXED, which, while legal, doesn't truly reflect what the client asked for. We now instead update the link to reflect the mode requested by the client.
Also add some tests to ensure that we always return the ReceiverSettleMode as FIRST, since we don't support SECOND.
Adds some new AMQP protocol handling tests brought forward from ActiveMQ 5.x, as well as cleaning up some of the existing test code to make adding other tests easier.
This is fixing an issue introduced in 4b47461f03 (ARTEMIS-822).
The transactions were being looked up without the read lock, and some of the controls for the read and write locks were broken after this.
Removing JmsTopicWildcardSendReceiveTest::testReceiveWildcardTopicMatchDoubleWildcard.
According to bisect, this test was broken at 21b64b3e4f, and it contradicts the commit done at 21b64b3e4f.
So, this is being removed.
AIOFileLockManager doesn't work on NFS-mounted shared-store directories.
Since the GFS2 bug https://bugzilla.redhat.com/show_bug.cgi?id=678585 was fixed at the end of 2011, the class AIOFileLockManager is no longer needed and I have removed it.
Add tests for this management operation with both core and AMQP encoded
messages. Also fix a few problems with the implementation like not
checking the passed-in headers for null and not counting messages
properly.
Fix the getUserID and getTimestamp methods in AMQPMessage to read and
return the correct values. Adds some tests to cover these cases and
cleans up some others.
Ensure that the header value for priority is read and returned in a form
that is scaled such that it won't cause an IndexOutOfBoundsException
from the QueueImpl priority array. Adds some additional testing for
message priority support.
This is moving the smoke tests created as part of the replication tests.
They are also now based on JUnit tests.
And to support starting servers, I am exposing basedir to unit tests in general.
When populate-validated-user = true AMQP messages can cause exceptions.
This feature isn't particularly applicable to AMQP so this commit
eliminates the exception and leaves the AMQP messages untouched
even if populate-validated-user = true. In other words,
populate-validated-user + AMQP is not supported.
If the broker fails to decode any packets from the buffer, it should treat it as a critical bug and disconnect immediately.
Currently the broker only logs an error message.
When I added flow control, some tests that were using reflection started to fail.
Also, as a precaution, I'm using <= on the flow-control low-credit check.
Update the AMQP test client to allow for better inspection of the delivery updates that happen during normal use. Use those modifications to check that when the broker's sender accepts and settles a non-settled disposition, it adds a proper TransactionState disposition with the correct outcome and txn-id in that state.
When a message is sent to the broker with a TransactionState indicating that the message should be included in a transaction, the disposition from the broker indicating acceptance of the message should use a TransactionState value that contains the TX ID and the Accepted outcome.
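The shape of that disposition, sketched with Proton-J types (TransactionalState is Proton's name for it; this is not the broker code):

```java
import org.apache.qpid.proton.amqp.Binary;
import org.apache.qpid.proton.amqp.messaging.Accepted;
import org.apache.qpid.proton.amqp.transaction.TransactionalState;
import org.apache.qpid.proton.engine.Delivery;

// Echo the txn-id from the incoming transactional state and mark the outcome Accepted.
static void settleInTx(Delivery delivery, Binary txnId) {
   TransactionalState state = new TransactionalState();
   state.setTxnId(txnId);
   state.setOutcome(Accepted.getInstance());
   delivery.disposition(state);
   delivery.settle();
}
```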
The coordinator needs to refill credit on the receiver once it has been
exhausted, otherwise the remote cannot send additional declare or
discharge commands to the broker.
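A minimal illustration of the idea with Proton-J's Receiver API (not the actual coordinator code):

```java
import org.apache.qpid.proton.engine.Receiver;

// Top the coordinator receiver's credit back up once declare/discharge commands
// have consumed it, so the client can keep sending.
static void replenish(Receiver coordinatorReceiver, int window) {
   int outstanding = coordinatorReceiver.getCredit();
   if (outstanding < window) {
      coordinatorReceiver.flow(window - outstanding);
   }
}
```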
Adjust slow-consumer detection logic to use the number of messages in
the queue and not just the number of messages added since the last
check. This means the getRate() method now returns the rate of messages
which it *could* have dispatched since the last check rather than the
rate at which it received messages. This is a more reliable metric to
ensure the slow-consumer detection logic doesn't flag a consumer as
slow unfairly. Although the reliability will come at a performance cost
since getMessageCount() must lock the queue.
As part of my refactoring on AMQP, the broker shouldn't rely on Application properties
for any broker semantic changes on delivery.
I am removing any access to those now, so we can properly deal with this post 2.0.0.
Artemis exposes the createQueue() method to management consoles like JON.
If the queue to be created already exists, it throws an ActiveMQException back to the console, which gets a ClassNotFoundException when deserializing the exception.
The same issue also applies to the AddressControl.sendMessage() method.
To fix this, it should throw a common Java exception like IllegalStateException.
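The pattern, sketched with illustrative names (doCreateQueue is a stand-in, not the real control code):

```java
import org.apache.activemq.artemis.api.core.ActiveMQException;

// Rethrow as a common JDK exception so a remote management console without the
// Artemis client classes on its classpath can still deserialize it.
void createQueue(String address, String name) {
   try {
      doCreateQueue(address, name);
   } catch (ActiveMQException e) {
      throw new IllegalStateException(e.getMessage());
   }
}

void doCreateQueue(String address, String name) throws ActiveMQException {
   // broker-side creation logic
}
```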
With this we can send and receive messages in their raw format, without requiring conversions to Core.
- MessageImpl and ServerMessage are removed as part of this
- AMQPMessage and CoreMessage will hold the specialized message format for each protocol
- The protocol manager is now responsible for sending the message
- The message will provide an encoder for journal and paging
When openwire sends back an exception response, it doesn't set
the correct correlation id. This causes the client to miss the
response and the exception won't get caught.
To fix it we need to add the correlation id before sending.
When sending an empty ObjectMessage, the broker doesn't write a 'length' field to the message buffer. On delivery the broker tries to read the length from the buffer, which causes an IndexOutOfBoundsException.
To fix it, we need to check whether the buffer is empty, and only read the length if it is not.
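A sketch of the guard; the buffer type and body layout here are assumptions for illustration only:

```java
import io.netty.buffer.ByteBuf;

// Only read the serialized-object length when the body actually contains bytes.
static byte[] readObjectBody(ByteBuf buffer) {
   if (buffer.readableBytes() == 0) {
      return null;               // empty ObjectMessage: no length field was written
   }
   int length = buffer.readInt();
   byte[] data = new byte[length];
   buffer.readBytes(data);
   return data;
}
```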
When a producer sends a message to a temp destination created from another connection, it fails. The reason is that the producer's connection didn't receive the advisory message (notification) from the broker about this temp destination, and it will throw an exception if it doesn't know the temp destination.
The fix is to send the advisory to the client so that it knows about this destination.
When creating a 'no-local' OpenWire consumer, it doesn't work: it can still receive messages from the same connection.
The fix is similar to what the Artemis client does, which is adding a 'filter' to the consumer/subscription. The difference is that with OpenWire we have to do it on the broker side.
Adds tests for the handling of Rejected, Released and Modified outcomes for a delivery sent to a receiver. The tests show that for the Modified outcome the broker redelivers the message to the same receiver when the undeliverable-here value is set, which violates the AMQP 1.0 specified handling of that field. Also, for the Rejected outcome the broker should send the rejected message to the DLQ, as Rejected is supposed to be a terminal outcome.
Small fix included to not adjust the delivery count if the Modified
outcome does not indicate that the delivery failed.
If the SASL PLAIN mechanism arrives with the authzid value set, the mechanism needs to account for its presence and use the correct fields of the exchange to get the username and password values. Adds some tests to validate this fix.
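For reference, the RFC 4616 layout the fix has to account for, sketched outside the broker's actual code:

```java
import java.nio.charset.StandardCharsets;

// The initial response is "authzid NUL authcid NUL passwd"; authzid may be empty.
static String[] parsePlain(byte[] initialResponse) {
   String raw = new String(initialResponse, StandardCharsets.UTF_8);
   String[] parts = raw.split("\u0000", 3);   // [authzid, authcid, passwd]
   String authzid = parts[0];                 // may be "" when the client did not supply it
   String username = parts[1];
   String password = parts[2];
   return new String[]{authzid, username, password};
}
```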
Two issues were encountered here:
i - ClosingConnectionTest was intermittently breaking other tests; in particular it was breaking PagingLeakTest for no apparent reason. Apparently it was a deadlock from removeAddress on the ServerController.
ii - It was still showing issues after removing the unneeded synchronization. Since the test is not really needed, I am just removing the offending test.
Tests for the management of temporary destinations using the dynamic node feature. Failing case: the broker returns a source or target that indicates it will honor the lifetime policy of delete-on-close, but the temporary destination remains in existence after the link is closed.