The issue is that depage should not put pages on the used pages, as they were not actually intended to be read.
Instead I should create a new Page object and not use the RefCounts caching.
I did not intend to have this difference in the semaphore.
The idea is that we never keep messages pending in this case; otherwise such a huge message would use too much memory.
I was debugging Compacting, looking for a possible issue under these conditions.
Even though I found nothing wrong with the code, I still want to keep the test, as there's no such thing as enough testing.
In commit a9a85f98db I removed the code
which modified existing matches. However, I forgot that the matches read
from LDAP are often duplicated, so instead of always adding a new match,
this commit ensures that the *right* match is modified rather than a
potentially more generic wildcard match (which was the original
problem).
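
A minimal sketch of the intended behavior, using made-up names (MatchTable,
addOrUpdate) rather than the plugin's real API: a duplicated entry read from
LDAP should update the entry keyed by exactly the same match, never a broader
wildcard entry that merely happens to cover it.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Set;

    // Illustration only; the real security-settings plugin classes differ.
    class MatchTable {
        private final Map<String, Set<String>> rolesByMatch = new LinkedHashMap<>();

        void addOrUpdate(String match, Set<String> roles) {
            Set<String> existing = rolesByMatch.get(match); // exact key lookup, no wildcard resolution
            if (existing != null) {
                existing.addAll(roles);         // duplicate read from LDAP: modify the *right* match in place
            } else {
                rolesByMatch.put(match, roles); // genuinely new match: add it
            }
        }
    }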
If an AMQP consumer tries to receive a message and the broker is unable
to convert the message from core to AMQP then the consumer is
disconnected and the offending message stays in the queue. When the
consumer reconnects the conversion error will happen again resulting in
a loop that can only be resolved through administrative action (e.g.
deleting the message manually or sending it to a dead letter address).
This commit fixes that by detecting the conversion failure and sending
the message to the queue's dead letter address instead of disconnecting
the consumer.
This commit also changes the log messages associated with sending a
message to the dead letter address, since this event can now occur
regardless of the number of delivery attempts.
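
Roughly, the new delivery path looks like the following minimal sketch; the
types and method names (CoreMessage, EncodedMessage, QueueBinding,
AmqpDelivery) are invented for illustration and are not the broker's actual
classes.

    interface CoreMessage { long getMessageID(); }
    interface EncodedMessage { }
    interface QueueBinding { void sendToDeadLetterAddress(CoreMessage message) throws Exception; }

    class ConversionException extends Exception {
        ConversionException(String reason, Throwable cause) { super(reason, cause); }
    }

    class AmqpDelivery {
        private final QueueBinding queue;

        AmqpDelivery(QueueBinding queue) { this.queue = queue; }

        // Placeholder for the real core-to-AMQP conversion step.
        EncodedMessage convert(CoreMessage message) throws ConversionException {
            throw new ConversionException("cannot encode message " + message.getMessageID(), null);
        }

        void deliver(CoreMessage message) throws Exception {
            try {
                EncodedMessage encoded = convert(message);
                // ... write the encoded message to the consumer's AMQP link ...
            } catch (ConversionException e) {
                // Dead-letter the unconvertible message so it cannot trap the consumer
                // in an endless reconnect/convert/fail loop; keep the connection open.
                queue.sendToDeadLetterAddress(message);
            }
        }
    }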
Before we moved to Java 11 we recommended using the -XX:+AggressiveOpts
JVM tuning option. In Java 8 this was essentially equivalent to setting
these two main parameters:
- -XX:BiasedLockingStartupDelay=500 (4000 by default)
- -XX:AutoBoxCacheMax=20000 (128 by default)
BiasedLockingStartupDelay defaults to 0 in Java 11, but AutoBoxCacheMax
still defaults to 128. Therefore, we should add
-XX:AutoBoxCacheMax=20000 to restore the optimization that has been lost
since -XX:+AggressiveOpts was removed.
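
For context, AutoBoxCacheMax only raises the upper bound of the Integer
autoboxing cache (default range -128..127), which cuts allocation of boxed
integers on hot paths. A stand-alone demo (not broker code) that shows the
cache taking effect:

    public class BoxCacheDemo {
        public static void main(String[] args) {
            Integer a = 1000;  // autoboxing goes through Integer.valueOf(int)
            Integer b = 1000;
            // With the default cache (-128..127) these are two distinct objects,
            // so this prints "false"; run with -XX:AutoBoxCacheMax=20000 and the
            // cached instance is reused, printing "true".
            System.out.println(a == b);
        }
    }

In a broker instance the flag would typically be appended to the JAVA_ARGS
line in etc/artemis.profile (or the platform-specific equivalent).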
Incorrect handling of unknown values in selectors.
There is a slight semantic change here due to an error in the way we
were handling null identifiers. This may require a change in selector
syntax to use "IS NULL" or "IS NOT NULL" when using identifiers which
may be null in the message being selected.
This was the case for an internal filter used by the cluster connection
bridge to select which cluster notification messages to consume.
See https://issues.apache.org/jira/browse/AMQ-5281 for more details.
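
To illustrate with an invented property name ("region"): under SQL-style
three-valued logic a comparison against a missing property now evaluates to
unknown and the message is not selected, so a selector that should also match
messages lacking the property needs an explicit null test.

    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;

    public class SelectorExample {
        static MessageConsumer createConsumer(Session session, Queue queue) throws JMSException {
            // "region <> 'EU'" alone no longer matches messages without a "region"
            // property; adding IS NULL keeps the old "match when absent" behavior.
            String selector = "region IS NULL OR region <> 'EU'";
            return session.createConsumer(queue, selector);
        }
    }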