If a delete happens shortly after a declare or another stream change,
there is a chance that the spawned mnesia update process will crash
because the amqqueue record can no longer be recovered from durable
storage. This isn't harmful, but it does pollute the logs.
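A minimal sketch of how such a race can be tolerated; the function name
is illustrative and not the actual RabbitMQ code:

    %% Sketch only: if the durable amqqueue record was already removed by a
    %% concurrent delete, skip the update instead of letting the spawned
    %% process crash and write an error to the log.
    update_durable(QName, UpdateFun) ->
        rabbit_misc:execute_mnesia_transaction(
          fun () ->
                  case mnesia:read({rabbit_durable_queue, QName}) of
                      []  -> ok;  %% queue deleted concurrently, nothing to do
                      [Q] -> mnesia:write(rabbit_durable_queue, UpdateFun(Q), write)
                  end
          end).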
For booleans, we can prefer the operator policy value
unconditionally, without any safety implications.
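As an illustration only (not the actual rabbit_policy merging code), the
rule could look like this:

    %% Sketch: when both a user policy and an operator policy set the same
    %% key, booleans simply take the operator value; numeric keys are assumed
    %% to keep the usual "take the safer value" treatment.
    merge_value(_Key, OperatorVal, _UserVal) when is_boolean(OperatorVal) ->
        OperatorVal;
    merge_value(_Key, OperatorVal, UserVal) when is_number(OperatorVal),
                                                 is_number(UserVal) ->
        erlang:min(OperatorVal, UserVal);
    merge_value(_Key, OperatorVal, _UserVal) ->
        OperatorVal.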
Per discussion with @binarin @pjk25
(cherry picked from commit 6edb7396fd)
A channel that sends a mandatory publish before enabling
confirms mode may not receive confirms for messages published
after that. This is because publish_seqno was also increased
for mandatory publishes even when confirms were disabled, yet
the mandatory feature has nothing to do with publish_seqno.
The issue has existed since at least
38e5b687de
The test case introduced focuses on multiple=false. The issue
also exists for multiple=true, but with a different impact:
sending multiple=true,delivery_tag=2 results in both messages
1 and 2 being acked, even though message 2 doesn't exist as far
as the client is concerned. If the message does exist, it may
be confirmed earlier than it should have been. The problem
grows with the number of mandatory messages sent before
confirms mode was enabled.
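A minimal sketch of the corrected bookkeeping; the record and field
names are illustrative, not the exact rabbit_channel state:

    -record(state, {confirm_enabled = false, publish_seqno = 1}).

    %% Sketch: the sequence number used for publisher confirms is only
    %% incremented when confirm mode is enabled; the mandatory flag no
    %% longer has any effect on it.
    maybe_incr_publish_seqno(State = #state{confirm_enabled = true,
                                            publish_seqno   = SeqNo}) ->
        State#state{publish_seqno = SeqNo + 1};
    maybe_incr_publish_seqno(State) ->
        State.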
This is copied from https://github.com/rabbitmq/rabbitmq-common/pull/349
If a message is routed to only one queue (the most common case in many application scenarios), passing it through the 'delegate' is pointless: it only increases the latency of the message and the risk of 'delegate' congestion.
Here are some test data:
node1: Pentium(R) Dual-Core CPU E5300 @ 2.60GHz
node2: Pentium(R) Dual-Core CPU E5300 @ 2.60GHz
Join node1 and node2 into a cluster. Create 100 queues on node2 and start 100 consumers to receive messages from these queues.
Start 100 publishers on node1 to send messages to the queues on node2. Each publisher sends 10k messages at a rate of 100 msg/s (10k msg/s in total, theoretically), for 1 million messages overall across all publishers.
Before optimisation:
{1,[{msg_time,812312(=<1ms),177922(=<5ms),9507(=<50ms),221(=<500ms),38(=<1000ms),0,0,0,0,1061,1069,0,0}]}
After optimisation:
{1,[{msg_time,902854(=< 1ms),93993(=<5ms),3038(=<50ms),96(=<500ms),19(=<1000ms),0,0,0,0,1049,1060,0,0}]}
Additional information:
The time measured here is how long a message stays in the cluster, i.e. the time it leaves node2 minus the time it reached node1.
"812312(=<1ms)" is the number of messages whose time in the cluster was less than or equal to 1ms.
Overall, the optimisation is effective.
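A minimal sketch of the idea, assuming a delegate:invoke_no_result/2
style API and an illustrative deliver function:

    %% Sketch: with a single destination queue the message is cast directly
    %% to the queue process, skipping the delegate; with several destinations
    %% the delegate still batches the casts per node.
    deliver(QPids, Delivery) ->
        case QPids of
            [QPid] ->
                gen_server2:cast(QPid, {deliver, Delivery});
            _ ->
                delegate:invoke_no_result(
                  QPids,
                  fun (QPid) -> gen_server2:cast(QPid, {deliver, Delivery}) end)
        end.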
Only a couple of fields of the stream consumer record change
very frequently (credits and the Osiris log reference), so this
commit introduces a nested record inside the main consumer record that
contains the immutable fields. This potentially avoids producing
a lot of garbage, especially when the consumer state contains
several properties (consumer name, or single active consumer information
in the future).
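A sketch of the shape of the split; the field names are illustrative,
not the actual rabbit_stream_reader records:

    %% Sketch: immutable data lives in a nested configuration record, so
    %% updating the frequently-changing credit or Osiris log reference copies
    %% only the small outer record instead of every consumer property.
    -record(consumer_configuration,
            {stream :: binary(),
             subscription_id :: non_neg_integer(),
             member_pid :: pid(),
             properties = #{} :: map()}).  %% e.g. consumer name, SAC info later

    -record(consumer,
            {configuration :: #consumer_configuration{},
             credit :: non_neg_integer(),             %% changes on every credit command
             log :: osiris_log:state() | undefined}). %% changes on every chunk read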
Fixes #3841
Note that Lager itself doesn't handle certain combinations:
* $W0H45 is fine
* $W0D1H45 fails with an error
but what we have now should hopefully be enough for
a minimalistic built-in log rotation feature.
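For reference, a rotation spec of the supported form can be set through
Lager's file backend in advanced.config; the path and values below are
illustrative:

    %% Sketch: Lager accepts simple date specs such as "$D0" (rotate daily at
    %% midnight) or "$W0H45", while a combined spec like "$W0D1H45" is
    %% rejected with an error.
    [{lager,
      [{handlers,
        [{lager_file_backend,
          [{file,  "/var/log/rabbitmq/rabbit.log"},
           {level, info},
           {date,  "$W0H45"},
           {count, 5}]}]}]}].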