In my experiments I encountered situations where rabbit would not
recover from a high memory alert even though all messages had been
drained from it. By inspecting the running processes I determined that
queue and channel processes sometimes held on to garbage. Erlang's GC
is per-process and triggered by a process's reduction count, which
means an idle process never performs a GC. This explains the
behaviour - the publisher channel goes idle when channel flow control
is activated, and the queue process goes idle once all messages have
been drained from it.
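A quick way to confirm this diagnosis is to force a collection on a
suspect process by hand; a minimal sketch (the function name is mine,
and any pid of a suspect channel or queue process will do):

    %% Force a full GC on an idle process and report its memory use
    %% before and after. A large drop confirms the process was sitting
    %% on garbage that reduction-triggered GC would never reclaim.
    check_idle_garbage(Pid) ->
        {memory, Before} = erlang:process_info(Pid, memory),
        true = erlang:garbage_collect(Pid),
        {memory, After} = erlang:process_info(Pid, memory),
        io:format("~p: ~p -> ~p bytes~n", [Pid, Before, After]).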
Hibernating idle processes forces a GC, as well as generally reducing
memory consumption. Currently only channel and queue processes are
hibernated, since these are the only two that seemed to cause
problems in my tests. We may want to extend hibernation to other
processes in the future.
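For gen_server-based processes the change amounts to returning
'hibernate' in place of a timeout; a minimal self-contained sketch,
not the actual rabbit code:

    -module(hibernating_server).
    -behaviour(gen_server).
    -export([start_link/0, init/1, handle_call/3, handle_cast/2,
             handle_info/2, terminate/2, code_change/3]).

    start_link() -> gen_server:start_link(?MODULE, [], []).

    init([]) -> {ok, no_state}.

    %% Returning 'hibernate' instead of a timeout makes the process
    %% hibernate once its mailbox is empty; hibernation discards the
    %% call stack and performs a full-sweep garbage collection.
    handle_call(Req, _From, State) -> {reply, Req, State, hibernate}.
    handle_cast(_Msg, State)       -> {noreply, State, hibernate}.
    handle_info(_Info, State)      -> {noreply, State, hibernate}.

    terminate(_Reason, _State)     -> ok.
    code_change(_OldVsn, State, _) -> {ok, State}.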
The default 80% threshold is just too low for many systems - memory
usage on tanto exceeds it most of the time.
It remains to be seen whether the new figure works well for most
users.
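Assuming the alert comes from OTP's os_mon application (whose memsup
default watermark is indeed 80%), the threshold can be adjusted at
runtime; a sketch - the fraction is a caller-supplied placeholder,
not the figure this change actually picks:

    %% memsup:set_sysmem_high_watermark/1 takes the new threshold as a
    %% fraction of total system memory; the alarm fires above it.
    raise_watermark(Fraction) when Fraction > 0, Fraction =< 1 ->
        ok = memsup:set_sysmem_high_watermark(Fraction).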
buffering_proxy:mainloop was unconditionally requesting new messages
from the proxy. It should only do so when it has just finished
handling the messages the proxy sent in response to a previous
request, not after handling a direct message.
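The corrected loop shape, as a sketch (the message formats and the
handler are illustrative, not the real buffering_proxy protocol):

    %% Only ask the proxy for more once the batch it sent us has been
    %% fully handled; a direct message must not trigger a new request.
    mainloop(ProxyPid, State) ->
        receive
            {proxied_messages, Msgs} ->
                NewState = lists:foldl(fun handle_message/2, State, Msgs),
                ProxyPid ! {request_messages, self()},
                mainloop(ProxyPid, NewState);
            DirectMsg ->
                mainloop(ProxyPid, handle_message(DirectMsg, State))
        end.

    %% Placeholder for the real per-message handler.
    handle_message(_Msg, State) -> State.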
This now supports the registration of alertee processes with callback
MFAs. We monitor each alertee process to keep the alertee list
current, and notify alertees of an initial high memory condition as
well as of any subsequent changes.
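A sketch of the registration and notification logic, in gen_event
callback style (the record and message shapes are illustrative):

    -record(state, {alarmed,    %% boolean: memory currently high?
                    alertees}). %% map of pid() => {M, F, A}

    %% Register an alertee: monitor it so its death removes it from
    %% the list, and tell it straight away if memory is already high.
    handle_call({register, Pid, {M, F, A} = MFA}, State =
                    #state{alarmed = Alarmed, alertees = Alertees}) ->
        _MRef = erlang:monitor(process, Pid),
        case Alarmed of
            true  -> apply(M, F, A ++ [Pid, true]);
            false -> ok
        end,
        {ok, ok, State#state{alertees = maps:put(Pid, MFA, Alertees)}}.

    %% Drop alertees that have died, keeping the list current.
    handle_info({'DOWN', _MRef, process, Pid, _Reason},
                State = #state{alertees = Alertees}) ->
        {ok, State#state{alertees = maps:remove(Pid, Alertees)}}.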
If the target queue died normally we don't care, and if it died
abnormally the reason is logged by the queue supervisor. In both cases
we treat the message as unrouted.
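A sketch of the resulting handling (names illustrative): deliver with
a call and map any exit from the queue process to unrouted, since the
interesting reasons are logged elsewhere:

    deliver_to_queue(QPid, Message) ->
        try gen_server:call(QPid, {deliver, Message}) of
            ok -> routed
        catch
            %% Queue already gone, died normally, or died abnormally
            %% (reason logged by its supervisor): treat as unrouted.
            exit:_Reason -> unrouted
        end.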
We don't really need two flavours of call_with_exchange.
In delete_bindings/1 we can assume the exchange exists - this code is
called as part of queue deletion and we don't care about exchanges
that have gone missing.
Also, an ordinary read is sufficient - we never update the exchange
row here, so wread's write lock is unnecessary.
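The distinction, as a sketch (table and key names illustrative):
mnesia:wread/1 acquires a write lock up front, while mnesia:read/1
takes the cheaper read lock, which is all a lookup needs:

    %% Look up an exchange row inside a transaction using a plain read
    %% lock; wread would lock the row for writing unnecessarily.
    lookup_exchange(ExchangeName) ->
        {atomic, Result} =
            mnesia:transaction(
              fun () -> mnesia:read({exchange, ExchangeName}) end),
        Result.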