Starting June 1st, 2024, community support for this series will only be provided to [regularly contributing users](https://github.com/rabbitmq/rabbitmq-server/blob/main/COMMUNITY_SUPPORT.md)
and those who hold a valid [commercial support license](https://tanzu.vmware.com/rabbitmq/oss).
* [Khepri](https://www.youtube.com/watch?v=whVqpgvep90), an [alternative schema data store](https://github.com/rabbitmq/rabbitmq-server/pull/7206) developed to replace Mnesia,
has matured and is now fully supported (previously it was an experimental feature)
* [AMQP 1.0 is now a core protocol](https://www.rabbitmq.com/blog/2024/08/05/native-amqp) that is always enabled. Its plugin is now a no-op that only exists to simplify upgrades.
* The AMQP 1.0 implementation is now significantly more efficient: its peak throughput is [more than double that of 3.13.x](https://www.rabbitmq.com/blog/2024/08/21/amqp-benchmarks)
* Efficient sub-linear [quorum queue recovery on node startup using checkpoints](https://www.rabbitmq.com/blog/2024/08/28/quorum-queues-in-4.0#faster-recovery-of-long-queues)
* Quorum queues now [support priorities](https://www.rabbitmq.com/blog/2024/08/28/quorum-queues-in-4.0#message-priorities) (but not exactly the same way as classic queues)
* The AMQP 1.0 convention (address format) used for interacting with AMQP 0-9-1 entities [is now easier to reason about](https://www.rabbitmq.com/docs/next/amqp#addresses)
* Mirroring (replication) of classic queues [was removed](https://github.com/rabbitmq/rabbitmq-server/pull/9815) after several years of deprecation. For replicated messaging data types,
use quorum queues and/or streams. Non-replicated classic queues remain and their development continues
* Classic queue [storage efficiency improvements](https://github.com/rabbitmq/rabbitmq-server/pull/11112), in particular recovery time and storage of multi-MiB messages
* Nodes with multiple enabled plugins and little on-disk data to recover now [start up to 20-30% faster](https://github.com/rabbitmq/rabbitmq-server/pull/10989)
RabbitMQ 4.x will no longer interpret the `x-death` header when clients (re-)publish a message.
Note that RabbitMQ 4.x will continue to set and update the `x-death` header every time a message is dead-lettered, including when a client **rejects** the message.
Applications that rely on RabbitMQ incrementing the `count` fields within the `x-death` header array elements for newly **(re-)published** messages
(as opposed to existing messages being rejected) should introduce and increment [a separate `x-` header](https://github.com/rabbitmq/rabbitmq-server/issues/10709#issuecomment-1997083246)
with a name that RabbitMQ itself will not update.
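As an illustration, here is a minimal sketch using the Python `pika` client; the `x-republish-count` header name is a hypothetical application-level choice, not something RabbitMQ defines or updates:

```python
import pika

def republish_with_counter(channel, method, properties, body):
    # RabbitMQ 4.x will not increment the x-death "count" fields when a
    # message is (re-)published, so the application tracks its own counter
    # in a separate header that the broker never touches.
    headers = dict(properties.headers or {})
    headers["x-republish-count"] = headers.get("x-republish-count", 0) + 1
    channel.basic_publish(
        exchange="",
        routing_key=method.routing_key,
        body=body,
        properties=pika.BasicProperties(headers=headers),
    )
    channel.basic_ack(delivery_tag=method.delivery_tag)
```

Such a function would typically be called from a consumer callback registered with `basic_consume`.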
Several I/O-related metrics have been dropped; they should be [monitored at the infrastructure and kernel layers](https://www.rabbitmq.com/docs/monitoring#system-metrics) instead.
### Default Maximum Message Size Reduced to 16 MiB
The default maximum message size has been reduced to 16 MiB (from 128 MiB).
The limit can be increased via a `rabbitmq.conf` setting:
```ini
# 32 MiB
max_message_size = 33554432
```
However, it is recommended that such large multi-MiB messages be stored in a blob store, with only their
IDs passed around in messages instead of the entire payload.
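A hedged sketch of this pattern in Python, assuming an S3-compatible blob store accessed via `boto3` (the bucket name, key scheme, and routing key below are hypothetical):

```python
import uuid

import boto3
import pika

s3 = boto3.client("s3")

def publish_large_payload(channel, payload):
    # Store the multi-MiB payload in the blob store under a unique key...
    key = f"payloads/{uuid.uuid4()}"
    s3.put_object(Bucket="my-blob-store", Key=key, Body=payload)
    # ...and publish only the key, keeping the message well under the limit.
    channel.basic_publish(
        exchange="",
        routing_key="large-payloads",
        body=key.encode("utf-8"),
    )
```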
### AMQP 1.0
The RabbitMQ 3.13 `rabbitmq.conf` setting `rabbitmq_amqp1_0.default_vhost` is not supported in RabbitMQ 4.0.
Starting with Erlang 26, client-side [TLS peer certificate chain verification](https://www.rabbitmq.com/docs/ssl#peer-verification) settings are enabled by default in most contexts:
from federation links to shovels to TLS-enabled LDAP client connections.
If TLS peer certificate chain verification is not practical or not necessary, it can be disabled.
Please refer to the documentation of the feature in question, for example,
the sections on [TLS-enabled LDAP client](http://rabbitmq.com/docs/ldap/#tls) connections,
[TLS-enabled dynamic shovels](https://www.rabbitmq.com/docs/shovel#tls), and [dynamic shovel URI query parameters](https://www.rabbitmq.com/docs/uri-query-parameters).
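For instance, a hedged `rabbitmq.conf` sketch that disables peer verification for TLS-enabled LDAP client connections (verify the exact key names against the LDAP plugin documentation linked above):

```ini
# Disable TLS peer certificate chain verification for LDAP client
# connections; use only when verification is impractical
auth_ldap.ssl_options.verify = verify_none
```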
[Community Docker image](https://hub.docker.com/_/rabbitmq/), [Chocolatey package](https://community.chocolatey.org/packages/rabbitmq), and the [Homebrew formula](https://www.rabbitmq.com/docs/install-homebrew)
are other installation options. They are updated with a delay.
Generic binary builds of `4.0.1` incorrectly report their version as `4.0.0+2`. This also applies to plugin versions. This was [addressed in `4.0.2`](https://github.com/rabbitmq/rabbitmq-server/releases/tag/v4.0.2).
See the [Upgrading guide](https://www.rabbitmq.com/docs/upgrade) for documentation on upgrades and [GitHub releases](https://github.com/rabbitmq/rabbitmq-server/releases)
for release notes of individual releases.
This release series only supports upgrades from `3.13.x`.
This release requires **all feature flags** in the 3.x series (specifically `3.13.x`) to be enabled before upgrading.
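For example, all feature flags can be enabled on the `3.13.x` cluster with `rabbitmqctl` before the upgrade:

```bash
# Enable all stable feature flags before upgrading to 4.0
rabbitmqctl enable_feature_flag all

# Inspect the result: no flag should remain in the "disabled" state
rabbitmqctl list_feature_flags
```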
In environments where messages can experience 20 redeliveries, the affected queues should have [dead lettering](https://www.rabbitmq.com/docs/dlx)
configured (usually via a [policy](https://www.rabbitmq.com/docs/parameters#policies)) to make sure
that messages that are redelivered 20 times are moved to a separate queue (or stream) instead of
being dropped (removed) by the [crash-requeue-redelivery loop protection mechanism](https://www.rabbitmq.com/docs/next/quorum-queues#poison-message-handling).
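A hedged example of such a policy declared with `rabbitmqctl` (the policy name, queue name pattern, and dead letter exchange name are placeholders):

```bash
# Dead-letter messages that reach the delivery limit instead of dropping
# them; "dlx.orders" must exist and have a queue (or stream) bound to it
rabbitmqctl set_policy dlx-on-redelivery "^orders\." \
  '{"dead-letter-exchange": "dlx.orders", "delivery-limit": 20}' \
  --apply-to quorum_queues
```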
* A whole category of binding inconsistency issues is addressed with the stabilization
of [Khepri](https://github.com/rabbitmq/khepri), a new [metadata store](https://www.rabbitmq.com/docs/metadata-store) that uses a tree of nested objects instead of multiple tables.
With Mnesia, the original metadata store, bindings are stored in two tables, one for durable
bindings (between durable exchanges and durable queues or streams) and another for semi-durable
and transient ones (where either the queue is transient or both the queue and the exchange are).
When a node was stopped or failed, all non-replicated transient queues on that node were deleted
by the remaining cluster peers. Due to high lock contention around these tables with Mnesia, this
could take a while. If the restarted (or failed) node came back online before all bindings
had been removed, or clients began to create new bindings concurrently, the bindings table
rows could end up inconsistent, resulting in obscure "binding not found" errors.
Khepri avoids this problem entirely by only supporting durable entities and by using a very different
[tree-based data model](https://github.com/rabbitmq/rabbitmq-server/pull/11225) that makes binding removal much more efficient and free of lock contention.
Mnesia users can work around this problem by using [quorum queues](https://www.rabbitmq.com/docs/quorum-queues) or durable classic queues
and durable exchanges. Their durable bindings will not be removed when a node stops.
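On 4.0, Khepri can be opted into by enabling its feature flag; a sketch (see the Khepri links above for specifics):

```bash
# Switch the cluster's metadata store from Mnesia to Khepri
rabbitmqctl enable_feature_flag khepri_db
```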
* Single Active Consumer (SAC) implementation of quorum queues now [respects](https://www.rabbitmq.com/blog/2024/08/28/quorum-queues-in-4.0#consumer-priorities-combined-with-single-active-consumer) consumer priorities.