Prior to this commit, the test
```
make -C deps/rabbitmq_web_mqtt ct-proxy_protocol t=http_tests:proxy_protocol
```
was failing with reason
```
exception error: no function clause matching
rabbit_net:sockname({rabbit_proxy_socket,#Port<0.96>,
```
In case of a resource alarm, the server accepts incoming TCP
connections, but does not read from the socket.
When a client connects during a resource alarm, the MQTT CONNECT frame
is therefore not processed.
While the resource alarm is ongoing, the client might time out waiting
for an MQTT CONNACK packet.
When the resource alarm clears on the server, the MQTT CONNECT frame
gets processed.
Prior to this commit, this results in the following crash on the server:
```
** Reason for termination ==
** {{badmatch,{error,einval}},
[{rabbit_mqtt_processor,process_login,4,
[{file,"rabbit_mqtt_processor.erl"},{line,585}]},
{rabbit_mqtt_processor,process_request,3,
[{file,"rabbit_mqtt_processor.erl"},{line,143}]},
{rabbit_mqtt_processor,process_frame,2,
[{file,"rabbit_mqtt_processor.erl"},{line,69}]},
{rabbit_mqtt_reader,process_received_bytes,2,
[{file,"src/rabbit_mqtt_reader.erl"},{line,307}]},
```
After this commit, the server just logs:
```
[error] <0.887.0> MQTT protocol error on connection 127.0.0.1:55725 -> 127.0.0.1:1883: peername_not_known
```
In case the client has already disconnected, we want the server to bail out
early, i.e. not authenticate and register the client at all,
since that can be expensive when many clients connected while the
resource alarm was ongoing.
To detect whether the client disconnected, we rely on inet:peername/1
which will return an error when the peer is not connected anymore.
Ideally we could use some better mechanism for detecting whether the
client disconnected.
The MQTT reader does receive a {tcp_closed, Socket} message once the
socket becomes active. However, we don't really want to read frames
ahead (i.e. ahead of the received CONNECT frame), one reason being that
the MQTT specification states:
"Clients are allowed to send further Control Packets immediately
after sending a CONNECT Packet; Clients need not wait for a CONNACK Packet
to arrive from the Server."
Setting socket option `show_econnreset` does not help either because the client
closes the connection normally.
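A minimal sketch of the early-exit check, assuming `Sock` is the accepted
TCP socket; `handle_connect/2` and `process_connect/1` are illustrative
names, not the actual reader code:
```
%% Sketch only: skip authentication and registration entirely if the
%% peer is already gone; inet:peername/1 returns an error for a
%% disconnected socket.
handle_connect(Sock, Frame) ->
    case inet:peername(Sock) of
        {ok, {_PeerAddr, _PeerPort}} ->
            process_connect(Frame);       %% hypothetical login path
        {error, _Posix} ->
            {error, peername_not_known}   %% matches the logged error
    end.
```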
Co-authored-by: Péter Gömöri @gomoripeti
Enable quorum queues for MQTT only if CleanSession is false.
Quorum queues don't support the auto-delete flag, so in case CleanSession
is true the queue will be a classic queue.
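A sketch of the queue-type decision (the `x-queue-type` declare argument
is the standard RabbitMQ mechanism; the function name is illustrative):
```
%% Quorum queues cannot be auto-delete, so only a non-clean session may
%% declare x-queue-type=quorum; a clean session keeps a classic queue.
queue_type_args(false = _CleanSession, true = _QuorumEnabled) ->
    [{<<"x-queue-type">>, longstr, <<"quorum">>}];
queue_type_args(_CleanSession, _QuorumEnabled) ->
    [].  %% classic queue by default
```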
Add another test group, non_parallel_tests_quorum.
For mixed-version tests, the quorum_queue feature flag must be enabled.
Add a log message.
Fixes #2941
This adds proper exception handlers in the right places, and tests
ensure that it indeed produces neat logs without large
stack traces for every AMQP operation.
Unnecessary checking for subscribe permissions on topic was dropped,
as `queue.bind` does exactly the same check. Topic permissions tests
were also added, and they indeed confirm that there was no change in
behaviour.
Ideally the same explicit topic permission check should be dropped for
publishing, but that is more complicated - so for now there is only a
detailed comment in the source code explaining it.
A few other things were also optimized away:
- Using the AMQP client to test for queue existence
- Creating queues/starting consumption too eagerly, even if not yet
requested by the client
This line is printed on every new MQTT connection, which leads to very chatty logs when there are many connections. Given that the way MQTT uses vhosts is generally static (once set up, always the same for all connections), I think this can be a debug message instead.
Lager strips trailing newline characters, but the OTP logger with the default
formatter adds a newline at the end. To avoid unintentional multi-line log
messages, we have to revisit most logged messages.
Some log entries are intentionally multiline, others
are printed to stdout directly: newlines are required there
for sensible formatting.
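For illustration, a sketch using the standard OTP `logger` module
(message text is made up):
```
-module(newline_example).  %% illustrative module
-export([demo/1]).

demo(ConnName) ->
    %% Under Lager the trailing "~n" was stripped; the OTP logger default
    %% formatter appends its own newline, so this now emits a blank line:
    logger:info("accepting MQTT connection ~p~n", [ConnName]),
    %% Revised message: no trailing newline in the format string.
    logger:info("accepting MQTT connection ~p", [ConnName]).
```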
We have considered multiple options for preventing a split-cluster
scenario when N nodes are started in parallel and are initially unaware of
each other. They are all fairly involved and run various risks, e.g.
of losing consistency for cluster members that need to rejoin a newly
discovered set of members.
A simple delay to see if there may be any peers seems to be a straightforward
solution that would make a practical difference.
In the future, consistent client ID tracking should be a feature the user
can opt out of, because it potentially tilts the MQTT plugin too far towards
C on the consistency/availability spectrum.
Pair: @kjnilsson
This avoids blocking when registering or unregistering a client id, which
is OK as informing the current connection holder of the client id is
already async. This should be more scalable and provide much better MQTT
connection setup latency.
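A generic sketch of the pattern (the collector name and message shape are
assumptions here, not the actual implementation):
```
%% Fire-and-forget: gen_server:cast/2 returns immediately, so MQTT
%% connection setup no longer blocks on client id registration.
register_client_id(ClientId, ConnPid) ->
    gen_server:cast(rabbit_mqtt_collector, {register, ClientId, ConnPid}).
```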
Changes initial_state/4 to initial_state/5 to add the peer
address that needs to be provided by Web MQTT. This function
was only used locally and by Web MQTT.
This gives the Last Will message a greater chance of being delivered,
because it will implicitly block `rabbit_mqtt_reader` while the
server-side AMQP channel processes the message.
Without this, `rabbit_mqtt_reader` asks for the Last Will message to be
sent asynchronously and immediately closes the AMQP connection. The
server-side AMQP channel might thus try to query an already closed
connection.
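A sketch of the idea with the Erlang AMQP client (variable and function
names are illustrative):
```
%% Requires the amqp_client records:
%% -include_lib("amqp_client/include/amqp_client.hrl").
%%
%% amqp_channel:call/3 blocks until the channel has processed the
%% publish; amqp_channel:cast/3 would return immediately, letting the
%% connection close before the Last Will is handled.
publish_will(Ch, Exchange, RoutingKey, Payload) ->
    amqp_channel:call(Ch,
                      #'basic.publish'{exchange    = Exchange,
                                       routing_key = RoutingKey},
                      #amqp_msg{payload = Payload}).
```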
Fixes #146.
[#150162950]
(cherry picked from commit a2eb8c1be8c52d476ee9ea0ff08ef381a2ca326c)
The Lager sink parse transform allows us to use a fake module name
to select sinks.
It's more convenient than using a helper function in every module.
[#149634975]
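For example (a sketch; the sink name `my_sink` is an assumption):
```
-module(sink_example).
%% Lager's parse transform rewrites calls to the fake module my_sink
%% into logging against the my_sink_lager_event sink.
-compile([{parse_transform, lager_transform},
          {lager_extra_sinks, [my_sink]}]).
-export([log_peer/1]).

log_peer(Peer) ->
    my_sink:info("accepted connection from ~s", [Peer]).
```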