* Added an option for how many subscriptions to send in a single call to the server; AWS IoT Core limits this to 8.
* Split subscription messages into batches of up to 8 when reconnecting.
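
A minimal sketch of the batching, with a generic `sendBatchAsync` delegate standing in for whatever actually builds and sends the SUBSCRIBE packet (the helper and its signature are illustrative, not the MQTTnet API):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

static class SubscriptionBatcher
{
    // Sends topic filters in batches of at most 'batchSize' (e.g. 8 for
    // AWS IoT Core) instead of one oversized SUBSCRIBE call.
    public static async Task SendInBatchesAsync<T>(
        IReadOnlyList<T> topicFilters,
        int batchSize,
        Func<IReadOnlyList<T>, Task> sendBatchAsync)
    {
        for (var i = 0; i < topicFilters.Count; i += batchSize)
        {
            var count = Math.Min(batchSize, topicFilters.Count - i);
            var batch = new List<T>(count);
            for (var j = 0; j < count; j++)
            {
                batch.Add(topicFilters[i + j]);
            }

            await sendBatchAsync(batch);
        }
    }
}
```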
- The Info level for publishing subscriptions is quite noisy.
- Previously this was not a problem because subscription processing was buffered; now the message is logged for nearly every single (un)subscription that is made.
- All other frequently recurring log events are also at the Verbose level.
This change kills several birds with one stone:
- By re-publishing all subscriptions only at reconnect, it fixes https://github.com/chkr1011/MQTTnet/issues/569 (retained messages are received again with every new subscription that is added).
- Not re-sending all subscriptions with every (un)subscription is also a performance improvement when subscriptions are modified regularly, because only the updated subscriptions are sent to the broker. Previously, if you had 100 subscriptions and added a new one, all 100 subscriptions were re-sent to the broker along with the new one, causing a significant delay.
- Until now, subscriptions were sent to the broker only once every ConnectionCheckInterval, which caused unnecessary delays and, for request-response patterns, could cause response messages to be missed due to the subscription delay. Now (un)subscriptions are published immediately, as sketched below.
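
A rough sketch of the delta-only bookkeeping behind this. Only _subscriptionsQueuedSignal is named in the actual change; the other fields here are invented for illustration:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

private readonly object _subscriptionsLock = new object();
private readonly HashSet<string> _allSubscriptions = new HashSet<string>();
private readonly HashSet<string> _pendingSubscriptions = new HashSet<string>();
private readonly SemaphoreSlim _subscriptionsQueuedSignal = new SemaphoreSlim(0);

// SubscribeAsync records only the delta and wakes the worker loop. The full
// set (_allSubscriptions) is kept solely for re-publishing at reconnect.
public Task SubscribeAsync(IEnumerable<string> topics)
{
    lock (_subscriptionsLock)
    {
        foreach (var topic in topics)
        {
            _allSubscriptions.Add(topic);     // full set, re-sent only on reconnect
            _pendingSubscriptions.Add(topic); // delta, sent immediately
        }
    }

    _subscriptionsQueuedSignal.Release(); // wake PublishSubscriptionsAsync
    return Task.CompletedTask;
}
```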
Subscriptions are now cleaned up at logout (after the connection stops being maintained). This is in line with the clearing of the _messageQueue in StopAsync.
Explanatory note: _subscriptionsQueuedSignal would ideally be a ManualResetEvent(Slim), but those do not offer an awaitable Wait method, so a SemaphoreSlim is used instead. This can cause a few empty loops in PublishSubscriptionsAsync, because the semaphore is incremented with every SubscribeAsync call but all subscriptions are processed in one sweep; those empty loops are harmless.
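
A sketch of the corresponding worker loop, showing why SemaphoreSlim fits (WaitAsync is awaitable) and where the occasional empty sweep comes from. SendSubscribeAsync is a hypothetical helper and the fields are the illustrative ones from the sketch above:

```csharp
private async Task PublishSubscriptionsAsync(CancellationToken cancellationToken)
{
    while (!cancellationToken.IsCancellationRequested)
    {
        // Awaitable wait: this is what ManualResetEvent(Slim) cannot offer.
        await _subscriptionsQueuedSignal.WaitAsync(cancellationToken);

        List<string> toSend;
        lock (_subscriptionsLock)
        {
            toSend = new List<string>(_pendingSubscriptions);
            _pendingSubscriptions.Clear();
        }

        // The semaphore is released once per SubscribeAsync call, but one
        // sweep may drain several calls' worth of work, so later iterations
        // can find nothing to do. That is the harmless empty loop.
        if (toSend.Count == 0)
        {
            continue;
        }

        await SendSubscribeAsync(toSend); // hypothetical helper
    }
}
```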
Sometimes, TryPublishQueuedMessageAsync would try to remove a message from the storage queue before PublishAsync had added it, leaving the message stuck in the storage queue forever. The message queue lock was switched to an async lock, and the storage queue updates are now synchronized with the message queue updates.
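
One way to implement such an async lock is a SemaphoreSlim(1, 1), since C#'s lock statement cannot span an await. A sketch under that assumption; the storage manager call and field names are illustrative:

```csharp
private readonly SemaphoreSlim _messageQueueLock = new SemaphoreSlim(1, 1);
private readonly Queue<ManagedMqttApplicationMessage> _messageQueue =
    new Queue<ManagedMqttApplicationMessage>();

public async Task PublishAsync(ManagedMqttApplicationMessage message)
{
    // Enqueue in memory and persist to storage under the same async lock,
    // so TryPublishQueuedMessageAsync can never observe one without the other.
    await _messageQueueLock.WaitAsync();
    try
    {
        _messageQueue.Enqueue(message);

        if (_storageManager != null)
        {
            await _storageManager.AddMessageAsync(message); // illustrative call
        }
    }
    finally
    {
        _messageQueueLock.Release();
    }
}
```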
We had been seeing an issue in which the queue could grow larger than the configured cap. Examining the code showed this can happen when _mqttClient.PublishAsync() throws an exception: the message is then re-enqueued without honoring the cap. It was also possible for the DropOldestQueuedMessage strategy to drop messages that were not actually the oldest, because re-enqueueing breaks the queue's ordering by original arrival time. It made sense to us to peek at the message when publishing rather than dequeue it, so that after an exception 1) the cap is still honored and 2) the order of queued messages is unchanged. It is fine if another thread removes the currently publishing message from the queue to honor the cap; we just check whether it has already been removed before removing it ourselves.
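
A sketch of the peek-based publish step under the same assumed fields as above. The key points are that the head message is never dequeued before the send succeeds, and that removal tolerates another thread having already dropped it to honor the cap:

```csharp
private async Task TryPublishQueuedMessageAsync()
{
    ManagedMqttApplicationMessage message;
    await _messageQueueLock.WaitAsync();
    try
    {
        if (_messageQueue.Count == 0)
        {
            return;
        }

        message = _messageQueue.Peek(); // peek, do not dequeue
    }
    finally
    {
        _messageQueueLock.Release();
    }

    try
    {
        await _mqttClient.PublishAsync(message.ApplicationMessage);
    }
    catch
    {
        // The message was never removed, so no re-enqueue is needed:
        // the cap and the original queue order are untouched.
        return;
    }

    await _messageQueueLock.WaitAsync();
    try
    {
        // Another thread may have dropped this exact message while we were
        // publishing (DropOldestQueuedMessage enforcing the cap), so remove
        // it only if it is still at the head of the queue.
        if (_messageQueue.Count > 0 && ReferenceEquals(_messageQueue.Peek(), message))
        {
            _messageQueue.Dequeue();
        }
    }
    finally
    {
        _messageQueueLock.Release();
    }
}
```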