Good day.
We are having memory consumption issues with our MySQL production instance.
Issue details:
Our server's memory consumption grows steadily, and approximately every 20 days the server runs out of memory.
Originally this behavior caused OOM restarts of the server; now we restart it manually once usage crosses the 95% threshold.
With the default configuration (innodb_buffer_pool_size = 2 GB) it was restarting every 7 days.
The actual problem is that, according to all the configuration settings, the server should not require that much memory. All of our diagnostics show that the database components do not need anywhere near the 4 GiB that is nevertheless exhausted over each 20-day period.
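For context, the back-of-envelope budget behind that statement is along these lines. It is the usual "global buffers plus per-connection buffers" heuristic, only an approximation, and it ignores internal overhead:

```sql
-- Rough memory budget from configuration (heuristic, not an exact accounting).
-- 20 is our typical peak connection count; substitute @@max_connections
-- for the absolute worst case.
SELECT ROUND(
         ( @@innodb_buffer_pool_size
         + @@innodb_log_buffer_size
         + @@key_buffer_size
         + 20 * ( @@sort_buffer_size
                + @@join_buffer_size
                + @@read_buffer_size
                + @@read_rnd_buffer_size
                + @@thread_stack
                + @@tmp_table_size )
         ) / 1024 / 1024 / 1024, 2) AS approx_budget_gib;
```

With our values this lands well below the 4 GiB the instance has, which is why we do not see how the configuration alone can explain the growth.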
Problem:
We cannot identify what causes this memory consumption, and we cannot reproduce the issue on any of our other environments, even those on the same tier.
We need advice on identifying and fixing the root cause of this behavior.
Our current understanding is that we are facing a memory leak in components not directly related to MySQL itself but rather to the Azure managed service. The reason for this assumption is that all MySQL metrics show proper memory consumption: the server is configured to consume about 70% of the available memory (now much less, since we lowered the InnoDB buffer pool size).
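The checks behind "all MySQL metrics show proper memory consumption" are roughly the following, assuming the sys schema and performance_schema memory instrumentation are available (they are enabled by default in MySQL 8.0, though we have not audited every instrument on the flexible server):

```sql
-- Total memory the server itself accounts for, to compare against the
-- host-level memory percentage reported by the Azure metrics.
SELECT * FROM sys.memory_global_total;

-- Top allocators inside the server (InnoDB, temp tables, connection
-- buffers, performance_schema, ...); the view is already sorted by
-- current allocation, largest first.
SELECT event_name, current_alloc, high_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 15;

-- Per-connection view, in case a few sessions are hoarding memory.
SELECT thread_id, user, current_allocated, total_allocated
FROM sys.memory_by_thread_by_current_bytes
LIMIT 10;
```

The totals these report stay in line with the configured limits, which is what leads us to suspect something outside mysqld itself.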
Details
Instance details:
- Deployed on Azure Database for MySQL flexible server
- Version: 8.0
- Tier: Burstable, B2s, 2 vCores, 4 GiB RAM, 20 GiB storage, 360 IOPS
Configuration details:
Pretty much all of the parameters are left as Azure configured them, including the dynamic ones. The only notable exception is innodb_buffer_pool_size, which was lowered to 1 GB from the default 2 GB.
- tmp_table_size
- innodb_buffer_pool_size
  - 1073741824 (1024 MB)
  - Actual usage always stays within 1 GB (usually, even when the server goes OOM, it is still around 1010.x MB); see the queries sketched after this list.
- max_connections
  - 341
  - Typically the service has about 5-20 simultaneous connections open: 5 for common usage and up to about 20 during the heaviest ETLs.
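Those figures come from the Azure portal metrics plus server-side checks along these lines:

```sql
-- Configured buffer pool size versus the bytes it actually holds.
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_bytes_data';

-- Current connections and the historical peak since the last restart.
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Threads_connected', 'Max_used_connections');
```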
Additional details:
Our tables use the InnoDB engine.
We do not have locking problems.
We do not have issues with "stalled" transactions.
We do have quite complex queries running against the database, but this should be irrelevant: all the related MySQL subsystems are configured so that they should not consume memory without releasing it.
We have ETLs that flush a notable amount of data into this database, running every hour and every 12 hours. We can clearly see their effect on the InnoDB page counts, but the behavior looks completely healthy (some pages are consumed, then some are freed). According to the available metrics, the impact is entirely manageable.
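What we watch around the ETL windows is roughly the following, sampled before and after a run (the digest query is generic, nothing instance-specific in it):

```sql
-- Buffer pool page churn: free / data / dirty pages around an ETL run.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages%';

-- Whether implicit temporary tables spill to disk (a common hidden cost
-- of complex queries).
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

-- Statements creating the most internal temporary tables.
SELECT LEFT(digest_text, 80) AS stmt,
       count_star,
       sum_created_tmp_tables,
       sum_created_tmp_disk_tables
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_created_tmp_tables DESC
LIMIT 10;
```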
Thanks in advance.
Br, Seva.
P.S. This is quite a serious issue for us, as it affects our Prod.