I’ve dealt with this exact problem before on Consumption and Flex Consumption plans.
When you see a sudden surge of “Temporary failure in name resolution” errors and you are not using VNets, the cause is almost always the same:
Azure rotates the underlying compute hosts for Consumption plans, and sometimes a batch of hosts ends up with broken or degraded DNS resolvers. It doesn’t show in Service Health because these issues are host-level, not region-wide, so Microsoft does not classify them as incidents.
How to confirm it: Restart the Function App (or force a redeploy). If the DNS errors disappear immediately after restart and then come back days later, you are landing on bad backend hosts.
What actually fixes it:
- Restart the Function App to force your app onto a different backend host.
- If it keeps happening, temporarily switch the app from Consumption to Flex Consumption and back; the plan change forces a full reallocation onto different hosts.
- Add a simple DNS resolution test that logs to Application Insights, so you can confirm the resolver is failing before your code even runs (see the sketch after this list).
- If the issue persists for more than a few hours, open a support ticket with your timestamps and region. Microsoft will drain the faulty hosts manually. I had to do this last time.
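
For the DNS test, here is a minimal sketch of what I mean, written against the Azure Functions Python v2 programming model. The hostname, the 5-minute schedule, and the function name are placeholders for illustration; it assumes your Function App already has an Application Insights connection, so anything written with the standard `logging` module ends up there.

```python
import logging
import socket
import time

import azure.functions as func

app = func.FunctionApp()

# Hypothetical probe: runs every 5 minutes and checks whether the host's
# resolver can look up a name, independently of your application code.
@app.timer_trigger(schedule="0 */5 * * * *", arg_name="timer")
def dns_probe(timer: func.TimerRequest) -> None:
    hostname = "api.example.com"  # placeholder: use a host your app actually calls
    start = time.monotonic()
    try:
        records = socket.getaddrinfo(hostname, 443)
        elapsed_ms = (time.monotonic() - start) * 1000
        logging.info("DNS OK for %s in %.1f ms (%d records)", hostname, elapsed_ms, len(records))
    except socket.gaierror as exc:
        # socket.gaierror is the same failure mode that surfaces as
        # "Temporary failure in name resolution" (EAI_AGAIN).
        logging.error("DNS FAILED for %s: %s", hostname, exc)
```

If the failures in these logs line up with the timestamps of your application errors, that points at the host’s resolver rather than your outbound code, and it also gives you concrete evidence to attach to the support ticket.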