Azure Content Safety API not working

Susie Park 0 Reputation points
2025-12-03T22:49:49.3333333+00:00

I'm getting {"blocklistsMatch":[],"categoriesAnalysis":[]} for all text moderation calls i'm making.

It used to work before. I followed this guide: https://learn.microsoft.com/en-us/azure/ai-services/content-safety/quickstart-text?pivots=programming-language-rest&tabs=visual-studio%2Cwindows

Azure AI Content Safety
An Azure service that enables users to identify content that is potentially offensive, risky, or otherwise undesirable. Previously known as Azure Content Moderator.

2 answers

  1. Q&A Assist
    2025-12-03T22:49:55.47+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist.

    If you are receiving {"blocklistsMatch":[],"categoriesAnalysis":[]} for all your text moderation calls, it may indicate that the API is not detecting any harmful content based on the input provided. Here are a few things to check:

    1. Input Text: Ensure that the text you are sending to the API contains content that should trigger a response; a minimal sketch of such a test call follows this list. If the text is benign or does not match any blocklist items, the API will return empty results.
    2. Blocklist Configuration: Verify that the blocklists you are using are correctly set up and that they contain items that should match against the input text. If the blocklist is empty or not properly referenced in your API call, it will not yield any matches.
    3. API Key and Endpoint: Double-check that you are using the correct API key and endpoint for your Azure Content Safety resource. If there have been any changes to your resource or if you are using a different subscription, this could affect the API's functionality.
    4. Service Status: Check the Azure service status to see if there are any ongoing issues with the Content Safety service that might affect its performance.
    5. Rate Limits: Ensure that you are not hitting any rate limits imposed by the API, which could lead to unexpected behavior.
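    To rule out points 1–3 quickly, below is a minimal sketch of a direct call to the text:analyze REST endpoint (the same endpoint the quickstart uses), written with Python's requests library. The endpoint, key, and api-version values are placeholders and assumptions; substitute the values from your own resource and the API version listed in the current documentation.

    ```python
    import requests

    # Placeholders: replace with your own resource values.
    ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
    KEY = "<your-content-safety-key>"

    # The api-version here is an assumption based on the current REST
    # quickstart; check which version your resource supports.
    url = f"{ENDPOINT}/contentsafety/text:analyze?api-version=2023-10-01"

    payload = {
        # Text expected to trigger at least one harm category. Benign
        # text can legitimately come back without matches.
        "text": "I want to hurt someone badly.",
        # Only reference a blocklist that actually exists and has items;
        # otherwise blocklistsMatch will always be empty.
        # "blocklistNames": ["<your-blocklist-name>"],
    }

    response = requests.post(
        url,
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json=payload,
    )

    # Compare the raw status code and body with what your application sees.
    print(response.status_code)
    print(response.json())
    ```

    If this standalone call returns a non-empty categoriesAnalysis while your application still gets empty arrays, the problem is more likely in how the application builds its request than in the service itself.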

    If everything seems correct and it still does not work, consider reaching out to Azure support for further assistance.



  2. Anshika Varshney 3,795 Reputation points Microsoft External Staff Moderator
    2025-12-05T15:33:11.2133333+00:00

    Hey **Susie Park**,
    I understand that you’re getting { "blocklistsMatch": [], "categoriesAnalysis": [] } for all your text moderation calls, which is quite frustrating. Let's troubleshoot this together.

    Here are a few things to check:

    1. Verify Resource Configuration: Make sure that your Azure AI Content Safety resource is correctly configured. It may be that the resource has been altered, or that its keys or endpoint have changed, since your calls last worked.
    2. Examine API Call Structure: Double-check the structure of your API calls (see the sketch after this list). Any discrepancies in the request body or API version could lead to unintended results. Ensure that you are adhering to the API definitions in the latest Azure AI Content Safety documentation.
    3. Inspect Content: If the input text is too generic or does not contain terms that would trigger a category or blocklist match, you may not see any results. Consider testing with content that you know has been flagged in the past.
    4. Review Recent Changes: Have there been any recent changes to your resource or to the API version you call? Updates or modifications can affect the responses you get back.
    5. Check API Limits: Ensure that you are not hitting any query limits or quota restrictions as outlined in the service limits documentation.
    6. Monitor Error Codes: If you're experiencing specific error codes, refer to the Azure AI Content Safety error codes documentation to understand what might be happening.
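    For points 2 and 6, the sketch below shows one way to log the exact request body and surface the raw status code and error payload, rather than only inspecting the parsed result. It uses Python's requests library; the endpoint, key, and api-version are placeholders and assumptions you would replace with your own values.

    ```python
    import json
    import requests

    ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"  # placeholder
    KEY = "<your-content-safety-key>"                                      # placeholder

    def analyze_text(text: str) -> dict:
        """Call text:analyze and surface any error code or payload."""
        url = f"{ENDPOINT}/contentsafety/text:analyze"
        # api-version is an assumption; use the version your resource supports.
        params = {"api-version": "2023-10-01"}
        body = {"text": text}

        response = requests.post(
            url,
            params=params,
            headers={
                "Ocp-Apim-Subscription-Key": KEY,
                "Content-Type": "application/json",
            },
            json=body,
        )

        # Log what was sent and what came back, so a malformed request or
        # an error code is visible instead of silently looking like an
        # "empty" moderation result.
        print("Request body:", json.dumps(body))
        print("Status code:", response.status_code)
        if not response.ok:
            print("Error payload:", response.text)
            response.raise_for_status()

        return response.json()

    if __name__ == "__main__":
        print(json.dumps(analyze_text("Sample text to moderate."), indent=2))
    ```

    If the status code is anything other than 200, the error payload usually names the problem (for example an authentication or api-version issue), which you can then look up in the error codes documentation.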


    I hope this helps. Do let me know if you have any further queries.


    If this answers your query, please click Accept Answer and select Yes for "Was this answer helpful?".

    Thank you!

