If you are receiving `{"blocklistsMatch":[],"categoriesAnalysis":[]}` for all your text moderation calls, it may indicate that the API is not detecting any harmful content in the input provided. Here are a few things to check:
- Input Text: Ensure that the text you are sending to the API contains content that should trigger a response. If the text is benign or does not match any blocklist items, the blocklist matches will be empty. Note, however, that the API typically returns a severity entry (possibly severity 0) for each category even on benign text, so a consistently empty `categoriesAnalysis` may point to a request or configuration issue rather than clean input.
- Blocklist Configuration: Verify that the blocklists you are using are correctly set up and contain items that should match the input text. If a blocklist is empty, or its name is not passed in the `blocklistNames` field of your request, `blocklistsMatch` will always be empty.
- API Key and Endpoint: Double-check that you are using the correct API key and endpoint for your Azure Content Safety resource. If there have been any changes to your resource or if you are using a different subscription, this could affect the API's functionality.
- Service Status: Check the Azure service status to see if there are any ongoing issues with the Content Safety service that might affect its performance.
- Rate Limits: Ensure that you are not hitting rate limits imposed by the API; note that throttling normally surfaces as an HTTP 429 error rather than an empty successful result.
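To make the first three checks concrete, here is a minimal sketch of a helper that builds the `text:analyze` request so you can verify the text is non-empty and the blocklist names are actually referenced before sending. The endpoint, `api-version`, and field names follow the public Azure AI Content Safety REST contract as I understand it; treat the version string and the helper itself as assumptions and adjust to match your resource.

```python
import json

# Assumption: a generally-available api-version; check your resource's docs.
API_VERSION = "2023-10-01"

def build_analyze_request(endpoint, api_key, text, blocklist_names=None):
    """Return (url, headers, body) for a Content Safety text:analyze call.

    Hypothetical helper for illustration; raises early on the common
    misconfigurations that lead to empty-looking results.
    """
    if not text or not text.strip():
        raise ValueError("Input text is empty; the API has nothing to analyze.")
    url = (f"{endpoint.rstrip('/')}/contentsafety/text:analyze"
           f"?api-version={API_VERSION}")
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,
        "Content-Type": "application/json",
    }
    body = {"text": text}
    if blocklist_names:
        # If this field is missing or empty, blocklistsMatch is always [].
        body["blocklistNames"] = list(blocklist_names)
    return url, headers, json.dumps(body)
```

You would then send the request with any HTTP client (e.g. `requests.post(url, headers=headers, data=body)`) and inspect the JSON response for `blocklistsMatch` and `categoriesAnalysis`.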
If everything seems correct and the results are still empty, consider reaching out to Azure support for further assistance.