Modified Content Filters / Managed Customer access.

Martijn van Halen 0 Reputation points
2025-11-26T11:17:39.9433333+00:00


For our service https://crimeowl(.)ai we use managed AI instances of various LLM models. Since we investigate crimes and cold cases, we receive material that describes crimes as well. We have already set the content filters to the lowest level, but we keep having issues. Now we are ingesting public datasets such as the Epstein files to make them available to journalists and researchers, and we know we will need to remove the content filters. How do we get permission? We have been featured on NBC: https://www.youtube(.)com/watch?v=MGz035pQnW0

Azure AI Content Safety
An Azure service that enables users to identify content that is potentially offensive, risky, or otherwise undesirable. Previously known as Azure Content Moderator.

1 answer

  1. SRILAKSHMI C 10,805 Reputation points Microsoft External Staff Moderator
    2025-11-26T14:09:12.3966667+00:00

    Hello Martijn van Halen,

    Welcome to Microsoft Q&A, and thank you for reaching out.

    I understand that you’re looking for a way to work with highly sensitive investigative material, especially public datasets like the Epstein documents, and you want to know how to remove or relax the Azure OpenAI content filters so your workflows don’t get blocked.

    Given the nature of your domain, here’s the most accurate guidance:

    What’s possible and what isn’t

    Even at the lowest settings, Azure OpenAI still enforces a set of mandatory safety protections. These cannot be disabled for any customer, including managed customers. This applies to areas such as:

    sexual content involving minors

    explicit acts of violence

    real-world exploitation

    Because of global compliance requirements, there is no approval path to fully remove those filters, not even for research, journalism, or forensic analysis of public datasets.

    What you can do

    Although the mandatory filters can’t be turned off, there are still steps you can take to reduce false positives and get more predictable behavior for professional investigative workloads.

    1. Request Managed Customer Access / Modified Content Filters

    You can apply for Managed Customer Access (MCA), which offers more flexibility for legitimate professional scenarios like yours. This does not remove mandatory protections, but it does:

    reduce unnecessary blocking

    improve handling of contextual descriptions of crimes

    provide more predictable responses with sensitive datasets

    allow certain content normally filtered at standard levels

    You can submit this request through the Azure OpenAI Limited Access Review – Modified Content Filters form. Make sure your application clearly explains:

    your public-interest investigative purpose

    the nature of the datasets

    the need for factual, non-graphic handling

    your internal controls and review process

    A strong justification helps the approval move faster.

    2. Review and adjust your current content filter configuration

    Since you’ve already set filters to the lowest level, also confirm the configuration in Azure OpenAI Studio’s content filter settings. Some categories may still need fine-tuning, even when set to “low,” depending on the type of prompts you’re sending. A quick way to verify what is actually firing is to probe the deployment and read the filter annotations Azure returns, as in the sketch below.
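    Here is a minimal Python sketch of such a probe, using the openai SDK against an Azure OpenAI deployment. The endpoint, key, deployment name, and sample prompt are placeholders, and the annotation fields (prompt_filter_results, content_filter_results) can vary by API version, so treat it as a starting point rather than a drop-in solution:

    ```python
    # Minimal sketch: probe a deployment and print the content filter
    # annotations Azure OpenAI attaches to each response. Endpoint, key,
    # deployment name, and the prompt are placeholders.
    import os
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",
    )

    response = client.chat.completions.create(
        model="my-deployment",  # placeholder: your Azure OpenAI deployment name
        messages=[{"role": "user", "content": "Summarize the attached case notes."}],
    )

    # model_dump() keeps Azure-specific extra fields the SDK doesn't model natively.
    raw = response.model_dump()
    for item in raw.get("prompt_filter_results", []):
        print("prompt annotations:", item.get("content_filter_results"))
    for choice in raw.get("choices", []):
        print("completion annotations:", choice.get("content_filter_results"))
    ```

    If every category reports a low severity but a prompt is still blocked, the block is likely coming from one of the non-configurable protections rather than from your filter configuration.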

    3. Monitor and fine-tune based on model behavior

    After adjusting the settings:

    keep logging which prompts get blocked

    validate whether they fall under the mandatory categories

    refine prompts to keep them factual and non-graphic

    mask or preprocess explicit details before sending them to the model

    This helps balance compliance with the need to analyze sensitive evidence; a minimal sketch of such a logging-and-masking wrapper is shown below.
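    The following hedged Python sketch pre-masks obvious identifiers before text reaches the model and, when a request is rejected with the content_filter error code, logs the details for later review. The redact() helper and its regex patterns are illustrative assumptions only, not a vetted redaction pipeline, and the error-body shape should be checked against the API version you’re using:

    ```python
    # Hedged sketch: log content-filter blocks and pre-mask explicit details.
    # redact() and its patterns are illustrative assumptions, not a vetted
    # redaction pipeline; adapt both to your own data-handling controls.
    import json
    import logging
    import re

    from openai import AzureOpenAI, BadRequestError

    logging.basicConfig(filename="filter_blocks.log", level=logging.INFO)


    def redact(text: str) -> str:
        # Illustrative pre-masking only: swap emails and phone-like numbers
        # for tokens before the text reaches the model.
        text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b", "[EMAIL]", text)
        text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
        return text


    def ask(client: AzureOpenAI, deployment: str, prompt: str):
        # Send a prompt; on a content-filter block, log the details and return None.
        try:
            return client.chat.completions.create(
                model=deployment,
                messages=[{"role": "user", "content": redact(prompt)}],
            )
        except BadRequestError as e:
            # Azure rejects filtered prompts with HTTP 400 and error code
            # "content_filter"; the body carries per-category results.
            if getattr(e, "code", None) == "content_filter":
                logging.info(json.dumps(
                    {"prompt_preview": prompt[:200], "detail": e.body},
                    default=str,
                ))
                return None
            raise  # anything else is a genuine error
    ```

    Reviewing filter_blocks.log periodically tells you whether blocks come from configurable categories (worth revisiting in your filter configuration or your Modified Content Filters application) or from the mandatory protections, where reformulating or masking the input is the only option.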

    You can request Managed Customer Access / Modified Filters for more flexibility, but full removal of safety controls, especially around minors or explicit exploitation, is not possible for any customer. Careful configuration, prompt structuring, and support-assisted review usually give investigative teams the stability they need.

    I hope this helps. Do let me know if you have any further queries.

    Thank you!

