Hello Martijn van Halen,
Welcome to Microsoft Q&A, and thank you for reaching out.
I understand that you’re working with highly sensitive investigative material, especially public datasets such as the Epstein documents, and that you want to know how to remove or relax the Azure OpenAI content filters so your workflows don’t get blocked.
Given the nature of your domain, here’s the most accurate guidance:
What’s possible and what isn’t
Even at the lowest settings, Azure OpenAI still enforces a set of mandatory safety protections. These cannot be disabled for any customer, including managed customers. They apply to areas such as:
sexual content involving minors
explicit acts of violence
real-world exploitation
Because of global compliance requirements, there is no approval path to fully remove those filters, not even for research, journalism, or forensic analysis of public datasets.
What you can do
Although the mandatory filters can’t be turned off, there are still steps you can take to reduce false positives and get more predictable behavior for professional investigative workloads.
1. Request Managed Customer Access / Modified Content Filters
You can apply for Managed Customer Access (MCA), which offers more flexibility for legitimate professional scenarios like yours. This does not remove mandatory protections, but it does:
reduce unnecessary blocking
improve handling of contextual descriptions of crimes
provide more predictable responses with sensitive datasets
allow certain content normally filtered at standard levels
You can submit this request through the Azure OpenAI Limited Access Review – Modified Content Filters form. Make sure your application clearly explains:
your public-interest investigative purpose
the nature of the datasets
the need for factual, non-graphic handling
your internal controls and review process
A strong justification helps the approval move faster.
2. Review and adjust your current content filter configuration
Since you’ve already set the filters to the lowest level, also confirm the configuration in Azure OpenAI Studio’s content filter settings. Each category (hate, sexual, violence, self-harm) is configured separately for prompts and for completions, so check every category and direction rather than relying on a single global level, and fine-tune them depending on the type of prompts you’re sending.
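A practical way to see how your current configuration behaves is to read the per-category annotations Azure OpenAI attaches to every response. Below is a minimal Python sketch using the official openai package; the environment variable names, API version, deployment name, and prompt are placeholders for your own values, not a prescribed setup:

```python
import os
from openai import AzureOpenAI

# Placeholder endpoint/key/version — substitute your own resource values.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="my-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Summarize the attached court filing."}],
)

# Azure adds `prompt_filter_results` (input annotations) and per-choice
# `content_filter_results` (output annotations) on top of the standard
# OpenAI response schema; model_dump() exposes them as plain dicts.
data = response.model_dump()
for prompt_result in data.get("prompt_filter_results", []):
    for category, result in prompt_result.get("content_filter_results", {}).items():
        print(f"prompt {category}: {result}")
for choice in data.get("choices", []):
    for category, result in (choice.get("content_filter_results") or {}).items():
        print(f"output {category}: {result}")
```

Reviewing these annotations over a sample of your real prompts shows which categories sit close to the threshold, which is more reliable than guessing from the blocked/not-blocked outcome alone.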
3. Monitor and fine-tune based on model behavior
After adjusting the settings:
keep logging which prompts get blocked
validate whether they fall under the mandatory categories
refine prompts to keep them factual and non-graphic
mask or preprocess explicit details before sending them to the model
This helps balance compliance with the need to analyze sensitive evidence; the sketch below shows one way to combine the logging and masking steps.
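Here is a minimal Python sketch of that loop, again using the official openai package. The redaction term list, log format, and helper names are illustrative assumptions rather than a prescribed design; the error handling relies on the documented behavior that a fully filtered request comes back as an HTTP 400 with error code content_filter:

```python
import logging
import re
from openai import AzureOpenAI, BadRequestError

logging.basicConfig(filename="blocked_prompts.log", level=logging.INFO)

# Hypothetical pre-processing step: mask graphic detail before sending it
# to the model, keeping the factual skeleton your analysis actually needs.
EXPLICIT_PATTERNS = [
    re.compile(r"\b(?:graphic-term-1|graphic-term-2)\b", re.IGNORECASE),  # your own term list
]

def preprocess(text: str) -> str:
    for pattern in EXPLICIT_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def ask(client: AzureOpenAI, deployment: str, prompt: str) -> str | None:
    try:
        response = client.chat.completions.create(
            model=deployment,
            messages=[{"role": "user", "content": preprocess(prompt)}],
        )
        return response.choices[0].message.content
    except BadRequestError as err:
        # Azure signals a filtered request with a 400 whose error code is
        # "content_filter"; log prompt and details so you can later check
        # whether the block fell under a mandatory category.
        if err.code == "content_filter":
            logging.info("Blocked prompt: %r; details: %s", prompt, err.body)
            return None
        raise
```

Keeping this log over time gives you the evidence you need for the Modified Content Filters application as well, since it documents exactly which categories are producing false positives in your workflow.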
To summarize: you can request Managed Customer Access / modified content filters for more flexibility, but full removal of the safety controls, especially those around minors or explicit exploitation, is not possible for any customer. Careful configuration, prompt structuring, and support-assisted review usually give investigative teams the stability they need.
I hope this helps. Do let me know if you have any further queries.
Thank you!