Azure AI Content Safety
An Azure service that enables users to identify content that is potentially offensive, risky, or otherwise undesirable. Previously known as Azure Content Moderator.
I'm trying to use the Content Safety Studio to train a custom category. I uploaded my .jsonl file to blob storage, and it contains 64 examples in the following format:
{"text": "Message: Temporary Access Token One-Time Access Token Use this token to access the temporary file viewer. This token is valid for 1 hour only. xxxx Do not refresh the page after entering the code. ", "isPositive": true}
{"text": "Message: Access Token One-Time Access Token Use this token to access the file viewer. This token is valid for 1 hour only. xxxx Do not refresh the page after entering the code. ", "isPositive": true}
Except every time I try to train the model, the training status fails with 'internal error'. I've tried five times now and get the internal error every time. Are there certain restricted characters or something? The documentation here is severely lacking, and since it takes at least two hours for the job to error out, it's incredibly difficult to troubleshoot.
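In case it helps narrow things down, here's the quick local check I've been running on the file before uploading it. It's only a sketch based on my own assumptions (that each line must be standalone UTF-8 JSON with a "text" string and an "isPositive" boolean), since I can't find the exact schema or character restrictions documented anywhere:

# validate_jsonl.py - rough pre-upload sanity check for the training file.
# Assumes each line is a JSON object with a "text" string and an "isPositive"
# boolean; adjust if the expected schema turns out to be different.
import json
import sys
import unicodedata

def validate_jsonl(path: str) -> bool:
    ok = True
    with open(path, "rb") as f:
        raw = f.read()
    # A UTF-8 BOM at the start of the file is a common reason the first
    # record fails to parse on the server side.
    if raw.startswith(b"\xef\xbb\xbf"):
        print("warning: file starts with a UTF-8 BOM")
        ok = False
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError as e:
        print(f"error: file is not valid UTF-8: {e}")
        return False
    for i, line in enumerate(text.splitlines(), start=1):
        if not line.strip():
            print(f"line {i}: empty line")
            ok = False
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError as e:
            print(f"line {i}: invalid JSON: {e}")
            ok = False
            continue
        if not isinstance(record.get("text"), str):
            print(f"line {i}: missing or non-string 'text'")
            ok = False
        if not isinstance(record.get("isPositive"), bool):
            print(f"line {i}: missing or non-boolean 'isPositive'")
            ok = False
        # Flag control characters inside the text, in case those are restricted.
        bad = [c for c in record.get("text", "") if unicodedata.category(c) == "Cc"]
        if bad:
            print(f"line {i}: control characters in 'text': {bad!r}")
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if validate_jsonl(sys.argv[1]) else 1)

My file passes this check, so if there are additional restrictions (line count, text length, allowed characters) I'd appreciate a pointer to where they're documented.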