
ContentHarmEvaluator(IDictionary<String,String>) Constructor

Definition

An IEvaluator that utilizes the Azure AI Foundry Evaluation service to evaluate responses produced by an AI model for the presence of a variety of harmful content such as violence, hate speech, etc.

public ContentHarmEvaluator(System.Collections.Generic.IDictionary<string,string>? metricNames = default);
new Microsoft.Extensions.AI.Evaluation.Safety.ContentHarmEvaluator : System.Collections.Generic.IDictionary<string, string> -> Microsoft.Extensions.AI.Evaluation.Safety.ContentHarmEvaluator
Public Sub New (Optional metricNames As IDictionary(Of String, String) = Nothing)

Parameters

metricNames
IDictionary<String,String>

An optional dictionary containing the mapping from the names of the metrics that are used when communicating with the Azure AI Foundry Evaluation service, to the Names of the EvaluationMetrics returned by this IEvaluator.

If omitted, the mapping defaults to include all content harm metrics that are supported by the Azure AI Foundry Evaluation service: HateAndUnfairnessMetricName, ViolenceMetricName, SelfHarmMetricName, and SexualMetricName.
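
As an illustration of this parameter's shape, the sketch below supplies an explicit single-entry mapping. The service-side key "violence" is an assumption used only to show the dictionary's structure; when the parameter is omitted, the library supplies the correct service-side names itself.

using System.Collections.Generic;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Hypothetical explicit mapping: the key "violence" (the name assumed to be
// used when communicating with the Azure AI Foundry Evaluation service) maps
// to the metric name that this IEvaluator reports.
var violenceOnly = new ContentHarmEvaluator(
    new Dictionary<string, string>
    {
        ["violence"] = ViolenceEvaluator.ViolenceMetricName
    });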

Remarks

ContentHarmEvaluator can be used to evaluate responses for all supported content harm metrics in one go. You can achieve this by omitting the metricNames parameter.

ContentHarmEvaluator also serves as the base class for HateAndUnfairnessEvaluator, ViolenceEvaluator, SelfHarmEvaluator and SexualEvaluator, which can be used to evaluate responses for one specific content harm metric at a time.
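
As a minimal construction-only sketch (the Azure AI Foundry Evaluation service configuration and the evaluation call itself are omitted), choosing between the combined evaluator and a single-metric evaluator looks like this:

using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Safety;

// Evaluates responses for all supported content harm metrics in one go.
IEvaluator allHarms = new ContentHarmEvaluator();

// Evaluates responses for a single content harm metric (violence) only.
IEvaluator violenceOnly = new ViolenceEvaluator();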

Applies to