|<xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceEvaluator>|`Relevance`| Evaluates how relevant a response is to a query |
|<xref:Microsoft.Extensions.AI.Evaluation.Quality.CompletenessEvaluator>|`Completeness`| Evaluates how comprehensive and accurate a response is |
|<xref:Microsoft.Extensions.AI.Evaluation.Quality.RetrievalEvaluator>|`Retrieval`| Evaluates performance in retrieving information for additional context |
|<xref:Microsoft.Extensions.AI.Evaluation.Quality.CoherenceEvaluator>|`Coherence`| Evaluates the logical and orderly presentation of ideas |
|<xref:Microsoft.Extensions.AI.Evaluation.Quality.EquivalenceEvaluator>|`Equivalence`| Evaluates the similarity between the generated text and its ground truth with respect to a query |
|<xref:Microsoft.Extensions.AI.Evaluation.Quality.GroundednessEvaluator>|`Groundedness`| Evaluates how well a generated response aligns with the given context |
|<xref:Microsoft.Extensions.AI.Evaluation.Quality.RelevanceTruthAndCompletenessEvaluator>† |`Relevance (RTC)`, `Truth (RTC)`, and `Completeness (RTC)`| Evaluates how relevant, truthful, and complete a response is |
† This evaluator is marked [experimental](../../fundamentals/syslib-diagnostics/experimental-overview.md).
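As a minimal sketch of how one of these quality evaluators might be invoked (assuming an <xref:Microsoft.Extensions.AI.IChatClient> named `chatClient` is already configured; exact signatures may vary across preview versions of the packages):

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

// Assumes an IChatClient (chatClient) was created elsewhere,
// for example from an Azure OpenAI or OpenAI client.
IEvaluator evaluator = new CoherenceEvaluator();

var messages = new List<ChatMessage>
{
    new(ChatRole.User, "Describe indoor herb gardening in one paragraph.")
};
ChatResponse response = await chatClient.GetResponseAsync(messages);

// LLM-based evaluators use a ChatConfiguration that wraps the IChatClient.
EvaluationResult result = await evaluator.EvaluateAsync(
    messages, response, new ChatConfiguration(chatClient));

// Each evaluator reports one or more named metrics; coherence is numeric.
NumericMetric coherence =
    result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
Console.WriteLine($"Coherence: {coherence.Value}");
```

The same pattern applies to the other evaluators in the table: construct the evaluator, call `EvaluateAsync`, and read the metric(s) it reports from the returned `EvaluationResult`.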
### Safety evaluators
Safety evaluators check for the presence of harmful, inappropriate, or unsafe content in a response. They rely on the Azure AI Foundry Evaluation service, which uses a model that's fine-tuned to perform evaluations.
|<xref:Microsoft.Extensions.AI.Evaluation.Safety.GroundednessProEvaluator>|`Groundedness Pro`| Uses a fine-tuned model hosted behind the Azure AI Foundry Evaluation service to evaluate how well a generated response aligns with the given context |
|<xref:Microsoft.Extensions.AI.Evaluation.Safety.ProtectedMaterialEvaluator>|`Protected Material`| Evaluates a response for the presence of protected material |
|<xref:Microsoft.Extensions.AI.Evaluation.Safety.UngroundedAttributesEvaluator>|`Ungrounded Attributes`| Evaluates a response for the presence of content that indicates ungrounded inference of human attributes |
|<xref:Microsoft.Extensions.AI.Evaluation.Safety.HateAndUnfairnessEvaluator>† |`Hate And Unfairness`| Evaluates a response for the presence of content that's hateful or unfair |
|<xref:Microsoft.Extensions.AI.Evaluation.Safety.SelfHarmEvaluator>† |`Self Harm`| Evaluates a response for the presence of content that indicates self harm |
|<xref:Microsoft.Extensions.AI.Evaluation.Safety.ViolenceEvaluator>† |`Violence`| Evaluates a response for the presence of violent content |
|<xref:Microsoft.Extensions.AI.Evaluation.Safety.SexualEvaluator>† |`Sexual`| Evaluates a response for the presence of sexual content |
|<xref:Microsoft.Extensions.AI.Evaluation.Safety.CodeVulnerabilityEvaluator>|`Code Vulnerability`| Evaluates a response for the presence of vulnerable code |
|<xref:Microsoft.Extensions.AI.Evaluation.Safety.IndirectAttackEvaluator>|`Indirect Attack`| Evaluates a response for the presence of indirect attacks, such as manipulated content, intrusion, and information gathering |
† In addition, the <xref:Microsoft.Extensions.AI.Evaluation.Safety.ContentHarmEvaluator> provides single-shot evaluation for the four metrics supported by `HateAndUnfairnessEvaluator`, `SelfHarmEvaluator`, `ViolenceEvaluator`, and `SexualEvaluator`.
In this quickstart, you create an MSTest app to evaluate the quality of a chat response from an OpenAI model. The test app uses the [Microsoft.Extensions.AI.Evaluation](https://www.nuget.org/packages/Microsoft.Extensions.AI.Evaluation) libraries.
> [!NOTE]
> This quickstart demonstrates the simplest usage of the evaluation API. Notably, it doesn't demonstrate use of the [response caching](../conceptual/evaluation-libraries.md#cached-responses) and [reporting](../conceptual/evaluation-libraries.md#reporting) functionality, which are important if you're authoring unit tests that run as part of an "offline" evaluation pipeline. The scenario shown in this quickstart is suitable for use cases such as "online" evaluation of AI responses within production code and logging scores to telemetry, where caching and reporting aren't relevant. For a tutorial that demonstrates the caching and reporting functionality, see [Tutorial: Evaluate a model's response with response caching and reporting](../tutorials/evaluate-with-reporting.md).
Complete the following steps to create an MSTest project that connects to the `gpt-4o` AI model.
```bash
dotnet user-secrets init
dotnet user-secrets set AZURE_OPENAI_ENDPOINT <your-Azure-OpenAI-endpoint>
dotnet user-secrets set AZURE_OPENAI_GPT_NAME gpt-4o
dotnet user-secrets set AZURE_TENANT_ID <your-tenant-ID>
```
(Depending on your environment, the tenant ID might not be needed. In that case, remove it from the code that instantiates the <xref:Azure.Identity.DefaultAzureCredential>.)
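One way the test code might consume these secrets is sketched below. The `MyTests` type is a hypothetical anchor for locating the user-secrets store (any type in the test assembly works), and the exact wiring in the quickstart may differ:

```csharp
using Azure.Identity;
using Microsoft.Extensions.Configuration;

// Read the values stored above with `dotnet user-secrets set`.
// MyTests is a placeholder: any type defined in the project whose
// .csproj carries the UserSecretsId works as the anchor type.
IConfigurationRoot config = new ConfigurationBuilder()
    .AddUserSecrets<MyTests>()
    .Build();

string endpoint = config["AZURE_OPENAI_ENDPOINT"];
string deployment = config["AZURE_OPENAI_GPT_NAME"];
string? tenantId = config["AZURE_TENANT_ID"];

// If your environment resolves the correct tenant automatically,
// omit the options (and the AZURE_TENANT_ID secret) entirely.
DefaultAzureCredential credential = tenantId is null
    ? new DefaultAzureCredential()
    : new DefaultAzureCredential(
        new DefaultAzureCredentialOptions { TenantId = tenantId });
```

The `credential` and `endpoint` can then be passed to the Azure OpenAI client that backs the `IChatClient` used by the tests.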