Azure AI Foundry Agent Ignores Knowledge Sources and Generates Hallucinated Responses

JESUS GABRIEL MARIN CANTERO
2025-12-05T14:00:40.8466667+00:00

I am building an enterprise agent in Azure AI Foundry (Preview) and I am facing a critical issue: the agent does not use the connected knowledge sources at all and instead generates hallucinated answers, even when the information is fully available in Azure AI Search.

Here are the details:


## **1. What I have configured**

An Azure AI Foundry Agent

A data source connected to Azure AI Search (formerly Azure Cognitive Search), wired up as sketched just after this list

The index contains ~800 JSON documents with fields like:

id, numero, title, fecha (DateTimeOffset), para, de, asunto, content

Retrieval mode: Hybrid (Vector + BM25)

"Always retrieve" option enabled

A Vector Store is also available (optional)
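
For concreteness, the configuration above is wired up roughly as in the sketch below, using the preview Python SDK. The exact import paths and parameter names have moved between azure-ai-projects and azure-ai-agents preview releases, so treat this as the shape of the setup rather than exact code; the endpoint, connection ID, index name, agent name, and model deployment are placeholders.

```python
# Sketch: Foundry agent with an Azure AI Search index attached as a knowledge tool.
# Import paths and parameter names are from a preview SDK build and may differ in
# the installed version; values in angle brackets are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.agents.models import AzureAISearchQueryType, AzureAISearchTool

project_client = AIProjectClient(
    endpoint="<my-foundry-project-endpoint>",
    credential=DefaultAzureCredential(),
)

# Connection ID of the Azure AI Search connection defined on the Foundry project.
search_tool = AzureAISearchTool(
    index_connection_id="<search-connection-id>",
    index_name="<my-index>",
    query_type=AzureAISearchQueryType.VECTOR_SIMPLE_HYBRID,  # Hybrid (Vector + BM25)
    top_k=5,
)

agent = project_client.agents.create_agent(
    model="<model-deployment-name>",
    name="<agent-name>",
    instructions="Always use the knowledge source. Never answer without retrieved documents.",
    tools=search_tool.definitions,
    tool_resources=search_tool.resources,
)
```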


## **2. What the agent is supposed to do**

The agent should:

Query the Azure AI Search index to retrieve real documents

Use the retrieved grounding information in its answers

Avoid hallucinating content

Respond accurately when asked for:

A specific document (e.g. “Circular 1432”)

The latest document by date

Information contained inside the indexed JSON files (a direct index query covering these cases is sketched just below)
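
To rule out the index itself, the data can be queried directly with azure-search-documents, outside the agent, for exactly these cases. This is a sketch: the endpoint, key, and index name are placeholders, and the order-by query assumes the fecha field is marked sortable in the index.

```python
# Sketch: query the index directly to confirm the documents are retrievable
# independently of the agent. Endpoint, key, and index name are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<my-search-service>.search.windows.net",
    index_name="<my-index>",
    credential=AzureKeyCredential("<query-key>"),
)

# A specific document, e.g. "Circular 1432".
for doc in client.search(search_text="Circular 1432", top=3):
    print(doc["id"], doc["title"], doc["fecha"])

# The latest document by date (assumes fecha is sortable in the index schema).
latest = next(iter(client.search(search_text="*", order_by=["fecha desc"], top=1)))
print("Latest:", latest["title"], latest["fecha"])
```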

---
## **3. What is actually happening**

Instead of using the Search index:

### ❌ The agent **completely ignores** the knowledge source

### ❌ It generates answers that are not based on my documents

### ❌ It invents content, titles, dates, and summaries

### ❌ It **never performs a retrieval query**, even when “Always retrieve” is enabled

### ❌ It does not call any tool or grounding mechanism

### ❌ It behaves as if the knowledge source was not connected at all

This occurs even when:

The question clearly requires data from the index

The index contains the exact information

The grounding mode is set to “Default knowledge source”

The system prompt instructs the agent to ALWAYS use the knowledge source

---
## **4. Attempts made (none solved the problem)**

I have already tried:

Re-creating the index

Re-uploading documents

Switching retrieval modes

Re-configuring the agent

Creating a vector profile (sketched after this list)

Adjusting chunk sizes

Using different embedding models

Enforcing strict rules in the system prompt such as:

```text
Always use the knowledge source.
Never answer without retrieved documents.
Do not hallucinate.
```
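
For reference, the vector profile mentioned in the list above follows the usual azure-search-documents pattern, roughly as sketched here. The algorithm name, profile name, and the 1536-dimension value are assumptions (typical for a text-embedding style model), not necessarily what is actually deployed.

```python
# Sketch: vector search profile plus a vector field on the index.
# Names and the dimension count are assumptions; adjust to the embedding
# model actually used for the chunks.
from azure.search.documents.indexes.models import (
    HnswAlgorithmConfiguration,
    SearchField,
    SearchFieldDataType,
    VectorSearch,
    VectorSearchProfile,
)

vector_search = VectorSearch(
    algorithms=[HnswAlgorithmConfiguration(name="hnsw-default")],
    profiles=[
        VectorSearchProfile(
            name="vector-profile",
            algorithm_configuration_name="hnsw-default",
        )
    ],
)

content_vector = SearchField(
    name="content_vector",
    type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
    searchable=True,
    vector_search_dimensions=1536,  # assumption: matches the embedding model's output size
    vector_search_profile_name="vector-profile",
)
```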

Despite all of this, the agent still does not perform any retrieval and continues answering based on its own model knowledge.


---
## **5. Expected Behavior**

The agent should:

Query Azure Search for each user request

Return responses strictly based on retrieved documents

Avoid generating information that does not exist

Use the grounding mechanism properly


---
## **6. Actual Behavior**

The agent never performs a retrieval query

The “Inspector” view shows no grounding

Answers are entirely hallucinated

No documents from Azure Search appear in debug logs (a programmatic check of the run steps is sketched after this list)

The system prompt is ignored
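
The absence of retrieval is also visible programmatically: enumerating the run steps of a completed run shows no tool-call step at all, only message creation. A rough sketch of that check follows; it continues from the client and agent in the earlier sketch, and the method names are from a preview SDK build and may differ in other versions.

```python
# Sketch: send one question, run the agent, then inspect the run steps.
# Continues from project_client / agent in the earlier sketch; method and
# attribute names are from a preview SDK build and may differ.
thread = project_client.agents.create_thread()
project_client.agents.create_message(
    thread_id=thread.id, role="user", content="Show me Circular 1432"
)
run = project_client.agents.create_and_process_run(thread_id=thread.id, agent_id=agent.id)

steps = project_client.agents.list_run_steps(thread_id=thread.id, run_id=run.id)
for step in steps.data:
    # Expected: at least one "tool_calls" step for the Azure AI Search tool.
    # Observed: only "message_creation" steps.
    print(step.type)
```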


---
## **7. Question**

**Is this a known limitation or bug in Azure AI Foundry (Agents)?**
What is the correct way to force an agent to always use the linked Azure Search index when responding?

Additionally:

Is it possible that AI Foundry Agents require an explicit **OpenAPI tool** instead of relying on Search grounding alone?

Are there known issues with multi-field JSON documents or DateTimeOffset types inside indexes?

Could the issue be related to the fact that only one Azure Search index is allowed per agent?

Any guidance, known issues, or recommended configurations would be extremely helpful.
