
Just the Facts: Grounding & Anti-Hallucination Technology

Sapience is built for enterprises, and they need facts, not lies.

Answers Grounded in Facts: How Sapience Fights Hallucinations

Last Updated: 2026.01.27


The Problem: AI Hallucinations

You've probably experienced this: you ask an AI a question and get a confident, articulate answer that turns out to be completely wrong. The AI "hallucinated" - it generated plausible-sounding text that wasn't grounded in reality.

This happens because traditional AI models generate responses based solely on patterns in their training data. They don't actually "know" facts - they predict what text should come next. When they don't have the right information, they fill in gaps with confident-sounding guesses.

For business use, this is unacceptable. You need answers you can trust, verify, and act on.


The Solution: Grounded AI

Sapience takes a fundamentally different approach. Instead of letting AI models generate answers from their training data alone, Sapience grounds every response in specific, verifiable data sources that you control.

Grounded AI means the agent must:

  1. Retrieve relevant information from your designated sources
  2. Base its answer on that retrieved content
  3. Cite exactly where each piece of information came from
  4. Acknowledge when it doesn't have the information to answer

This is called Retrieval-Augmented Generation (RAG), and it's the foundation of trustworthy enterprise AI.
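The four-step loop above can be sketched in a few lines. This is a minimal, illustrative toy, not Sapience's actual implementation: the sources, the word-overlap relevance filter, and the citation format are all stand-ins for the real retrieval pipeline.

```python
# Toy sketch of the retrieve-then-answer loop behind RAG.
# Sources, scoring, and output format are illustrative only.

SOURCES = {
    "pto-policy.pdf": "Employees accrue 20 days of PTO after one year of service.",
    "benefits-guide.pdf": "Health coverage begins on the first day of employment.",
}

def retrieve(question: str) -> list[tuple[str, str]]:
    """Step 1: pull passages whose words overlap the question."""
    q_words = set(question.lower().split())
    hits = []
    for name, text in SOURCES.items():
        overlap = q_words & set(text.lower().split())
        if len(overlap) >= 2:  # crude relevance filter
            hits.append((name, text))
    return hits

def answer(question: str) -> str:
    """Steps 2-4: answer only from retrieved text, cite it, or admit a gap."""
    hits = retrieve(question)
    if not hits:
        return "I don't have information about that in my assigned sources."
    return " ".join(f"{text} [{name}]" for name, text in hits)

print(answer("How many days of PTO do employees get?"))
```

A real system would use embedding-based retrieval and an LLM to compose the answer, but the contract is the same: no retrieved text, no answer.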


How Sapience Grounds Your Agents

When you build an agent in Sapience, you explicitly define what data sources it can use. This creates a "knowledge boundary" - the agent answers based on what's inside that boundary, not from its general training.

Data Sources You Can Add

| Source Type | What It's For | Example |
| --- | --- | --- |
| URLs | Web pages, documentation, online resources | https://docs.mycompany.com/policies |
| Files | PDFs, Word docs, PowerPoints, spreadsheets | Q1-2026-Financial-Report.pdf |
| Vector Stores | Large document collections, knowledge bases | Company wiki, product documentation |
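One way to picture the resulting knowledge boundary is as a simple record holding the three source types. The field names below are illustrative, not Sapience's real configuration schema.

```python
# Hypothetical shape of an agent's knowledge boundary, covering the
# three source types above. Field names are illustrative only.

hr_agent = {
    "name": "HR Policy Assistant",
    "urls": ["https://docs.mycompany.com/policies"],
    "files": ["Employee-Handbook.pdf"],
    "vector_stores": ["company-wiki"],
}

def knowledge_boundary(agent: dict) -> list[str]:
    """Everything the agent may answer from -- and nothing else."""
    return agent["urls"] + agent["files"] + agent["vector_stores"]

print(knowledge_boundary(hr_agent))
```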

Example: Building a Grounded HR Agent

Let's say you're creating an AI assistant to answer employee questions about company policies.

Without grounding (dangerous): The AI might make up policies based on what "sounds right" from its training on general HR content. An employee could get incorrect information about their benefits or rights.

With Sapience grounding (safe):

  1. You upload your actual policy documents: Employee Handbook, Benefits Guide, PTO Policy
  2. You add URLs to your internal HR portal pages
  3. The agent ONLY answers based on these specific sources
  4. Every answer includes citations pointing to the exact document and section

Citations: Trust but Verify

Every claim your Sapience agent makes includes a citation. This isn't optional - it's how the system is designed.

What Citations Look Like

When your agent responds, you'll see inline citations like:

"Employees are eligible for 20 days of PTO after completing one year of service [1]. Unused PTO can be carried over up to a maximum of 5 days [1]. PTO requests must be submitted at least 2 weeks in advance for approval [2]."
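The numbering convention in that example can be sketched simply: each distinct source gets the number of its first appearance, so repeated citations of the same source reuse the same marker. This is an illustrative sketch of the convention, not Sapience's rendering code.

```python
# Sketch of numbered inline citations: each distinct source gets the
# number of its first appearance. Illustrative only.

def cite(claims: list[tuple[str, str]]) -> str:
    """claims: (sentence, source) pairs -> text with [n] markers."""
    numbers: dict[str, int] = {}
    parts = []
    for sentence, source in claims:
        n = numbers.setdefault(source, len(numbers) + 1)
        parts.append(f"{sentence} [{n}].")
    return " ".join(parts)

text = cite([
    ("Employees are eligible for 20 days of PTO", "PTO Policy, sec. 2"),
    ("Unused PTO carries over up to 5 days", "PTO Policy, sec. 2"),
    ("Requests need 2 weeks' notice", "Employee Handbook, p. 14"),
])
print(text)
```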

Why This Matters

| Benefit | How It Helps |
| --- | --- |
| Verification | You (or anyone) can check the original source |
| Accountability | The agent can't make claims without backing |
| Auditability | Track which sources informed which decisions |
| Trust | Users learn to rely on verifiable answers |

Scope Levels: Project, User, Org, and Global

Sapience lets you control data sources at different scope levels, so agents only access what's appropriate.

Project Agents

Visibility: whoever is added to the Project

Use Case: this is the type of Agent you will create and use the most. It's baked into every Project.

By default, Project Agents are grounded in just the Project's data. This includes core Project data like the description and success criteria. Most importantly, the Agent has access to all Goals, Tasks, Notes, and Files added to the Project. This is what makes Project Agents so powerful: focused, grounded Agents that exist to help you get that Project to done!

User-Scoped Agents

Visibility: Only you

Use Case: Personal research, individual projects

You might create a user-scoped agent with:

  • Your project files
  • Personal notes and research
  • URLs specific to your work

Org-Scoped Agents

Visibility: Everyone in your organization

Use Case: Team knowledge bases, company-wide assistants

An org-scoped agent might have:

  • Company policy documents
  • Product documentation
  • Team wikis and knowledge bases
  • Approved external resources

Global Agents

Visibility: All Sapience users

Use Case: Platform-provided assistants, shared utilities

Global agents are carefully curated by Sapience with vetted data sources.
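The four scope levels amount to a visibility check: an agent is shown to a user only if the user falls inside the agent's scope. The sketch below is a hypothetical illustration of that rule; the scope names mirror the levels above, but the field names and check are not Sapience's actual access-control code.

```python
# Illustrative scope-based visibility check. Field names are
# hypothetical, not Sapience's real data model.

def can_see(agent: dict, user: dict) -> bool:
    scope = agent["scope"]
    if scope == "global":
        return True                               # all Sapience users
    if scope == "org":
        return user["org"] == agent["org"]        # same organization
    if scope == "project":
        return agent["project"] in user["projects"]  # added to the Project
    return user["id"] == agent["owner"]           # user-scoped: only you

wiki_bot = {"scope": "org", "org": "acme"}
alice = {"id": "alice", "org": "acme", "projects": ["launch"]}
bob = {"id": "bob", "org": "globex", "projects": []}
print(can_see(wiki_bot, alice), can_see(wiki_bot, bob))
```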


Transparency: Seeing Where Data Comes From

Sapience doesn't hide how agents work. You can always see:

For Any Agent

  • Assigned Files: What documents the agent can access
  • Assigned URLs: What web resources it can reference
  • Vector Stores: What knowledge bases it's connected to
  • Last Updated: When each source was last refreshed

In Every Response

  • Inline Citations: Numbered references to specific sources
  • Source List: Full details on each cited source
  • Confidence Indicators: When the agent is less certain

This transparency means you're never wondering "where did it get that from?"


What Happens When the Agent Doesn't Know

A grounded agent will tell you when it can't answer. This is a feature, not a bug.

Example response when information is missing:

"I don't have information about the company's parental leave policy in my assigned sources. The documents I have access to (Employee Handbook, Benefits Guide) don't cover this topic. You may want to contact HR directly or ask an administrator to add the relevant policy document to my knowledge base."

This is dramatically better than an AI confidently making up a policy that doesn't exist.
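This "admit a gap" behavior can be sketched as a relevance threshold: if no retrieved passage is relevant enough, the agent names the sources it checked instead of guessing. The scoring function, threshold, and wording below are illustrative stand-ins, not Sapience's implementation.

```python
# Toy sketch of refusal-over-guessing: only answer when a passage
# clears a relevance threshold; otherwise name the sources checked.

SOURCES = {
    "Employee Handbook": "PTO accrues at 20 days per year of service.",
    "Benefits Guide": "Dental and vision plans renew each January.",
}

def relevance(question: str, text: str) -> float:
    """Crude word-overlap score in [0, 1]; real systems use embeddings."""
    q, t = set(question.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def grounded_answer(question: str, threshold: float = 0.2) -> str:
    best = max(SOURCES.items(), key=lambda kv: relevance(question, kv[1]))
    if relevance(question, best[1]) < threshold:
        checked = ", ".join(SOURCES)
        return (f"I don't have information about that in my assigned "
                f"sources ({checked}).")
    return f"{best[1]} [{best[0]}]"

print(grounded_answer("What is the parental leave policy?"))
```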


Best Practices for Grounding Your Agents

Sapience has built-in safeguards to ensure that the Agents you create won't misbehave, but there are some best practices you should keep in mind as well.

1. Be Specific About Sources

Don't dump every document into an agent's knowledge base. Choose sources that directly relate to what the agent should answer.

| Instead of... | Do this... |
| --- | --- |
| "All company documents" | Specific policy documents the agent needs |
| "The whole website" | Specific pages relevant to the use case |
| "Everything in SharePoint" | Curated knowledge base with vetted content |

2. Keep Sources Current

Grounding only works if the sources are accurate. Establish a process to:

  • Update documents when policies change
  • Refresh URL content periodically
  • Archive outdated materials

3. Test Edge Cases

Before deploying an agent widely, test what happens when:

  • Users ask questions outside the agent's knowledge
  • Information conflicts between sources
  • Sources are ambiguous or unclear
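One lightweight way to run those edge-case checks is a table of prompts paired with the behavior you expect. The `ask()` stub below stands in for a real call to your agent; replace it with your own client before use.

```python
# Illustrative pre-rollout smoke test: edge-case prompts paired with
# expected behavior. ask() is a stub standing in for a real agent call.

def ask(question: str) -> str:
    # Stub: a real deployment would call the agent's API here.
    return "I don't have information about that in my assigned sources."

EDGE_CASES = [
    ("What is our crypto-trading policy?", "don't have information"),  # outside knowledge
    ("Which of the two PTO figures is correct?", "don't have information"),  # conflicting sources
]

for question, expected in EDGE_CASES:
    reply = ask(question)
    status = "PASS" if expected in reply else "FAIL"
    print(f"{status}: {question}")
```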

4. Set Expectations with Users

Let users know that the agent is grounded in specific sources. This helps them:

  • Understand the boundaries of what it can answer
  • Know to check citations for important decisions
  • Report when sources need updating

Comparing Grounded vs. Ungrounded AI

| Aspect | Ungrounded AI | Sapience Grounded AI |
| --- | --- | --- |
| Source of answers | Training data patterns | Your specific documents and URLs |
| Citations | Usually none | Always included |
| Hallucination risk | High | Dramatically reduced |
| Verifiability | Difficult | Easy - check the source |
| Control | None | You choose the sources |
| Consistency | Varies | Based on stable source content |
| Updates | Requires retraining | Just update the documents |

Real-World Impact

Organizations using grounded AI report:

  • 96%+ accuracy on domain-specific questions (compared to ~60-70% with ungrounded AI)
  • Reduced escalations because users trust initial answers
  • Faster onboarding with reliable self-service information
  • Audit compliance with full citation trails
  • Increased adoption when users verify answers are trustworthy

Summary

Sapience agents are grounded by design. This means:

  1. You control the data - Agents only use sources you explicitly assign
  2. Every answer has citations - Users can verify any claim
  3. Transparency is built in - See exactly what sources each agent uses
  4. Honesty about limits - Agents admit when they don't have the information

This approach fights hallucinations not by hoping the AI "gets it right," but by fundamentally changing how answers are generated. Instead of predicting plausible text, your agents retrieve verified information and cite their sources.

The result: AI you can actually trust for business decisions.


Have questions about grounding your agents? Ask the Sapience Agent: "How do I add data sources to my agent?"
