How We Solved AI Hallucinations in Chatbots (The Simple Way)

Samuel Vrablik
ai chatbots · product update · hallucinations · customer experience

TL;DR: We fixed AI hallucinations in our chatbots with simple regex link validation after sophisticated LLM tweaking didn't work. Sometimes the old-school approach is the best one.

The Problem

On Chatisto, you can train chatbots on your website content - products, articles, documentation, whatever you've got. It works pretty well most of the time.

But here's what kept happening: in a small percentage of cases, the AI would just decide to generate completely new products that weren't in the knowledge base. This usually happened when customers asked for specific product recommendations and the model couldn't find anything it liked in the actual catalog.

So instead of saying "sorry, we don't have exactly what you're looking for," it would confidently recommend "The Perfect Winter Jacket - Model XZ-2024" with a link that led straight to a 404 page.

As a business owner, you could take this as market research for new products to introduce. But you definitely don't want your customers landing on broken links.

What I Tried First

I threw everything I could think of at this problem:

  • Temperature adjustments: Made the AI more conservative
  • System prompt engineering: Rewrote instructions multiple ways
  • Better search algorithms: Improved content retrieval
  • More prompt refinements: Because why not try again?

Despite all the fancy LLM settings and sophisticated prompting techniques, the hallucinations kept happening. Sometimes AI just wants to be helpful to the point of making stuff up.

The Solution That Actually Worked

When the sophisticated approaches failed, I went with something embarrassingly simple: regex-based link validation.

Here's how it works:

Allowed Domains & URL Patterns

You configure exactly which domains and URL patterns your chatbot can link to. Your main store, specific product categories, particular landing pages - whatever you want to allow.
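To make this concrete, here's a minimal sketch of what an allow-list could look like once it's compiled into regexes. The domains, patterns, and helper names are made up for illustration - this isn't Chatisto's actual configuration format.

```python
import re

# Hypothetical allow-list: explicit domains and URL patterns the bot may link to.
ALLOWED_PATTERNS = [
    r"https?://(www\.)?example-store\.com/products/[\w-]+",
    r"https?://(www\.)?example-store\.com/collections/winter",
    r"https?://help\.example-store\.com/.*",
]

# Pre-compile once so every chatbot reply can be checked cheaply.
COMPILED = [re.compile(p) for p in ALLOWED_PATTERNS]

def is_allowed(url: str) -> bool:
    """Return True if the URL fully matches at least one allowed pattern."""
    return any(p.fullmatch(url) for p in COMPILED)
```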

Filtering Options

  • Link-only filtering: Removes just the bad links, keeps the text
  • Sentence-level filtering: Removes entire sentences containing invalid links

Pattern Matching

Every link in the bot's reply is checked with old-school regex matching against your allowed patterns. If a link doesn't match, it gets filtered according to your settings - either just the link or the whole sentence, as described above.
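Here's a rough sketch of how the matching and the two filtering modes could fit together. The URL regex, helper names, and example reply are illustrative assumptions, not the code that runs in production.

```python
import re

# `is_allowed` is the helper from the sketch above; repeated here so the
# snippet runs on its own (the pattern is still hypothetical).
COMPILED = [re.compile(r"https?://(www\.)?example-store\.com/products/[\w-]+")]

def is_allowed(url: str) -> bool:
    return any(p.fullmatch(url) for p in COMPILED)

# Rough matcher for markdown links [text](url) as well as bare URLs.
URL_RE = re.compile(r"\[([^\]]+)\]\((https?://[^\s)]+)\)|(https?://[^\s)\]]+)")

def filter_reply(reply: str, mode: str = "link") -> str:
    """Filter disallowed links out of a chatbot reply.

    mode="link":     strip only the offending link, keep the surrounding text
    mode="sentence": drop every sentence that contains a disallowed link
    """
    if mode == "sentence":
        kept = []
        for sentence in re.split(r"(?<=[.!?])\s+", reply):
            urls = [m.group(2) or m.group(3) for m in URL_RE.finditer(sentence)]
            if all(is_allowed(u) for u in urls):
                kept.append(sentence)
        return " ".join(kept)

    # Link-only mode: keep the anchor text of a bad markdown link,
    # drop bad bare URLs entirely.
    def replace(m: re.Match) -> str:
        url = m.group(2) or m.group(3)
        return m.group(0) if is_allowed(url) else (m.group(1) or "")

    return URL_RE.sub(replace, reply)


reply = ("You might like our [Perfect Winter Jacket XZ-2024]"
         "(https://example-store.com/products/xz-2024-jacket). "
         "It pairs well with https://fake-site.example/made-up-scarf.")
print(filter_reply(reply, mode="sentence"))
# -> only the first sentence survives; the invented link is gone
```

The nice part of doing this as a post-processing step is that it doesn't care how the reply was generated - whatever the model dreams up, the output only ever contains links you've explicitly allowed.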

Why This Works

The beauty is in the simplicity:

  • No more 404 pages from chatbot recommendations
  • You maintain complete control over where customers get directed
  • Easy to configure and understand
  • Works regardless of how creative the AI gets
  • Customers actually find what was recommended to them

Real Results

Since implementing this:

  • Zero customer complaints about broken product links
  • Business owners can trust their AI representatives
  • Better conversion rates because recommendations actually work
  • No more embarrassing "this product doesn't exist" moments

What's Next

This regex approach is working well, but I'm curious - do you have other solutions for this problem?

I'd love to hear how others are tackling AI hallucinations in customer service. The simple approaches often work best, but maybe there's something I'm missing.


Dealing with similar issues? Check out our AI Chatbot features to see how link validation can keep your customers from hitting dead ends.