Testing your AI Knowledge Base ensures your chatbot can answer customer questions accurately and consistently. It also gives you and your company confidence in the results when it goes live to customers.
There are two ways to test your AI Knowledge Base:
- Manual testing inside the chatbot, best for quickly checking a small number of questions.
- Knowledge Base Test Runs, an automated bulk testing tool that saves time, ensures consistency, and helps you identify gaps before going live.
With automated testing, you no longer need to check each question one by one. You can re-test everything after every update in minutes and get clear, actionable results fast.
Recommendation: Enable AI Insights Reports first to identify the most valuable questions for your test runs.
🧪 Using AI Knowledge Base Test Runs
🎥 Video Walkthrough: Using the Knowledge Base Test Runs Tool
Step 1: Open AI Knowledge Bases
Head to ‘AI Knowledge Bases’ in the left-hand menu, then click on the knowledge base you’d like to test.
Scroll to the bottom and find Testing your knowledge base.
- Select Run a new test suite.
- To see past results, choose Review previous test runs.

Step 2a: Test Run → Blank Slate
Selecting Blank Slate lets you create all your own test questions from scratch. This is especially useful for testing edge cases or specific scenarios you want to verify. Follow these steps:

a. Select Blank Slate.

b. Select your AI model & add your questions.

c. Answers will populate; add your notes.
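Tip: if you keep your edge-case questions in a spreadsheet, a small script can tidy them up before you paste them into the Blank Slate form. The sketch below is purely illustrative: it assumes a hypothetical test_questions.csv file with a question column, and it doesn't interact with Talkative at all.

```python
import csv

# Load candidate questions from a hypothetical test_questions.csv
# (one "question" column), drop blanks and duplicates, and print
# them ready to paste into the Blank Slate form.
seen = set()
with open("test_questions.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        question = (row.get("question") or "").strip()
        if question and question.lower() not in seen:
            seen.add(question.lower())
            print(question)
```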
Step 2b: Test Run → From AI Insights Reports
If you have AI Insights Reports enabled, you can quickly generate 20 test questions based on what real customers have been asking. Follow these steps:

a. Select Generate questions based on AI Insights report.

b. Choose the report you want to use and click Continue.

c. Select your AI model & review the questions.

d. Answers will populate; add your notes.
Step 3: Reviewing & Improving Your Knowledge Base
Your test runs are automatically saved, so you can review them and re-run the same test whenever needed. When reviewing results:
- Check answers for accuracy, completeness, and tone.
- Add missing knowledge for any questions where no suitable answer was found.
- Refine your AI Knowledge Base custom prompt if answers need better structure or alignment with your brand voice.
Make testing a regular part of your Knowledge Base update process to maintain accuracy and a consistently high standard of responses.
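If you copy your review notes into a spreadsheet between test runs, a quick tally can help you prioritise what to fix first. The sketch below is an optional convenience; the results.csv file and its verdict labels (accurate / needs_work / no_answer) are illustrative assumptions rather than a Talkative export format.

```python
import csv
from collections import Counter

# Tally review verdicts from a hypothetical results.csv with
# "question" and "verdict" columns, then show each as a share
# of the total so gaps stand out at a glance.
counts = Counter()
with open("results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts[row["verdict"].strip().lower()] += 1

total = sum(counts.values())
for verdict, n in counts.most_common():
    print(f"{verdict}: {n} ({n / total:.0%})")
```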

⚙️ Manual Testing With the AI Chatbot
Step 1: Select Chatbot and Knowledge Base
Go to Chatbots in the left-hand menu, select your chatbot, and click the Preview button in the top right. This will start a test interaction where you can ask questions and simulate the customer journey.

Make sure you’ve added the correct Knowledge Base to your AI Lookup card before testing.
Manual testing is ideal when you want to review the formatting and presentation of answers, as well as understand the full customer journey through your chatbot.
👥 Wider Team Testing
Once you’ve completed both manual testing inside the chatbot and bulk testing using Test Runs, it’s a good idea to involve a wider group in your company. This helps identify any issues you might have missed and gives fresh perspective on the customer experience.
You don’t need to add the widget to a UAT site or invite more members into your Talkative account. Instead, use the Share Widget feature:
- Publish your chatbot
  - In your chatbot editor, click Publish to make sure all changes are live and the chatbot is linked to your chosen chat widget.
- Open your chat widget
  - Go to Chat Widgets in the left-hand menu.
  - Select the widget you plan to use publicly.
- Generate a share link
  - Click Share Config: [Widget Name] in the widget menu.
  - Copy the generated URL.
- Share with your wider team
  - Send the link to colleagues who don’t have Talkative accounts; they can preview and test the chatbot without logging in.


❓FAQs: Testing Your AI Knowledge Base
How often should I run tests?
We recommend testing after every major update to your Knowledge Base, and scheduling regular tests (e.g. monthly or quarterly) to maintain accuracy.
Where can I get ideas for test questions?
If you have AI Insights Reports enabled, you can generate test questions from real customer conversations.
If you don’t have AI Insights set up, you can use ChatGPT: simply give it your website URL and ask it to create 20 questions a customer might ask on a chatbot for your site. This is a quick way to build a starting question set.
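If you'd prefer to script this instead of using the ChatGPT interface, here's a minimal sketch using the OpenAI Python SDK. The model name and prompt wording are assumptions to adapt, and the model only sees the URL text you send, not your live pages, so treat the output as a starting point to review.

```python
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()
site_url = "https://www.example.com"  # replace with your website URL

# Ask the model for 20 plausible customer questions for the site.
# Note: the model only sees the URL string, not the live page content.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whichever you have access to
    messages=[{
        "role": "user",
        "content": (
            f"Here is a company website: {site_url}. "
            "Write 20 questions a customer might ask a chatbot on this site, "
            "one per line, with no numbering."
        ),
    }],
)
print(response.choices[0].message.content)
```

Review and edit the generated questions before adding them to a test run, just as you would with questions drafted by hand.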
What should I do if the AI gives an incorrect answer?
First, identify whether the issue is with formatting/tone or with the accuracy of the answer:
- If it’s a formatting or tone issue → Update your custom prompt in the Knowledge Base to guide the AI’s style and presentation.
- If it’s an incorrect answer → Review your URLs and use the Knowledge Base data download to check what information might be causing the error. You might need to update a website page if the information provided is out of date.
- If there’s no answer at all → Review your Knowledge Base content and ensure the relevant information has been added and processed correctly.
If you’re unsure, contact support@gettalkative.com or your CSM for further advice.
How do I report a problem with my AI answers?
If you find an issue with an AI answer during testing, contact your CSM or email support@gettalkative.com with the following details:
- Interaction ID – Found on the Interaction Logs page.
- URL – A link to the relevant test run or interaction.
- Description of the problem – Explain why the answer is incorrect.
- Correct answer – Provide the exact response you believe should be given.
- Source content – Share any webpage links or documents where the correct information is shown.
Before sending this information, go to Settings > Talkative Support Access and grant access for a set time so we can review the transcripts.
Providing this information will help our team investigate and resolve the issue quickly.