In the context of AI, “hallucination” refers to an AI model generating false or misleading information, presented as accurate, due to limitations in its training data or design. This is a real problem: hallucinations undermine trust in AI models.
Hallucinations are more common in open AI models: lower-cost models that draw on knowledge from well beyond the expertise of the business offering access to the AI chat.
Here at Tower Systems we have offered AI chat solutions for years, and we are grateful to leverage that experience and the leading-edge customer-service-specific AI LLM we have partnered with for some time now. This LLM has been trained only by us. It is closed, too, meaning it does not draw on any knowledge or experience outside of what we train it on.
The approach we have taken is more expensive, but it is more complete and more certain.
For well over a year of using this professional help desk model, we have reviewed its responses and used what we learned from that human review to further train the AI to respond even better.
Being years into live use provides us with an advantage for which we are genuinely grateful.
The pay-off for us is seeing fewer AI responses that could be improved.
None of this detracts from our human-delivered personal customer service. We love talking with retailers, chatting about retail and learning how we can better serve their needs. What the AI does is provide access to support in the middle of the night, covering all manner of queries from the simple to the complex. Its track record in offering this help is excellent. It has a record of accurate resolution that we are proud of, and customers have let us know they are grateful for it.
While many websites offer chatbot access, the majority today are not using a closed LLM like the one we pay to access. In our case, we took a best-practice approach to AI-facilitated customer service. The quality of what we deliver matters more than the cost of delivery, because we know this is what our customers will appreciate. We are glad to see no hallucinations in our AI chatbot responses.