Generative AI like ChatGPT keeps confidently providing faulty information: know all about AI hallucination
Generative AI tools such as ChatGPT have become quite popular in recent years, and in early 2023 several tech giants announced generative AI tools of their own. The special thing about these generative AI chatbots is that they can answer your questions the way a human would. With OpenAI's launch of ChatGPT, the technology first caught on among young people: students began using it to prepare projects and assignments for school and college. Later, however, many cases came to light in which these generative AI tools were found giving wrong information. Since then, a new term, 'AI hallucination', has entered the conversation.
A few days ago, Canada's Civil Resolution Tribunal ruled against Air Canada in a case involving AI hallucination, ordering the airline to compensate a customer. A Canadian citizen named Jake Moffatt had demanded compensation from Air Canada after its AI chatbot shared misleading information with him. This is not the first case of AI hallucination, either. In June last year, an American lawyer was fined $5,000 after using an AI chatbot to prepare a court filing; the chatbot had fabricated case citations that did not exist. Many other cases have also come to light in which AI chatbots shared wrong information. Given these incidents, AI hallucination can be called one of the biggest problems facing tech companies.
In everyday language, hallucination is a state of mind in which a person perceives or believes things that are not real. For AI, hallucination means a chatbot presenting incorrect information as if it were fact. In September 2023, a research paper titled 'AutoHall: Automated Hallucination Dataset Generation for Large Language Models' was published, which found the hallucination rate of generative AI models to be around 20 to 30 percent. According to experts, AI is a machine, not a human being, so such errors are to be expected.