Study uses large language models to sniff out hallucinations

Researchers in computing and linguistics have devised a new way to detect errors in large language models, one that relies on enlisting additional LLMs to check the output.

Why large language models hallucinate

Applying statistical machine learning to language at an ever-increasing scale has become popular with technology vendors and investors alike, but it is well known that such language models are prone to error. In the much-hyped world of LLMs, deficits that might otherwise be considered malfunctions are euphemistically called "hallucinations."

Generated content that is incorrect or meaningless still plagues the industry, so the race is on to spot these hallucinations with greater accuracy and, presumably, to eliminate them.

This week, in the British science journal Nature, researcher Sebastian Farquhar and colleagues at Oxford University and DeepMind proposed a method to quantify the degree to which an LLM hallucinates, and hence how accurate its generated content is likely to be.
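To make the general idea concrete, here is a minimal sketch of how one model can be used to gauge another's reliability: sample several answers to the same question, let a checker model group answers that mean the same thing, and treat disagreement as a warning sign. This is only an illustration of the approach described above, not the authors' published method; the `generate` and `same_meaning` callables are hypothetical placeholders for whatever model APIs a reader might plug in.

```python
import math
from typing import Callable, List

def answer_uncertainty(
    question: str,
    generate: Callable[[str], str],            # hypothetical: sample one answer from the model under test
    same_meaning: Callable[[str, str], bool],  # hypothetical: a checker LLM judging semantic equivalence
    n_samples: int = 10,
) -> float:
    """Sample several answers, cluster those the checker judges to mean
    the same thing, and return the entropy over those clusters.
    Zero means every answer agrees in meaning; higher values suggest
    the model may be making things up."""
    answers: List[str] = [generate(question) for _ in range(n_samples)]

    # Greedily group answers by meaning using the checker model.
    clusters: List[List[str]] = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])

    # Entropy over the cluster sizes.
    probs = [len(c) / n_samples for c in clusters]
    return -sum(p * math.log(p) for p in probs)
```

In practice, a threshold on such a score could flag questions where a model's answers scatter across many incompatible meanings, which is exactly the kind of output a user would want a second opinion on.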
