Microsoft intros AI correction tool to address AI hallucinations

Microsoft has announced a new AI correction tool that aims to stop AI models from generating factually incorrect content. Simply put, Microsoft is deploying AI to catch and limit another AI's fabricated output, a phenomenon commonly known as AI hallucination.

Microsoft’s “Correction” feature attempts to address AI hallucinations

Generative Artificial Intelligence (Gen AI) has seen phenomenal adoption. However, Gen AI chatbots often confidently generate content that isn’t accurate.

Microsoft is trying to address AI hallucinations with its new “Correction” feature, which builds on the company’s existing ‘Groundedness Detection’ capability. The tool essentially cross-references AI-generated text against a supporting document supplied by the user.

Microsoft will offer the new Correction tool as part of its Azure AI Content Safety API. Users of Gen AI models like OpenAI’s GPT-4o and Meta’s Llama should see the new feature soon.

The Correction feature will attempt to flag content that might not be factually accurate, and then fact-check the text by comparing it to a source of truth.

Microsoft claims the feature relies on a grounding document, such as a user-uploaded transcript. This could shift the burden of certifying a document’s accuracy away from the end-user: the AI would determine what to treat as fact based on the grounding documents it is given.
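Microsoft has not published the final API shape in this announcement, but the existing Groundedness Detection preview suggests what a call might look like. Below is a minimal sketch in Python, assuming the preview REST route (`text:detectGroundedness`), the `groundingSources` field, and a `correction` flag; those names, the `api-version` value, and the placeholder endpoint/key are assumptions from preview documentation, not confirmed details.

```python
import requests

# Assumed values: replace with your actual Azure resource endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"

def check_and_correct(generated_text: str, grounding_document: str) -> dict:
    """Sketch of a Groundedness Detection call with correction enabled.

    The URL, api-version, and request fields follow the preview docs as of
    this writing and may change before general availability.
    """
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:detectGroundedness",
        params={"api-version": "2024-09-15-preview"},  # assumed preview version
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json={
            "domain": "Generic",
            "task": "Summarization",
            "text": generated_text,                    # the model output to check
            "groundingSources": [grounding_document],  # the "source of truth"
            "correction": True,  # ask the service to rewrite ungrounded claims
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# The response is expected to indicate whether the text was ungrounded
# and, if so, include a corrected version.
result = check_and_correct(
    "The meeting was held on March 5th.",
    "Transcript: The meeting took place on March 12th.",
)
print(result)
```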

Microsoft also deploys ‘Evaluations’ as a confidential inference layer

Microsoft has implied its AI correction tool could address hallucinations in real time. Besides addressing AI hallucinations, Microsoft is also deploying ‘Evaluations’.

According to Microsoft, ‘Evaluations’ is a proactive risk assessment tool that will act as a confidential inference layer. In other words, the Gen AI platform will pass content through this layer before making any predictions.
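Microsoft hasn’t detailed how this layer is wired in, but conceptually it works as a gate that content must clear before the model runs. The sketch below illustrates that idea only; the `evaluate_risk` and `guarded_inference` functions and their logic are entirely hypothetical and do not come from Microsoft.

```python
# Conceptual sketch of a pre-inference evaluation layer. All names are
# hypothetical; Microsoft has not published an interface for 'Evaluations'.

def evaluate_risk(prompt: str) -> dict:
    """Stand-in for the confidential evaluation layer: flags sensitive
    data or risky content before it ever reaches the model."""
    findings = []
    if "ssn" in prompt.lower():  # toy check for illustration only
        findings.append("possible-sensitive-data")
    return {"allowed": not findings, "findings": findings}

def guarded_inference(prompt: str, model_call) -> str:
    """Pass the prompt through the evaluation layer before inference."""
    verdict = evaluate_risk(prompt)
    if not verdict["allowed"]:
        return f"Request blocked by evaluation layer: {verdict['findings']}"
    return model_call(prompt)

# Usage with a placeholder model call:
reply = guarded_inference("Summarize this meeting.",
                          lambda p: f"Model output for: {p}")
print(reply)
```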

The new tool will reportedly try to ensure sensitive information remains secure and private during the inference process, helping filter out content that raises privacy concerns before it reaches the model. With these new tools, Microsoft appears to be addressing two common issues with Gen AI: privacy and accuracy.


