Microsoft Unveils Correction Tool to Fight AI Inaccuracies

On Wednesday, Microsoft revealed Correction, a new feature that can identify, flag, and rewrite inaccurate output from artificial intelligence (AI) systems.

According to reports, customers using Microsoft Azure to power their AI systems can use the capability to automatically detect and rewrite incorrect chatbot outputs. The company says the tool is designed to combat AI inaccuracies.

The Correction feature is available in preview within Azure AI Studio, a suite of safety tools that detects vulnerabilities and hallucinations and blocks harmful prompts. Once activated, it scans AI-generated outputs for inaccuracies by comparing them with customer-provided source material.

In addition, the tool is compatible with any text-generating AI model, including Meta’s Llama and OpenAI’s GPT-4.

Furthermore, the system flags errors, explains why they are wrong, and rewrites the content before the user sees it. While this appears helpful in addressing flawed AI-generated content, it may not be entirely reliable.
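The flag-explain-rewrite loop described above can be sketched in miniature. The snippet below is a hypothetical illustration, not Microsoft's actual implementation: real grounding systems use language models to judge support, whereas this sketch stands in a simple word-overlap heuristic, and all function names are invented for the example.

```python
# Hypothetical sketch of a grounded-correction pass: flag output
# sentences unsupported by a reference document and withhold them
# with an explanation, keeping only grounded sentences.
# NOTE: the overlap heuristic is a stand-in for a model-based check.

def split_sentences(text):
    """Naive sentence splitter on periods (illustration only)."""
    return [s.strip() for s in text.split(".") if s.strip()]

def is_grounded(sentence, grounding, threshold=0.5):
    """Treat a sentence as grounded if enough of its words
    appear in the customer-provided source material."""
    words = {w.lower().strip(",") for w in sentence.split()}
    source = {w.lower().strip(",") for w in grounding.split()}
    if not words:
        return True
    return len(words & source) / len(words) >= threshold

def correct(output, grounding):
    """Return (corrected_text, flags): ungrounded sentences are
    removed and flagged with a short explanation."""
    kept, flags = [], []
    for s in split_sentences(output):
        if is_grounded(s, grounding):
            kept.append(s)
        else:
            flags.append(f"Flagged: '{s}' is not supported by the source")
    return ". ".join(kept) + ("." if kept else ""), flags

grounding = "The product launched in 2023 and supports ten languages"
output = "The product launched in 2023. It won a Nobel Prize"
text, flags = correct(output, grounding)
# text keeps only the grounded sentence; flags explains the removal
```

A production system would additionally rewrite the flagged sentence to match the source rather than simply dropping it, which is where the small and large language models Microsoft mentions come in.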

A Microsoft spokesperson explained that the Correction system uses small and large language models to align outputs with grounding documents, but cautioned that it can still make mistakes.

The statement emphasizes that the tool's primary role is aligning AI outputs with reference documents, not guaranteeing complete accuracy in every case.

Google’s Vertex AI Offers a Feature Akin to Microsoft’s Correction

Google’s Vertex AI, a cloud platform for AI development, offers a similar grounding feature for generative AI models.

The search giant aims to use Vertex AI to check model outputs against Google Search, a company’s own data, and, soon, third-party datasets.

However, analysts have warned that these grounding approaches fail to tackle the underlying cause of hallucinations in AI models.

Unlike Microsoft’s tool, however, Google’s system does not automatically correct the errors it identifies.

According to reports, Microsoft’s solution also involves two cross-referencing, copy-editor-like meta-models built to detect, highlight, and rewrite hallucinations.

