Google’s DataGemma Set to Elevate AI Accuracy with Superior Data Sources

Google has unveiled DataGemma, a pair of new open-weight models built on Gemma and grounded in real-world data from Google’s Data Commons. Google describes the release as the first to tackle hallucinations in large language models by anchoring responses in precise statistical data, promising to make AI outputs more accurate and reliable.


What Makes DataGemma Stand Out?

Hallucinations are a major challenge for LLMs, especially when dealing with precise statistical or numerical data. Google’s Data Commons, a repository with over 240 billion data points from trusted sources like the United Nations and the CDC, aims to address this issue by providing reliable data.

By utilizing this extensive statistical dataset, DataGemma enhances the model’s accuracy by grounding its outputs in reliable, real-world information.
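To make the grounding step concrete, here is a minimal sketch of how an application might pull a single statistic from Data Commons using the public `datacommons` Python client. The place identifier and statistical variable below are illustrative choices for this example, not values taken from DataGemma, and the exact client call may differ from the version you have installed.

```python
# Minimal sketch: fetch one statistic from Data Commons
# (pip install datacommons). Place DCID and variable name are
# illustrative, not drawn from the DataGemma paper.
import datacommons as dc

def lookup_statistic(place_dcid: str, stat_var: str) -> float:
    """Fetch the latest observed value for a statistical variable at a place."""
    # With no date given, get_stat_value returns the most recent observation.
    return dc.get_stat_value(place_dcid, stat_var)

if __name__ == "__main__":
    # Example: total population of California (DCID "geoId/06").
    population = lookup_statistic("geoId/06", "Count_Person")
    print(f"California population (latest Data Commons value): {population}")
```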

As Google researchers explain in the accompanying paper:

“Researchers have identified several causes for these phenomena, including the fundamentally probabilistic nature of LLM generations and the lack of sufficient factual coverage in training data.”

DataGemma employs two techniques: Retrieval-Interleaved Generation (RIG) and Retrieval-Augmented Generation (RAG). Both methods minimize hallucinations by incorporating real-world data into the generation process.

RIG improves factual accuracy by comparing the model’s output with relevant statistics from Data Commons. It involves generating natural language queries, converting them into structured data queries, and retrieving accurate information with citations. 
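The sketch below illustrates the general shape of such an interleaved-retrieval step: the model’s draft answer pairs its own numeric guess with a natural-language Data Commons query, and the retrieved, verified value replaces the guess. The marker syntax and the `query_data_commons` helper are assumptions made for illustration, not the format used by DataGemma itself.

```python
# Simplified, hypothetical sketch of an interleaved-retrieval pass.
# The inline marker format [DC(guess|query)] and the retrieval helper
# are assumptions for illustration only.
import re

MARKER = re.compile(r"\[DC\((?P<guess>[^|]+)\|(?P<query>[^)]+)\)\]")

def query_data_commons(natural_language_query: str) -> str | None:
    """Placeholder: convert the natural-language query into a structured
    Data Commons lookup and return the value with a citation, or None."""
    raise NotImplementedError

def interleave_retrieval(draft_answer: str) -> str:
    """Replace each inline model guess with the verified Data Commons value."""
    def _substitute(match: re.Match) -> str:
        retrieved = query_data_commons(match.group("query"))
        # Fall back to the model's own guess if no statistic is found.
        return retrieved if retrieved is not None else match.group("guess")
    return MARKER.sub(_substitute, draft_answer)

# Example draft from the model (marker format assumed):
# "California has [DC(39 million|What is the population of California?)] residents."
```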

RIG builds on the Toolformer technique, while RAG, a widely used method, helps models integrate relevant information beyond their initial training data.

In the RAG workflow, the fine-tuned Gemma model creates natural language queries based on the original statistical question. These queries are run against Data Commons to retrieve relevant data. The retrieved values and the original question are then used to prompt Gemini 1.5 Pro, which produces a final answer grounded in the retrieved statistics.
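A high-level sketch of that three-step pipeline is shown below. The helper functions (`gemma_generate_queries`, `fetch_data_commons_tables`, `gemini_answer`) are hypothetical stand-ins for the fine-tuned Gemma model, the Data Commons retrieval step, and Gemini 1.5 Pro; they are not part of any published API.

```python
# High-level sketch of the RAG pipeline described above.
# All helpers are hypothetical placeholders, not real API calls.
from typing import List

def gemma_generate_queries(user_question: str) -> List[str]:
    """Fine-tuned Gemma turns the statistical question into
    natural language Data Commons queries (placeholder)."""
    raise NotImplementedError

def fetch_data_commons_tables(queries: List[str]) -> List[str]:
    """Run each query against Data Commons and return matching
    tables serialized as text (placeholder)."""
    raise NotImplementedError

def gemini_answer(user_question: str, tables: List[str]) -> str:
    """Prompt Gemini 1.5 Pro with the original question plus the
    retrieved tables and return the grounded answer (placeholder)."""
    raise NotImplementedError

def answer_with_rag(user_question: str) -> str:
    queries = gemma_generate_queries(user_question)   # step 1: query generation
    tables = fetch_data_commons_tables(queries)       # step 2: retrieval
    return gemini_answer(user_question, tables)       # step 3: grounded generation
```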

Considerable Progress in Initial Tests

When tested on 101 carefully crafted queries, DataGemma variants fine-tuned with RIG boosted the factual accuracy of the base model to around 58%, up from a baseline of 5-17%. RAG also improved performance; its gains were not as large as RIG’s but were still significantly better than the baseline.

With RAG, the DataGemma models answered 24-29% of queries with statistical data from Data Commons. When they did, the cited numbers were accurate 99% of the time, but the models still drew incorrect inferences from those figures in 6-20% of cases. Both RIG and RAG effectively improve model accuracy on statistical queries.

RIG is faster but less detailed, retrieving and verifying individual statistics, whereas RAG offers more comprehensive data but is limited by data availability and context-handling needs.

Google expects the release of DataGemma to pave the way for stronger models and further research.

“Our research is ongoing, and we’re committed to refining these methodologies further as we scale up this work, subject it to rigorous testing, and ultimately integrate this enhanced functionality into both Gemma and Gemini models, initially through a phased, limited-access approach,” the company declared in a blog post today.

Source:

https://blog.google/technology/ai/google-datagemma-ai-llm
