IBM Expands Granite Model Family with New Multi-Modal and Reasoning AI


IBM is debuting the latest version of its Granite large language model (LLM) family, Granite 3.2—continuing to deliver small, efficient, practical enterprise AI for real-world impact.

All Granite 3.2 models are available under the permissive Apache 2.0 license on Hugging Face. Select models are available now on IBM watsonx.ai, Ollama, Replicate, and LM Studio, and expected soon in RHEL AI 1.5—bringing advanced capabilities to businesses and the open-source community.

Highlights include:

  • A new vision language model (VLM) for document understanding tasks, which matches or exceeds the performance of significantly larger models on the essential enterprise benchmarks DocVQA, ChartQA, AI2D, and OCRBench. In addition to robust training data, IBM used its own open-source Docling toolkit to process 85 million PDFs and generated 26 million synthetic question-answer pairs to enhance the VLM's ability to handle complex document-heavy workflows.
  • Chain-of-thought capabilities for enhanced reasoning in the 3.2 2B and 8B models, with the ability to switch reasoning on or off to help optimize efficiency. With this capability, the 8B model achieves double-digit improvements over its predecessor on instruction-following benchmarks such as ArenaHard and AlpacaEval, without degrading safety or performance elsewhere. Furthermore, using novel inference-scaling methods, the Granite 3.2 8B model can be calibrated to rival the performance of much larger models, such as Claude 3.5 Sonnet or GPT-4o, on math reasoning benchmarks including AIME2024 and MATH500.
  • Slimmed-down size options for the Granite Guardian safety models that maintain the performance of the previous Granite 3.1 Guardian models at a 30% reduction in size. The 3.2 models also introduce a new feature called verbalized confidence, which offers more nuanced risk assessment by acknowledging ambiguity in safety monitoring.

According to the company, IBM's strategy of delivering smaller, specialized AI models for enterprises continues to demonstrate efficacy in testing, with the Granite 3.1 8B model recently earning high marks for accuracy in the Salesforce LLM Benchmark for CRM.

The Granite model family is supported by a robust ecosystem of partners, including leading software companies embedding the LLMs into their technologies.

Granite 3.2 is an important step in the evolution of IBM's portfolio and its strategy of delivering small, practical AI for enterprises. For simpler tasks, the model can operate without reasoning to reduce unnecessary compute overhead, while techniques like inference scaling have shown that the Granite 3.2 8B model can match or exceed the performance of much larger models on standard math reasoning benchmarks. Evolving methods such as inference scaling remain a key area of focus for IBM's research teams, the company said.

Alongside the Granite 3.2 instruct, vision, and guardrail models, IBM is releasing the next generation of its TinyTimeMixers (TTM) models (under 10 million parameters), with capabilities for longer-term forecasting up to two years into the future.

IBM said these make for powerful tools in long-term trend analysis, including financial and economic trend analysis, supply chain demand forecasting, and seasonal inventory planning in retail.

"The next era of AI is about efficiency, integration, and real-world impact—where enterprises can achieve powerful outcomes without excessive spend on compute," said Sriram Raghavan, VP, IBM AI Research. "IBM's latest Granite developments, focused on open solutions, demonstrate another step forward in making AI more accessible, cost-effective, and valuable for modern enterprises."

For more information about this news, visit www.ibm.com.
