IBM watsonx.ai Adds the Ability to Access Custom Foundation Models


IBM is adding a feature update to watsonx.ai that delivers an open framework, giving users access to a catalogue of built-in models and patterns that can be seamlessly extended through a “build your own model” capability.

Users can now do even more with IBM’s enterprise AI studio; they can upload and deploy their own custom foundation models across software and SaaS environments.

There are many reasons to import a custom foundation model, all driven by the unique needs of the organization. Ultimately, it comes down to this: the foundation model that is optimal for the task at hand currently resides outside of watsonx.ai.

The custom foundation model approach provides users greater flexibility in how they select and use the right model to meet specific generative AI use cases and technical tasks.

In addition to working with foundation models that are curated by IBM, including open-source, third-party, and IBM-developed watsonx foundation models, users can now upload and deploy their own custom foundation model to accomplish a range of industry- or domain-specific generative AI tasks.

For instance, a common task for many clients is summarizing customer service transcripts or generating personalized outbound emails. Another popular use case is to tune a large language model (LLM) for a specific language or with specialized labeled data to customize a model to an industry or business domain.

By deploying this custom model into watsonx.ai software or SaaS environments, you can use it within your applications and have access to the platform’s enterprise-ready governance features. Further, with our on-premises solution, you’re bringing the model closer to where your data resides, mitigating your risk exposure.

In both SaaS and software environments, watsonx.ai will initially support the base versions or customizations of foundation models for natural language and programming language generation within supported model architecture types.

Alternatively, you can bring in models that you’ve already fine-tuned in your own environment. Note that as part of this release, you cannot further tune a custom model once it’s deployed in watsonx.ai.

The supported model architecture types that can be imported into watsonx.ai include the following:

  • bloom
  • codegen
  • falcon
  • gpt_bigcode
  • gpt_neox
  • gptj
  • llama2
  • mixtral
  • mistral
  • mt5
  • mpt
  • t5

When the model is imported and deployed, prompt engineers, developers, and AI model builders can interact with the custom model as they would with other models in the watsonx.ai AI studio, for example:

  • Using the Prompt Lab to build and test prompts, including creating reusable prompt templates.
  • Programmatically accessing the model by using REST API calls (see the sketch after this list).
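
As an illustration of the second bullet, below is a minimal Python sketch of calling a deployed custom model over REST. The regional URL, deployment ID, API version date, and the `IBMCLOUD_API_KEY` environment variable are placeholders assumed for this example; confirm the exact endpoint path and payload fields against the watsonx.ai API reference for your own deployment.

```python
import os
import requests

# Placeholders assumed for illustration; replace with values from your watsonx.ai deployment.
WATSONX_URL = "https://us-south.ml.cloud.ibm.com"   # regional watsonx.ai endpoint (assumed)
DEPLOYMENT_ID = "your-custom-model-deployment-id"   # ID of the deployed custom model (placeholder)
API_VERSION = "2024-05-01"                          # API version date (assumed)


def get_iam_token(api_key: str) -> str:
    """Exchange an IBM Cloud API key for a short-lived IAM bearer token."""
    resp = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        data={
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": api_key,
        },
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


def generate(prompt: str, token: str) -> str:
    """Send a text-generation request to the deployed custom foundation model."""
    url = f"{WATSONX_URL}/ml/v1/deployments/{DEPLOYMENT_ID}/text/generation"
    payload = {
        "input": prompt,
        "parameters": {"decoding_method": "greedy", "max_new_tokens": 200},
    }
    resp = requests.post(
        url,
        params={"version": API_VERSION},
        json=payload,
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["generated_text"]


if __name__ == "__main__":
    iam_token = get_iam_token(os.environ["IBMCLOUD_API_KEY"])
    print(generate("Summarize the following customer service transcript: ...", iam_token))
```

The same request can be issued from any language or tool that can send HTTPS requests with a bearer token; the Python `requests` library is used here only to keep the sketch self-contained.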

For more information about this news, visit www.ibm.com.

