Guardrails AI, the open and trusted AI assurance company, formally debuted during the opening keynote at the AI in Production conference. The company also introduced Guardrails Hub, an open source product that lets developers build, contribute, share, and re-use advanced validation techniques, known as validators.
These validators can be used with Guardrails, the company’s open source product that acts as a critical reliability layer for AI applications, ensuring they adhere to specified guidelines and norms, according to the company.
“With the launch of Guardrails in 2023, we made a foundational commitment that responsible AI development must be transparent and involve multiple stakeholders. As we navigate the evolving landscape of AI risks, Guardrails Hub will serve as an open and collaborative platform, accelerating the discovery and adoption of groundbreaking tools and methodologies for safely adopting GenAI technologies,” said Shreya Rajpal, co-founder and CEO of Guardrails AI.
Developers have been using Guardrails to gain the assurance they need to deploy their AI applications with confidence.
Guardrails’ safety layer surrounds the AI application, enhancing its reliability and integrity through validation and correction mechanisms.
These validators are user-defined and can range from simple rules to more advanced AI-based checks (a sketch follows the list below). Use cases include:
- Reducing hallucinations by verifying the factuality of AI-extracted information
- Ensuring chatbot communications behave as expected, such as staying on brand and on message
- Enforcing policies and regulations in AI-automated workflows
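As a rough illustration of a user-defined validator, the sketch below follows the validator pattern documented for the open source guardrails-ai Python library (import paths can vary by version); the on-brand word check itself is a hypothetical example, not a shipped validator.

```python
# Minimal sketch of a user-defined validator, following the documented
# guardrails-ai pattern (import paths may vary by library version).
# The banned-word rule below is a hypothetical stand-in for a real
# rule-based or model-based check.
from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)


@register_validator(name="demo/on_brand_tone", data_type="string")
class OnBrandTone(Validator):
    """Fail if the output contains terms the brand wants to avoid."""

    BANNED = {"cheap", "guaranteed", "miracle"}  # hypothetical word list

    def validate(self, value: str, metadata: dict) -> ValidationResult:
        hits = self.BANNED & set(value.lower().split())
        if hits:
            return FailResult(error_message=f"Off-brand terms: {sorted(hits)}")
        return PassResult()
```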
The hub already has 50 pre-built validators, including many contributed by a growing community of individuals and organizations.
By combining validators together like building blocks into guards, developers can explicitly enforce the correctness guarantees and risk boundaries that are essential to their applications, as in the sketch below.
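The snippet below illustrates that composition using the Guard API as documented around the Hub launch, pairing the library’s built-in ValidLength validator with the hypothetical OnBrandTone validator sketched above; treat it as a sketch rather than a definitive implementation.

```python
# Sketch: composing validators into a guard with the guardrails-ai
# Guard API (as documented around the Hub launch). OnBrandTone is the
# hypothetical validator from the previous sketch.
from guardrails import Guard
from guardrails.validators import ValidLength

guard = Guard().use_many(
    ValidLength(min=1, max=280, on_fail="reask"),  # simple rule
    OnBrandTone(on_fail="exception"),              # custom check
)

# Each validator runs in turn; a failure triggers its on_fail policy
# (e.g. re-asking the LLM or raising an exception).
outcome = guard.validate("A dependable product, plainly described.")
print(outcome.validation_passed)
```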
With Guardrails Hub, developers can:
- Build validators
- Contribute and collaborate
- Re-use validators
- Combine validators into guards
- Enforce correctness guarantees and risk boundaries
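To illustrate the re-use workflow, the sketch below installs a community validator from the hub and wraps it in a guard, using the CLI and the ToxicLanguage validator as documented at launch; exact commands and parameters are version-dependent.

```python
# Sketch of re-using a shared validator from Guardrails Hub. As
# documented at launch, hub validators are first installed via the CLI:
#   guardrails hub install hub://guardrails/toxic_language
from guardrails import Guard
from guardrails.hub import ToxicLanguage

# Wrap the community validator in a guard around LLM output; on_fail
# controls what happens when the check fails.
guard = Guard().use(ToxicLanguage, threshold=0.5, on_fail="exception")
guard.validate("Thanks for reaching out! Happy to help.")
```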
In addition to the company launch, Guardrails AI announced that it has closed a $7.5 million seed funding round led by Zetta Venture Partners. The funding will be used to expand the company’s engineering and product teams and to continue advancing its products.
Bloomberg Beta, Pear VC, Factory, and GitHub Fund also participated in the round, along with AI angel investors including Ian Goodfellow of DeepMind, Logan Kilpatrick of OpenAI, and Lip-bu Tan.
"With Guardrails AI, we see not just a company but a movement towards securing AI's future in enterprise. Their commitment to open source and collaborative innovation in AI risk management will ensure that the evolution towards safe and reliable AI applications is accessible to all, not just a select few,” said Apoorva Pandhi, managing partner at Zetta Venture Partners.
For more information about this news, visit www.guardrailsai.com.