Guardrails and traces for LLM applications

Protect against jailbreaks, hallucinations, and malicious inputs, while monitoring LLM behavior with detailed traces, spans, and metrics.

Lightning-fast evaluators

Check user inputs and LLM outputs with our built-in evaluators, or create your own custom guardrails. We support text, conversations, RAG, and even tools.

Jailbreak & Prompt Injection

Prevent your LLM from being jailbroken through prompt injection and other emerging threat vectors.

Sensitive Topics

Detect when political, legal, medical, or even religious topics are being discussed.

Profanity & Hate

Filter out offensive and threatening language as well as hate speech from your LLM.

PII Detection

Catch personally identifiable information in your LLM outputs, especially when using RAG.
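To illustrate the kind of pattern such a check catches, here is a minimal sketch that flags email addresses and phone numbers with regular expressions; it is an illustration under assumed patterns, not how Modelmetry's PII evaluator works internally.

// Minimal illustrative PII screen (for illustration only;
// real PII detection covers far more than these two regexes).
const PII_PATTERNS: Record<string, RegExp> = {
  email: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  phone: /\+?\d[\d\s().-]{7,}\d/g,
}

function findPii(text: string): { kind: string; match: string }[] {
  const hits: { kind: string; match: string }[] = []
  for (const [kind, pattern] of Object.entries(PII_PATTERNS)) {
    for (const match of text.match(pattern) ?? []) {
      hits.push({ kind, match })
    }
  }
  return hits
}

// Example: a RAG answer that accidentally quotes a customer record.
console.log(findPii("Reach Jane at jane.doe@example.com or +1 415 555 0101."))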

Competitor Blocklist

Block competitors from being mentioned in your LLM outputs and avoid embarrassment.

Tone & Mood

Identify and grade the tone and mood of your LLM outputs to ensure they are appropriate.

Language Detection

Ensure the LLM output is in the same language as the user input to avoid language confusion.
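As a rough sketch of the idea (using the open-source franc language detector purely for illustration; Modelmetry's evaluator does not depend on it), a mismatch check can compare the detected language of the input and the output:

// Rough language-consistency sketch using the `franc` package, which
// returns an ISO 639-3 code such as "eng", or "und" when undetermined.
import { franc } from "franc"

function languagesMatch(userInput: string, llmOutput: string): boolean {
  const inputLang = franc(userInput)
  const outputLang = franc(llmOutput)
  return inputLang !== "und" && inputLang === outputLang
}

// A French question answered in English should fail this check.
console.log(languagesMatch("Quels sont vos horaires d'ouverture ?", "We are open from 9am to 5pm."))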

Topical Relevance

Compare input and output embeddings to ensure the LLM is on topic and relevant.
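As a back-of-the-envelope sketch of that idea (not Modelmetry's exact scoring), relevance can be approximated by the cosine similarity between the input and output embedding vectors, with the 0.7 threshold below being an arbitrary example value:

// Cosine similarity between two embedding vectors obtained from any
// embedding model; higher means the reply stays closer to the question.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

function isOnTopic(inputEmbedding: number[], outputEmbedding: number[], threshold = 0.7): boolean {
  return cosineSimilarity(inputEmbedding, outputEmbedding) >= threshold
}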

Emotion Analysis

Detect the emotions in the content and ensure they are appropriate.

Helpfulness

SOON

Check how helpful the replies have been using an LLM-as-judge.
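The general LLM-as-judge pattern looks roughly like the sketch below, where a second model grades the reply; the judge prompt, model name, and use of the OpenAI client are illustrative assumptions, not Modelmetry's implementation:

// Ask a judge model to grade how helpful a reply was, on a 1-5 scale.
import OpenAI from "openai"

const openai = new OpenAI() // reads OPENAI_API_KEY from the environment

async function gradeHelpfulness(question: string, reply: string): Promise<number> {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative choice of judge model
    messages: [
      {
        role: "system",
        content: "Rate how helpful the assistant reply is to the user question on a scale of 1 to 5. Answer with the number only.",
      },
      { role: "user", content: `Question: ${question}\n\nReply: ${reply}` },
    ],
  })
  return Number(completion.choices[0].message.content?.trim())
}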

Our evaluators integrate with the world's leading AI providers.

Use our SDK to integrate Modelmetry into your application.

const modelmetry = new ModelmetryClient()
const guardrails = modelmetry.guardrails()

const result = await guardrails.check("grd_jaohsfzgcbd523hbt1grwmvp", {
  Input: {
    Text: "What does the employee handbook say about vacation time during a busy period?",
  },
})

if (result.failed) {
  // Handle the failure: each summarised entry exposes more debugging data
  // (scores, the evaluation(s) that failed) from the Check.
  for (const entry of result.summarisedEntries) {
    console.log(entry)
  }
  return "Sorry user, I cannot help you with this query at the moment."
}

// carry on as normal
Download @modelmetry/sdk on NPM
Download modelmetry-sdk on PyPI
View @modelmetry on GitHub
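One common integration pattern, sketched below with the try/catch behaviour as an assumption about your own application rather than a feature of the SDK, is to decide explicitly whether a request should fail open or fail closed when a check cannot be completed:

// Wrap the check so your app decides what happens if the guardrail call
// itself errors out (network issue, timeout). Failing open keeps the chat
// responsive; failing closed is safer for high-risk flows.
async function isInputAllowed(text: string): Promise<boolean> {
  try {
    const result = await guardrails.check("grd_jaohsfzgcbd523hbt1grwmvp", {
      Input: { Text: text },
    })
    return !result.failed
  } catch (err) {
    console.warn("Guardrail check unavailable, failing open", err)
    return true // return false instead to fail closed
  }
}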

LLMs Are Unpredictable. User Inputs & RAG, too.

Modelmetry provides lightning-fast, advanced guardrails to keep your company safe from LLM risks.

Users can wreak havoc on apps using LLMs.
A malicious user can
use prompt injection and attempt model jailbreaks
A competitor can
make your chatbot compliment their own product
An angry customer can
use insults and inappropriate language
A bad actor can
get your AI to respond with hate speech, threats, and harmful content
LLMs will inevitably respond inappropriately.
Your LLM can
demonstrate bias, toxicity, or other harmful behavior
Your RAG can
leak sensitive information, PII, and violate privacy
Your Support Agent can
write answers that are too long, too short, or too vague
Your Chatbot can
end up in an infinite loop or provide irrelevant answers
Use LLMs with confidence

State-of-the-art LLM guardrails

Check user inputs and LLM outputs with our wide range of evaluators. Our guardrails help you deploy your models with confidence.

What is Modelmetry?

Modelmetry is an advanced platform designed to enhance the safety, quality, and appropriateness of data and models in applications utilizing Large Language Models (LLMs) like chatbots. It offers a comprehensive suite of evaluators to assess critical aspects such as emotion analysis, PII leak detection, text moderation, relevancy, and security threat detection.

With customizable guardrails, early termination options, and detailed metrics and scores, Modelmetry ensures that your LLM applications meet high standards of performance and safety. This robust framework provides actionable insights, safeguarding the integrity and effectiveness of your AI-driven solutions.

Who is Modelmetry for?

Modelmetry is ideal for developers and software engineers aiming to ensure their AI-driven applications are safe, reliable, and compliant with regulations.

Modelmetry also benefits higher-level stakeholders, including product managers, compliance officers, and CEOs, by offering a robust framework to monitor and enhance application performance and security, ensuring high standards of safety and quality while mitigating risks.

Is Modelmetry open source?

Absolutely, our client SDKs are open source. Our backend is proprietary because, well, it's our secret sauce. We can export all your data upon request.

How does Modelmetry handle data privacy and security?

Modelmetry is committed to protecting your data privacy and security. We do not access payloads on your behalf, ever. We are a security-focused company and have implemented robust measures to ensure the confidentiality and integrity of your data. We use encryption, secure connections, and other industry-standard security practices to safeguard your data.

Do you store inputs and outputs?

We never access your payloads on your behalf. We do, however, store inputs and outputs so you can review them alongside metrics and scores.