AI Governance
Govern all models. Centralize model management across all generative AI and predictive AI models, regardless of build origin or environment. Deploy LLMs and ML models seamlessly via the UI or API, and leverage built-in generative AI metrics and interventions for OSS LLMs.
Govern models across all ecosystems: cloud, private cloud, or on edge
Your choice of DataRobot UI or API
Central hub for LLMs and ML models
Deploy any model from DataRobot
Automate compliance with regulations and industry standards. Save time and ensure compliance with automated documentation that meets country or industry requirements (EU AI Act, SR-117, HIPAA, BAA, etc.). Effortlessly run compliance tests for generative AI use cases like PII protection and toxicity, and generate detailed, customizable reports with a single click.
AI compliance: EU AI Act, NYC Law No. 144, Colorado Law SB21-169, California Law AB-2013, SB-1047
Standards & guidelines: EEOC AI Guidance, DIU Responsible AI Guidelines
AI management frameworks: NIST AI RMF, SR-117
Industry best practices such as The Data & Trust Alliance
Model risk and policy management. Minimize AI-related risk and uphold MRM policies and security requirements across all AI projects. Leverage built-in Global Model templates, approval workflows, versioning, and performance monitoring to manage changes and reduce risk during model development and as models move to production.
Flexible platform options to comply with country or industry standards (E.g., BAA, HIPAA)
Support for self-managed on premises, cloud, and STS deployments
Secure, ready-for-deployment models for all users, standardizing model governance across use cases
Real-time LLM guards and compliance monitoring
Real-time intervention and moderation. Protect your models from vulnerabilities like PII leakage, prompt injection attacks, and inaccurate responses with DataRobot’s world-class guard models. Access a full suite of ready-to-use and customizable techniques from NVIDIA, Microsoft, DataRobot, and more to continuously monitor and address issues in LLM and predictive models.
Privacy threats: PII leakage, privacy infringement
Coherence threats: Veering off-topic, hallucinations, off-policy
Malicious threats: toxicity, bias, disinformation
Correctness threats: rouge, faithfulness
Security shield for external models. Secure and govern all models, including externally built LLMs, with ease. Integrate DataRobot into your CI/CD pipeline or MLFlow registry for automatic testing and validation. Monitor and moderate your LLMs in real-time with comprehensive governance policies and custom metrics. Automate these defenses to ensure robust protection for every model in your organization.
Bring governance to OpenAI and LangChain models
Govern and monitor external LLMs with one line of code
MLFLow Registry Sync to test and validate models
Connect DataRobot to any external model in production for governance add-on
Pre-deployment AI red-teaming. Ensure your models are robust and secure by red-teaming your AI before deployment. Test with synthetic or custom datasets to spot jailbreaks, bias, inaccuracies, toxicity, and compliance issues to identify and address vulnerabilities early. Once in production, maintain protection with a broad library of guards for ongoing security.
Red team your AI before deployment
Automate and customize alerts. Strengthen your security by automating and configuring guards to quickly report, block, and respond to threats. Modify or block prompt responses to speed up detection and resolution. Get real-time metrics and custom alerts in your data science applications or SIEM tools. Automatically block prompts if guards are delayed, ensuring consistent protection.
Detect model errors and latency
Automate guard actions
Report, block, and respond to threats in real-time
Custom alerts straight to your applications
Reduce deployment complexity. Centralize your predictive and generative AI assets by organizing, deploying, and versioning them from one registry. Automatically serialize your data and feature engineering pipeline, package LLMs, vector databases, and prompting strategies, and deploy a production-ready REST API endpoint with a single click.
Cloud deployment
Edge Deployment
Embed into business applications
Deploy generative AI applications
Automate resource scaling. Cut operational costs with serverless deployments that auto-adjust compute based on workload and scale-to-zero settings for idle times. Accelerate vector database updates, guard performance, and prediction time with autoscaling.
Scale-to-zero option for idle times
Secure your AI pipelines with CI/CD testing. Secure your AI development pipelines with standardized CI/CD testing. Automate testing and authentication, streamline approval workflows, and switch production models seamlessly without service interruptions.
Enable monitoring without changing code
Easily change approval workflows and history
Integrate RAG quality metrics into CI/CD workflow
Integrate with GitHub Actions
Keep your pipelines at peak performance. Ensure high quality across all deployments–databases, generative AI responses, and predictive models. Gain insight into how well AI responses align with your vector database, and leverage DataRobot’s insights for targeted training opportunities.
Generative AI performance insights via Streamlit app
Customize retraining policies for any model, set triggers on any metric including custom
Use parameters, network access, and key values to build the ideal model on DataRobot or external infrastructure.
Automate champion-challenger experiments to ensure best model stays in production