

The Open-Source Orchestration Framework for Seamless MLOps Automation.

MLRun is a high-performance, open-source MLOps framework designed to automate the machine learning lifecycle from research to production. Originally developed by Iguazio (now part of McKinsey & Company), MLRun centers its architecture on 'Serverless Functions' and 'Data Items,' allowing data scientists to write code once and run it anywhere, from local development environments to large-scale Kubernetes clusters. As of 2026, it is positioned as a critical bridge between data science experimentation and enterprise-grade deployment. The platform features an integrated Feature Store, automated experiment tracking, and real-time model serving via Nuclio. By abstracting infrastructure complexities, MLRun enables teams to build scalable pipelines with minimal DevOps overhead. Deep integration with the Kubernetes ecosystem and support for hybrid and multi-cloud deployments make it a preferred choice for organizations that want to avoid vendor lock-in while meeting enterprise security and compliance requirements. The framework's transition into McKinsey's QuantumBlack ecosystem has further strengthened its capabilities for operationalizing AI in complex, high-stakes business transformations.
A centralized repository to define, create, and serve features across training and serving layers.
High-performance serverless event and data processing platform optimized for data-intensive tasks.
Automatically captures data snapshots, plots, and models during execution without manual boilerplate.
Abstracted execution layer that runs identically on AWS, Azure, GCP, or On-prem.
Integrated real-time monitoring that compares production data to training data distributions.
Ability to chain multiple functions (preprocessing, model, post-processing) into a directed acyclic graph (DAG).
Native support for distributed computing engines to handle petabyte-scale data processing.
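Chaining functions into a directed acyclic graph is the core idea behind MLRun's serving graphs. The sketch below is a conceptual illustration in plain Python, not MLRun's serving API; the step names (`preprocess`, `model`, `postprocess`) and the linear `run_graph` runner are illustrative stand-ins for the preprocessing/model/post-processing chain described above.

```python
# Conceptual sketch of chaining steps into a linear DAG (plain Python,
# not MLRun's serving-graph API; all names here are illustrative).

def preprocess(record):
    # Normalize raw input before it reaches the model.
    return {k: float(v) for k, v in record.items()}

def model(features):
    # Stand-in "model": a fixed linear score over two features.
    return 0.5 * features["x1"] + 0.25 * features["x2"]

def postprocess(score):
    # Turn the raw score into a business-friendly label.
    return {"score": score, "label": "high" if score > 1.0 else "low"}

def run_graph(record, steps):
    # Execute the steps in order, feeding each output to the next step.
    result = record
    for step in steps:
        result = step(result)
    return result

pipeline = [preprocess, model, postprocess]
print(run_graph({"x1": "3", "x2": "2"}, pipeline))  # {'score': 2.0, 'label': 'high'}
```

In MLRun itself, each step would typically be its own serverless function, so the graph can scale and be monitored per step rather than running in a single process as here.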
Install MLRun client via 'pip install mlrun' in your Python environment.
Configure connection to an MLRun API service or run locally in 'mock' mode.
Define a Python function or import an existing notebook as an MLRun function.
Annotate function parameters and data inputs using the MLRun SDK.
Run the function as a job to track parameters, inputs, and artifacts automatically.
Ingest data into the MLRun Feature Store for versioned, reusable datasets.
Compose complex workflows using Kubeflow Pipelines (KFP) or MLRun's built-in scheduler.
Deploy trained models as real-time serverless functions using the Nuclio engine.
Set up monitoring for deployed models to track performance and data drift.
Integrate with CI/CD tools like GitHub Actions for automated retraining and deployment.
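The monitoring step above compares production data to the training distribution. The following is a minimal sketch of that kind of comparison in plain Python, not MLRun's monitoring API: a simple Population Stability Index (PSI) over shared equal-width histogram bins, where a larger value indicates a bigger distribution shift. The sample data and the `psi` helper are illustrative.

```python
import math

def psi(expected, actual, bins=4):
    # Population Stability Index between a training (expected) sample
    # and a production (actual) sample, over shared equal-width bins.
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample):
        counts = [0] * bins
        for v in sample:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
prod_ok = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.8]
prod_shifted = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0, 1.0]

# The drifted production sample scores much higher than the healthy one.
print(psi(train, prod_ok) < psi(train, prod_shifted))
```

A production monitor would compute statistics like this per feature on a schedule and raise an alert (or trigger the CI/CD retraining flow above) when a threshold is crossed.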
Verified feedback from other users.
"Highly praised for its ability to unify the research and production environments, though users note a steep learning curve for those unfamiliar with Kubernetes."
