This dictionary/glossary covers terms from MLOps, data engineering, and feature stores; it does not cover the broader space of ML (Machine Learning) algorithms and frameworks. MLOps is the roadmap you follow to go from training models in notebooks to building production ML systems: a set of principles and practices that span the entire ML system lifecycle, from ideation through data management, feature creation, model training, inference, observability, and operations. MLOps rests on three principles: observability, automated testing, and versioning of ML artifacts.
Observability for ML systems is the ability to gain insight into the behavior and performance of production machine learning models. Automated testing lets you build ML systems with confidence that tests will catch potential bugs in your data or code. Versioning lets you operate ML systems safely, supporting upgrades and rollbacks without disrupting system operations. Together, these principles tighten your ML development iteration loop, enabling you to roll out fixes and improvements to ML systems faster. Finally, the Feature Store is often called the data layer for MLOps. It acts as a data platform that enables a monolithic ML pipeline to be decomposed into smaller, more manageable pipelines for feature engineering, model training, and model inference.
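This decomposition can be sketched in a few lines of Python. The `FeatureStore` class and pipeline functions below are hypothetical and purely illustrative, assuming a trivial in-memory store and a stand-in "model"; real feature stores expose much richer APIs (feature versioning, point-in-time joins, separate online and offline stores). The point is only the shape: three independent pipelines that communicate solely through the store.

```python
# Hypothetical sketch: three decoupled pipelines sharing a feature store.
# Names (FeatureStore, feature_pipeline, etc.) are illustrative, not a real API.

class FeatureStore:
    """Toy in-memory feature store: feature groups keyed by name."""
    def __init__(self):
        self._groups = {}

    def write(self, group, rows):
        self._groups.setdefault(group, []).extend(rows)

    def read(self, group):
        return list(self._groups.get(group, []))


def feature_pipeline(store, raw_events):
    """Feature engineering: transform raw events into feature rows."""
    rows = [{"id": e["id"], "amount_usd": e["cents"] / 100} for e in raw_events]
    store.write("payments", rows)


def training_pipeline(store):
    """Model training: read features, produce a trivial 'model' artifact
    (here, just the mean amount as a stand-in for a trained model)."""
    feats = store.read("payments")
    mean = sum(r["amount_usd"] for r in feats) / len(feats)
    return {"mean_amount": mean}


def inference_pipeline(store, model, entity_id):
    """Model inference: look up features for one entity and score it
    (flag entities whose spend is above the training-set mean)."""
    row = next(r for r in store.read("payments") if r["id"] == entity_id)
    return row["amount_usd"] > model["mean_amount"]


store = FeatureStore()
feature_pipeline(store, [{"id": 1, "cents": 500}, {"id": 2, "cents": 1500}])
model = training_pipeline(store)
print(inference_pipeline(store, model, 2))  # prints True: 15.00 > mean of 10.00
```

Because each pipeline touches only the store, the three can be developed, tested, versioned, and scheduled independently, which is exactly what makes the decomposed pipelines "smaller and more manageable".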