This dictionary/glossary covers terms from MLOps, LLMOps, data engineering, and Feature Stores, but does not cover terms from the broader space of ML algorithms and frameworks.
MLOps is the roadmap you follow to go from training models in notebooks to building production ML systems. It is a set of principles and practices that cover the entire ML system lifecycle, from ideation through data management, feature creation, model training, and inference, to observability and operations.
MLOps is based on three principles: observability, automated testing, and versioning of ML artifacts. Observability for ML systems is the ability to gain insight into the behavior and performance of machine learning models in production. Automated testing lets you build ML systems with confidence, because tests catch bugs in your data or code before they reach production. Versioning lets you operate ML systems safely by supporting upgrades and rollbacks without disrupting running systems. Together, these practices should help tighten your ML development iteration loop, enabling you to roll out fixes and improvements to ML systems faster.

Finally, the Feature Store is often called the data layer for MLOps. It acts as a data platform that enables ML pipelines to be decomposed into smaller, more manageable pipelines for feature engineering, model training, and model inference, as in the sketch below.
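To make that decomposition concrete, here is a minimal sketch in Python. It is not tied to any particular feature store product: the "feature store" is stood in for by a local CSV file, the model registry by a joblib file, and all file, column, and function names are illustrative.

```python
# A minimal sketch of decomposing one monolithic ML pipeline into separate
# feature, training, and inference pipelines that share data through a
# "feature store", here stood in for by a versioned CSV file on disk.
# All names below are illustrative, not a specific product's API.

import numpy as np
import pandas as pd
import joblib
from sklearn.linear_model import LogisticRegression

FEATURES_V1 = "features_v1.csv"   # versioned feature artifact
MODEL_V1 = "model_v1.joblib"      # versioned model artifact


def feature_pipeline(raw: pd.DataFrame) -> None:
    """Feature engineering: transform raw data, write versioned features."""
    features = raw.assign(amount_log=np.log1p(raw["amount"]))
    features.to_csv(FEATURES_V1, index=False)


def training_pipeline() -> None:
    """Model training: read features, fit a model, write a versioned model."""
    features = pd.read_csv(FEATURES_V1)
    X, y = features[["amount", "amount_log"]], features["is_fraud"]
    joblib.dump(LogisticRegression().fit(X, y), MODEL_V1)


def inference_pipeline(new_rows: pd.DataFrame) -> np.ndarray:
    """Model inference: reuse the feature logic, load the model, predict."""
    model = joblib.load(MODEL_V1)
    X = new_rows.assign(amount_log=np.log1p(new_rows["amount"]))
    return model.predict(X[["amount", "amount_log"]])


if __name__ == "__main__":
    raw = pd.DataFrame({"amount": [10.0, 250.0, 3.0, 999.0],
                        "is_fraud": [0, 1, 0, 1]})
    feature_pipeline(raw)
    training_pipeline()
    print(inference_pipeline(pd.DataFrame({"amount": [5.0, 800.0]})))
```

Because each pipeline reads and writes versioned artifacts (here, features_v1.csv and model_v1.joblib), a problematic feature or model version can be rolled back independently, without touching the other pipelines.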
LLMOps is MLOps for Large Language Models (LLMs): a set of practices for operationalizing applications that use LLMs to provide intelligent language-based services. This involves managing LLM fine-tuning, prompt engineering, integration with external vector databases and/or feature stores for in-context learning, and the infrastructure for training, deploying, and serving LLMs.
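As an illustration of in-context learning backed by a vector index, here is a minimal, self-contained sketch. The embedding function is a toy hashed bag-of-words stand-in for a real embedding model, retrieval is a brute-force cosine similarity search rather than a real vector database, and the resulting prompt would be sent to an LLM API that is not shown.

```python
# A minimal sketch of in-context learning: embed a question, retrieve the most
# similar documents from a toy in-memory "vector index", and paste them into
# the prompt sent to an LLM. The embedding is a hashed bag-of-words stand-in
# for a real embedding model, and the call to an LLM API is omitted.

import numpy as np

DOCUMENTS = [
    "Feature pipelines write engineered features to the feature store.",
    "Training pipelines read features and produce versioned models.",
    "Inference pipelines read features and models to produce predictions.",
]


def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hashed bag-of-words, normalized to unit length."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v


def top_k(question: str, k: int = 2) -> list[str]:
    """Brute-force retrieval: rank documents by similarity to the question."""
    q = embed(question)
    scores = [float(q @ embed(doc)) for doc in DOCUMENTS]
    best = np.argsort(scores)[::-1][:k]
    return [DOCUMENTS[i] for i in best]


def build_prompt(question: str) -> str:
    """Prompt engineering: put retrieved context ahead of the user's question."""
    context = "\n".join(f"- {doc}" for doc in top_k(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


if __name__ == "__main__":
    # The resulting string would be sent to an LLM completion/chat API.
    print(build_prompt("What does a training pipeline produce?"))
```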