Hopsworks 4.2: Supercharged Observability for Smarter ML Applications
Our latest update introduces enhanced observability for the Online Feature Store, along with package upgrades and bug fixes, offering more flexibility and power for building and scaling your LLMs.
Building Tomorrow’s AI Infrastructure with Supermicro!
This April, we teamed up with Supermicro to develop a next-generation AI infrastructure platform, tailored to meet increasing sovereign data requirements. By combining our expertise, we deliver high-performance, optimized AI solutions and efficient GPU management.
What's New in Our GPU Management Features
✅ Harness GPU Power with Hopsworks
Unlock real-time visibility into GPU usage, job metrics, system load, and more, all while integrating Hopsworks effortlessly with your current infrastructure. Together with NVIDIA, Supermicro, and OVHcloud, we’re bringing you unparalleled GPU performance.
▶️ Missed our webinar on GPU Optimization?
Watch it anytime to learn how to maximize GPU usage at scale with Hopsworks, enabling smart GPU sharing across teams and projects.
Interesting Content 📚
📌 Blog Alert: How we Secure your Data with Hopsworks!
Discover how Hopsworks tackles information security, featuring our novel project-based multi-tenant security model and the suite of multi-tenant services we now support.
🔍 Introducing MCP: A Key Update to Our MLOps Dictionary
This quarter, we've refreshed our ML dictionary to include the latest buzzwords shaping the AI landscape. Explore the emerging concept of the Model Context Protocol (MCP) and what it means for the future of machine learning.