In this blog we present an end-to-end, Git-based workflow to test and deploy feature engineering, model training, and inference pipelines.
Learn how to connect Hopsworks to Snowflake and create features, making them available both offline in Snowflake and online in Hopsworks.
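For a flavor of what that looks like in code, here is a minimal sketch using the hsfs client library; the connector name, table, and feature group names are illustrative placeholders, not the blog's actual example.

```python
import hsfs

connection = hsfs.connection()  # assumes API key/host are configured for your project
fs = connection.get_feature_store()

# Offline: expose a Snowflake table as an on-demand (external) feature group,
# so the data stays in Snowflake but is queryable through the feature store.
snowflake = fs.get_storage_connector("snowflake_sc")       # hypothetical connector name
offline_fg = fs.create_on_demand_feature_group(
    name="churn_features",                                 # hypothetical name
    version=1,
    query="SELECT * FROM CUSTOMER_CHURN",                  # hypothetical table
    storage_connector=snowflake,
)
offline_fg.save()

# Online: materialize the same features into Hopsworks for low-latency serving.
df = snowflake.read(query="SELECT * FROM CUSTOMER_CHURN")
online_fg = fs.create_feature_group(
    name="churn_features_online",
    version=1,
    primary_key=["customer_id"],                           # hypothetical key
    online_enabled=True,
)
online_fg.save(df)
```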
Learn how to set up customized alerts in Hopsworks for different events that are triggered as part of the ingestion pipeline.
Learn how to publish (write) and subscribe to (read) streams of events, interact with the schema registry, and use Avro for data serialization.
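As a taste of the APIs involved, here is a minimal publish/subscribe sketch with confluent-kafka-python; the broker address, topic, and schema are placeholders rather than the post's actual setup.

```python
from confluent_kafka import Consumer, Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer, AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

schema_str = """
{
  "type": "record",
  "name": "Purchase",
  "fields": [
    {"name": "user_id", "type": "long"},
    {"name": "amount",  "type": "double"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "https://schema-registry:8081"})  # placeholder URL

# Publish (write): serialize a dict to Avro and produce it to a topic.
serializer = AvroSerializer(registry, schema_str)
producer = Producer({"bootstrap.servers": "broker:9092"})                 # placeholder broker
producer.produce(
    topic="purchases",
    value=serializer({"user_id": 42, "amount": 9.99},
                     SerializationContext("purchases", MessageField.VALUE)),
)
producer.flush()

# Subscribe (read): poll the topic and deserialize the Avro payload back to a dict.
deserializer = AvroDeserializer(registry, schema_str)
consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "demo-consumer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["purchases"])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    event = deserializer(msg.value(), SerializationContext("purchases", MessageField.VALUE))
    print(event)
consumer.close()
```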
This tutorial gives an overview of how to work with Jupyter on the platform and train a state-of-the-art ML model using the fastai Python library.
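To give an idea of how little code that takes, here is a minimal fastai sketch in the style of the library's own pet-classifier example; the dataset and architecture are illustrative choices, not necessarily the ones used in the tutorial.

```python
from fastai.vision.all import *

# Oxford-IIIT Pet dataset, bundled with fastai's download helpers.
path = untar_data(URLs.PETS) / "images"

def is_cat(name):
    # In this dataset, cat breeds have capitalized filenames.
    return name[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224),
)

# Transfer learning from a pretrained ResNet, fine-tuned for one epoch.
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```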
Learn how to train an ML model in a distributed fashion on Databricks without rewriting your code, using Maggy, an open-source tool available on Hopsworks.
This tutorial gives an overview of how to install and manage Python libraries on the platform.
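Installation itself is driven from the Hopsworks UI (the project's Python service), but from any attached Jupyter notebook you can inspect what the environment provides; this generic snippet is just an illustration, not a Hopsworks-specific API.

```python
# List the packages visible to the running Jupyter kernel, sorted by name.
import pkg_resources

for dist in sorted(pkg_resources.working_set, key=lambda d: d.project_name.lower()):
    print(f"{dist.project_name}=={dist.version}")
```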
Use open-source Maggy to write and debug PyTorch code on your local machine and run the code at scale without changing a single line in your program.
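The pattern looks roughly like the sketch below: an ordinary PyTorch training function handed to Maggy for distribution. The class and parameter names (TorchDistributedConfig, experiment.lagom) are recalled from Maggy's documentation and may differ between versions, so treat this as the shape of the API rather than a verified example.

```python
import torch
from maggy import experiment
from maggy.experiment_config import TorchDistributedConfig  # name per Maggy docs; verify for your version

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

def train(module, hparams, train_set, test_set):
    # Plain single-machine PyTorch code: Maggy wraps the module for
    # distributed execution, so nothing here needs to change to scale out.
    model = module()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    # ... regular training loop over train_set ...
    return {"accuracy": 0.0}  # placeholder metric

config = TorchDistributedConfig(module=MyModel, name="maggy_torch_demo")
experiment.lagom(train, config)
```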
Learn how to design and ingest features, browse existing features, and create training datasets as DataFrames or as files on Azure Blob Storage.
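In hsfs terms, that flow looks roughly like this; the feature group names and the Azure Blob storage connector are hypothetical and assume they already exist in your project.

```python
import hsfs

fs = hsfs.connection().get_feature_store()

# Browse and join existing feature groups into a query.
rain_fg = fs.get_feature_group("rain_features", version=1)         # hypothetical
temp_fg = fs.get_feature_group("temperature_features", version=1)  # hypothetical
query = rain_fg.select_all().join(temp_fg.select_all())

# Materialize the query as an in-memory DataFrame...
df = query.read()

# ...or as CSV files on Azure Blob Storage via a storage connector.
td = fs.create_training_dataset(
    name="weather_td",
    version=1,
    data_format="csv",
    storage_connector=fs.get_storage_connector("azure_blob_sc"),   # hypothetical connector
)
td.save(query)
```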
Connect the Hopsworks Feature Store to Amazon Redshift and transform your data into features for training models and making predictions.
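Concretely, with a Redshift storage connector already configured in Hopsworks, the read-transform-write loop can be sketched like this (connector, table, and feature names are placeholders):

```python
import hsfs

fs = hsfs.connection().get_feature_store()

# Pull raw rows out of Redshift through the storage connector.
redshift = fs.get_storage_connector("telco_redshift")      # hypothetical connector
df = redshift.read(query="SELECT * FROM telco_churn")      # hypothetical table

# ... feature engineering on df ...

# Persist the engineered features as a feature group for training and serving.
fg = fs.create_feature_group(
    name="churn_features",
    version=1,
    primary_key=["customer_id"],                           # hypothetical key
)
fg.save(df)
```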
Learn how to integrate Kubeflow with Hopsworks and take advantage of its Feature Store and scale-out deep learning capabilities.
This blog introduces the Hopsworks Feature Store for Databricks and shows how it can accelerate and govern your model development and operations on Databricks.
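From a Databricks notebook, connecting to the feature store is a few lines with the hsfs client; the host, project, and secret scope below are placeholders for your own deployment.

```python
import hsfs

connection = hsfs.connection(
    host="my-instance.cloud.hopsworks.ai",                      # placeholder host
    project="demo_project",                                     # placeholder project
    api_key_value=dbutils.secrets.get("hopsworks", "api_key"),  # dbutils is ambient in Databricks notebooks; placeholder scope/key
    hostname_verification=True,
)
fs = connection.get_feature_store()

# Feature groups read back as Spark DataFrames inside Databricks.
fg = fs.get_feature_group("churn_features", version=1)          # hypothetical name
df = fg.read()
```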