


Overview
Develop, deploy and monitor ML features and models with a fully integrated platform that brings tools, real-time and batch workflows, and scalable compute infrastructure directly to your data.
Unify model pipelines end to end with any open source model on the same platform where your data lives.
Scale ML pipelines over CPUs or GPUs with built-in infrastructure optimizations — no manual tuning or configuration required.
Discover, manage and govern features and models in Snowflake across the entire lifecycle.





ML Workflow
Model Development
Optimize data loading and distribute model training from Snowflake Notebooks or any IDE of choice with ML Jobs.
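To make this concrete, here is a minimal, hedged sketch of submitting training as an ML Job from a notebook or IDE. It assumes the snowflake.ml.jobs remote decorator in snowflake-ml-python; the compute pool, stage and table names are placeholders, and the exact decorator and job-handle methods may vary by package version.

```python
# Hedged sketch: submit a training function as an ML Job on a Snowflake compute pool.
# MY_COMPUTE_POOL, ML_JOB_STAGE and the table name are placeholders (assumptions).
from snowflake.snowpark import Session
from snowflake.ml.jobs import remote

session = Session.builder.create()  # uses your locally configured connection

@remote("MY_COMPUTE_POOL", stage_name="ML_JOB_STAGE", session=session)
def train(table_name: str) -> float:
    # This body runs remotely on Snowflake's container runtime (CPU or GPU pool).
    from snowflake.snowpark import Session as RemoteSession
    from sklearn.ensemble import GradientBoostingClassifier

    df = RemoteSession.builder.getOrCreate().table(table_name).to_pandas()
    X, y = df.drop(columns=["LABEL"]), df["LABEL"]
    score = GradientBoostingClassifier().fit(X, y).score(X, y)
    print(f"training accuracy: {score:.3f}")
    return score

job = train("ML_DB.PUBLIC.TRAINING_DATA")  # returns a job handle immediately
job.wait()                                 # block until the remote run completes
print(job.get_logs())                      # inspect remote stdout/logs
```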


Feature Management
Create, manage and serve ML features in under 30 milliseconds, with continuous, automated refresh on batch or streaming data, using the Snowflake Feature Store.
Promote discoverability, reuse and governance of features across training and inference.
Easily search for and visually trace features across the pipeline via the integrated Feature Store UI.
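As a rough illustration, the sketch below registers an entity and a continuously refreshed feature view with the Feature Store API in snowflake-ml-python, then joins those features onto a training spine. The database, schema, warehouse and table names are placeholders, and refresh cadence and method names may differ slightly across package versions.

```python
# Hedged sketch: register an entity and a continuously refreshed feature view,
# then join features onto a training spine. All object names are placeholders.
from snowflake.snowpark import Session
from snowflake.ml.feature_store import FeatureStore, Entity, FeatureView, CreationMode

session = Session.builder.create()
fs = FeatureStore(
    session=session,
    database="ML_DB",
    name="FEATURES",                   # schema backing the feature store
    default_warehouse="ML_WH",
    creation_mode=CreationMode.CREATE_IF_NOT_EXIST,
)

# Entity the features are keyed on.
customer = Entity(name="CUSTOMER", join_keys=["CUSTOMER_ID"])
fs.register_entity(customer)

# Features defined as a query; Snowflake keeps them refreshed on a schedule.
features_df = session.sql("""
    SELECT customer_id,
           COUNT(*)    AS order_count_30d,
           AVG(amount) AS avg_order_amount_30d
    FROM ML_DB.PUBLIC.ORDERS
    WHERE order_ts >= DATEADD(day, -30, CURRENT_TIMESTAMP())
    GROUP BY customer_id
""")
fv = FeatureView(
    name="CUSTOMER_ORDER_FEATURES",
    entities=[customer],
    feature_df=features_df,
    refresh_freq="1 hour",
)
fv = fs.register_feature_view(feature_view=fv, version="1")

# Join the registered features onto labeled training examples.
spine_df = session.table("ML_DB.PUBLIC.TRAINING_LABELS")
training_df = fs.retrieve_feature_values(spine_df=spine_df, features=[fv])
```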
Production
Serve models in under 100 milliseconds to power low-latency, online use cases, such as personalized recommendations and fraud detection.
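For illustration, here is a hedged sketch of registering a model and running inference with the Snowflake Model Registry (snowflake.ml.registry). The model is a toy scikit-learn classifier standing in for a real pipeline, all object names are placeholders, and the commented service deployment call is an assumption whose parameters vary by snowflake-ml-python version.

```python
# Hedged sketch: log a model to the Model Registry and run batch inference,
# with an optional low-latency service deployment. Names are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from snowflake.snowpark import Session
from snowflake.ml.registry import Registry

# Toy model standing in for your real training pipeline.
train_df = pd.DataFrame(np.random.rand(200, 4), columns=["F1", "F2", "F3", "F4"])
labels = (train_df["F1"] > 0.5).astype(int)
model = RandomForestClassifier(n_estimators=50).fit(train_df, labels)

session = Session.builder.create()
reg = Registry(session=session, database_name="ML_DB", schema_name="MODELS")

mv = reg.log_model(
    model,
    model_name="FRAUD_CLASSIFIER",
    version_name="V1",
    sample_input_data=train_df.head(10),   # lets the registry infer the signature
)

# Batch inference directly against warehouse data.
scored = mv.run(session.table("ML_DB.PUBLIC.TRANSACTIONS"), function_name="predict")

# For online, low-latency serving the model version can also be deployed as a
# container service, e.g. (parameters are assumptions; check your package version):
# mv.create_service(service_name="FRAUD_SVC",
#                   service_compute_pool="INFERENCE_POOL",
#                   image_repo="ML_DB.MODELS.IMAGE_REPO",
#                   ingress_enabled=True)
```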

“Previously, the process to train all these models and generate predictions took a half hour. The unified model on Snowflake is super quick; we’re talking minutes to generate forecasts for hundreds of thousands of customers. This speed and simplicity will help unlock additional capabilities for the business like simulation and scenario forecasting.”
Dan Shah
Manager of Data Science, IGS Energy


“By delivering fully automated, real-time fee estimates with 97-99% accuracy in Snowflake, we are able to enhance cost transparency and strengthen confidence in settlement and reporting processes.”
Venkata Kalyan Sanaboyina
Senior Engineering Lead, PayPal

Learn more about the integrated features for development and production in Snowflake ML
Get Started
End-to-end ML
Yes, data scientists and ML engineers can build and deploy models with distributed processing on CPUs or GPUs. This is enabled by the Ray-based container infrastructure that powers the Snowflake ML platform.
Yes, Snowflake ML handles both online and batch workloads. For real-time needs, our online feature store and online model inference are generally available to power use cases such as personalized recommendations, fraud detection, pricing optimization and anomaly detection.
No, you can bring models built anywhere externally to run in production on Snowflake data. During inference, you can take advantage of integrated MLOps features such as ML observability and RBAC governance.
Yes, Snowflake ML is fully compatible with any open-source library. Securely access open source repositories via pip and bring in any model from hubs such as Hugging Face, as sketched below.
Snowflake operates on a consumption-based pricing model; see the latest credit pricing table for current rates.
Yes, you can try any of our ML quickstarts directly from the free trial experience.
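As a hedged illustration of bringing an externally built, open source model into Snowflake, the sketch below logs a Hugging Face transformers pipeline to the Model Registry and runs it against warehouse data. It assumes transformers pipeline support in snowflake-ml-python's Registry; the model, table and column names are placeholders, and the expected input column depends on the pipeline task.

```python
# Hedged sketch: register a model built entirely outside Snowflake (from the
# Hugging Face hub) and run governed inference next to the data.
from transformers import pipeline
from snowflake.snowpark import Session
from snowflake.ml.registry import Registry

# Open source model pulled from the Hugging Face hub.
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

session = Session.builder.create()
reg = Registry(session=session, database_name="ML_DB", schema_name="MODELS")

mv = reg.log_model(
    sentiment,
    model_name="TICKET_SENTIMENT",
    version_name="V1",
    pip_requirements=["transformers", "torch"],  # open source deps installed via pip
)

# Inference runs inside Snowflake with RBAC and ML observability applied; the
# input table needs a text column matching what the pipeline expects.
scored = mv.run(session.table("ML_DB.PUBLIC.SUPPORT_TICKETS"))
```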