Snowflake provides industry-leading features that ensure the highest levels of governance for your account and users, as well as all the data you store and access in Snowflake.
DataOps.live is the leading provider of Snowflake environment management, end-to-end orchestration, CI/CD, automated testing & observability, and code management, wrapped in an elegant developer interface.
What are Data Professionals saying?
“I speak to our biggest customers all over the globe frequently, and many of them ask me about agility and governance. They ask questions like... how can I do CI/CD for Snowflake so I can deliver value faster? DataOps.live are at the leading edge of the DataOps movement and are among a very few world authorities on automation and CI/CD within and across Snowflake.”
“DataOps.live and Snowflake got us from vision to production in a frankly terrifyingly short time. Most importantly, they have done that while the organization, the systems, and the data are constantly changing. Creating clarity and repeatability in this hugely dynamic context is their unique advantage!”
Data Product Management
Managing data like a product means treating data not just as a byproduct of business operations, but as a valuable resource that can drive growth, innovation, strategic value and differentiation. This requires a mindset shift to managing the build and lifecycle of that data product, managing the attributes of that data product, and the dependencies between data products.
DataOps.live is the ONLY platform that combines all these capabilities with a seamless developer and operator experience.
What is a Data Product?
Data products are core building blocks of EVERY data platform.
Data products can take many forms:
- simple business reports that provide insights into sales performance or customer behavior
- more complex machine learning models that predict future trends or make recommendations based on past data
Data product manifest
DataOps.live provides a Data Product registry and a Data Product Manifest object for each data product. The manifest is an organizing document that aggregates all the essential attributes of a data product: the list of entities it contains, a detailed schema description for each entity, known relationships among entities, Service Level Indicators, and much more.
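To make the idea concrete, here is a minimal sketch of what such a manifest might capture. The field and entity names are illustrative assumptions, not the actual DataOps.live manifest schema.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One entity in the data product, with its schema and relationships."""
    name: str
    schema: dict                       # column name -> Snowflake type
    relates_to: list = field(default_factory=list)

@dataclass
class DataProductManifest:
    """Aggregates the essential attributes of a data product.
    Field names are hypothetical, for illustration only."""
    product_name: str
    owner: str
    entities: list
    slis: dict                         # Service Level Indicators

    def entity_names(self):
        return [e.name for e in self.entities]

manifest = DataProductManifest(
    product_name="sales_insights",
    owner="data-platform-team",
    entities=[
        Entity("orders", {"order_id": "NUMBER", "amount": "FLOAT"}),
        Entity("customers", {"customer_id": "NUMBER"}, relates_to=["orders"]),
    ],
    slis={"freshness_minutes": 60, "completeness_pct": 99.5},
)
```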
Build data products 10x faster
Improve your developer experience with a one-click cloud-hosted IDE with full support for
- Code Development (SQL, Scala, Java, Python, Snowpark, Streamlit, dbt, etc.)
- Code Validation
- Testing and review
- Curated libraries and resolved version conflicts
Start immediately, even for the most complex use cases, with an environment similar to the runtime environment.
Enterprise Gitflow development experience
We make the developer experience simple and easy with 100% standard Gitflow processes. Use it standalone, or fully mirror our backend Git repository with your enterprise source code repository (GitHub, Bitbucket, GitLab, etc.)
Full branch-and-merge Git flows for dev, test, prod, and feature branches.
Maximise agility with concurrent data product development
Any number of developers can work independently in their own safe sandboxes without stepping on each other's toes, massively increasing developer efficiency and the speed at which new data products and enhancements are created.
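One common way to achieve this isolation is to derive a dedicated database per feature branch. The sketch below shows the idea with a hypothetical naming convention; it is not the platform's actual scheme.

```python
import re

def sandbox_database(base_db: str, branch: str) -> str:
    """Derive an isolated per-branch database name.
    The naming convention here is illustrative only."""
    # Normalize the branch name into a valid Snowflake identifier suffix.
    suffix = re.sub(r"[^A-Za-z0-9]+", "_", branch).strip("_").upper()
    return f"{base_db}_{suffix}"

# Each developer's feature branch maps to its own database copy,
# so concurrent work never collides.
db = sandbox_database("ANALYTICS", "feature/add-churn-model")
```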
Automated regression testing for EVERY deployment!
Run automated regression testing to assure the quality of the data flowing through every point of the pipeline.
Add metadata reporting to every data pipeline to monitor data quality KPIs over time.
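As a minimal sketch of what such a check might look like, the snippet below computes simple quality KPIs for a batch and fails the run if row counts regress beyond a tolerance. The metric names and thresholds are assumptions for illustration, not DataOps.live's built-in tests.

```python
def quality_kpis(rows):
    """Compute simple data-quality KPIs for a batch of records.
    Metric names are illustrative."""
    total = len(rows)
    non_null = sum(1 for r in rows if r.get("customer_id") is not None)
    return {
        "row_count": total,
        "completeness_pct": 100.0 * non_null / total if total else 0.0,
    }

def regression_check(kpis, baseline, tolerance_pct=5.0):
    """Flag a failure if row count drops more than tolerance vs. baseline."""
    drop = 100.0 * (baseline["row_count"] - kpis["row_count"]) / baseline["row_count"]
    return drop <= tolerance_pct

batch = [{"customer_id": 1}, {"customer_id": 2}, {"customer_id": None}]
kpis = quality_kpis(batch)
ok = regression_check(kpis, baseline={"row_count": 3})
```

Reporting these KPIs on every run is what makes trends visible over time.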
Build and rebuild your DEV, TEST & PROD environments!
Snowflake has become the first fully programmable Data Cloud allowing companies to build code and configuration templates external to Snowflake and then “run” that code on the Snowflake Data Cloud.
Manage the build and deploy of your data applications and data products on Snowflake with the same agility and governance you have with software applications.
Manage all Snowflake objects as code and configuration!
Beyond tables and views, you can also lifecycle-manage ALL 30+ additional Snowflake object types using code and configuration.
Automatically add new data warehouses with a few lines of code.
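In the platform this is driven by declarative configuration; the Python sketch below only illustrates the underlying idea of rendering Snowflake DDL from a small config. The option names follow standard Snowflake warehouse parameters, but the function itself is a hypothetical example.

```python
def render_warehouse_ddl(config: dict) -> str:
    """Render a CREATE WAREHOUSE statement from declarative config.
    Option keys mirror Snowflake warehouse parameters; values are
    emitted as given, so quote string literals yourself."""
    name = config["name"]
    opts = " ".join(
        f"{key.upper()} = {value}"
        for key, value in config.items() if key != "name"
    )
    return f"CREATE WAREHOUSE IF NOT EXISTS {name} {opts};"

ddl = render_warehouse_ddl({
    "name": "REPORTING_WH",
    "warehouse_size": "'XSMALL'",
    "auto_suspend": 60,
    "auto_resume": "TRUE",
})
```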
Something always goes wrong!
It's not IF but WHEN. A new feature breaks, bad data arrives, you name it.
Our declarative approach to building Snowflake environments allows you to roll Snowflake data and infrastructure back if there is a major failure.
React to failures with speed – roll back changes immediately without impacting the data – or recover from complete failures and rebuild in a fresh Snowflake tenant in minutes or hours instead of days, weeks, or months.
Model and transform your data with ease.
The “T” in ELT is a critical element of every data pipeline: it models and transforms your raw data into a form you can actually work with.
Just use our enhanced dbt-based modeling engine with advanced security and backward compatibility with all your existing dbt models.
Embed governance inline with your modeling
Drive advanced Snowflake Governance with built-in enhanced support for Grants, Row Access Policies, Dynamic Masking Policies, TAGs, and so much more.
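To illustrate the kind of governance objects involved, here is a hedged sketch that renders a Snowflake dynamic masking policy from a small config. The policy, role, and function names are hypothetical; real deployments would declare these in the platform's configuration rather than hand-rolled Python.

```python
def render_masking_policy(policy: dict) -> str:
    """Render a Snowflake CREATE MASKING POLICY statement from config.
    Policy and role names here are hypothetical examples."""
    allowed = ", ".join(f"'{r}'" for r in policy["allowed_roles"])
    return (
        f"CREATE MASKING POLICY {policy['name']} AS (val STRING) "
        f"RETURNS STRING -> CASE WHEN CURRENT_ROLE() IN ({allowed}) "
        f"THEN val ELSE '***MASKED***' END;"
    )

sql = render_masking_policy({
    "name": "mask_email",
    "allowed_roles": ["PII_READER"],
})
```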
Model | TEST | Model | TEST
Create advanced data transformation flows and interleave transformation jobs, testing jobs, and other data processing jobs in the same pipeline.
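The interleaving pattern can be sketched as a simple runner that halts at the first failing test, so bad data never reaches the next model stage. This is a minimal illustration of the concept, not the platform's actual pipeline engine.

```python
def run_pipeline(steps):
    """Run interleaved model and test steps in order; stop at the
    first failing test so bad data never flows downstream."""
    completed = []
    for kind, fn in steps:
        if kind == "test" and not fn():
            return completed, False     # halt pipeline on test failure
        if kind == "model":
            fn()
        completed.append(kind)
    return completed, True

state = {"rows": 0}
steps = [
    ("model", lambda: state.update(rows=100)),   # transform job
    ("test",  lambda: state["rows"] > 0),        # quality gate
    ("model", lambda: state.update(rows=state["rows"] * 2)),
    ("test",  lambda: state["rows"] == 200),
]
completed, ok = run_pipeline(steps)
```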
Data Vault 2.0
Automate your data vault 2.0 modeling with embedded support for dbt-vault macros.
Extend this even further with our native support for VaultSpeed.
Orchestrate every part of your data pipelines
Leverage the power of advanced Orchestration capabilities for ALL your data integration and data movement platforms with enhanced connectors for your most popular tools.
Enterprise pipeline scheduling capabilities
Employ the power of our advanced scheduling engine and operational management capabilities to manage dependencies across otherwise disconnected systems.
Complex DAG workflows
Build complex DAG routing flows and logic to embed the rules of what runs, when it runs, what it depends on, and what depends on it.
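The dependency rules above amount to a topological ordering over jobs. Here is a minimal sketch using Python's standard library; the job names are illustrative, and real DataOps.live pipelines are declared in configuration rather than code.

```python
# graphlib is in the standard library from Python 3.9.
from graphlib import TopologicalSorter

# Each job maps to the set of jobs it depends on (names are examples).
dag = {
    "ingest": set(),
    "transform": {"ingest"},
    "test_transform": {"transform"},
    "publish": {"test_transform"},
    "notify": {"publish"},
}

# static_order() yields jobs in an order that respects every dependency.
run_order = list(TopologicalSorter(dag).static_order())
```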
Observability and automated testing are crucial concepts from the DevOps world, delivering trust and reliability to the stakeholders of applications built that way. The data world has long lacked these capabilities.
Our vision is to change the way enterprises make decisions about their data by providing a unified, harmonized view of relevant metadata.
We empower every business owner with complete control and understanding of all elements of their data products. And we make observability part of every data engineer’s everyday toolkit.
Advanced alerting and integration
Rich webhooks for alerting and automation allow advanced integration with hundreds of other tools and platforms. Alert engineers, create tickets, and post chat messages.
Two-way integration also allows DataOps to be fully controlled from other platforms via standard REST APIs.
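A webhook alert is ultimately just an HTTP POST with a JSON body. The sketch below shows the shape of such an integration using only the standard library; the payload fields and endpoint are hypothetical, not a documented DataOps.live schema.

```python
import json
from urllib import request

def build_alert_payload(pipeline: str, status: str, message: str) -> dict:
    """Build a webhook alert payload (field names are illustrative)."""
    return {"pipeline": pipeline, "status": status, "message": message}

def post_alert(url: str, payload: dict):
    """POST the alert as JSON to a webhook endpoint, e.g. a chat tool
    or ticketing system. (Network call; not executed in this sketch.)"""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

payload = build_alert_payload("nightly_load", "failed", "row count dropped 40%")
```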
Observability for Data Product Owners
- Assess the cost performance of Data Products
- Monitor usage and quality of Data Products
- Impact assessment and audit reporting on Data Products
- Manage data teams and data domain budgets
- Evaluate the complexity of data initiatives
Observability for Data Product Developers
- Monitor the health of data pipelines
- Monitor and optimize infrastructure costs
- Investigate the blast radius of data issues
- Access the breakdown of data pipelines
Observability for Data Engineers
- Get access to rich metadata in your Development Environment
- Investigate issues and errors
- Increase code re-usability
- Build better Data Products faster
- Compare logs between job runs
“It's all about the architecture!!” These were the first words we ever heard about Snowflake from Bob Muglia in London in 2017. And he was right – IT IS all about the architecture – and the same is true of DataOps.
We separate cloud and on-premises. Everything in the cloud is Kubernetes… everything on the client is a simple Docker image that runs behind the client’s firewall. Zero data leakage to the cloud.
Leverage our cloud/on-premises delivery model for maximum data security.
Inheriting policies and governance
Enterprise inheritance structures enable governance and policy enforcement while preserving local autonomy and agility.
Grounded in pure #DEVOPS (#TRUEDATAOPS)
Grounded in core DevOps capabilities with only the necessary changes added to support the nuances of building, testing and deploying data platforms with terabytes of data and more.
Register for all the latest thinking at www.truedataops.org.