DataOps | Jan 4, 2021 | 2 min read

PART 1: The Challenges of Repeatable and Idempotent Schema Management: Introduction

This is the first in a series of blog posts discussing the automation of lifecycle events (e.g. creation, alteration, deletion) for database objects, which is critical to achieving repeatable DataOps processes. Rather than the database itself being the source of truth for data structures and models, the code in the central repository should define them.

However, applying changes (creations, alterations, deletions) to database objects requires SQL, which is an imperative language. This means that a sequence of operations must be executed in a specific order, starting from a known database state, in order to reach the target configuration.


In the diagram above, the target is State N, starting from State 0. Operations A, B, and C are applied in sequence, passing through States 1 and 2 in order to reach the target state.

Take the following example:
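(The original snippet is not preserved here; a representative Snowflake-style DDL sequence, with illustrative database, schema, and table names, might look like this.)

```sql
-- Imperative DDL: each statement assumes its object does not already exist.
CREATE DATABASE analytics_db;
CREATE SCHEMA analytics_db.staging;
CREATE TABLE analytics_db.staging.customers (
    customer_id   INTEGER,
    customer_name VARCHAR
);
```

Running this a second time fails at the very first statement, because `analytics_db` already exists.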


The above code expects the starting state to be a certain way, i.e. no database, schema, or table already present. If, for example, the database already existed when the code was run, an error would occur.

In general, imperative procedures suffer from a number of limitations and drawbacks, including:

  • The starting point must be well defined and known in advance, otherwise changes may fail (e.g. if an object to be altered does not exist).
  • Some changes must be executed in sequence (e.g. applying a grant to a table after it has been created), leading to complex serial/parallel branching processes (DAGs).
  • A failure part-way through an imperative process can leave the database in an unknown state.
  • The process author will need to be familiar with the low-level database commands and operations, including limiting factors such as execution times and potential race conditions.
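The sequencing constraint in particular forces strict orderings between otherwise independent statements. For instance (object and role names here are illustrative), a grant can only be issued once the table it targets exists:

```sql
-- This ordering is mandatory: the GRANT fails if the table does not yet exist.
CREATE TABLE analytics_db.staging.customers (
    customer_id   INTEGER,
    customer_name VARCHAR
);
GRANT SELECT ON TABLE analytics_db.staging.customers TO ROLE reporting_role;
```

Multiply this across many tables, views, and roles, and the result is the complex serial/parallel DAG described above.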

Ideally, it would be possible to apply changes to a database in an idempotent way. A sequence of operations is still applied, but it is insensitive to the initial state and can therefore be run one or more times with the same successful effect. With idempotent code, a DataOps process would not need to know the current state of the database, worry about when the process was last executed (or what its result was), or care whether external processes have updated the database in the meantime: the result should always be the same.
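In Snowflake, much of this can be achieved with qualifiers such as `IF NOT EXISTS`; a sketch of an idempotent rewrite of the earlier imperative sequence (again with illustrative names) would be:

```sql
-- Idempotent DDL: safe to run any number of times, from any starting state.
CREATE DATABASE IF NOT EXISTS analytics_db;
CREATE SCHEMA IF NOT EXISTS analytics_db.staging;
CREATE TABLE IF NOT EXISTS analytics_db.staging.customers (
    customer_id   INTEGER,
    customer_name VARCHAR
);
```

Note that `CREATE OR REPLACE` is also idempotent in effect but destructive: it drops and recreates the object, discarding existing data, so the two qualifiers are not interchangeable.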


In the diagram above, an idempotent approach can move the database from State X, where X is an unknown state that could be State 0, 1, or 2, to the target State N.

The next two blog posts will examine different DDL operations used in SQL-based relational databases for their ability to be executed idempotently, particularly in the context of common DataOps use cases, highlighting key exceptions and methods for handling them. Snowflake has been chosen as the example platform because it is a popular cloud-based database commonly used in DataOps applications.
