Following on from IMPERATIVE VS DECLARATIVE FOR DATA, in this blog post we will look at different ways the Imperative Approach can be implemented, and give an overview of how a basic Declarative Approach could work.
Posts about Schema Management:
Let's now consider this in the context of Data and Databases. The most typical example of changing the state of a database is creating a table. We would all initially jump to something like:
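A minimal sketch of that first instinct (the table and column names here are illustrative, not from the original post):

```sql
-- The obvious imperative starting point: create the table directly.
-- Runs fine once, but fails with "object already exists" on a second run.
CREATE TABLE customers (
    id         NUMBER        NOT NULL,
    name       VARCHAR(255),
    created_at TIMESTAMP_NTZ
);
```

The statement assumes a known initial state (the table does not yet exist), which is exactly the property that makes it non-idempotent.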
Thanks to everyone who attended the Technical Masterclass on CI/CD and DataOps for Snowflake two weeks ago, and to everyone who has watched the recording since.
This series of blog posts builds on our previous set on The Challenges of Repeatable and Idempotent Schema Management:
Over the previous 2 blog posts, we have seen that managing the lifecycle of database objects in an idempotent manner is impacted by the imperative nature of most SQL statements, which require a known initial state for changes to be applied repeatably.
It may appear that most of this should be possible with native SQL statements, and indeed some DDL operations in Snowflake are naturally idempotent; others have side effects on data and object state, and some are not idempotent at all. Let's look at some of the ways SQL tries to help us here and the problems that remain.
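As an illustration (the table name is hypothetical), compare three Snowflake variants of table creation and how each behaves on repeated runs:

```sql
-- Not idempotent: errors on the second run once the table exists.
CREATE TABLE customers (id NUMBER);

-- Idempotent, but a no-op when the table already exists, so any
-- column changes in the statement are silently ignored.
CREATE TABLE IF NOT EXISTS customers (id NUMBER);

-- Idempotent in effect, but destructive: the existing table is
-- dropped and recreated, and its data is lost.
CREATE OR REPLACE TABLE customers (id NUMBER);
```

Each variant is repeatable in some narrow sense, yet none of them safely converges an existing table with data toward a new desired structure.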
This is the first in a series of blog posts discussing automation of the lifecycle events (e.g. creation, alteration, deletion) of database objects, which is critical to achieving repeatable DataOps processes. Rather than the database itself being the source of truth for the data structures and models, the code within the central repository should define these.