
Written by — Mika Heino, Data Architect
SnowConvert is a code conversion tool designed to facilitate migrations from databases like Oracle, SQL Server, Teradata, and Amazon Redshift to Snowflake's cloud data platform. It claims to automate up to 96% of code and object conversion, but how does it actually work?
Snowflake released SnowConvert, a tool that helps convert code from other popular data warehouse products such as Oracle, SQL Server, Teradata and Redshift. SnowConvert is not a new product, but it has now been released to the public, and anybody can download the tool and experiment with it (like I did).

In this text I'll go through what the tool does, what it does not do, and test it out using my own SQL Server installation.
Even though the tool is free, to be able to test it out you need to enroll in the “SnowConvert for Developers On-Demand Training” at training.snowflake.com. The course acts as a tutorial for the tool: you go through the material and pass a questionnaire exam to receive the download link for SnowConvert. At first this sounded like overkill, but having gone through the material, it already answered many of the questions I had in mind, and in the end I didn't mind the approach.
The SnowConvert course can be found at training.snowflake.com
As the tool is meant for database object migration, it is crucial to understand that the current version is limited to a subset of the databases on the market. Today, the following conversions are available in SnowConvert:

- Oracle
- SQL Server
- Teradata
- Amazon Redshift (public preview)
As noted above, the Redshift conversion is in public preview. The Redshift conversion will also eventually differ from the others, as it will offer a data migration capability (a live connection to the source database). As of today, 6.2.2025, that feature is still in private preview. All of these source databases have their limitations when it comes to migration capabilities. The newest addition, Redshift, for example, is currently covered only from a tables and views point of view.
The latest up-to-date list of features can be found here, and when you dig into the official documentation a bit more, it states for Redshift that “...Assessment and translation are available for TABLES and VIEWS, and although SnowConvert can recognize other types of statements, they are not fully supported yet.” This highlights the fact that the tool can do more than is “officially” promised, and you should do a test run yourself to see how much it can migrate out of the box.
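To give a concrete idea of what the tables-and-views support means in practice, below is a hand-written illustration of the kind of rewrite a Redshift table needs (the table itself is made up, and this is not actual SnowConvert output): Redshift's physical design clauses (DISTSTYLE, DISTKEY, SORTKEY) have no direct Snowflake counterpart, so they get dropped.

```sql
-- Redshift source: distribution style and sort key are physical design hints.
CREATE TABLE sales (
    sale_id   INTEGER,
    sale_date DATE,
    amount    DECIMAL(10,2)
)
DISTSTYLE KEY
DISTKEY (sale_id)
SORTKEY (sale_date);

-- Snowflake target: micro-partitioning makes both clauses unnecessary,
-- so they are simply removed (a clustering key could be added later if needed).
CREATE TABLE sales (
    sale_id   INTEGER,
    sale_date DATE,
    amount    DECIMAL(10,2)
);
```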
As advertised, SnowConvert converts objects and code; that is to say, SnowConvert deals with DDL and DML. You provide the code you want to migrate, which means that you need to manually extract the code from the source database or from the ETL tool of your choice. Snowflake and SnowConvert provide instructions and/or code on how to extract the necessary scripts from specific databases in their open GitHub repository. For example, for SQL Server this means detailed instructions on how to generate scripts using SQL Server Management Studio. The GitHub repository also reveals that some support exists for other source databases, such as Databricks, Netezza, BigQuery, and the list goes on.
List from the SnowConvert GitHub
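If you prefer a scripted extraction over the SSMS wizard, the SQL Server system catalog can also be queried directly. Below is a minimal sketch of mine (the schema filter and object-type list are assumptions you should adjust); note that it covers only coded modules, so table DDL still needs to be scripted separately, for example with the Generate Scripts wizard.

```sql
-- Dump the definitions of procedures, views and functions in one schema.
-- Sketch only: adjust the schema name and the type filter to your needs.
SELECT
    s.name AS schema_name,
    o.name AS object_name,
    m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o
    ON o.object_id = m.object_id
JOIN sys.schemas AS s
    ON s.schema_id = o.schema_id
WHERE o.type IN ('P', 'V', 'FN', 'IF', 'TF') -- procs, views, functions
  AND s.name = 'dbo'                         -- assumption: target schema
ORDER BY s.name, o.name;
```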
When it comes to extraction from ETL/ELT tools, SnowConvert does not provide any tooling to extract code. This is understandable due to the variety of tools on the market. In the case of ETL/ELT tools, you need to extract the scripts yourself in your chosen manner. Some tools provide metadata databases that you can crawl; for others this might mean manual click-ops. The method is yours to choose, as long as the input for SnowConvert is a SQL file.
The actual conversion is not just a straightforward search and replace, as SnowConvert understands dependencies between objects and will warn you if it finds missing dependencies. This became visible when I tried converting a simple INSERT INTO clause taken from the web: SnowConvert gave me an FDM (Functional Difference Message) about missing objects. This is understandable, because I did not provide the DDL for the Customers table.
Input:
```sql
INSERT INTO Customers (CustomerName, ContactName, Address, City, PostalCode, Country)
VALUES ('Cardinal', 'Tom B. Erichsen', 'Skagen 21', 'Stavanger', '4006', 'Norway');
```
Output:
```sql
--** SSC-FDM-0007 - MISSING DEPENDENT OBJECT "Customers" **
INSERT INTO Customers (CustomerName, ContactName, Address, City, PostalCode, Country)
VALUES ('Cardinal', 'Tom B. Erichsen', 'Skagen 21', 'Stavanger', '4006', 'Norway');
```
SnowConvert calls this feature code completeness in the report, and I like it very much, as a whole migration project is just a bunch of SQL files with a high possibility of losing a file or two.
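Supplying the missing DDL alongside the INSERT resolves the FDM on the next conversion run. Since the original table definition was never part of my input, the column types below are purely my own guess:

```sql
-- Hypothetical DDL for the missing Customers table;
-- the column types are assumptions, not the real source definition.
CREATE TABLE Customers (
    CustomerName VARCHAR(100),
    ContactName  VARCHAR(100),
    Address      VARCHAR(200),
    City         VARCHAR(50),
    PostalCode   VARCHAR(20),
    Country      VARCHAR(50)
);
```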
SnowConvert (in its current version) does not tap into the source system, deploy the converted code, or migrate the data. This means that SnowConvert solves only a part of the whole migration process, and even this part is very much a manual one: you rinse and repeat the code until you are satisfied with the results, i.e. you have resolved the error codes raised by SnowConvert. This process is underlined by the fact that SnowConvert is a desktop-based tool.

You can run it through a Dockerfile, but you can't, by default, create a pipeline that takes the error messages as input and modifies the original SQL files accordingly until done. This might, though, be solvable with agentic workflows, as we are dealing with text content.
Knowing the capabilities and limitations of the tool, we can now dig into the migration process when using SnowConvert.
As with all migrations, SnowConvert expects you to have a plan before going ahead. You must decide why to migrate, what is essential to migrate, and who does the work. Because SnowConvert is a traditional desktop-based tool, the project management part becomes crucial: the project manager must split the database into pieces, split the migration tasks per developer, and track progress using external tools. This is nothing new, as every migration project has to handle it, but SnowConvert does not take this burden away.
The actual conversion starts by extracting the code from the source database. In my case, I was instructed to generate scripts with the specific settings listed in the instructions (fig 1.). In SQL Server's case, SnowConvert works with any T-SQL files. The end result with default settings is a folder containing all of your objects within a given schema (fig 2.).
Fig 1. Generate Scripts
Fig 2. Output
When you have the files, the next step is to open up SnowConvert and define the project name, source database, location of the original scripts and location of the output scripts (fig 3.).
Fig 3. SnowConvert UI
Depending on the source database, you have a few settings to tweak on how SnowConvert will convert the files (fig 4. and fig 5.). These options vary by source database: Teradata and Oracle have more options, mainly because SQL Server is a simpler database. Currently there are no options for Redshift.
Fig 4.
Fig 5.
After you are satisfied with the settings, you can run the actual conversion, which will give you assessment reports and converted SQL files (fig 6.).
Fig 6.
Once the conversion is done, you are presented with a set of assessment reports detailing the quality of the conversion (video 1.), starting with the code completeness figure. When you drill down into the reports, you are given a detailed breakdown of the converted code.
SnowConvert breaks the results down using the following message levels: EWIs (Errors, Warnings and Issues), FDMs (Functional Difference Messages) and PRFs (Performance Review messages).
Video 1.
SnowConvert also provides detailed CSVs of the errors, which you can use to build a task list for the migration (video 2.). They detail the same converted code using error-level messages, such as SSC-EWI-0030, marking that “The statement below has usages of dynamic SQL”, along with the file where the error was found.
Video 2.
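To illustrate why dynamic SQL gets flagged: the statement text is only assembled at runtime, so a rule-based converter cannot fully validate it. Below is my own hand-written before/after (not actual SnowConvert output) of how the same pattern looks in T-SQL and in Snowflake Scripting.

```sql
-- T-SQL source: the statement is built and executed at runtime.
DECLARE @table_name SYSNAME = N'Customers';
DECLARE @sql NVARCHAR(MAX) = N'SELECT * FROM ' + @table_name;
EXEC sp_executesql @sql;
```

```sql
-- Rough Snowflake Scripting equivalent: EXECUTE IMMEDIATE.
-- A converter can translate the construct, but it cannot verify
-- the SQL string itself, hence a warning like SSC-EWI-0030.
DECLARE
    table_name STRING DEFAULT 'Customers';
BEGIN
    EXECUTE IMMEDIATE 'SELECT * FROM ' || table_name;
END;
```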
The actual converted result is output into the folder that you chose. Here are a few examples (fig 7. and fig 8.) to showcase the capabilities of SnowConvert on more “complex” procedures; simple CREATE TABLE clauses it handles without any issues, so there is no need to showcase those. As you can see, SnowConvert adds comments to the code where you must remedy an issue. Once you have solved the issue, you need to run the conversion again to get updated assessment results. This is the process you rinse and repeat until you are happy with the results.
Fig 7.
Fig 8.
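To make the “complex procedure” point concrete without relying on the screenshots, here is a hand-written example of the kind of rewrite involved. The procedure and table are made up, and the target code is my own sketch of a typical result, not actual SnowConvert output: TOP becomes LIMIT, ISNULL becomes IFNULL, GETDATE() becomes CURRENT_TIMESTAMP(), and the body turns into Snowflake Scripting.

```sql
-- T-SQL source procedure (made-up example).
CREATE PROCEDURE dbo.recent_orders
AS
BEGIN
    SELECT TOP 10
        order_id,
        ISNULL(amount, 0) AS amount,
        GETDATE()         AS checked_at
    FROM dbo.orders
    ORDER BY order_date DESC;
END;
```

```sql
-- My sketch of a Snowflake Scripting equivalent, not actual tool output.
CREATE OR REPLACE PROCEDURE recent_orders()
RETURNS TABLE()
LANGUAGE SQL
AS
DECLARE
    res RESULTSET DEFAULT (
        SELECT
            order_id,
            IFNULL(amount, 0)   AS amount,
            CURRENT_TIMESTAMP() AS checked_at
        FROM orders
        ORDER BY order_date DESC
        LIMIT 10
    );
BEGIN
    RETURN TABLE(res);
END;
```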
SnowConvert is not a tool that you can trust to solve all the issues; rather, it is an addition to your migration toolset. It can solve the majority of the easy issues, giving you time to focus on the remaining 20% of the code, which will eventually take most of your time. Given that the tool is free to use and that the alternative would be the traditional search-and-replace pattern, it works like a charm and eases a lot of the work.
Speaking of search and replace, some might question the value of SnowConvert, as you can nowadays request code conversion from ChatGPT. While that approach also works, the value of SnowConvert comes from consistency and volume: you can migrate several hundred scripts at once and trust that the migration was rule-based. I would use ChatGPT (or a similar tool) to help me migrate the code that SnowConvert can't solve.
Using SnowConvert together with ChatGPT, dbt and a dbt model generator would be an interesting combination for a migration (assuming that you are comfortable with dbt). The reason for choosing dbt is that you can create the models as code and leverage LLMs in the process. As a process, this means that you could migrate the SQL files with SnowConvert and ChatGPT, then use ChatGPT with the existing dbt model generator(s) to create the necessary loads, ending up automating a lot of the migration tasks.