This page provides you with instructions on how to extract data from Microsoft Azure and load it into Panoply. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Microsoft Azure?
Microsoft Azure is a cloud services platform that developers can use to build, deploy, and manage applications. Several databases can run on the Azure platform, including Microsoft Azure SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL.
What is Panoply?
Panoply provides end-to-end data management-as-a-service. It uses machine learning and natural language processing (NLP) to learn, model, and automate standard data management activities from source to analysis. It can import data with no schema, no modeling, and no configuration. Users can quickly spin up an Amazon Redshift instance and run analysis, SQL, and visualization tools just as they would on a Redshift data warehouse they created on their own.
Getting data out of Azure
In most cases, the easiest way to retrieve data from relational databases is by writing SQL queries. Alternatively, you can use SQL Server Management Studio to export data in bulk as delimited text or CSV files, or as SQL scripts that would restore the database if run.
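For example, a simple extraction query against a hypothetical orders table in an Azure SQL Database might look like this (the table and column names are placeholders, not part of any standard schema):

```sql
-- Illustrative extraction query against a hypothetical dbo.orders table.
-- Swap in your own table and column names.
SELECT order_id,
       customer_id,
       order_total,
       created_at,
       updated_at
FROM dbo.orders
WHERE created_at >= '2023-01-01';
```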
Preparing Azure data
If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each field in the data you retrieve, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive it. The schema of your source database, together with Azure's documentation, should tell you what fields each table provides and their corresponding datatypes.
Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the objects may actually be lists. This means you'll likely have to create additional tables to capture the unpredictable cardinality in each record.
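As a sketch of how that might look, suppose each order record arrives with a nested list of line items; one common approach is to split the list into a child table that references its parent (all names here are hypothetical):

```sql
-- Hypothetical parent table for the "flat" part of each record.
CREATE TABLE orders (
    order_id    INTEGER NOT NULL,
    customer_id INTEGER,
    created_at  TIMESTAMP
);

-- Hypothetical child table for the nested list, one row per list element,
-- keyed back to the parent so the original record can be reassembled.
CREATE TABLE order_items (
    order_id INTEGER NOT NULL,  -- points back to orders.order_id
    line_no  INTEGER NOT NULL,
    sku      VARCHAR(64),
    quantity INTEGER
);
```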
Loading data into Panoply
Once you've identified the columns you want to insert, you can use the Redshift CREATE TABLE statement to set up a table to receive all of the data.
To populate that table, you might be tempted to use INSERT statements to add data to your Redshift table row by row. Don't do that; Redshift isn't optimized for inserting data one row at a time. If you have a high volume of data to be inserted, a better approach is to load the data into Amazon S3 and use the COPY command to migrate it into Redshift.
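A minimal sketch of that flow, assuming your extracted files are already staged in an S3 bucket and you have an IAM role that Redshift can assume (the bucket path and role ARN below are placeholders):

```sql
-- Load CSV files staged in S3 into the Redshift table in one bulk operation.
-- The S3 path and IAM role ARN are placeholders; substitute your own.
COPY orders
FROM 's3://your-bucket/azure-export/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/YourRedshiftCopyRole'
CSV
IGNOREHEADER 1
TIMEFORMAT 'auto';
```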
Keeping data from Azure up to date
At this point you've successfully moved data into your data warehouse. But how will you load new or updated data? It's not a good idea to replicate all of your data each time you have updated records. That process would be painfully slow and resource-intensive.
Instead, identify key fields that your script can use to bookmark its progress through the data, so it can pick up where it left off as it looks for updated data. Monotonically increasing fields, such as updated_at or created_at timestamps or auto-incrementing IDs, work best for this. When you've built in this functionality, you can set up your script as a cron job or continuous loop to get new data as it appears in Azure.
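Here's a hedged sketch of such an incremental query, where :last_bookmark stands in for the high-water mark your script saved after its previous run:

```sql
-- Pull only rows changed since the last run. :last_bookmark is a placeholder
-- your script fills in with the saved high-water mark; after loading, store
-- the maximum updated_at from this batch as the new bookmark.
SELECT *
FROM dbo.orders
WHERE updated_at > :last_bookmark
ORDER BY updated_at;
```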
And remember, as with any code, once you write it, you have to maintain it. If Azure sends a field with a datatype your code doesn't recognize, you may have to modify the script. If your users want slightly different information, you definitely will have to.
Other data warehouse options
Panoply is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, Snowflake, or Microsoft Azure Synapse Analytics, which are RDBMSes that use similar SQL syntax. Others choose a data lake, like Amazon S3 or Delta Lake on Databricks. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Snowflake, To Azure SQL Data Warehouse, To S3, and To Delta Lake.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from Microsoft Azure to Panoply automatically. With just a few clicks, Stitch starts extracting your Microsoft Azure data, structuring it in a way that's optimized for analysis, and inserting that data into your Panoply data warehouse.