Use Spark Interims to Troubleshoot and Polish Low-Code Spark Pipelines: Part 2

Authors:
Anya Bida

In Part 1 we learned an easy way to troubleshoot a data pipeline using historical, read-only metadata. Now I want to dig in and polish my individual Spark DataFrames (or RDDs). Here I have temporarily disabled column pruning so we can sample the data output from each DataFrame.
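Outside of Prophecy's UI, a rough plain-PySpark analogue of an interim is simply sampling each intermediate DataFrame as the pipeline is built. The sketch below is only an illustration of what each interim sample represents, not Prophecy's implementation; the orders dataset, its path, and the column names (status, customer_id, amount) are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("interim-sampling-sketch").getOrCreate()

# Hypothetical source: an orders dataset read from parquet
orders = spark.read.parquet("/data/orders")

# Step 1: keep only completed orders
completed = orders.filter(F.col("status") == "COMPLETED")

# Step 2: aggregate revenue per customer
revenue = completed.groupBy("customer_id").agg(
    F.sum("amount").alias("total_amount")
)

# Sample each intermediate DataFrame, analogous to inspecting an interim
completed.show(5, truncate=False)
revenue.show(5, truncate=False)
```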

Fig 1. Each interim shown in the Spark UI (right) corresponds to the data sample in the Prophecy pipeline (left), indicated by the arrow of the matching color.

Let's see how the data pipeline could be improved. Interims show me sample data for each step of my pipeline, so let's iterate.
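For example, if an interim sample reveals a data-quality issue at one step, I can adjust just that step and re-run. The sketch below continues the hypothetical orders example above; the null customer_ids and inconsistent casing are an assumed scenario, not from the original post.

```python
from pyspark.sql import functions as F

# Continuing the hypothetical sketch above: suppose the interim sample for
# `completed` shows null customer_ids and inconsistent casing.
cleaned = (
    completed
    .filter(F.col("customer_id").isNotNull())
    .withColumn("customer_id", F.upper(F.trim(F.col("customer_id"))))
)

# Re-aggregate and sample the interim again to confirm the fix
revenue = cleaned.groupBy("customer_id").agg(
    F.sum("amount").alias("total_amount")
)
revenue.show(5, truncate=False)
```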

Now I understand how my individual DataFrames behave, and I'm happy with my pipeline. As usual, I can review my PySpark code changes and push them to my Git repo.

Interim data sampling makes my troubleshooting easier: I can see the visual flow, compare historical runs (see Part 1 of this blog), and inspect individual DataFrames, all in a low-code interface for Spark. Finally, Spark has a visual IDE!

How can I try Prophecy?

Prophecy is available as a SaaS product: add your Databricks credentials and start using it with Databricks right away. Alternatively, you can take an Enterprise Trial on Prophecy's Databricks account for a couple of weeks to kick the tires with examples. We also support installing Prophecy in your own network (VPC or on-prem) on Kubernetes. Sign up for your 14-day free trial account here.