Announcing Prophecy 3.0: low-code SQL transformations

Discover how Prophecy 3.0 arms all data users with low-code SQL, so they can quickly and easily build scalable data pipelines, resulting in highly impactful data products.

Maciej Szpakowski
Co-Founder, Prophecy
April 26, 2023

We at Prophecy have been talking to data practitioners on business teams for quite some time, and it’s clear that the two main approaches to data transformation today leave many dissatisfied:

  1. Code-based (e.g., using dbt Core™) — Enables modern software practices but is limited to experienced developers and generates code that’s difficult to maintain
  2. Legacy ETL products (e.g., Alteryx) — Empower more users but struggle to scale with data, lack extensibility, and make it challenging to deploy and operationalize pipelines

Both approaches fall short on their promise. We believe data teams deserve a better way to develop and maintain data transformations.

That’s why we’re thrilled to announce our new product, Prophecy 3.0, a release delivering game-changing, low-code SQL capabilities tailored to data users on business teams.

By adding Prophecy low-code SQL to our platform, we’re bridging this gap and combining the best of the two current data transformation approaches. Our unique Visual = Code technology allows both coders and visual users to collaborate on the same data models and quickly build scalable, production-ready pipelines. Engineers can use a SQL editor, with Prophecy visualizing their code for them on an editable canvas. Meanwhile, business users can work in the visual environment, which automatically turns their work into high-quality code based on open-source standards, enabling software engineering best practices.

Prophecy democratizes data transformations within organizations and enables all data users on business teams to be productive — without sacrificing data quality or performance.


Let’s deep-dive into why we’ve built this low-code SQL product and the new possibilities it unlocks for data teams.

Pre-Prophecy 3.0 — Two main approaches for data modeling

1. dbt Core — A build tool for SQL

SQL is the most popular programming language for data transformations. But SQL by itself works only for small, ad-hoc queries that are quick to write and easy to understand. Over time, as we’ve partnered with both small and large organizations, we noticed legacy data transformation projects are often a maze of hundreds of .sql files, each tens of thousands of lines long. And that’s where dbt Core comes in.

dbt Core introduces much-needed, well-thought-through standardization for SQL codebases. With dbt Core, you can structure your SQL code using models, which define views or tables and can be incrementally written to.
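For readers new to dbt Core, here is a minimal sketch of what such a model looks like. All table, column, and source names below are illustrative, and the example assumes a source named shop is declared in the project’s YAML:

```sql
-- models/stg_orders.sql (illustrative example, not from a real project)
-- dbt materializes this SELECT as a table; the incremental config tells dbt
-- to append only new rows on subsequent runs instead of rebuilding everything.
{{ config(materialized='incremental', unique_key='order_id') }}

select
    order_id,
    customer_id,
    order_total,
    ordered_at
from {{ source('shop', 'raw_orders') }}

{% if is_incremental() %}
  -- on incremental runs, only pick up rows newer than what is already loaded
  where ordered_at > (select max(ordered_at) from {{ this }})
{% endif %}
```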

This approach has helped many enterprises structure their code, much like how Maven allowed developers to standardize their Java codebase into a format we’re all familiar with today.

dbt Core allows users who thrive with CLI tools and love the customizability of Python/Jinja templating to become productive analytical engineers. However, also similar to Maven — with which most engineers have a love/hate relationship — it’s hard to make dbt Core work for more complex projects.
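As a quick illustration of that Jinja customizability, this is the kind of macro dbt projects commonly define; the macro below is a hypothetical example rather than a snippet from any particular project:

```sql
-- macros/cents_to_dollars.sql (hypothetical dbt Jinja macro)
{% macro cents_to_dollars(column_name, precision=2) %}
    round({{ column_name }} / 100.0, {{ precision }})
{% endmacro %}

-- example usage inside a model:
-- select {{ cents_to_dollars('amount_cents') }} as amount_dollars
-- from {{ ref('stg_payments') }}
```

Powerful, but every macro like this is one more layer of indirection the next reader has to unravel.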

dbt Core also doesn’t solve the problems of business data users or data scientists who don’t want to open a terminal window whenever they need to extract insights from their data. And they can’t keep going back to the one engineer who wrote the code every time they have a question.

Organizations needed to find an even better way to move faster.

2. GUI-based tools — The visual approach

During the early 90s, users faced mounting frustration with manually writing code for data transformations. Therefore, in addition to a variety of code-based frameworks and build tools, enterprises sought to develop alternative solutions to enable less-technical users. This is when GUI-based tools were born.

These GUI-based tools have secured a significant following. Today, every single enterprise company uses some sort of vendor-based or custom-built GUI product, with tools like Alteryx used by thousands of enterprises.

But organizations are still looking for better alternatives.

That’s because, while GUI-based tools lowered the barrier to entry for data practitioners to build data pipelines, these legacy products still haven’t entirely delivered on their central promise. GUI-based tools:

  1. Work well for data exploration but fail when trying to productionize data pipelines, often requiring an engineering team to rewrite pipelines as code;
  2. Function as a black box, making it very difficult to debug and optimize pipelines, which increases cost;
  3. Do not scale as their underlying engines are not distributed and are not built for the cloud; and
  4. Provide only limited out-of-the-box functionalities and are difficult to extend.

Prophecy 3.0: A better approach with low-code

With the release of Prophecy 3.0, we continue to double down on our goal of combining the best of both worlds: high-quality code based on software engineering best practices with a complete, easy-to-use visual environment.

Visual = Code for easy collaboration

Visual = Code allows both SQL coders and business users to easily collaborate on the same project. Business users can visually create their data models, with all their work automatically turning into high-quality code on Git. Engineers can use SQL and advanced macros through a code-based editor; Prophecy parses their code, visualizes it on an editable canvas, and ensures both views remain in sync at all times.
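Because Prophecy projects are stored in the open-source dbt Core format (more on that below), the code behind a visual pipeline is ordinary SQL models chained together with ref(). The sketch below shows that kind of dependency; it is purely illustrative rather than actual Prophecy output, and every model and column name is invented:

```sql
-- models/customer_revenue.sql (illustrative; all names are invented)
-- A downstream step in a pipeline is just another model that selects from
-- upstream models via ref(), which is how the project captures lineage in code.
select
    c.customer_id,
    c.customer_name,
    sum(o.order_total) as lifetime_revenue
from {{ ref('stg_customers') }} as c
join {{ ref('stg_orders') }} as o
    on o.customer_id = c.customer_id
group by c.customer_id, c.customer_name
```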

Interactive development

At any step of the process, data users can interactively run their models to make sure they're going in the right direction. Models can also be tested to ensure robustness over time. Power users can extend the visual canvas through custom gems, making even the most complex logic easily accessible in the visual view.
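To picture the testing side of this in dbt Core terms: a singular test is simply a SQL file under the project’s tests/ directory that selects rows violating an expectation, and dbt test fails if any rows come back. The check below is an illustrative sketch with assumed model and column names:

```sql
-- tests/assert_no_negative_order_totals.sql (illustrative singular dbt test)
-- dbt test fails the run if this query returns any rows.
select
    order_id,
    order_total
from {{ ref('stg_orders') }}
where order_total < 0
```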

Deployment from code on Git

Projects built through Prophecy are stored in the open-source dbt Core format as repositories on Git, which allows data teams to follow best software engineering practices like CI/CD.

Maintenance is simple since Prophecy gems turn into code on Git that’s always up-to-date with the latest version of the warehouse or lakehouse used. And, to ensure the best performance at all times, Prophecy is smart about which code construct (subquery or CTE) to use.
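To make the subquery-versus-CTE distinction concrete, here are two equivalent ways of writing the same query in plain SQL. The orders table is hypothetical, and which form performs better depends on the target warehouse’s optimizer:

```sql
-- The same filter expressed with a subquery ...
select customer_id, order_total
from (
    select customer_id, order_total, ordered_at
    from orders
) o
where o.ordered_at >= date '2023-01-01';

-- ... and with a common table expression (CTE)
with recent_orders as (
    select customer_id, order_total, ordered_at
    from orders
)
select customer_id, order_total
from recent_orders
where ordered_at >= date '2023-01-01';
```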

Sharing of data products

With Prophecy 3.0, sharing data products has never been easier. Data users can import an existing dbt Core project or start from scratch. They can publish those projects to other teams or subscribe to existing ones. Published projects contain models, which allow for easy governance, as well as functions and gems, which allow for code reuse.

Summary

Today we announced Prophecy 3.0, a new low-code SQL product that offers a better way to develop data models. It features a Visual = Code interface that serves both no-code users and engineering data users on business teams.

In subsequent blog posts, we will deep-dive into all the features of low-code SQL, walking through the complete journey from development, through testing, to productionization of data models.

Register for our upcoming Prophecy 3.0 webinar

Interested in learning more about how Prophecy 3.0’s low-code SQL capabilities can help data users rapidly create high-quality, 100% open-source SQL code based on engineering best practices?

Join us at our upcoming webinar (Thursday, May 18th at 9am PT | 12pm ET), where industry thought leaders Sanjeev Mohan, principal at SanjMo and former Gartner research VP, and Prophecy co-founders Raj Bains and Maciej Szpakowski will discuss:

  • The common pitfalls to operationalizing data for analytics and machine learning
  • Ways to evaluate tools that combine the power of visual ETL with DataOps and governance
  • How a low-code approach to SQL can empower business data users to be self-sufficient and significantly more productive with their data

Space is limited, so register today!

What’s next?

Prophecy 3.0’s low-code SQL enables data teams across the business to collaboratively build, test, and deploy data models. But those teams still run into many other challenges in their daily work. Many organizations, for instance, use Apache Airflow (which reaches beyond the data warehouse to orchestrate the whole data stack, from ingestion through transformation to reporting) as their primary scheduling interface, yet business teams find Airflow very difficult to use and maintain. Preserving high data quality in the face of frequent human errors, system failures, and upstream changes is another challenge.

That’s why Prophecy is already beta-testing smart, low-code solutions to address the issues our customers have with Airflow and data quality. If you’d like to hear more, contact us — we’d love to tell you about it.

Ready to give Prophecy a try?

You can create a free account and get full access to all features for 21 days. No credit card needed. Want more of a guided experience? Request a demo and we’ll walk you through how Prophecy can empower your entire data team with low-code ETL today.
