PROPHECY FOR DATA ENGINEERS

Powerhouse data engineering

Address even the most complex, demanding data engineering tasks found in large enterprises while efficiently managing tens of thousands of production pipelines.


Low-code for all

Empower every data user in your business to transform data like expert data engineers.

Complete solution

Benefit from Spark streaming/batch pipelines, SQL transformations, orchestration, management and more.

Trusted data

Leverage visual development along with best practices and quality control for all data users.

Open and extensible

Turn data pipelines into open-source Spark or SQL code, then add new visual components to standardize your operations.

The future of data engineering

We fused the best of visual- and code-based approaches to deliver an unparalleled data engineering solution.

Elevate your data engineering

Create pipelines on any cloud platform using visual operators (gems), which Prophecy turns into powerful code — a unique approach that turbocharges productivity and makes everyone a top data engineer.


Low-code Spark pipelines

Easily build batch or streaming pipelines by connecting to your Apache Spark cluster and integrating sources, targets and transformations step-by-step.
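As a rough illustration of the approach, the code behind a simple batch pipeline resembles standard PySpark. The paths, table and column names below are hypothetical; this is a minimal sketch of a source, transformation and target step, not Prophecy's actual generated output:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch of a batch pipeline: source -> transformations -> target.
# All paths and column names are hypothetical examples.
spark = SparkSession.builder.appName("orders_daily").getOrCreate()

# Source: read raw orders
orders = spark.read.parquet("/data/raw/orders")

# Transformations: filter, derive a column, aggregate
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Target: write the curated table
daily_revenue.write.mode("overwrite").parquet("/data/curated/daily_revenue")
```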


Low-code orchestration

Link to your Apache Airflow cluster to seamlessly orchestrate multiple Spark jobs, run them interactively and schedule production deployment.
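Conceptually, orchestrating a sequence of Spark jobs in Airflow looks like the sketch below. It assumes Airflow 2.x with the Spark provider installed; the DAG name, application paths and schedule are hypothetical:

```python
from datetime import datetime
from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

# Minimal sketch of scheduling two dependent Spark jobs; IDs and paths are hypothetical.
with DAG(
    dag_id="orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = SparkSubmitOperator(
        task_id="ingest_orders",
        application="/pipelines/ingest_orders.py",
        conn_id="spark_default",
    )
    aggregate = SparkSubmitOperator(
        task_id="aggregate_revenue",
        application="/pipelines/aggregate_revenue.py",
        conn_id="spark_default",
    )

    # Run the aggregation only after ingestion succeeds
    ingest >> aggregate
```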

DataOps for trusted data

Move changes to production faster, with more confidence, thanks to versioning, testing and data monitoring.

Git, tests, CI, CD

Transform all your visual assets into high-quality, 100% open-source code with ease.

Integrate with your preferred Git provider, such as GitHub, GitLab or Bitbucket, to develop seamlessly in a Git branch while testing procedures keep every commit reliable and efficient.
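Because pipelines live as plain code in Git, they can be covered by ordinary unit tests in CI. The sketch below is a generic pytest-style example, not Prophecy's own test format; the transformation and data are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

def keep_completed(df):
    # Hypothetical transformation under test: keep only completed orders
    return df.filter(F.col("status") == "COMPLETED")

def test_keep_completed():
    spark = SparkSession.builder.master("local[1]").appName("unit-test").getOrCreate()
    df = spark.createDataFrame(
        [("o1", "COMPLETED"), ("o2", "CANCELLED")],
        ["order_id", "status"],
    )
    result = keep_completed(df).collect()
    assert [row.order_id for row in result] == ["o1"]
```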

Data quality and observability

Build and schedule dataset column expectations. Use our automated, lightning-fast, high-quality machine-learning models to detect data anomalies and receive alerts when data patterns change.
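A column expectation is, at its simplest, a rule evaluated against a dataset on a schedule. The sketch below shows the idea in plain PySpark with hypothetical rules and paths; it does not cover the ML-based anomaly detection described above:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("expectations").getOrCreate()
revenue = spark.read.parquet("/data/curated/daily_revenue")  # hypothetical dataset

# Two simple column expectations: no null dates, no negative revenue
violations = {
    "order_date_not_null": revenue.filter(F.col("order_date").isNull()).count(),
    "revenue_non_negative": revenue.filter(F.col("revenue") < 0).count(),
}

failed = {name: count for name, count in violations.items() if count > 0}
if failed:
    # In practice this would raise an alert; here we simply fail loudly
    raise ValueError(f"Data quality expectations failed: {failed}")
```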

Reuse with standard templates

Establish unified standards and enhance your data engineering workflow by using high-quality, reusable building blocks instead of ad hoc code.

Customizable templates

Create and reuse data pipeline templates that execute recurrently and share common business logic.

Prophecy offers two levels of templating:

  • Data pipelines with multiple configuration sets
  • Reusable subgraphs with separate configurations for each data pipeline
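In code terms, a configurable pipeline is one template driven by different configuration sets. The sketch below illustrates the pattern with hypothetical paths and columns; it is not Prophecy's configuration format:

```python
from pyspark.sql import SparkSession, functions as F

def run_pipeline(spark, config):
    """One pipeline template, driven entirely by a configuration set."""
    df = spark.read.parquet(config["source_path"])
    result = (
        df.filter(F.col("status") == config["status_value"])
          .groupBy(config["group_by"])
          .agg(F.sum(config["amount_column"]).alias("total"))
    )
    result.write.mode("overwrite").parquet(config["target_path"])

# Two configuration sets reusing the same template (hypothetical values)
EU_CONFIG = {"source_path": "/data/raw/orders_eu", "status_value": "COMPLETED",
             "group_by": "country", "amount_column": "amount",
             "target_path": "/data/curated/revenue_eu"}
US_CONFIG = {**EU_CONFIG, "source_path": "/data/raw/orders_us",
             "target_path": "/data/curated/revenue_us"}

if __name__ == "__main__":
    spark = SparkSession.builder.appName("templated_pipeline").getOrCreate()
    for cfg in (EU_CONFIG, US_CONFIG):
        run_pipeline(spark, cfg)
```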

Framework builder

Prophecy offers built-in gems for inputs, outputs and transformations. You can also design your own gems to:

  • Connect to proprietary systems
  • Standardize recurring business logic or operational tasks

Frameworks, which are delivered as libraries, are Git projects composed of gems, subgraphs and user-defined functions.
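The spirit of a custom gem is a reusable building block shared as library code rather than rewritten ad hoc in each pipeline. The sketch below shows that idea as a plain PySpark function with a hypothetical PII-masking rule; it is not the gem-builder API itself:

```python
from pyspark.sql import DataFrame, functions as F

def mask_pii(df: DataFrame, columns: list[str]) -> DataFrame:
    """Reusable building block: hash the given PII columns.

    Packaged in a shared library, the same logic can be applied
    consistently across every pipeline instead of being re-implemented ad hoc.
    """
    for column in columns:
        df = df.withColumn(column, F.sha2(F.col(column).cast("string"), 256))
    return df

# Usage inside any pipeline:
# customers = mask_pii(customers, ["email", "phone"])
```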

EBOOK

Low-code Apache Spark and Delta Lake

A guide to make data lakehouses even easier.
Get the eBook

ON-DEMAND WEBINAR

Low-code data transformations

10x productivity on cloud data platforms.
Watch the webinar