Delivering better patient outcomes with data-driven insights

Prophecy modernizes ETL, accelerating
healthcare breakthroughs

Increase in data engineering productivity
Reduction in pipeline development costs
Company Size: 150K+ employees
Industry: Health sciences
Teams: Data engineering & supply chain
Platform: Databricks Lakehouse


Strained data engineering resources limit productivity
Infrastructure complexity slows data engineering efficiency

A global leader in life sciences was experiencing poor performance from its data operations due to the complexity of its IT infrastructure. With over 30 ERPs and data sources, each configured differently to support the varied functions they served, the company's architecture was fragmented. This slowed data engineering productivity and delayed the successful operation, management, and control of its various supply chains.

Limited technical expertise blocks innovation

While access to raw data was not an issue, building the data pipelines that would unify, transform, and make sense of that data was too technical a task for many on the data engineering team. Intimate knowledge of the data was limited, so the organization needed tooling that could democratize pipeline development and effectively enable team members regardless of their level of expertise or familiarity with the data itself.


Modernizing ETL with a low-code approach
Unlocking supply chain efficiency with data

The Prophecy data engineering platform was selected for its ability to integrate seamlessly with the company's Databricks Lakehouse implementation and its complex IT infrastructure and processes, including the more than 30 ERPs and domains as well as the existing code management and CI/CD workflows. On the development side, Prophecy’s low-code, visual design and build capabilities enable all contributors on the data team to create flexible, highly scalable, and reusable data assets that support supply chain efficiency, from inventory management and sales orders to manufacturing operations and more.

More productive data teams, better ETL pipelines

The organization has seen massive productivity improvements in its data engineering capabilities: a single segment of the data team can now perform as well as three teams did before the implementation of Prophecy. In addition, the quality of the data pipelines being developed has been outstanding, with the code generated by Prophecy on par with code written by the company's most senior data engineers.

“Not only was Prophecy able to support our complex IT architecture and enable all members of our data team to quickly and easily create the data pipelines needed to deliver business insights, but the level of support, access to technical resources, and overall feeling of being a trusted partner have given us total confidence in Prophecy.”

Senior Manager, Data Engineering, Global Pharmaceutical Company


Higher data engineering productivity

Prophecy’s visual UI and low-code development workflows level the playing field for data teams, enabling effective data pipeline development regardless of technical expertise and improving team productivity.

Improved operational efficiency

Data engineering has seen a 66% increase in operational efficiency, allowing team members to support other areas of the business and to educate, coach, and mentor other business units on the use of the Prophecy platform.

Faster time to insights at lower cost

The ability to create more scalable, reliable, and flexible data pipelines with Prophecy has greatly reduced the costs of delivering insights, with the goal of a 50% reduction in overall costs for data engineering projects.


Get the eBook: Low-code Apache Spark™ and Delta Lake, a guide to making the data lakehouse even easier.

Watch the on-demand webinar: Low-code data transformations, 10x productivity on cloud data platforms.

Ready to try out Prophecy for free?

Generates Apache Spark code (Python or Scala)
No vendor lock-in
Seamless integration with Databricks
Code-based Git, testing, and CI/CD
Available on AWS, Azure, and GCP