RETRIEVAL-AUGMENTED GENERATION

For private enterprise data

Which generative AI applications will YOU build today?

Private data: Now the biggest problem for gen AI apps

Right now, your main obstacle in building gen AI apps is the enterprise data itself, not the models. That's because:

You don't have to build your own models anymore.

Foundation gen AI models are improving rapidly, and off-the-shelf models deliver increasingly accurate results.

You don't have to specialize foundation models with your data anymore.

As model context windows grow, you can simply embed relevant knowledge in the prompt.

[Figure: The spectrum from model development (hard, expensive, slow) through model specialization to prompt engineering (easy, cheap, fast)]

But Prophecy makes the private enterprise data part easy.

With Prophecy, a data engineer can build a gen AI app on any enterprise data in just two steps—and in many cases, do it in under a week.

STEP 1: Run ETL on unstructured data to build your knowledge warehouse.

For decades, we've run ETL or data pipelines daily to move structured data (tables) from operational systems into data warehouses to prepare it for analytics. Similarly, you can build a knowledge warehouse for your gen AI app by building data pipelines that move private, unstructured data (e.g., documents, tickets, and messages) into a vector database or OpenSearch/Elasticsearch, where it's kept up to date.

[Figure: A data pipeline on unstructured data]
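To make the step concrete, here's a minimal sketch (not Prophecy's generated code) of what such a batch pipeline can look like in PySpark; embed_batch() and upsert_to_vector_db() are hypothetical helpers standing in for your embedding model and vector database clients:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("knowledge-warehouse-etl").getOrCreate()

# Read raw, unstructured documents (one row per file).
docs = (spark.read.text("s3://my-bucket/docs/*.txt", wholetext=True)
        .withColumnRenamed("value", "body")
        .withColumn("doc_id", F.monotonically_increasing_id()))

# Split each document into smaller chunks (naively, on blank lines).
chunks = docs.select(
    "doc_id",
    F.posexplode(F.split("body", r"\n\n")).alias("chunk_no", "chunk_text"),
)

def index_partition(rows):
    # Embed and upsert one partition at a time to bound memory and batch size.
    batch = [(f"{r.doc_id}-{r.chunk_no}", r.chunk_text) for r in rows]
    if batch:
        ids, texts = zip(*batch)
        vectors = embed_batch(list(texts))        # hypothetical embedding call
        upsert_to_vector_db(ids, vectors, texts)  # hypothetical vector DB write

chunks.foreachPartition(index_partition)
```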

STEP 2: Build a streaming ETL pipeline that does inference, then connect it to your app.

Since off-the-shelf large language models (LLMs) like OpenAI's GPT series (sometimes called "foundation models") continue to improve rapidly, you don't need to train, or even fine-tune, them for most gen AI apps anymore. An app like a support bot can simply send the LLM the question being asked, along with any relevant private documents from the knowledge warehouse.

[Figure: A streaming data pipeline for inference]
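Here's a minimal sketch of that per-question step, assuming the OpenAI Python client and a hypothetical search_knowledge_warehouse() helper backed by the vector database built in Step 1 (the model name is just one current option):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer(question: str) -> str:
    # 1. Look up the private documents most relevant to the question.
    context_docs = search_knowledge_warehouse(question, top_k=3)  # hypothetical

    # 2. Send the question plus the retrieved context to the LLM.
    prompt = ("Answer the question using only the context below.\n\n"
              "Context:\n" + "\n---\n".join(context_docs) +
              f"\n\nQuestion: {question}")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```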

The Prophecy difference

Use ETL pipelines to build a knowledge warehouse.

To build a knowledge warehouse using a vector database like Pinecone or Weaviate, you first read the private data from the various sources you want to incorporate, then run a transformation that creates a vector representing each of your documents; those vectors are stored in the vector database. Because vector databases build indices for fast lookup, they can quickly find similar or relevant documents when given a question or a document. This ingestion and indexing then needs to run as a regular ETL job so the index stays current.
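For example, a similarity lookup against such an index might look like this with the Pinecone Python client; the index name is hypothetical, and query_vector is the question's embedding, produced by the same model used at ingestion:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("knowledge-warehouse")

# Return the three stored chunks most similar to the question's embedding.
results = index.query(vector=query_vector, top_k=3, include_metadata=True)
relevant_docs = [match.metadata["text"] for match in results.matches]
```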

Key features provided by Prophecy

With the Prophecy platform, you can build data pipelines on Apache Spark that read and transform the private, unstructured data that makes up the knowledge warehouse, then orchestrate those pipelines to run regularly in production with automatic monitoring.

Source components for reading documents

The source components Prophecy provides can read data from documents, databases and application APIs (e.g., Slack).

Unstructured data transformations

Prophecy breaks all your unstructured data (e.g., documentation websites, Slack messages, etc.) into individual text articles, each covering a single topic.
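A naive sketch of that split in plain Python; real pipelines usually split on document structure (headings, threads, messages) rather than fixed character counts:

```python
def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split one long document into overlapping, roughly single-topic chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```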

Large language model (LLM) lookup

Prophecy supports multiple LLM vendors (including the GPT series from OpenAI), so you can generate the vector embeddings needed to index and search text documents.
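For example, with OpenAI's embeddings API (one supported vendor; the model name below is just one current option):

```python
from openai import OpenAI

client = OpenAI()
resp = client.embeddings.create(model="text-embedding-3-small",
                                input=["How do I reset my password?"])
vector = resp.data[0].embedding  # a list of floats to store and search with
```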

Vector database storage

With Prophecy, you can store any document, along with the vectors that represent it for similarity search, in any popular vector database (e.g., Pinecone or Weaviate).
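A minimal sketch of such a write with the Pinecone client; Weaviate and other vector databases expose similar upsert-style calls, and the ID, index name, and metadata layout here are hypothetical:

```python
from pinecone import Pinecone

index = Pinecone(api_key="YOUR_API_KEY").Index("knowledge-warehouse")
index.upsert(vectors=[{
    "id": "doc-42-chunk-0",
    "values": vector,  # the embedding generated in the step above
    "metadata": {"text": "How do I reset my password? ..."},
}])
```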

At-scale execution on Spark

Prophecy's data pipelines run at scale on Apache Spark, with interactive execution that makes development easy.

Orchestration

After initial setup, Prophecy's orchestration capabilities automatically repopulate your knowledge warehouse with up-to-date data on a regular schedule.

Use streaming inference pipelines to query LLMs.

Once you've populated your knowledge warehouse with private enterprise data, you need to pair it with the intelligence of LLMs to answer user questions. This is a classic, lightweight streaming approach to inference: the data pipeline reads the next question, looks up the documents relevant to it, and sends both the question and the documents to an LLM to get the answer.
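A minimal sketch of that loop on Spark Structured Streaming, assuming Kafka topics named "questions" and "answers" and a hypothetical answer() helper like the one sketched in Step 2:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("rag-inference").getOrCreate()
answer_udf = F.udf(answer, StringType())  # wraps the hypothetical answer() helper

# Read the next questions from the input stream.
questions = (spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "questions")
             .load()
             .selectExpr("CAST(value AS STRING) AS question"))

# Look up relevant documents and ask the LLM, one question at a time.
answers = questions.select(answer_udf("question").alias("value"))

# Deliver computed answers to the output stream for the application to pick up.
(answers.writeStream.format("kafka")
 .option("kafka.bootstrap.servers", "broker:9092")
 .option("topic", "answers")
 .option("checkpointLocation", "/tmp/rag-checkpoint")
 .start())
```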

Key features provided by Prophecy

With the Prophecy platform, you can build continuous, streaming data pipelines on Apache Spark that read text inputs, generate vectors representing each input, retrieve relevant documents from the vector database, and answer questions with those documents supplied to the LLM as context.

Source components to read/write streams

Prophecy's source components read text input from the data streams fed by user-facing applications. Prophecy then delivers computed answers to the output stream, where they're picked up by the applications.

Lookups for LLMs and vector databases

Prophecy's built-in components make it easy to look up answers via LLMs and retrieve relevant knowledge articles from various vector databases (e.g., Pinecone or Weaviate).

Interactive execution on Spark

Prophecy's interactive execution on Apache Spark lets users inspect inputs at each step, confirm that the correct articles are retrieved, and fine-tune the pipeline.

Orchestration

Prophecy's streaming pipelines run continuously in production, where they're monitored and easy to maintain.

EBOOK

Low-code Apache Spark and Delta Lake

A guide to making data lakehouses even easier.
Get the eBook
ON-DEMAND WEBINAR

Low-code data transformations

10x productivity on cloud data platforms.
Watch the webinar