Wednesday, March 11, 2026

Airtable + GPT: Prototyping a Simple RAG System with No-Code Tools


Image generated by the editor with ChatGPT

# Introduction

# Ingredients

To follow this tutorial yourself, you will need:

  • An Airtable account with a base created in your workspace.
  • An OpenAI API key (a paid plan gives the most flexibility in model selection).
  • A Pipedream account – an orchestration and automation platform that allows experimenting on the free tier (with daily run limits).

# A Recipe for Retrieval-Augmented Generation

The process of building our RAG system is not strictly linear, and some steps can be carried out in different ways. Depending on your level of programming knowledge, you can take a no-code or near-no-code approach, or code the workflow yourself.

We will essentially create an orchestration workflow consisting of three parts, using Pipedream:

  1. Trigger: like a web service request, this element initiates the workflow, which then passes through the remaining elements. This is where, once deployed, you submit the request, i.e. the user’s prompt for our prototype RAG system.
  2. Airtable block: establishes a connection with our Airtable base and a specific table, in order to use its data as the RAG system’s knowledge base. We will add text data to it in Airtable shortly.
  3. OpenAI block: connects to OpenAI’s GPT models using the API key and passes the user’s prompt together with the context (the data retrieved from Airtable) to the model to generate an answer.
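The three blocks described above can be sketched in plain JavaScript. This is only an illustrative outline under assumed names (`handleRagRequest`, `fetchRecords`, and `askGpt` are hypothetical, not actual Pipedream APIs):

```javascript
// Minimal sketch of the three-stage flow the workflow implements.
async function handleRagRequest(prompt, fetchRecords, askGpt) {
  // 1. Trigger: the incoming HTTP request carries the user's prompt.
  if (!prompt) throw new Error("Missing prompt");

  // 2. Airtable block: fetch the knowledge-base records and flatten
  //    their text content into a single context string.
  const records = await fetchRecords();
  const context = records
    .map((r) => r.fields.Content)
    .join("\n---\n");

  // 3. OpenAI block: answer the prompt grounded in the retrieved context.
  return askGpt(prompt, context);
}
```

The actual Pipedream workflow wires these stages together visually, but the data flow is the same: prompt in, records retrieved, grounded answer out.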

But first we need to create a new table in our Airtable base containing text data. In this example, I created an empty table with three fields (ID: single line text, Source: single line text, Content: long text), and then imported into it a small, publicly available dataset containing text with basic knowledge about Asian countries. Use the CSV import option and the dataset link to import the data into the table. More information on creating tables and importing data can be found in this article.
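For orientation, a record retrieved from such a table typically looks like the following sketch (the field names ID, Source, and Content match the table described above; the record id and values are made up for illustration):

```javascript
// Illustrative shape of a single record as returned by Airtable's API.
const exampleRecord = {
  id: "recXXXXXXXXXXXXXX", // Airtable's internal record ID
  fields: {
    ID: "1",
    Source: "asia-dataset.csv", // hypothetical source file name
    Content: "Japan is an island country in East Asia. Its capital is Tokyo.",
  },
};

// The workflow's OpenAI step later reads the text via fields.Content:
const text = exampleRecord.fields?.Content ?? "";
```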

Once your knowledge base is ready in Airtable, let’s move to Pipedream for the rest of the process! Sign up if you haven’t yet, and log in. There are several ways to initiate a workflow. One of them is to describe what you want to build to the Pipedream AI agent, which is intuitive but can produce errors that require corrections, as we will illustrate shortly. Pipedream has two parallel user interfaces: one for manually building and editing workflows, and one for conversing with the AI agent, called String.

The Pipedream AI conversation interface (String)

If you decide to let the agent build an approximate version of the desired workflow, describe clearly what to build, with a prompt that might look something like this:

I would like to orchestrate a simple RAG application that uses the provided table containing text and OpenAI’s GPT models (I have a paid API key).

The Pipedream language model may return a response like this for you to approve (at your own risk):

I’ll create a simple RAG (retrieval-augmented generation) agent that connects your Airtable table with OpenAI’s GPT models. Here’s what I will build:

Set up a trigger to accept queries (via webhook or manual execution)
Search and retrieve the relevant text from the Airtable table based on the query
Use the retrieved text as context for OpenAI’s GPT to generate an informed answer
Return the generated answer, grounded in the data you submitted

This will provide a basic RAG system in which you can ask questions and get answers based on the content stored in Airtable, enhanced by GPT’s reasoning capabilities.

For the sake of experimenting and “seeing what happens”, confirm the suggested plan of action to obtain a three-block workflow similar to the one in the image shown above.

Since this is not the main Pipedream editor interface, switch to it by clicking “Open Pipedream” in the upper right corner. A new tab will open with the actual Pipedream workflow editor.

For the trigger block, a URL is automatically generated with a syntax similar to the one I got for mine: https://eoupscprutt37xx.m.pipedream.net. Click it, and in the settings panel that opens on the right, make sure the first few options are set to “Full HTTP request” and “Return a static response”.

In the second block (Airtable) there may be little left to do. First, connect to your Airtable base. If you are working in the same browser, this can be straightforward: log in to Airtable from the pop-up window that appears after clicking “Connect a new account”, and then follow the on-screen steps to specify the base and table to access:

Pipedream workflow editor: connecting to Airtable

Now comes the tricky part (and the reason I deliberately left an imperfect prompt earlier when asking the AI agent to build the skeleton workflow): there are several types of Airtable actions to choose from, and for a RAG-style retrieval mechanism we need “List Records”. Chances are this is not the action you see in the second block of your workflow. In that case, remove it, add a new block in its place, select “Airtable”, and choose “List Records”. Then connect to the table again and test the connection to make sure it works.

This is what a successfully tested connection looks like:

Pipedream workflow editor: testing the connection with Airtable

Finally, configure OpenAI access to GPT. Have your API key at hand. If the subtitle of the third block is not “Generate RAG Response”, remove the block and replace it with a new OpenAI block of this subtype.

Start by establishing the OpenAI connection using the API key:

Establishing the OpenAI connection

The user’s question field should be set to {{ steps.trigger.event.body.test }} and the knowledge base records (your text “documents” for RAG from Airtable) must be set to {{ steps.list_records.$return_value }}.
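Once the workflow is deployed, its trigger can be called over HTTP. The following is a minimal, hypothetical client sketch (Node 18+ for the built-in fetch); the URL is the example trigger URL from earlier and must be replaced with your own, and the body key `test` matches the field mapping above:

```javascript
// Hypothetical client for the deployed workflow's webhook trigger.
// Replace WEBHOOK_URL with the URL Pipedream generated for your own trigger.
const WEBHOOK_URL = "https://eoupscprutt37xx.m.pipedream.net";

// The OpenAI step reads the prompt from steps.trigger.event.body.test,
// so the JSON payload must use the key "test".
function buildRagRequest(question) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ test: question }),
  };
}

// Example usage:
async function askRag(question) {
  const res = await fetch(WEBHOOK_URL, buildRagRequest(question));
  return res.json(); // e.g. { question, response, model_used, ... }
}
```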

You can keep the rest as defaults and test, but you may encounter common workflow errors, which would prompt you to go back to String for help and automatic corrections from the AI agent. Alternatively, you can directly copy and paste the following OpenAI component into the code field to get a working solution:

import openai from "@pipedream/openai"

export default defineComponent({
  name: "Generate RAG Response",
  description: "Generate a response using OpenAI based on user question and Airtable knowledge base content",
  type: "action",
  props: {
    openai,
    model: {
      propDefinition: [
        openai,
        "chatCompletionModelId",
      ],
    },
    question: {
      type: "string",
      label: "User Question",
      description: "The question from the webhook trigger",
      default: "{{ steps.trigger.event.body.test }}",
    },
    knowledgeBaseRecords: {
      type: "any",
      label: "Knowledge Base Records",
      description: "The Airtable records containing the knowledge base content",
      default: "{{ steps.list_records.$return_value }}",
    },
  },
  async run({ $ }) {
    // Extract user question
    const userQuestion = this.question;
    
    if (!userQuestion) {
      throw new Error("No question provided from the trigger");
    }

    // Process Airtable records to extract content
    const records = this.knowledgeBaseRecords;
    let knowledgeBaseContent = "";
    
    if (records && Array.isArray(records)) {
      knowledgeBaseContent = records
        .map(record => {
          // Extract content from fields.Content
          const content = record.fields?.Content;
          return content ? content.trim() : "";
        })
        .filter(content => content.length > 0) // Remove empty content
        .join("\n\n---\n\n"); // Separate different knowledge base entries
    }

    if (!knowledgeBaseContent) {
      throw new Error("No content found in knowledge base records");
    }

    // Create system prompt with knowledge base context
    const systemPrompt = `You are a helpful assistant that answers questions based on the provided knowledge base. Use only the information from the knowledge base below to answer questions. If the information is not available in the knowledge base, please say so.

Knowledge Base:
${knowledgeBaseContent}

Instructions:
- Answer based only on the provided knowledge base content
- Be exact and concise
- If the answer is not in the knowledge base, clearly state that the information is not available
- Cite relevant parts of the knowledge base when possible`;

    // Prepare messages for OpenAI
    const messages = [
      {
        role: "system",
        content: systemPrompt,
      },
      {
        role: "user",
        content: userQuestion,
      },
    ];

    // Call OpenAI chat completion
    const response = await this.openai.createChatCompletion({
      $,
      data: {
        model: this.model,
        messages: messages,
        temperature: 0.7,
        max_tokens: 1000,
      },
    });

    const generatedResponse = response.generated_message?.content;

    if (!generatedResponse) {
      throw new Error("Failed to generate response from OpenAI");
    }

    // Export summary for user feedback
    $.export("$summary", `Generated RAG response for question: "${userQuestion.substring(0, 50)}${userQuestion.length > 50 ? '...' : ''}"`);

    // Return the generated response
    return {
      question: userQuestion,
      response: generatedResponse,
      model_used: this.model,
      knowledge_base_entries: records ? records.length : 0,
      full_openai_response: response,
    };
  },
})

If no errors or warnings appear, you should be ready to test and deploy. Deploy first, then test by passing the user’s query in the newly opened deployment tab:

Testing the deployed workflow with a quick question: what is the capital of Japan?

If the request is handled and everything works correctly, scroll down to see the answer returned by the GPT model in the last step of the workflow:

GPT model response

Well done! This response is grounded in the knowledge base we built in Airtable, so we now have a simple prototype RAG system that connects Airtable and GPT models via Pipedream.

# Wrapping Up

This article showed how to build, with little or no coding, an orchestration workflow to prototype a RAG system that uses Airtable text tables as the knowledge base for retrieval and OpenAI’s GPT models to generate answers. Pipedream lets you define orchestration workflows programmatically, manually, or assisted by its conversational AI agent. Drawing on the author’s experience, we briefly showed the advantages and disadvantages of each approach.

Iván Palomares Carrascosa is a leader, writer, speaker, and advisor in AI, machine learning, deep learning, and LLMs. He trains and mentors others in applying artificial intelligence in the real world.
