
Data Pipeline Overview

Summary

At a high level, iWorkflow operates by passing data from one action to the next. Each action can access, manipulate, and append to the data object before passing it on to the next action.

This allows for complex workflow scenarios where each action can build on the one before it.
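To make the idea concrete, here is a minimal sketch (in Python, purely for illustration; this is not the product's implementation) of a pipeline as a chain of actions, each receiving the accumulated data object and returning an augmented copy for the next action. The action names are hypothetical.

```python
# Minimal sketch of the data pipeline concept: each action receives the
# accumulated data object and returns it (possibly augmented) for the
# next action in the chain.
def run_pipeline(actions, data=None):
    data = dict(data or {})
    for action in actions:
        data = action(data)  # each action may read, change, or append
    return data

# Hypothetical actions for illustration only
def fetch(data):
    return {**data, "myDataset": [{"userID": "user123"}]}

def tag(data):
    return {**data, "fetched": True}

result = run_pipeline([fetch, tag])
# result now carries everything the earlier actions contributed
```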

Here’s an example of a workflow that you might build, as well as what is happening on the data pipeline behind the scenes:

Action - Data Change(s)

:wf-iqa: Fetch IQA Data - IQA dataset is added.
:wf-deltahash: Delta Hash - IQA dataset is filtered to only changed records.
:wf-gate: Gate - Check the record count; if the count is 0, exit the workflow.
:wf-select: Select - Filter the IQA data to only include 2 columns, and add a third hardcoded column to every row.
:wf-csv: Convert to CSV - Convert the IQA dataset to CSV format.
:wf-ftp: FTP Upload - Upload the contents of the CSV file to a remote FTP server.
:wf-http: HTTP Request - Post a notification in a Teams channel (via a webhook) that the upload was successful and contained n record(s).

Viewing the Data on the Pipeline

The data that is passed between each action is available for inspection on the workflow run screen. This allows you to see the data that exists between each action to ensure it is accurate.

You may also use the Template Tester utility to inspect the data emitted from a particular action, as well as to test out different template expressions.

Data and Step Sequencing

Step sequencing affects the data pipeline. The current action’s input / context is always the output of the previous action or trigger that actually ran. This may not be the action immediately preceding the current one in the order defined in the workflow definition.

If your action is expecting certain data as input, be sure that any step sequencing settings are set correctly so that data always exists, or use an expression to check for it first.
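The defensive check described above can be sketched as follows (a Python illustration of the idea only; the property and function names are hypothetical, and in the product itself you would use a template expression rather than code):

```python
# Sketch of guarding against missing input: an action that expects
# "myDataset" on the pipeline verifies it exists before using it,
# because a skipped upstream step means the property may be absent.
def safe_count(data):
    records = data.get("myDataset")  # None if a prior step never ran
    if records is None:
        return {**data, "recordCount": 0}
    return {**data, "recordCount": len(records)}
```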

Example

Let’s walk through a step-by-step example to examine what the data looks like after each action has run.

Action 1 - :wf-iqa: Fetch IQA Data

The IQA runs, and inserts a dataset onto the pipeline. You get to name the property (in the example, “myDataset”) whatever you’d like so you can reference it later on.

JSON
{
  "myDataset": [
    {
      "userID": "user123",
      "name": "Alice Smith",
      "age": 30,
      "email": "alice.smith@example.com"
    },
    {
      "userID": "user456",
      "name": "Bob Johnson",
      "age": 45,
      "email": "bob.johnson@example.com"
    },
    {
      "userID": "user789",
      "name": "Charlie Brown",
      "age": 25,
      "email": "charlie.brown@example.com"
    }
  ]
}

Action 2 - :wf-deltahash: Delta Hash

Now we’re going to remove any records that were processed in a previous workflow run and have not changed since.

We end up with a dataset like this - notice that one record that was identified as unchanged was removed:

JSON
{
  "myDataset": [
    {
      "userID": "user123",
      "name": "Alice Smith",
      "age": 30,
      "email": "alice.smith@example.com"
    },
    {
      "userID": "user456",
      "name": "Bob Johnson",
      "age": 45,
      "email": "bob.johnson@example.com"
    }
  ]
}
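Conceptually, the delta-hash step can be sketched like this (assumed mechanics for illustration, not the product's actual implementation): hash each record and keep only the records whose hash was not seen in the previous run.

```python
import hashlib
import json

# Hash a record deterministically so unchanged records produce the
# same hash across runs.
def record_hash(record):
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Keep only records whose hash was not recorded by a previous run.
def delta_filter(records, previous_hashes):
    return [r for r in records if record_hash(r) not in previous_hashes]
```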

Action 3 - :wf-select: Select

We don’t want to include the “age” column in this specific example, so let’s filter it out using the Select action.

We’ll input the dataset expression into the select statement as:

CODE
{{ input.myDataset | array }}

And we’ll set up our select columns like this:

Name - Value

userID - {{ this.userID }}
name - {{ this.name }}
email - {{ this.email }}
verified - yes

The select action will output the following data to the pipeline:

JSON
{
  "myDataset": [
    {
      "userID": "user123",
      "name": "Alice Smith",
      "email": "alice.smith@example.com",
      "verified": "yes"
    },
    {
      "userID": "user456",
      "name": "Bob Johnson",
      "email": "bob.johnson@example.com",
      "verified": "yes"
    }
  ]
}
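The projection the Select action performs above can be sketched in Python (for illustration only; in the product, the per-column template expressions do this work):

```python
# Sketch of the Select step: keep only the chosen columns from each row
# and append a hardcoded "verified" column to every row.
def select(rows):
    return [
        {
            "userID": r["userID"],
            "name": r["name"],
            "email": r["email"],
            "verified": "yes",
        }
        for r in rows
    ]
```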

Action 4 - :wf-csv: Convert to CSV

We’re now going to convert that dataset to a CSV file. We’ll name the output property myCsvData.

JSON
{
  "myDataset": [
    {
      "userID": "user123",
      "name": "Alice Smith",
      "email": "alice.smith@example.com",
      "verified": "yes"
    },
    {
      "userID": "user456",
      "name": "Bob Johnson",
      "email": "bob.johnson@example.com",
      "verified": "yes"
    }
  ],
  "myCsvData": "userID,name,email,verified\nuser123,Alice Smith,alice.smith@example.com,yes\nuser456,Bob Johnson,bob.johnson@example.com,yes"
}
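The additive nature of this step can be sketched as follows (an illustrative Python version, not the product's implementation): the dataset is serialized to a CSV string and added under a new property, while the original dataset stays on the pipeline untouched.

```python
import csv
import io

# Sketch of the Convert to CSV step: serialize the dataset to a CSV
# string under a new property, leaving the original dataset in place.
def to_csv(data, source="myDataset", target="myCsvData"):
    rows = data[source]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys(), lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return {**data, target: buf.getvalue().rstrip("\n")}
```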

Notice that the original dataset remains intact: most actions (with few exceptions) are additive and pass any previous data through to the next action. Actions that don’t add properties or manipulate the data pipeline (e.g. the :wf-gate: Gate or :wf-manage: Workflow Management actions) likewise pass the data on to the next action verbatim.

Template Expressions

Template expressions allow you to specify certain properties on the data pipeline that previous actions have passed in.

For more information, check out the Template Engine documentation.
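As a rough illustration of what a template expression does, here is a toy Python resolver for `{{ ... }}` placeholders against a context dictionary. This is a simplification for intuition only; the real Template Engine supports much more, including filters such as `| array`.

```python
import re

# Toy resolver: replace each {{ dotted.path }} placeholder with the
# value found by walking that path through the context dictionary.
def render(template, context):
    def resolve(match):
        path = match.group(1).strip()
        value = context
        for part in path.split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", resolve, template)
```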
