Creating an Outbound Integration Pipeline

The basic workflow to create an outbound integration pipeline:

  1. Create an integration pipeline.

  2. Create a pipeline interface.

  3. Set up an integration subscription and execute the pipeline.

  4. View the pipeline execution history.

This page provides instructions for the first step in this workflow; see the subsequent sections for the remaining steps. This example covers outbound integration from a CSV file, but users can also build pipelines that integrate data from JSON and XML files.

Prerequisites for following along with the example shown in each section of this documentation:

  • Identify the Value Chain ID (VC ID) for your dataset.

  • Identify the Enterprise and Organization you would like to use. You can either create a HUB4 Enterprise and HUB4 Organization to mimic the provided sample data and code, or use your own and update the HUB4 references in the sample data and code accordingly.

  • Load sample data into the system to use in the outbound pipeline. See the "Loading Sample Data" section for instructions.

Complete the following steps to create a new integration pipeline:

  1. Log in to the ONE system.

  2. Click Menu/Favs > Tools > Integration > NEO Plasma Pipelines. In the NEO UI, users can click the Menus/Favs icon on the left sidebar and type the screen name in the menu search bar. For more information, see "Using the Menu Search Bar."
    The NEO Plasma Pipelines screen displays.

  3. Click the + (plus) icon in the top right corner.
    The Create NEO Plasma Pipeline popup window displays.

  4. Fill out the following fields. Fields with an asterisk ( * ) are required.

    • Pipeline Name *: Enter a name for the new pipeline.

    • Version *: Enter a version number for the new pipeline.

    • Organization: Use the picker tool to select the organization.

    • Pipeline Type *: Select the pipeline type from the dropdown menu. The options are Inbound Integration and Outbound Integration. For this example, select Outbound Integration.

  5. Click the OK button.
    A success message displays.

  6. Click the OK button.
    The NEO Plasma Pipelines screen updates with the new pipeline listed.

  7. Click the pipeline icon (highlighted below) to open the new pipeline.

    The new pipeline opens with Outbound Source and Outbound Sink nodes displayed.

  8. Click the Node List icon in the top left corner.

    The Node List slideout displays.

  9. Click the + (plus) icon for the type of node you want to add. For this example, we selected a parse node, which parses the inbound file data into records.
    The parse node displays in the pipeline.

  10. Click the icon with three vertical dots on the new node, and click Properties.
    The Node Properties popup window displays.

    Click to download a CSV sample file to use in this example. You will upload this sample file in the next step. Alternatively, use the data in the "Sample Data to Create a Schema" section of this guide to create your own CSV file to upload in the next step.


  11. Fill out the following fields. Fields with an asterisk ( * ) are required.

    • Node Name *: Enter a name for the node.

    • Format Type *: Select a format type for the inbound file from the dropdown list. Options are JSON, CSV, and XML. The remaining fields vary according to the format chosen. For this example, we selected CSV.

    • Create Schema:

      a. Click the Upload Sample File link.
         The Create Schema popup displays.

      b. Enter values for the following fields. Fields with an asterisk ( * ) are required.

         • Namespace *: Enter a unique category used to organize schema name fields. For this example, we used Input.

         • Schema *: Enter a unique name with which to associate the dataset. For this example, we used Sites.

         • Sample File *: Click the upload icon to upload a sample file for the parse node. For this example, you can use the same CSV file downloaded in the previous step, or create a CSV file using the data in the "Sample Data to Create a Schema" section.

      c. Click the OK button.
         The Node Properties popup updates.

    • View Schema: This field auto-populates with a link once the sample file is uploaded in the Create Schema field. Click the link to view or edit the record schemas.
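
    Conceptually, a CSV parse node reads the inbound file row by row and turns each row into a record keyed by the header line, which is what the generated schema describes. The sketch below is only an illustration of that idea, not the actual NEO Plasma implementation; the column names come from the sample data used later in this example, and the row values are hypothetical.

    ```python
    # Illustrative sketch only (not the NEO Plasma parse node itself):
    # a CSV parse step maps each data row to a record keyed by the header.
    import csv
    import io

    # Hypothetical row; column names match the sample Sites data.
    sample_csv = (
        "Name,OrganizationName,EnterpriseName,Contact\n"
        "DallasDC,HUB4,HUB4,Jane Doe\n"
    )

    records = list(csv.DictReader(io.StringIO(sample_csv)))
    # records[0] == {'Name': 'DallasDC', 'OrganizationName': 'HUB4',
    #                'EnterpriseName': 'HUB4', 'Contact': 'Jane Doe'}
    ```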


  12. Click the OK button.
    The pipeline screen updates.

  13. Click the red dot next to Output on the Outbound Source node, and drag the cursor to connect to the Input on the Parse node.
    The connection turns green. The parse node can now parse the data from the outbound source.

  14. Click the node list icon in the top left corner again.
    The Node List slideout displays.

  15. Click the + (plus) icon next to Script.
    A script node is added to the pipeline.

  16. Click the icon with three vertical dots on the new script node, and click Properties.
    The Node Properties popup window displays.

  17. Enter a name for the node in the Node Name * field. For this example, we used Transform CSV to JSON.

  18. Click the arrow on the Input tab in the Ports section to display the port fields.

  19. Select Stream of Record from the dropdown list in the Port Type * field.
    The icon beside the dropdown list becomes active.

  20. Click the icon beside the Port Type * dropdown list.
    The Record Schemas popup window displays.

  21. Click the arrow beside Input.
    The list of schemas displays.

  22. Select the input schema. For this example, select the Sites schema created earlier.
    The schema displays in the pane to the right, and the Activate button becomes active.

  23. Click the Activate button.

  24. Click the OK button.
    The Node Properties popup window displays again.

  25. Click the Output tab in the Ports section.

  26. Click the arrow to the left of the output to display the Port Name * and Port Type * fields.

  27. Select Stream of Record from the dropdown list in the Port Type * field.

  28. Click the icon to the right of the Port Type * dropdown list.

  29. Click the arrow beside Output in the Schema pane on the left.
    The Output record schemas display in the pane.

  30. Click the output schema. For this example, select Sites.
    The schema displays in the pane to the right, and the Activate button becomes active.

  31. Click the Activate button.

  32. Click the OK button.
    The Node Properties popup window displays.

  33. Click the edit (pencil) icon in the Script * field on the Node Properties popup window.
    The Edit Script popup window displays.

  34. Enter the script code for the node. A sample script is shown below.

    def executeNode(inputs):
        iterable_inputs = {}
        outputs = {}
        # Input ports
        iterable_inputs["Input"] = inputs["Input"]
        # Type = stream.record
        # Address, AlternateAddress1, AlternateAddress2, Contact, CreationDate, CreationUser, Description, DisplayName,
        # EnterpriseName, ExternalRefNo, LastComputedDate, LastModifiedDate, LastModifiedUser, Latitude, Longitude, Name,
        # OrganizationName, RunReplenishment, ActivationDate, AuthoritativeLevel, AutoGenBuffer, BarCodeDelimiter,
        # BarCodeFormat, BarCodePrefixing, BillingContactEmail, BillingContactFax, BillingContactMobile, BillingContactName,
        # BillingContactPhNum, County, DeactivationDate, HolidayCalendarName, IsBilling, IsDC, IsPlant, IsPrimarySubSite,
        # IsPublic, IsShipping, IsStore, ManagingOrgEnterpriseName, ManagingOrgName, ParentSiteName,
        # ParentSiteOrganizationEnterpriseName, ParentSiteOrganizationName, PrimarySubSiteName, ReceivingCalendarName,
        # ReceivingContactEmail, ReceivingContactFax, ReceivingContactMobile, ReceivingContactName, ReceivingContactPhNum,
        # ShippingCalendarName, ShippingContactEmail, ShippingContactFax, ShippingContactMobile, ShippingContactName,
        # ShippingContactPhoneNum, TransportationInstructions, TransSiteGroupName, TransSiteGroupOrganizationEnterpriseName,
        # TransSiteGroupOrganizationName, ApptSchedulingSystem, ApptSchedulingSystemChangedDate, EnableSoftAppointments,
        # SiteGroupName, Tier, TimeZoneId, DefaultDockDoorCount, NotifyVASP, TypeName, AllowedDetentionTime

        # US~State~City~Address_ent~Address_org~Address_site~Street1~Street2~Street3~Zip~Latitude~Longitude~Time_zone
        address_components = {
            'State': 1,
            'City': 2,
            'Street1': 6,
            'Street2': 7,
            'Zip': 9,
        }

        # Add node logic here
        for record in iterable_inputs["Input"]:
            o_record = {}
            o_record['Name'] = record['Name']
            o_record['Organization'] = record['OrganizationName']
            o_record['Enterprise'] = record['EnterpriseName']
            o_record['Contact'] = record['Contact']
            o_record['Address'] = parse_addr(record['Address'], address_components)
            yield { "Output": o_record }

        # Activate and set outputs (omit a port to prevent execution of nodes that depend on that port)
        yield { "Output": None }
        # Type = stream.record
        # Name, Organization, Enterprise, Contact, Address
        return outputs

    def parse_addr(addr, address_components):
        comps = addr.split('~')
        result = {}
        for key in address_components:
            result[key] = comps[address_components[key]]
        return result
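
    As a quick sanity check, the parse_addr helper can be exercised outside the pipeline. It is redefined below so the snippet runs standalone; the address string is a hypothetical example that follows the '~'-delimited layout documented in the script's comments.

    ```python
    # Standalone check of the parse_addr helper used in the script node.
    # The address string is hypothetical, following the layout:
    # US~State~City~Address_ent~Address_org~Address_site~Street1~Street2~Street3~Zip~Latitude~Longitude~Time_zone

    def parse_addr(addr, address_components):
        comps = addr.split('~')
        result = {}
        for key in address_components:
            result[key] = comps[address_components[key]]
        return result

    # Same component-to-index mapping as in the sample script.
    address_components = {'State': 1, 'City': 2, 'Street1': 6, 'Street2': 7, 'Zip': 9}

    addr = "US~TX~Dallas~HUB4~HUB4~DallasDC~100 Main St~Suite 200~~75201~32.78~-96.80~US/Central"
    parsed = parse_addr(addr, address_components)
    # parsed == {'State': 'TX', 'City': 'Dallas', 'Street1': '100 Main St',
    #            'Street2': 'Suite 200', 'Zip': '75201'}
    ```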
  35. Click the Save button.
    The Node Properties popup window displays.

  36. Click the Save button.
    The pipeline screen displays.

  37. Click the red dot next to Parsed on the Parse node, and drag the cursor to connect to Input 1 on the script node.
    The connection turns green. The script node can now transform the stream of records from the parser into the output format.

  38. Click the node list icon in the top left corner again.
    The Node List slideout displays.

  39. Click the + (plus) icon next to Format.
    A format node is added to the pipeline.

  40. Click the icon with three vertical dots on the new Format node, and click Properties.
    The Node Properties popup window displays.

  41. Fill out the following fields. Fields with an asterisk ( * ) are required.

    • Node Name *: Enter a name for the node. For this example, we used Format JSON.

    • Format Type *: Select the desired format type from the dropdown list. For this example, we used JSON. Note that the remaining fields change based on the format type selected.

    • Template:

      1. Click the edit (pencil) icon.
         The Edit Script popup displays.

      2. In the Edit Current Code field, enter the template. For this example, we used the following:

         {
         "sites": [
         {% for record in records %}
         {% filter ppjson|indent(first=True) %}
         {{ record|tojson }}
         {% endfilter %}{% if not loop.last %}{{ ',\n' }}{% endif %}
         {% endfor +%}
         ]
         }

    • Schema *: Select the output schema from the dropdown list. For this example, we selected Output/Sites.
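
    The template above can be exercised locally with Jinja2 to preview the shape of its output. Note that ppjson is a NEO Plasma-specific filter; the json.dumps-based stand-in below is only an assumed approximation (pretty-printing a JSON string), and the sample records are hypothetical.

    ```python
    # Local preview of the Format node template using Jinja2.
    # ASSUMPTION: ppjson is approximated with json.dumps(indent=2); the
    # platform's actual filter may differ. Sample records are hypothetical.
    import json
    from jinja2 import Environment

    TEMPLATE = """{
    "sites": [
    {% for record in records %}
    {% filter ppjson|indent(first=True) %}
    {{ record|tojson }}
    {% endfilter %}{% if not loop.last %}{{ ',\\n' }}{% endif %}
    {% endfor +%}
    ]
    }"""

    env = Environment()
    env.filters["ppjson"] = lambda s: json.dumps(json.loads(s), indent=2)

    records = [
        {"Name": "DallasDC", "Organization": "HUB4", "Enterprise": "HUB4"},
        {"Name": "AustinDC", "Organization": "HUB4", "Enterprise": "HUB4"},
    ]
    rendered = env.from_string(TEMPLATE).render(records=records)
    sites = json.loads(rendered)["sites"]  # the rendered text is valid JSON
    ```

    The per-record filter block pretty-prints and indents each record, and the `{% if not loop.last %}` guard inserts the comma separator only between records, so the rendered text stays valid JSON.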

  42. Click the OK button.
    The pipeline screen displays.

  43. Click the red dot next to Output on the Format node, and drag the cursor to connect to File on the Outbound Sink node.

  44. Repeat this process to add additional nodes as desired. The following node types are available:

    1. Parse

    2. Format

    3. Collect Records

    4. Normalize

    5. Sort

    6. Script

  45. Click the Save button to save the pipeline.

  46. Click the Run Test icon in the top right corner to test the pipeline.

    The Test Run Pipeline slideout displays.

  47. Click the upload icon to upload the inbound source file.

  48. Click the Run Test button.
    The Node Logs section displays at the bottom of the screen with the successful nodes turning green in the Execution Sequence column.

  49. If the test run is without errors, the next step is to create a pipeline interface. See the "Creating a Pipeline Interface" section for instructions.