Outbound Job Steps
A job step is one action that is performed after the dataset has been retrieved and (optionally) delta hashed.
A job can have multiple steps. Each step has an Order field. Steps are executed in ascending order based on the order number.
Note: When creating job steps, it is a best practice to separate order numbers by at least 10 (such as 20, 30, and 40). This allows for future growth and changes, such as inserting a step in between two other steps.
Job steps can also be instructed to execute conditionally, based on whether the previous step succeeded or failed. (This can be used, for example, to create a step that fires a webhook notification only if the main step fails.)
Understanding job step components
Job steps are made up of three main components:
General settings
The General Settings component includes options like the step name, order, and condition.
Connection type
The Connection Type component specifies how to transfer data, such as through HTTP or FTP.
Data format
The Data Format component specifies what to transfer, that is, the format of the payload, such as JSON or delimited (CSV/TSV).
Note: Generally, any Connection Type can be used with any Data Format. For example, a staff user could HTTP POST and provide a pipe-delimited file as part of that request. However, there may be limitations with specific, non-standard combinations, as not all combinations and options are vetted.
Understanding connection types
HTTP connection
Specify this connection type to initiate an HTTP request to an endpoint. The HTTP request timeout is 15 minutes and cannot be changed. The default User-Agent string, if not otherwise specified, is CSI_DataStation/*** dotnet/***, where *** represents the respective version numbers.
- Method - Select the HTTP method. Common values are GET or POST.
- Target URL - Enter the target URL where the HTTP request should be made. Query string parameters can be added.
- Content type - Specify the HTTP Content-Type header value. For example, to send JSON data, enter application/json.
- Headers - Enter one or more additional headers that should be sent, in the format Header: Header Value. Separate multiple headers with a line break.
Note: It is not possible to parameterize or add dynamic tokens to the URL at this time. The URL must be static.
Warning! Do not send any control headers, or headers that are dynamically generated. For example, do not include Content-Length or Location.
For example, to send an Authorization header and a User-Agent header, enter the following into the Headers field:
Authorization: Bearer AaBbCcDdEeFf0123456789
User-Agent: CSI_DataStation/1.0
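For illustration, a step configured with these values behaves much like the following sketch, written in Python with the requests library. The URL, token, and payload here are hypothetical placeholders, and this approximates the described behavior rather than the DataStation's actual implementation:

```python
# Minimal sketch of the HTTP request an outbound step effectively makes.
# The URL, bearer token, and payload below are hypothetical placeholders.
import requests

response = requests.post(
    "https://api.example.org/members",          # Target URL (static; no dynamic tokens)
    headers={
        "Content-Type": "application/json",     # Content type setting
        "Authorization": "Bearer AaBbCcDdEeFf0123456789",  # additional header
        "User-Agent": "CSI_DataStation/1.0",    # additional header
    },
    json=[{"letter": "A"}, {"letter": "B"}],    # payload produced by the Data Format
    timeout=15 * 60,                            # approximates the fixed 15-minute limit
)
print(response.status_code)
```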
Payload Chunking splits a large dataset into multiple smaller transmissions, with a delay between each one. For example, suppose a data source contains the letters A through F. When Payload Chunking is off, the entire dataset is sent as a single payload:
[
{ "letter": "A" },
{ "letter": "B" },
{ "letter": "C" },
{ "letter": "D" },
{ "letter": "E" },
{ "letter": "F" }
]
But if Payload Chunking is on and the chunk size is set to 3, the dataset is instead sent as two payloads, separated by the configured delay:
[
{ "letter": "A" },
{ "letter": "B" },
{ "letter": "C" }
]
... DELAY in SECONDS ...
[
{ "letter": "D" },
{ "letter": "E" },
{ "letter": "F" }
]
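A short sketch can make the chunking behavior concrete. This only illustrates the splitting-and-delay pattern described above; chunk_size and delay_seconds stand in for the step's chunking settings, and transmit stands in for the actual HTTP send:

```python
# Illustration only: split a dataset into fixed-size chunks and send each
# chunk as its own payload, pausing between chunks.
import time

def send_in_chunks(rows, chunk_size, delay_seconds, transmit):
    for start in range(0, len(rows), chunk_size):
        transmit(rows[start:start + chunk_size])   # one payload per chunk
        if start + chunk_size < len(rows):
            time.sleep(delay_seconds)              # configured delay between chunks

rows = [{"letter": c} for c in "ABCDEF"]
send_in_chunks(rows, chunk_size=3, delay_seconds=5, transmit=print)
```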
FTP connection
Specify this method to initiate an FTP file upload.
- FTP server / hostname - Enter the FTP server / hostname. Can be a publicly-resolvable hostname, such as ftp.example.org, or an IP address.
- Port number - This is almost always 21, unless the FTP server administrator instructs otherwise.
- Enable FTPS over SSL/TLS - If enabled, the connection will attempt to use FTPS (FTP over SSL / TLS). If FTPS is enabled, the Port number should be changed to 990, or the FTPS port number of the destination server.
- Enable legacy TLS 1.0 / SSL3 protocols - If enabled, FTPS connections over TLS 1.0 and SSL3 will be allowed. Otherwise, only TLS 1.1 / TLS 1.2 connections are allowed.
- Username - Enter the username of the user to log in as.
- Password - Enter the user's password.
- Path / folder - Enter a fully-qualified file name/path (relative to the FTP root) of the destination file. Use ${date:..} and ${guid} to add a date or random guid for a unique filename.
- Enable passive mode - If the FTP server requires passive file transfers, enable this option.
- Overwrite existing files - If enabled, and if the FTP server allows this operation, the DataStation can overwrite an existing file of the same name on the server, if it finds one.
Note: The Path/folder setting also defines the file name.
Example: To upload a file to the root / home folder of the FTP server, enter the file name into the Path/folder field. For example, sample-transfer.csv. To send a file with the date tagged onto the end of the file name (for example, MyFile-2023-01-03.csv), input /test/myFile-${date:yyyy-MM-dd}.csv. For an additional example, to send a file with a random guid tagged onto the end of the file name, input /test/myFile-${guid}.csv.
Note: Verify with your FTP server administrator which file transfer mode is required (active or passive). If this setting is incorrect, it can lead to failures when uploading.
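To make the filename tokens concrete, here is a rough sketch of how ${date:...} and ${guid} could be expanded. The regex and the date-format mapping are illustrative assumptions, not the product's actual parser:

```python
# Illustrative expansion of the ${date:...} and ${guid} filename tokens.
import re
import uuid
from datetime import date

def expand_tokens(path):
    # Map a .NET-style date format (e.g. yyyy-MM-dd) onto Python's strftime.
    def date_token(match):
        fmt = (match.group(1)
               .replace("yyyy", "%Y").replace("MM", "%m").replace("dd", "%d"))
        return date.today().strftime(fmt)

    path = re.sub(r"\$\{date:([^}]+)\}", date_token, path)
    path = path.replace("${guid}", str(uuid.uuid4()))
    return path

print(expand_tokens("/test/myFile-${date:yyyy-MM-dd}.csv"))  # e.g. /test/myFile-2023-01-03.csv
print(expand_tokens("/test/myFile-${guid}.csv"))
```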
SFTP connection
Specify this method to initiate a file upload via an SFTP connection. SFTP should not be confused with FTP or FTPS, as SFTP is the SSH File Transfer Protocol. It enables file transfers over an SSH connection, typically to a Unix or Linux-based server.
Note: iTransfer will always accept (and write to the log) a host key it receives from the host. (It is not currently possible to specify an allowed list of host keys.)
Warning! At this time, it is not possible to authenticate with a public and private keypair. Support for public key authentication will be added in a future release.
- Server / hostname - Enter the SFTP server / hostname. Can be a publicly-resolvable hostname, such as ftp.example.org, or an IP address.
- Port number - The port number is almost always 22, unless the SSH/SFTP server administrator says otherwise.
- Username - Enter the username of the user to log in as.
- Password - Enter the password of the user to log in as.
- Path / folder - Enter a fully-qualified file name/path (relative to the SFTP root) of the destination file.
- Overwrite existing files - If enabled, and if the SFTP server allows this operation, the DataStation can overwrite an existing file of the same name on the server, if it finds one.
Example: To upload a file to the root / home folder of the SFTP server, enter the file name into the Path/folder field. For example, sample-transfer.csv. To send a file with the date tagged onto the end of the file name (for example, MyFile-2023-01-03.csv), input /test/myFile-${date:yyyy-MM-dd}.csv. For an additional example, to send a file with a random guid tagged onto the end of the file name, input /test/myFile-${guid}.csv.
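Conceptually, the upload this step performs resembles the following sketch using the third-party paramiko library. The hostname, credentials, and paths are placeholders, and auto-accepting the host key mirrors the behavior noted above (no allowed list of host keys):

```python
# Conceptual sketch of an SFTP upload; hostname, credentials, and paths
# are hypothetical. Requires the third-party paramiko package.
import paramiko

client = paramiko.SSHClient()
# Accept whatever host key the server presents, mirroring the behavior
# described in the note above.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("ftp.example.org", port=22, username="user", password="secret")

sftp = client.open_sftp()
sftp.put("local-export.csv", "/test/sample-transfer.csv")  # Path/folder setting
sftp.close()
client.close()
```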
Data formats
JSON data format
Specify this data format to transmit JSON data.
JSON data is built using a template. The JSON data entered into the Template field is wrapped into a JSON Array, and transmitted in bulk to the third party.
For example, given this source data table:
| FirstName | LastName | FavoriteNumber |
|---|---|---|
| John | Smith | 42 |
| Bob | Jones | 3 |
| Alice | Thompson | 85 |
And the following template:
{
"first": "$$FirstName$$",
"last": "$$LastName$$",
"favNum": $$FavoriteNumber$$
}
The following payload will be produced:
[
{
"first": "John",
"last": "Smith",
"favNum": 42
},
{
"first": "Bob",
"last": "Jones",
"favNum": 3
},
{
"first": "Alice",
"last": "Thompson",
"favNum": 85
}
]
Notice that:
- The resulting payload is automatically enclosed in a JSON array: [...].
- Each row or object (except the last) is automatically suffixed with a comma (,). Do not include one in the template.
- The name of the column from the source data, enclosed in $$...$$, is replaced with the value from each row of the source data table.
- Strings must be enclosed in "...", per the JSON specification.
- The FavoriteNumber field, being numeric, does not need to be enclosed in "...", although if the receiving party needs it to be a string and not a number, it can optionally be enclosed.
- The names of the columns do not need to match the property names that are sent (for example, favNum is the JSON property name, but FirstName, LastName, and FavoriteNumber are the source table's column names).
- It is possible to hard-code data. Each property does not have to contain a $$...$$ placeholder.
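Conceptually, the template expansion is plain text substitution, repeated once per row and joined into an array. The following sketch approximates that behavior; it is not the product's code, and real values may additionally need JSON escaping:

```python
# Approximate illustration of $$Column$$ template expansion.
import re

template = '{ "first": "$$FirstName$$", "last": "$$LastName$$", "favNum": $$FavoriteNumber$$ }'
rows = [
    {"FirstName": "John", "LastName": "Smith", "FavoriteNumber": 42},
    {"FirstName": "Bob", "LastName": "Jones", "FavoriteNumber": 3},
]

def render_row(template, row):
    # Replace each $$Column$$ token with the value from the current row.
    return re.sub(r"\$\$(\w+)\$\$", lambda m: str(row[m.group(1)]), template)

# Join the rendered rows with commas and wrap them in a JSON array.
payload = "[" + ",".join(render_row(template, row) for row in rows) + "]"
print(payload)
```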
- Insert fields from IQA - Click on a blue field name to automatically insert it into the Template field where the cursor is currently located.
- Auto-template - Click this button to automatically generate a template based on the source columns. This is an excellent time-saver if a dataset contains many columns.
  - Every field is included in the template.
  - The JSON property name is copied directly from the column name as-is.
  - All fields are generated as strings. If numeric or boolean (true/false) fields are present, remove the quotation marks surrounding these placeholders.
- Enable single row mode - If this mode is enabled, only the first row of the source data table is used, the JSON template is not wrapped in an array, and only a single JSON object is sent. (Alternately, the job definition's Data Source can be set to None.) This mode is useful for sending a single row of aggregate or reporting data, such as totals and other statistics, or for sending a manually entered, static JSON payload as a webhook. For example, a subsequent step marked as Only on Failure can send a webhook message to another online service such as Slack or Microsoft Teams.
- Wrapper Template - If this setting is populated, the JSON array is wrapped in an outer JSON object, which is then sent. Leave this field blank to not use a wrapper template.
In the preceding template example, notice that the root object is a JSON array. Some systems are unable to accept an array as the root object, or otherwise require the array to be nested within a parent object. Using the preceding example:
[
{
"first": "John",
"last": "Smith",
"favNum": 42
},
...
]
If a wrapper template is entered:
{
"success": true,
"data": %%data%%
}
The actual payload that is sent is in the following format:
{
"success": true,
"data": [
{
"first": "John",
"last": "Smith",
"favNum": 42
},
...
]
}
Other properties can be hard-coded, such as "success": true. However, replacement tokens from the source data table ($$...$$) are not allowed in this field.
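In effect, the wrapper is one final text substitution: the rendered JSON array replaces the %%data%% token. A minimal sketch:

```python
# Minimal sketch: insert the rendered JSON array into the wrapper template.
wrapper = '{ "success": true, "data": %%data%% }'
payload = '[{ "first": "John", "last": "Smith", "favNum": 42 }]'
print(wrapper.replace("%%data%%", payload))
# { "success": true, "data": [{ "first": "John", "last": "Smith", "favNum": 42 }] }
```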
Delimited (CSV) data format
Specify this data format to create any flat-file delimited data, such as CSV, TSV, or pipe-separated files.
Important! At this time, the only line (or record) delimiters allowed are standard line breaks. Choose between Windows-style (CR LF) or Unix-style (LF) line breaks.
Note: Use any typeable character for the field delimiter (such as a comma, pipe, semicolon, or other symbol). For non-typeable characters, such as tabs, compose the line template in a text editor, such as Notepad++, ensure that the tab or other character is represented correctly, and then copy and paste the template into the Line Template field.
- Header template - To add a header line to the file, enter the header line as it should appear in the file.
- Line template - Enter the line template. This template is repeated once per record. A column name from the source data table, surrounded by $$...$$, will be replaced with the value from the current row.
- Insert fields from IQA - Click on a blue field name to automatically insert it into the Line Template field where the cursor is located.
- Auto-Template - Click the Auto-Template button to automatically generate a template based on the source columns. This is an excellent time-saver if the dataset contains many columns.
- Enable UNIX line endings - If enabled, line endings will be written in the LF format (\n or char(10)). If disabled, line endings will be written in the CRLF format (\r\n or char(13)char(10)). For more information, see Newline on Wikipedia.
Note: If writing CSV data, it is always a good idea to enclose each field in double quotes ("..."). For example, instead of the line template $$FirstName$$,$$LastName$$, use "$$FirstName$$","$$LastName$$".
Note: The Auto-Template button populates both the Header Template and Line Template fields. To omit the header line from the file, simply clear the Header Template field.
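Putting the header template, line template, and line-ending option together, file generation can be sketched as follows. This illustrates the described behavior and is not the actual implementation:

```python
# Illustrative sketch: build a delimited file from a header template and a
# per-row line template, honoring the line-ending option.
import re

header_template = '"FirstName","LastName"'
line_template = '"$$FirstName$$","$$LastName$$"'
rows = [
    {"FirstName": "John", "LastName": "Smith"},
    {"FirstName": "Bob", "LastName": "Jones"},
]
unix_line_endings = False
newline = "\n" if unix_line_endings else "\r\n"   # LF vs CRLF

lines = [header_template] + [
    re.sub(r"\$\$(\w+)\$\$", lambda m, r=row: str(r[m.group(1)]), line_template)
    for row in rows
]
print(newline.join(lines))
```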
Conditional multi-step logic
All job steps except the first can define conditional logic, so that the step executes only if the previous step succeeded, or only if it failed. The first step (as determined by its Order field) is always executed; its condition field is ignored.
After the first step, each subsequent step is checked against an internal success flag. If the flag matches the condition, the step runs. Otherwise, the step is skipped.
A step is marked as failed if:
- It throws an unexpected error during processing
- The network transmission fails (connection interrupted, host not found, timed out, invalid username/password, and so on)
- For the HTTP transmission step, if the Fail Step on 4xx/5xx status option is checked, and the remote server responds with a status code between 400 and 599
Success flag logic
The internal success flag is set to false only when a step actually runs and fails; skipped steps never change the flag. When a step is set to execute, the flag is first reset to true, and the step then runs. If the step fails, the flag is set to false. Because skipped steps leave the flag untouched, both an Only on Success step and an Only on Failure step can follow the main step, and each reads the outcome of that main step.
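The flag logic can be modeled with a short sketch. Conditions and outcomes are simplified to booleans here; this mirrors the description above, not the actual scheduler:

```python
# Simplified model of the success-flag logic. Each step is a
# (condition, run) pair; run() returns True on success, False on failure.
def run_job(steps):
    flag = True
    for order, (condition, run) in enumerate(steps):
        first = order == 0
        should_run = first or condition == "always" or \
            (condition == "on_success") == flag
        if not should_run:
            continue        # skipped steps leave the flag untouched
        flag = True         # flag is reset to true before the step runs
        flag = run()        # a failing step sets the flag to false

steps = [
    ("none",       lambda: False),  # 10: Transmit Data -> Error
    ("on_success", lambda: True),   # 20: Success Notification -> Skipped
    ("on_failure", lambda: True),   # 30: Failure Notification -> Success
    ("always",     lambda: True),   # 40: Job Completion -> Success
]
run_job(steps)
```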
Consider the following example:
| Order | Step Name | Step Condition | Success Flag Before Execution | Step Outcome | Success Flag After Execution |
|---|---|---|---|---|---|
| 10 | Transmit Data to Third Party | None (the first step is always run) | Yes | Success | Yes |
| 20 | Failure Notification | Only on Failure | — | Skipped | — |
| 30 | Success Notification | Only on Success | Yes | Success | Yes |
Now review a more complex example:
| Order | Step Name | Step Condition | Success Flag Before Execution | Step Outcome | Success Flag After Execution |
|---|---|---|---|---|---|
| 10 | Transmit Data to Third Party | None (the first step is always run) | Yes | Error | No |
| 20 | Success Notification | Only on Success | — | Skipped | — |
| 30 | Failure Notification | Only on Failure | No | Success | Yes |
| 40 | Job Completion Notification | Always | Yes | Success | Yes |
In the complex example, notice that the Success Notification step did not reset the success flag, because the step did not run. When step 30 (Failure Notification) executed, it was able to read the success flag left by step 10, which was false.
Additionally, a Job Completion Notification step was added, which has a condition of Always. Regardless of the outcome of the other steps, this step will always run.