Adding Archer-to-Archer Standard Data Feeds
Complete the following tasks to add an Archer-to-Archer Standard Data Feed.
Task 1: Add a standard data feed
- From the menu, click Integration > Data Feeds.
- Click the add icon to create a new data feed.
- In the General Information section, do the following:
- Enter the name and description of the data feed.
Note: The alias populates automatically when you first set the name of the data feed and save it. You can modify the alias after you save the data feed. The remaining fields in the General Information section are read-only and populate when the data feed is created, updated, and run.
- Select the option to make the data feed active.
- In the Feed Information section, do the following:
- In the Feed Type field, select Standard. See Adding Transport Only Data Feeds to create a Transport Only data feed.
- In the Target Application field, select the application or questionnaire into which you want to import the data.
- In the Service Account Name field, enter the username of the account associated with the data feed. If the user does not exist, you can create a new user. For more information, see Data Feeds Service Account.
- Do one of the following:
- To continue configuring the data feed, go to the next task.
- To finish setting up the feed later, click Save or Save and Close.
Task 2: Define the transport method
- Go to the Source Connection tab of the data feed.
- From the Source Locale list, select a locale.
- From the Transport Method list, select a transport type.
- In the Logon Properties section, enter the applicable credentials for logging on to the Archer instance. You can specify whether the Archer instance uses anonymous authentication or Windows Authentication.
- In the Transport Configuration section, select a Search Type and do one of the following:
- Enter the credentials of the account that runs the report. The report results are based on the permissions of that account.
Note: The account could be a content administrator with full access permissions to the content of the applications. Do not use the same account that you used to log on.
- Select Use Windows Authentication. Single Sign-On must be configured in the source instance to use this option.
- If you selected Statistic Report ID or Report ID as the Search Type, enter the GUID or ID of the report that contains the source data.
- Enter the Domain and Instance names to be searched against.
- If you selected Search XML as the Search Type, enter the following information:
- Records per file: The number of records retrieved per API call.
- Application GUID: The application that contains the source data.
- Configuration String: Parameters passed to the SearchRecords method in the SOAP API to run a search (see the example at the end of this task).
- In the Proxy Option field, select Use System Proxy.
- In the applicable fields, provide the name, port ID, and domain of the proxy server and the user credentials to log on to the proxy server.
- (Optional) The data feed creates a local copy of the source data for further processing. In the Post Processing - Local copy of source file section, select one of the following options to specify how the data feed handles the local copy after processing the source data.
The following table describes the options for post processing the local copy of the source data.

| Option | Description |
|---|---|
| Delete | Deletes the processed source file when the data feed completes successfully. The data feed also deletes any local copy of the source information. |
| Rename | Saves the source file under a new name when the data feed completes successfully. In File Path and Name, specify the new name for the file and the location to save the file. To save the data, ensure that the account running the Job Engine service can access the path of the destination file. If you select this option, you can use filename tokens to specify the location or name of the file. |
Filename tokens
Filename tokens are available for post processing when you want to save the source information and specify a location or name for the file. When you select the Rename option, you can use tokens to generate unique names automatically for the files.
The following tokens are available for renaming data files:
- Now. Inserts a user-defined date format within the new filename. Possible formats include Now(MM/dd/yyyy) or Now(MM-dd-yyyy). See the Microsoft .NET Framework Developer Center for available custom date and time formats.
- DataFileDirectoryName. Inserts the directory name, including the drive, of the original file.
- DataFileName. Inserts the original filename, excluding the directory name and extension.
- DataFileExtension. Inserts the file extension, such as csv, in the new filename.
- DataFileFullName. Inserts the fully qualified filename, including the drive, directory, filename, and extension of the original file.
For example, if the data file came from C:\DataFeed\Source\ESL\processed\ThreatData.csv, files renamed with tokens produce the following output.
Example 1
- Input Tokens: {DataFileDirectoryName}\success\{DataFileName}_{Now(MM.dd.yyyy)}.{DataFileExtension}
- Output: C:\DataFeed\Source\ESL\processed\success\ThreatData_01.31.2008.csv
Example 2
- Input Tokens: \\DFSRepository\{Now(yyyy)}\{Now(MM)}\{DataFileName}_success.{DataFileExtension}
- Output: \\DFSRepository\2008\01\ThreatData_success.csv
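If you need to predict the renamed path outside of Archer, for example in a housekeeping script, the token substitution shown in the examples above can be approximated with a few lines of code. The following Python sketch is an illustration of the documented substitution rules only, not the Data Feed Manager's implementation; it covers just the tokens listed above and translates only the date-format pieces used in the examples.

```python
import ntpath
import re
from datetime import datetime

def expand_filename_tokens(template, data_file, now=None):
    """Illustrative expansion of the rename tokens described above.

    Mimics the documented behavior for the common cases only; this is not
    the Data Feed Manager's own code.
    """
    now = now or datetime.now()
    directory, filename = ntpath.split(data_file)   # Windows-style paths
    name, extension = ntpath.splitext(filename)

    # Static tokens derived from the original data file path.
    values = {
        "DataFileDirectoryName": directory,
        "DataFileName": name,
        "DataFileExtension": extension.lstrip("."),
        "DataFileFullName": data_file,
    }

    def replace(match):
        token = match.group(1)
        if token.startswith("Now(") and token.endswith(")"):
            # Translate the .NET-style date pieces used in the examples
            # (yyyy, MM, dd) into strftime directives.
            fmt = token[4:-1].replace("yyyy", "%Y").replace("MM", "%m").replace("dd", "%d")
            return now.strftime(fmt)
        return values.get(token, match.group(0))     # unknown tokens left as-is

    return re.sub(r"\{([^{}]+)\}", replace, template)

# Example 1 from above:
print(expand_filename_tokens(
    r"{DataFileDirectoryName}\success\{DataFileName}_{Now(MM.dd.yyyy)}.{DataFileExtension}",
    r"C:\DataFeed\Source\ESL\processed\ThreatData.csv",
    now=datetime(2008, 1, 31),
))
# C:\DataFeed\Source\ESL\processed\success\ThreatData_01.31.2008.csv
```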
- If you selected the Rename post-processing option, enter the location and name of the new file in the File Path and Name field.
- Do one of the following:
- To continue configuring the data feed, go to the next task.
- To finish setting up the feed later, click Save or Save and Close.
Note: The two post-processing steps above are not supported in SaaS.
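The Configuration String above holds the parameters that the feed passes to the SOAP search call. If you want to prototype a comparable search outside the Data Feed Manager, the sketch below shows the general shape of posting a SOAP request from Python. The service path, operation name, element names, and search-options XML are placeholders for illustration only; take the actual service URL, method signature, and search-options schema from the Archer Web Services API documentation for your instance.

```python
import requests
from xml.sax.saxutils import escape

# Placeholder values; substitute the web services URL, session token, and
# search-options XML documented for your own Archer instance.
SERVICE_URL = "https://archer.example.com/ws/search.asmx"    # assumed path
SESSION_TOKEN = "your-session-token"                         # from a prior login call
SEARCH_OPTIONS_XML = "<SearchReport>...</SearchReport>"      # the feed's Configuration String

# A generic SOAP 1.1 envelope. The operation name and namespace below are
# illustrative only and are not guaranteed to match Archer's actual contract.
envelope = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <SearchRecords xmlns="http://archer-tech.com/webservices/">
      <sessionToken>{SESSION_TOKEN}</sessionToken>
      <searchOptions>{escape(SEARCH_OPTIONS_XML)}</searchOptions>
    </SearchRecords>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    SERVICE_URL,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
    timeout=60,
)
response.raise_for_status()
print(response.text)   # raw XML search results returned by the service
```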
Task 3: Define the XML format of the source data
Use this task to transform the XML structure of the source file.
- Go to the Source Parsing tab of the data feed.
- In the Source Format field, select XML.
- In the File Definition section, if you want to modify the structure of your source data, select the applicable option.
- Do one of the following:
- To continue configuring the data feed, go to the next task.
- To finish setting up the feed later, click Save or Save and Close.
Task 4: Configure the source data
Use the Source Definition tab to ensure that only the data that you want is included from the source data. The Source Data tab is available only for Standard data feed types.
Data options
- Import the source data using its current format into Archer.
- Convert the source data to a format that matches the requirements of the target application or questionnaire, using advanced options such as lookup translations and calculations.
- Use the Source Filter tab to only import certain records into the target application or questionnaire.
- Leave the field values blank to return all records from the source data.
- Enter field values and use Advanced Operator Logic to only return specific records from the source data.
- Capture data tokens from the latest run of a data feed to configure the next data feed run.
Process
- Go to the Source Definition tab > Source Data tab.
- To add a source field, in the Actions column, select Add Child.
- Enter a Source Field Name.
- From the list, select a Field Type.
- Select the option to include a Token to use for the next data feed run. To configure the token, see Define data tokens.
Note: Only child source fields support tokens.
- Click Add New.
- Do one of the following:
- To continue configuring the data feed, go to the next task.
- To finish setting up the feed later, click Save or Save and Close.
Task 5: Map the source fields to the target fields
- Go to the Data Map tab > Field Map tab.
- Map the source field to the applicable application or questionnaire field. Do one of the following:
- In the Target Field panel Actions column, click the map icon to enter Map Mode, and select a source field to map the target field to.
- In the Source Field panel, select a source field and click Map a New Target Field to enter Map Mode. In the Target Field panel Actions column, click the map icon to map the source field. To map additional target fields, select a source field and click Map Another Target Field.
Note: To resize the Source Fields column and display long source field names, drag the left or right scroll bar.
- (Optional) To configure a mapped field, in the Actions column, select the settings icon and complete the settings for the selected field.
- (Optional) When you activate the Trust Level option, you can assign a trust level to your source data for a mapped field. Enter a value from 0-99 in the Trust Level field of the target field settings. The highest trust level is 0, and 99 is the lowest.
- (Optional) Do one or more of the following:
- To delete the mapping for a single field, in the Actions column, click the delete icon.
- To delete the mappings for all fields, in the Source Field panel, click the menu icon and select Clear Mappings.
Note: Deleting a mapping also deletes any child field mappings.
- Do one of the following:
- To continue configuring the data feed, go to the next task.
- To finish setting up the feed later, click Save or Save and Close.
Note: A data feed cannot overwrite a previous feed with a higher trust level. For example, a data feed with a trust level of 75 cannot overwrite a data feed with a trust level of 20.
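The note above can be read as a simple numeric comparison: lower numbers mean higher trust. The following Python sketch only illustrates the rule as documented here; the function is hypothetical, and how Archer treats equal trust levels is an assumption.

```python
def can_overwrite(incoming_trust, existing_trust):
    """Illustrative check of the documented rule: lower numbers mean higher
    trust (0 is highest, 99 is lowest), so a feed cannot overwrite data that
    carries a higher (numerically lower) trust level. Whether equal levels
    may overwrite each other is an assumption in this sketch."""
    return incoming_trust <= existing_trust

# The example from the note: a feed with trust level 75 cannot overwrite
# data written with trust level 20.
print(can_overwrite(75, 20))   # False
print(can_overwrite(10, 20))   # True
```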
Task 6: Define key fields
To define key fields, complete the following steps:
- Go to the Data Map tab > Key Field Definition tab.
- In the Reference Fields section, select the field that requires a key field definition.
Note: The Reference Fields section contains the target application or questionnaire and any mapped cross-reference, related records, CAST, or sub-form fields that require the creation of a key field definition.
- In the Key Field Definitions title bar, click the add icon.
Note: You can use the Key Field Definitions section to define the simple keys and the data feed actions during the feed run.
- In the Field Name field, select a target application or questionnaire field that uniquely identifies the record.
- (Optional) Assign combination keys for the record. Do the following:
- In the Actions column, click the add icon.
- In the Field Name field, select a field.
- (Optional) In the Actions column, click the add icon to add simple keys in a hierarchical structure for sub-form field types.
Note: After setting the order of key fields, the Data Feed Manager scans the data source file for matches to each simple key in the specified order. When any key field is found as a match to a field in the target application, the record is considered matched (see the sketch at the end of this task).
- In the Operator column, select the applicable option for the matching criteria for the simple key.
- Do one of the following:
- To continue configuring the data feed, go to the next task.
- To finish setting up the feed later, click Save or Save and Close.
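The ordered matching behavior described in the note above can be pictured with a short sketch. This is a simplified illustration, not the Data Feed Manager's implementation: the record structures and field names are hypothetical, and the Operator chosen for each key is reduced to a plain equality check.

```python
def find_matching_record(source_row, existing_records, key_fields):
    """Check each simple key in the configured order; the first key whose
    value matches an existing target record decides the match."""
    for key in key_fields:
        source_value = source_row.get(key)
        if source_value is None:
            continue
        for record in existing_records:
            if record.get(key) == source_value:
                return record      # matched on this key; later keys are not checked
    return None                    # no key matched; the row is treated as new

# Hypothetical example: match on Asset ID first, then fall back to Hostname.
existing = [{"Asset ID": "A-100", "Hostname": "web01"}]
row = {"Asset ID": "A-200", "Hostname": "web01"}
print(find_matching_record(row, existing, ["Asset ID", "Hostname"]))   # matches via Hostname
```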
Task 7: Define the data feed schedule
You can set up data feeds to run automatically at regular intervals. This reduces the time and effort required to import data from an external file. You can initiate data feeds at various times and configure them to run in regular increments for an indefinite period of time.
You can also run the data feed immediately.
To prevent excess server load, schedule data feeds on a staggered basis.
A reference feed allows you to specify another feed. This indicates to the Data Feed Service that this feed will start running as soon as the referenced feed completes successfully. A successful data feed run processes all input data, completes all expected record updates, and does not report any failures in the Run Details Report.
- Go to the Run Configuration tab > Schedule section.
- Do one of the following to schedule your data feed:
Run on Schedule
Configures the data feed to run on a defined schedule. The Run on Schedule option uses the following fields:
- Start Date. The date on which the data feed schedule begins.
- Start Time. The time at which the data feed starts running.
- Time Zone. The time zone in which the data feed schedule begins.
- Recurring. The interval at which the data feed runs:
- Minutely. Runs the data feed at the set interval of minutes. For example, if you specify 45 in the Every list, the data feed runs every 45 minutes.
- Hourly. Runs the data feed at the set interval, for example, every hour (1), every other hour (2), and so forth.
- Daily. Runs the data feed at the set interval, for example, every day (1), every other day (2), and so forth.
- Weekly. Runs the data feed on a specified day of the week, for example, every other Monday (2), every third Monday (3), and so forth.
- Monthly. Runs the data feed on a specified week of the month, for example, on the first Monday of every month or on the second Tuesday of every third month.
- Every. The interval of the frequency at which the data feed runs.
- On. The frequency of the days of the week on which the data feed runs.
- Weekday. The days of the week on which the data feed runs.
Run After
Runs the current data feed after another data feed. From the Reference Feed dropdown, select the data feed that runs before the current data feed. The Data Feed Service starts the current data feed after the referenced data feed completes successfully. For example, you can have a Threats data feed run immediately after your Assets data feed finishes.
Run Now
Runs the data feed manually. Click the Run Now button in the toolbar on the Manage Data Feed page.
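As a concrete illustration of the Minutely example above, the short Python sketch below lists the run times that a 45-minute interval produces from a given start time. It only restates the documented scheduling arithmetic; it is not the Data Feed Service, and the dates are hypothetical.

```python
from datetime import datetime, timedelta

start = datetime(2024, 1, 1, 6, 0)    # Start Date and Start Time
interval = timedelta(minutes=45)       # Recurring: Minutely, Every: 45

# The first few runs produced by that configuration.
for i in range(4):
    print((start + i * interval).strftime("%Y-%m-%d %H:%M"))
# 2024-01-01 06:00, 06:45, 07:30, 08:15
```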
- To save the data feed, click Save or Save and Close.
- Do one of the following:
- To continue configuring the data feed, go to the next task.
- To finish setting up the feed later, click Save or Save and Close.
Task 8: Define data tokens
- Go to the Run Configuration tab > Tokens section.
- (Optional) Click the add icon to add an additional token.
- In the Value field of the token that you want to modify, enter the updated value.
- (Optional) In the Actions column, click the delete icon in the row of the token that you want to remove.
- Do one of the following:
- To continue configuring the data feed, go to the next task.
- To finish setting up the feed later, click Save or Save and Close.
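Tokens are typically used to carry state, such as a timestamp, from one feed run into the next, as noted in Task 4. The sketch below illustrates that general pattern only; the token name, the local JSON store, and the way the value feeds a search filter are hypothetical and are not how Archer stores or applies its tokens.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path("feed_tokens.json")   # hypothetical stand-in for the feed's token store

def load_tokens():
    """Read the token values captured by the previous run (empty on first run)."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {}

def save_tokens(tokens):
    """Persist updated token values for the next run."""
    STATE_FILE.write_text(json.dumps(tokens, indent=2))

tokens = load_tokens()
last_run = tokens.get("LastRunDate")    # hypothetical token name

# Use the token to restrict the next search to records changed since the last run.
search_filter = f"Last Updated >= {last_run}" if last_run else "all records"
print("This run fetches:", search_filter)

# After a successful run, capture the new token value for the next run.
tokens["LastRunDate"] = datetime.now(timezone.utc).isoformat()
save_tokens(tokens)
```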
Task 9: Set rules for archiving and updating records
You can use the update and archive options to update existing records, create new records, or both. In addition, when target records in Archer cannot be matched to records in the external data source, you can choose to modify or delete those records. This option can be useful if you treat the external system as the authoritative source for the accuracy and current status of your data. By deleting or modifying records in the system that are not in your external data source, you keep the external source and the system synchronized.
- Go to the Run Configuration tab > Target Record Processing section.
- Select whether to create, update, or ignore records in the target application.
The following table describes the options.

| Option | Description |
|---|---|
| Create a new record, if no matching record is found | Creates new records for data found in the source file but not in the target application or questionnaire. |
| Update existing record, if a matching record is found | Updates records in the target application or questionnaire when a simple key match exists in the source file. If you also select Delete for unmatched records and want to retain existing records, make sure you select this option. |
| Ignore data validation rules for the target application | Ignores the data validation rules that apply when importing source data into required fields in the target application. For example, if a values list in the target application requires a minimum of two selections but a source data row provides only one, the source row still imports into the target application. |
- Select what happens when the data feed does not find a matching record in the target application. The following options are available (the sketch at the end of this task illustrates how these options combine):
- Ignore. Does nothing when a matching record is not found.
- Delete. Deletes records in the target application or questionnaire when a matching record is not in the source data. Data feeds match records using simple keys only during the update process, so if you want to retain existing records while archiving the target application, also select Update existing record in the Target Record Processing Options. If you do not select the Update existing record option, the data feed permanently deletes all records in the target application.
- Set a value from a target values list. Sets a value in a Values List field of a record whenever the external data file does not contain a matching record. Use this option to set a Values List value that identifies the record as Inactive or Not Current. For example, if a Devices application has a record for a specific laptop and the external data file does not have a matching record for that laptop, you can use this option to set a Values List field in the laptop record to the value Inactive. When you select this option, you also select the Values List field in the target application or questionnaire and the value that you want to set in that field. This only sets the specified value in the Values List field, without the normal content save behaviors, such as changing the last updated date or triggering calculations and notifications. You cannot set the value in the Values List field of the target leveled application under the following conditions:
- The target is at Level 3 or lower in a leveled application.
- You are modifying the data feed configuration.
In most scenarios, select this option and flag unmatched records with a specific value rather than deleting them. For example, you can add a field called Status to your application with the values Current and Archived. If the data feed cannot find a matching record in the data source for a system record, the system record can be updated to a Status value of Archived.
- Do one of the following:
- To continue configuring the data feed, go to the next task.
- To finish setting up the feed later, click Save or Save and Close.
Task 10: Define optimization and notification settings
Use this task to configure email and job status notifications, and to specify whether to delay related calculations until after the data feed completes.
- Go to the Run Configuration tab > Post Processing section.
- (Optional) Select whether to delay running calculations until the data feed completes and condense calculations for optimal processing. If this option is enabled, the optimized calculation jobs run after the data feed completes. This improves the performance of data feed processing and calculations.
- (Optional) Specify whether to send an email notification when records are created or updated and when the job status changes.
The following table describes the options.

| Option | Description |
|---|---|
| Send email notifications when the data feed publishes and updates records | Select whether to have the data feed trigger notification emails when records are published or updated. If notifications are not enabled in the selected target application, no notification emails are sent when the data feed runs. |
| Send job status notifications to selected users when job status changes | Select whether to have job status notifications sent to selected users or groups for all statuses (success, warning, and faulted) or for the faulted status only. You can also select email addresses to receive job status notifications. |
- To save the data feed, click Save or Save and Close.
Note: Condense jobs that are in the queue during an upgrade fail after the upgrade. Verify that there are no Condense jobs in the queue before upgrading.