UI overview

This section provides a detailed description of all pages and dialog boxes in the Testing Tool.

1. Integration Objects page

The Integration Objects page looks as follows:

integration objects

On this page you can:

  • synchronize integration objects of the required Agents (see Synchronization with Agent system section);

  • view the last synchronization result (to do it click on Last Sync Result button);

    The Last Sync Result button is available after synchronization and until the page is refreshed.

  • view the integration objects of the selected Agent;

  • view deleted objects (to do it click on Browse deleted objects switcher) - it shows the objects that have been deleted on the remote system since the previous synchronization;

  • remove deleted objects (to do it select the object(s) and click on trash) - it deletes all related objects in the scope of the Testing Tool (e.g. recordings, test cases); it doesn’t delete the object(s) from the Change Tracking Tool → Tracked Objects page;

    You must switch to the deleted objects view to be able to delete them.

  • delete test IFlows for CPI systems (to do it click on menu full and choose the action);

  • add, remove modules for PRO systems (to do it click on menu full and choose the action):

    • Add modules adds Figaf Agent Module or SAP Log Module (depending on chosen value of Messages logging approach option on Agent Configuration) to communication channel’s modules chain;

    • Remove modules is opposite to Add modules;

    • Remove ALL modules triggers removing from all objects. It is used while uninstallation;

      Add modules and Remove modules can be used only for primary objects (sender agreements for wildcard sender ICOs, wildcard receiver ICOs for receiver agreements, sender scenario in the bridges).

    • Modules management report downloads modules management history report with historical changes for communication channels made by Figaf Tool;

    • PI agent report downloads report with channels, operation mappings and function libraries configurations.

  • record messages to start configuring testing (to do it select the object(s) and click on Record messages button) - it opens Create recording requests dialog box;

  • create test case (to do it select the object and click Create Testcase) - it opens Test case creation page;

    Record messages and Create Testcase can be used only for primary objects (sender agreements for wildcard sender ICOs, wildcard receiver ICOs for receiver agreements, sender scenario in the bridges).

    Selected objects should be licensed for Testing (see Configure Object Licenses for details).

  • open details of integration objects (to do it click on recordings io or test cases io buttons) - it opens Integration Object Details page (see Integration Object Details page).

2. Integration Object Details page

Integration Object Details page includes several tabs with various information about integration object:

  1. Common Information tab shows the object metadata. It looks a little bit different for PRO and CPI objects:

    1. For PRO objects:

      io details pro

      Here you can open Tracked Object Details clicking on Tracked Object, and configure the following parameters:

      1. Scenario type configures the scenario type for the integration object. Enable Override scenario type to be able to configure the parameter. In most cases Figaf Tool initializes this parameter appropriately, but in some cases it should be defined manually. Be careful with updates: changing this parameter may lead to unexpected behavior.

      2. Messages Logging Approach configures messages logging approach for the integration object. Only SAP Log Module or ICO Logging can be configured. Enable Override Messages Logging Approach to be able to configure the parameter. The option is visible when Use the most appropriate messages logging approach by default per scenario is enabled for the agent.

      3. Scenario validation contains the following validations:

        1. Whether it is OK to test without modules: warning - if ICO Logging is used for an object having custom modules; information - for SAP Log Module/Agent Module.

        2. Undetermined scenario type. Default value for ICO is UNDEFINED. If processing type is UNDEFINED the following operations are blocked for the ICO: modules addition (deletion should work), recording creation, test case running, polling.

        3. Undetermined default adapter modules (that affects modules addition): error for SAP Log Module/Agent Module, warning for ICO Logging.

        4. Detect if ICO logging is configured properly for the current scenario. If the settings are not found at all, there is a message to check whether BI/AM logging is enabled (they are global or the scenario is not 7.5):

          1. When logging settings are not found: information when SAP Log Module is enabled.

          2. When logging settings are found: information - if BI and AM are not found and SAP Log Module is enabled; warning - if BI and AM are not found and ICO Logging is enabled.

      4. Communication channels with information about modules. It’s possible to Add Modules and Remove Modules on this page.

      5. Channel selection strategy configures strategy applied to channels listed in Channels filter for modules update (EXCLUDE_CHANNELS (default) or INCLUDE_CHANNELS).

      6. Channels filter for modules update - is a list of communication channels which shouldn’t be updated (if Channel selection strategy is EXCLUDE_CHANNELS) or which should be updated (if Channel selection strategy is INCLUDE_CHANNELS). Start typing the communication channel name and select it to add it to the list.

        Channel selection strategy and Channels filter for modules update options must be configured for each ICO/SA/RA separately.
      7. Message interfaces which should be polled - is a list of message interfaces to be polled.

      8. Update channels with passwords. If false, it prevents updating of communication channels if they have password parameters in the modules chain.

      9. Has external sender. This option should be set manually if a separate splitter scenario is used for triggering the current scenario, or when an external correlation ID is sent with the message and the SAP Log Module or ICO Logging integration type is configured on the Agent. The second case is required because we search Adapter Framework Data entries (entries in SAP Message Monitoring) and determine root entries, where a root entry is by default an entry with matching interface info and without a parent ID and correlation ID. Enabling the Has external sender option disables some checks for root entry determination and makes it possible to have a root entry with a link to an external parent. However, such a configuration will cause issues when non-root messages with the same interface and with a link to a real parent entry are fetched from SAP Message Monitoring.

      10. Add LogVersion status to the messages lookup during the polling - enable it to include the LogVersion status in the messages lookup during polling.

      11. Test separately from parent scenario enables possibility to test the ICO in Async/Sync bridge separately. The option is available on Async/Sync bridges child objects pages. The option is relevant only for SAP log module/ICO Logging integrations.

        You also need to enable Record synchronous service response if exists setting on recording configuration page.

      12. Enable old polling approach enables old polling approach for scenario. That option has effect if old polling approach is not enabled globally by default (in bootstrap properties: irt.testing.pro.polling.async-with-splitter-scenario.use-old-approach, irt.testing.pro.polling.async-without-splitter-scenario.use-old-approach, irt.testing.pro.polling.sync-scenario.use-old-approach, irt.testing.pro.polling.async-sync-bridge-scenario.use-old-approach). When the option is enabled the following settings are shown:

        1. Enable automatic messages lookup optimization during the polling. Default value is true. Disable this option if you poll messages for a scenario with one receiver interface and one receiver channel but with a message split, or a scenario with an interfaces/receiver/message split without BI staging enabled, and your agent uses SAP Log Module or ICO Logging. If it’s false, the following options can be configured:

          1. Forcibly lookup successors during the polling enables successors lookup using message id.

          2. Forcibly lookup parent during the polling enables parent lookup by msgId using refId/parentId.

          3. Forcibly lookup by Reference ID/Correlation ID during the polling enables lookup by refId/correlationId using message id.
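        If the old approach should instead be enabled globally, the bootstrap properties listed above can be set in standard properties-file syntax (the property names are from the text; the boolean values are illustrative):

```properties
# Illustrative values - enable/disable the old polling approach globally per scenario kind
irt.testing.pro.polling.async-with-splitter-scenario.use-old-approach=true
irt.testing.pro.polling.async-without-splitter-scenario.use-old-approach=true
irt.testing.pro.polling.sync-scenario.use-old-approach=false
irt.testing.pro.polling.async-sync-bridge-scenario.use-old-approach=false
```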

    2. For CPI objects:

      io details cpi
      1. For CPI IFlow objects Initialize inbound order number from SAP_ApplicationID configures order numbers initialization for inbound messages using SAP_ApplicationID (see e2e flow description here).

      2. For IFlow Chain objects Sender IFlow configures sender IFlow in the chain, Chained IFlows configures IFlows following Sender IFlow in the chain.

  2. Recording Configuration tab shows current recording configuration and provides functionality to change the configuration. The following settings are configured:

    1. For PRO objects:

      io recording config PRO
      • Messages Count defines how many messages will be recorded.

      • per unique message/partner combination sets up a recording strategy per message/partner combination, so that the expected message groups count means n messages per message/partner. It also requires a timeout in hours that defines the expected recording period.

      • Message Expression defines some specific text, e.g. 'demo2019'.

      • Partner Expression defines an XPath expression, e.g. '//Seller', that is evaluated against the message to extract the partner identifier.

      • Order expressions define a correlation value or ID used to order incoming messages. You need it when a message is being split; otherwise it is difficult to determine which of the messages you need to compare.

      • Message Filter and Partner Filter define interfaces to be recorded.

      • Use alphanumeric order number enables calculation of order number as a string instead of a number.

      • Don’t calculate order number for a single entry - if true, order number is set to 1 (if there are 1 inbound and 1 outbound messages).

      • Record synchronous service response if exists is used when you have a sync call. We test it the following way:

        • A message is being processed.

        • The first part is the sender adapter.

        • Then it goes to the receiver adapter.

        • We log the message before it is sent to the SOAP channel, for instance.

        • We log the response coming back from the SOAP channel.

        • We log the message just before it is delivered to the receiver system.

        • If Record synchronous service response if exists is true, we log the response coming back and use it to set up testing.

          Enable it if you record messages for a child ICO in an Async/Sync bridge.

      • Record payloads logged by the adapter if possible is used to record JSON payloads logged by the Rest adapter or CSV payloads logged by the File adapter. Supported only for SAP Log Module and ICO Logging message retrieving ways.

      • Record BI caption for inbound messages forcibly is used to record BI message instead of irtLogStage1. Supported only for SAP Log Module message retrieving way.

    2. For CPI objects:

      io recording config CPI
      • Messages Count defines how many messages will be recorded.

      • Message Expression defines some specific text, e.g. 'demo2019'.

      • Partner Expression defines an XPath expression, e.g. '//Seller', that is evaluated against the message to extract the partner identifier.

      • Order expressions define a correlation value or ID used to order incoming messages. You need it when a message is being split; otherwise it is difficult to determine which of the messages you need to compare.

      • Use alphanumeric order number enables calculation of order number as a string instead of a number.

      • Don’t calculate order number for a single entry - if true, order number is set to 1 (if there are 1 inbound and 1 outbound messages).

      • Download additional payloads - if true, additional payloads (e.g. MPL attachments, datastore entries) are recorded.

      • Use only finishing run steps enables functionality when only the last messages in a test case will be compared. If true, Run steps selection strategy and Run steps filter can’t be set.

      • Run steps selection strategy and Run steps filter configure steps which should be recorded. Since 2209 it’s possible to configure run steps filter in BPMN Model with viewer dialog clicking on configuration button.

        If it’s IFlow Chain object, it’s possible to configure Run steps filter for each IFlow used in IFlow Chain separately.
  3. Test Configuration tab shows test configurations of the integration object. The following properties can be configured here:

    1. Comparison Type defines used comparison strategy. Most of the time AUTO is fine.

    2. EDI functional group correlation key path defines path to data element or component element which can be used as a group identifier during comparison. Define a value if there are several functional groups in EDI outbound messages. The path format is described below.

    3. EDI message correlation key path defines path to data element or component element which can be used as a message identifier during comparison. Define a value if there are several messages in EDI outbound messages. Don’t define a value if there are several groups with 1 message in each group.

      EDI functional group correlation key path and EDI message correlation key path must be configured in the special format:

      <segment prefix><data element index and options><component element index (optional)>

      where:

      • <segment prefix> - shouldn’t contain [ or ], for example, UNE, DTM+137.

      • <data element index and options>: [<standalone index>] or [<standalone index>|<option>].

      • <component element index>: for example, 1.

        • <standalone index>: for example, 1

        • Available option:

          • substring - the option defines a substring rule for group/message correlation id path in EDI comparison. The option has the following syntax: substring(<characterIndex>|<characterIndex>), where characterIndex is index of the first or last character of substring that should be used as an identifier, could have positive or negative value:

            • substring(-2|), substring(-2), and substring(-2|0) use the last 2 characters, e.g. somestriNG.

            • substring(-5|-1) uses 4 characters starting from the fifth character from the end, e.g. somesTRINg.

            • substring(1|-1) uses all characters starting from the second and ending with penultimate, e.g. sOMESTRINg.

            • substring(-1|1) uses the first and the last characters, e.g. SomestrinG.

      Examples:

      • Only segment prefix: UNE, DTM+137.

      • Segment prefix with data element index: UNH[0], IMD[1].

      • Segment prefix with data element index and option: UNE[1|substring(-2|)].

      • Segment prefix with data element index and component element index: UNB[2][4].
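      The substring examples above can be modeled as follows (an illustrative sketch of how we read the documented semantics, not the tool's actual implementation):

```python
def edi_substring(s, start, end=None):
    """Model of substring(<start>|<end>) from the docs (illustrative only).

    A missing or zero <end> means "to the end of the string"; negative
    indexes count from the end; if start lands after end, the selection
    wraps around the string boundary.
    """
    n = len(s)
    a = start + n if start < 0 else start
    b = n if end in (None, 0) else (end + n if end < 0 else end)
    return s[a:b] if a < b else s[a:] + s[:b]

s = "somestring"
print(edi_substring(s, -2))      # last 2 characters -> "ng"
print(edi_substring(s, -5, -1))  # 4 chars from the 5th-from-end -> "trin"
print(edi_substring(s, 1, -1))   # second to penultimate -> "omestrin"
print(edi_substring(s, -1, 1))   # wraps: last + first character -> "gs"
```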

    4. Item Ignore defines XPath expressions (divided by semicolon) to ignore, e.g. '/Invoice/Date'.

      For EDI type you can configure items in the special format

      <segment prefix><data element indexes and options><component element indexes (optional)>+<data element indexes and options><component element indexes (optional)>+<data element indexes and options><component element indexes (optional)>+...

      where:

      • <segment prefix> - shouldn’t contain [ or ], for example, UNE, DTM+137.

      • <data element indexes and options>: [<standalone/ranged index>, …, <standalone/ranged index>|<option1>, <option2>]

      • <component element indexes>: [<standalone/ranged index>, …, <standalone/ranged index>]

        • <standalone index>: for example, 1

        • <ranged index>: for example, 0-4

        • Available options:

          • empty-missing

          • java formatting which matches regex %[\\d.]*([dfeg]{1})$, for example: %.3f, %.6e.

      Examples:

      • Only segment prefix: UNE, DTM+137.

      • Segment prefix with data element indexes: UNH[0], IMD[0-2], IMD[0,2].

      • Segment prefix with data element indexes and options and component element indexes: UNB[0][1|empty-missing][0][2-5,7-8,9-10][4,5].
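      As an illustration of the ignore-entry format above, a minimal parser (our own sketch, not the tool's) could split a single entry into its segment prefix, element indexes, and options:

```python
import re

def parse_ignore_entry(entry):
    """Split an EDI Item Ignore entry into segment prefix and bracket groups.

    Illustrative parser for the documented format; since the segment prefix
    may not contain '[' or ']', splitting on the first '[' is safe.
    """
    prefix = entry.split("[", 1)[0]
    parsed = []
    for group in re.findall(r"\[([^\]]*)\]", entry):
        if "|" in group:
            indexes, options = group.split("|", 1)
            parsed.append((indexes.split(","), options.split(",")))
        else:
            parsed.append((group.split(","), []))
    return prefix, parsed

print(parse_ignore_entry("UNB[0][1|empty-missing]"))
print(parse_ignore_entry("IMD[0-2]"))
```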

      For XML type you can configure items in the special format

      <XPath><options>

      where:

      • <XPath> - standard XPath expression, for example, //targetElem.

      • <options>: {{<option1>,<option2>}}

        • Available options:

          • empty-missing

          • java formatting which matches regex %[\\d.]*([dfeg]{1})$, for example: %.3f, %.6e.

          • @<xml_attribute> which configures ignoring of some particular xml attribute differences.

      Examples:

      • Only XPath expression: //targetElem.

      • XPath expression with additional ignoring options: /targetElem{empty-missing,%.3f}.

      • XML attribute ignoring: /targetElem/@id.
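      The java-formatting option can be validated against the regex quoted above; the comparison step shown last is our assumption about how formatted values are matched:

```python
import re

# Regex from the docs: a Java format spec such as %.3f or %.6e
FORMAT_OPTION = re.compile(r"%[\d.]*([dfeg]{1})$")

for option in ("%.3f", "%.6e", "empty-missing"):
    m = FORMAT_OPTION.match(option)
    print(option, "->", "format option" if m else "not a format option")

# Assumed comparison step: values are formatted before comparing,
# so 1.2341 and 1.2342 are considered equal under %.3f.
print("%.3f" % 1.2341 == "%.3f" % 1.2342)  # True -> both "1.234"
```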

    5. Message Properties Ignore defines dynamic properties (divided by semicolon) to ignore, e.g. 'X-Vcap-Request-Id'. If you have a lot of dynamic properties with the same prefix (e.g. DynamicProperty1 and DynamicProperty2), you can define just the common prefix (DynamicProperty) to ignore all dynamic properties with this prefix during comparison.

      Item Ignore and Message Properties Ignore properties can be configured during result comparison in Differences dialog that can be opened from Test Run Details page.
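      The prefix matching described above can be sketched as follows (the matching rule and helper name are assumptions based on the text):

```python
def is_ignored(prop_name, ignore_spec):
    """True if the property matches any semicolon-separated entry,
    treating each entry as an exact name or a common prefix."""
    return any(prop_name.startswith(entry.strip())
               for entry in ignore_spec.split(";") if entry.strip())

spec = "X-Vcap-Request-Id;DynamicProperty"
print(is_ignored("DynamicProperty1", spec))   # True (prefix match)
print(is_ignored("DynamicProperty2", spec))   # True
print(is_ignored("X-Vcap-Request-Id", spec))  # True (exact match)
print(is_ignored("OtherProperty", spec))      # False
```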

    6. Order Expressions defines how to order messages.

    7. Collection Item Identification Expressions (only for XML type) defines rules to compare a collection with the different order of elements in the expected and actual messages. This setting should be configured in the following format:

      <path to the collection>-><relative path to the identification element>

      If the path to your collection is /example/collection and the path to the element identifying the collection item is /example/collection/object/id, then Collection Item Identification Expressions is /example/collection->object/id.
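      A sketch of how such an expression could drive order-insensitive comparison (the XML shapes and helper names below are illustrative assumptions, not the tool's implementation):

```python
import xml.etree.ElementTree as ET

def items_by_id(xml, coll_path, id_path):
    """Index collection items by their identification element."""
    root = ET.fromstring(xml)
    # ElementTree paths are relative to the root element, so strip
    # the leading "/<root-tag>/" from the absolute collection path
    rel = coll_path.lstrip("/").split("/", 1)[1]
    collection = root.find(rel)
    return {item.findtext(id_path): item for item in collection}

expression = "/example/collection->object/id"
coll_path, id_path = expression.split("->")

expected = ("<example><collection>"
            "<item><object><id>A</id></object><value>1</value></item>"
            "<item><object><id>B</id></object><value>2</value></item>"
            "</collection></example>")
# Same items, different document order
actual = ("<example><collection>"
          "<item><object><id>B</id></object><value>2</value></item>"
          "<item><object><id>A</id></object><value>1</value></item>"
          "</collection></example>")

exp_items = items_by_id(expected, coll_path, id_path)
act_items = items_by_id(actual, coll_path, id_path)

# Compare items pairwise by identifier, ignoring document order
for key, exp_item in exp_items.items():
    print(key, exp_item.findtext("value") == act_items[key].findtext("value"))
```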

    8. Result Notification Delay (in minutes). When the current test object is used in a test suite, the value of this option is used to calculate the minimum amount of time between generating the test suite run report and sending it by email. The Result Notification Delay for a test suite is calculated as the maximum of the Result Notification Delay values configured for each test object used by the suite. Configure the delay appropriately when testing of the current scenario takes a long time.
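      For example, with hypothetical delays of 5, 30, and 10 minutes across three test objects, the suite-level delay resolves to 30 minutes:

```python
# Result Notification Delay per test object, in minutes (hypothetical values)
object_delays = [5, 30, 10]

# The suite-level delay is the maximum of the per-object delays
suite_delay = max(object_delays)
print(suite_delay)  # 30
```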

    9. Define headers from inbound message to send forcibly enables forcible sending of headers configured in Headers to send forcibly field.

      The feature works only for IFlows with a non-standard sender channel. In that case a copy IFlow (with a replaced sender) is created with a ProcessDirect sender if the headers are defined. Otherwise, either no copy IFlow is created (when the sender channel is standard) or it is created with an HTTP sender, which does not support sending some specific headers (e.g. Camel headers).
    10. Use alphanumeric order number enables calculation of order number as a string instead of a number.

    11. Don’t calculate order number for a single entry - if true, order number is set to 1 (if there are 1 inbound and 1 outbound messages).

    12. Use expected message encoding during comparison. Enable it if you want to forcibly use the encoding from the expected message instead of the dynamically determined encoding from the actual result. Has effect only for the following message types: EDIFACT, X12, TEXT.

    13. Ignore XML attributes configures XML attributes ignoring during comparison.

    14. XML attributes comparison configures separate XML attributes comparison.

    15. Trim whitespaces before XML comparison enables ignoring whitespaces during comparison of XML payloads. The option is enabled by default.

    16. For PRO objects you can configure additional settings for testing:

      1. Don’t use original receiver communication (mock endpoints) (for BE scenarios) enables testing with mock data. It replaces the original receiver channel links in the test scenario with our REST channel links. For each original receiver channel, Figaf Tool creates and keeps up to date a REST channel with the prefix FigafIRT_. That channel calls an endpoint that just returns code 200 without a body. All custom modules from the original channel are copied to that REST channel, so requests to the original receiver are avoided.

      2. Use splitter - if true, the B2B splitter is used. If you have B2B messages that you want to send in through the B2B splitter, you can specify which splitter scenario to use for setting them up and sending them in.

        io test config splitter

        There is also global splitter setting that you can use (see Agent Configuration, field Default splitter on Agent dialog box).

      3. Decide to test with/without sender modules automatically - true by default. When the sender communication channel has custom modules, the standard way of testing PRO scenarios in Figaf Tool (through the common infrastructure scenario) will not work correctly because these custom modules will not be triggered. Figaf Tool detects such a case automatically if Decide to test with/without sender modules automatically is true; as a result, a special infrastructure scenario is created to make the test correct. Otherwise you can decide whether to apply such behavior for the current test object.

        After disabling Decide to test with/without sender modules automatically, a new option Run migration tests with modules on sender side is available. You have to decide yourself whether it should be selected.

      4. Generate new numbers for each message - it adds the NumberRange Module to the test channel, so you can use the B2B NRO to maintain a list of numbers.

      5. Don’t send message after testing - it can be applied only to standard EO/EOIO scenarios and only when Figaf Agent Module integration type is configured on Agent. Agent Module throws specific exception after processing of 2nd stage message (after sender mapping). Then Figaf Tool triggers cancellation of such message. As a result, the message is not sent to real receiver but tested in Figaf Tool. It causes critical limitation for BE scenarios because only sender mapping is tested.

      6. Test 3rd stage messages in baseline test cases - if true, it creates test run result for 3rd stage message (sync service response) so comparison between expected and actual results is processed.

      7. Validate receiver integration - if true, AFD status will be compared with the expected value success. If they are not equal, it will be shown on Differences dialog as a dynamic configuration property http://figaf.com/testing/virtual-property|MessageStatus. The feature works only for SAP Log module and ICO Logging.

      8. Don’t skip adapter-specific properties by default during message sending - if true, message properties that start with one of prefixes DEFAULT_IGNORED_PROPERTY_PREFIXES or equal to one of DEFAULT_IGNORED_PROPERTIES aren’t skipped during message sending.

      9. Define additional dynamic properties to send with the test message enables configuration of additional dynamic properties.

    17. For CPI objects you can configure:

      1. Don’t use original receiver communication (mock endpoints) enables testing with mock data. Linked test cases are run with replacing real receivers responses by previously recorded responses (from corresponding receivers) on Agent Object. Figaf Tool clones original test object (IFlow) with replacing sender and receiver channels to HTTPS where receiver channel endpoints are special Figaf Tool API. To use this mocking service you need to have configured Figaf Tool to use the cloud connector (see cloud connector properties configured in run command).

      2. Use a separate IFlow to send messages enables testing by calling another IFlow. If true, Sender IFlow can be selected.

        Use a separate IFlow to send messages and Don’t use original receiver communication (mock endpoints) can’t be used simultaneously.

        Please note that if shared configuration is configured for the IFlow with enabled Use a separate IFlow to send messages, Test with mock data should be disabled in the shared configuration.

        If Sender IFlow has multiple entry points, the first one will be selected for running.

        If Sender IFlow has a not standard sender channel, a copy IFlow (with replaced sender) will be created. But a target IFlow will never be changed if we use a Sender IFlow.

      3. Use only finishing run steps enables functionality when only the last messages in a test case will be compared. If true, Run steps selection strategy and Run steps filter can’t be set.

      4. Run steps selection strategy and Run steps filter configure steps which should be tested. Since 2209 it’s possible to configure run steps filter in BPMN Model with viewer dialog clicking on configuration button.

        If it’s IFlow Chain object, it’s possible to configure Run steps filter for each IFlow used in IFlow Chain separately.
  4. Recording Requests tab looks as follows:

    io recordings

    Several actions can be done on this tab:

    1. View related test suites by clicking on Test Suite Title. It opens the Test Suite page.

    2. View recording details by clicking on details. It opens the 'Recording Request Details' page.

    3. Copy a recording by clicking on the dots icon and then on the Copy button. It opens the 'Copy Recording Requests' dialog box.

      copy recordings dialog

      In this dialog box you can configure created recording:

      1. Fetch policy switches the way how messages will be fetched from the message monitoring. There are several possible values:

        • Fetch historic messages - takes messages from message monitoring in order of their creation depending on specified period in Historic lookup period property (the left bound is calculated as current time minus specified lookup period). When you use Fetch historic messages option, be sure that historic messages have enough metadata to record them. For SAP Log Module integration type messages should have logged state with irtLogStage<x> labels, where x=1…4, for ICO Logging integration type messages should have logged state with BI and AM labels.

          Fetch historic messages is used for all CPI agents and when PRO Agent has Messages logging approach equals to SAP Log Module or ICO Logging (see Agent Configuration).
        • Fetch future messages - only messages sent after recording creation will be recorded.

        • Fetch chosen message groups enables message groups configuration. Once it’s selected, Chosen root messages can be configured by clicking on configuration button. It opens search dialog where you can select root message groups for recording.

          Fetch chosen message groups is used only when PRO Agent has Messages logging approach equals to SAP Log Module or ICO Logging (see Agent Configuration) and scenarios used in recordings aren’t RDs.
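        The left-bound calculation for Fetch historic messages can be sketched as follows (the 24-hour lookup period and the fixed current time are assumed example values):

```python
from datetime import datetime, timedelta

lookup_period = timedelta(hours=24)  # assumed Historic lookup period
now = datetime(2024, 1, 2, 12, 0)    # fixed "current time" for the example

# Left bound of the fetch window: current time minus the lookup period
left_bound = now - lookup_period
print(left_bound)  # 2024-01-01 12:00:00
```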
      2. RD historic lookup period (only for PRO agents) is shown for agents with dual stack recording enabled (see the property Enable dual stack recording on Agent Configuration) and is similar to Historic lookup period (only for PRO Agents).

      3. Update channels with password (PRO Agents).

      4. You can configure more properties by clicking on config:

        • For PRO objects:

          • Messages Count defines how many messages will be recorded.

          • per unique message/partner combination sets up a recording strategy per message/partner combination, so that the expected message groups count means n messages per message/partner. It also requires a timeout in hours that defines the expected recording period.

          • Message Expression defines some specific text, e.g. 'demo2019'.

          • Partner Expression defines an XPath expression, e.g. '//Seller', that is evaluated against the message to extract the partner identifier.

          • Order expressions define a correlation value or ID used to order incoming messages. You need it when a message is being split; otherwise it is difficult to determine which of the messages you need to compare.

          • Message Filter and Partner Filter define interfaces to be recorded.

          • Use alphanumeric order number enables calculation of order number as a string instead of a number.

          • Don’t calculate order number for a single entry - if true, order number is set to 1 (if there are 1 inbound and 1 outbound messages).

          • Record synchronous service response if exists is used when you have a sync call. We test it the following way:

            • A message is being processed.

            • The first part is the sender adapter.

            • Then it goes to the receiver adapter.

            • We log the message before it is sent to the SOAP channel, for instance.

            • We log the response coming back from the SOAP channel.

            • We log the message just before it is delivered to the receiver system.

            • If Record synchronous service response if exists is true, we log the response coming back and use it to set up testing.

              Enable it if you record messages for a child ICO in an Async/Sync bridge.

        • For CPI objects:

          • Messages Count defines how many messages will be recorded.

          • Message Expression defines some specific text, e.g. 'demo2019'.

          • Partner Expression defines an XPath expression, e.g. '//Seller', that is evaluated against the message to extract the partner identifier.

          • Order expressions define a correlation value or ID used to order incoming messages. You need it when a message is being split; otherwise it is difficult to determine which of the messages you need to compare.

          • Use alphanumeric order number enables calculation of order number as a string instead of a number.

          • Don’t calculate order number for a single entry - if true, order number is set to 1 (if there are 1 inbound and 1 outbound messages).

          • Download additional payloads - if true, additional payloads (e.g. MPL attachments, datastore entries) are recorded.

          • Use only finishing run steps enables functionality when only the last messages in a test case will be compared. If true, Run steps selection strategy and Run steps filter can’t be set.

          • Run steps selection strategy and Run steps filter configure steps which should be recorded. Since 2209 it’s possible to configure run steps filter in BPMN Model with viewer dialog clicking on configuration button.

            If it’s IFlow Chain object, it’s possible to configure Run steps filter for each IFlow used in IFlow Chain separately.

          As a result of recording copying, a new recording will be created.

    4. Delete selected recording requests by clicking on the red trash button.

      In the current version of Figaf Tool, recordings are not stopped; they are deleted.
    5. Create Test Cases by clicking on the play with plus button. It opens the 'Test Case creation' dialog box. After the test case is created, the source recording is removed from the recordings table.

      1. You should create test cases for recordings that have recorded inbound/outbound messages; otherwise you will get an error.

      2. You can also create test cases for recordings that have fewer recorded inbound/outbound messages than the configured Messages Count. In that case a test case with the available messages will be created, and the recording will stay in the Recordings table (with the lower message count).

    6. Poll remote messages clicking on Poll remote messages button. Before polling remote messages you should send the messages to the scenario, or wait for messages if the scenario is scheduled; otherwise you won’t get any messages.

      Since Figaf Tool 2.15 automatic polling is added for SAP CPI Agents. Once the messages have been run successfully, automatic polling is started.

      Since Figaf Tool 2.15.2 active polling statistics can be viewed. If there is polling in progress for the integration objects and you open another poll remote data dialog, then:

      • recordings/test runs which are in both polling requests will be marked as SKIPPED_AS_DUPLICATE, and statistics will be copied from the original item and shown in the UI in the new one;

      • recordings/test runs which are not in the original polling request will be SKIPPED in the new one.

      Since Figaf Tool 2305 polling will not be started if there is already a started polling request with the same objects in progress, or if polling has already been completed for all selected objects.

      If you have polled almost all messages and only need to poll a small number of messages to finish testing, you can enable the Ignore messages cache during polling option on the Application configuration page. When this option is enabled, already polled messages won’t be skipped. Note that this affects performance when the number of messages is large and you only need to poll a few to finish testing.
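The duplicate-handling rule above can be sketched as follows. The statuses SKIPPED_AS_DUPLICATE and SKIPPED come from this documentation; the function name and data model are invented for illustration:

```python
# Sketch: items present in both the in-progress and the new polling
# request become SKIPPED_AS_DUPLICATE (statistics copied from the
# original); items absent from the original request become SKIPPED.

def classify_new_request(original_items, new_items, original_stats):
    """Return {item: (status, stats)} for the new polling request."""
    result = {}
    for item in new_items:
        if item in original_items:
            result[item] = ("SKIPPED_AS_DUPLICATE", original_stats.get(item))
        else:
            result[item] = ("SKIPPED", None)
    return result

original = {"rec-1", "rec-2"}
stats = {"rec-1": {"polled": 5}, "rec-2": {"polled": 3}}
new = ["rec-2", "rec-3"]
res = classify_new_request(original, new, stats)
print(res)
# {'rec-2': ('SKIPPED_AS_DUPLICATE', {'polled': 3}), 'rec-3': ('SKIPPED', None)}
```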

    7. Open last polling result clicking on Open Last Polling Result button.

      Open Last Polling Result button is available after polling remote data and until page is refreshed.

    8. Record messages clicking on Record messages button. It opens Create recording requests dialog box.

  5. Agent Test Cases tab shows test cases which have current Integration Object as Agent Object. This tab looks:

    io agent test cases

    On this tab you can:

    1. View test case details clicking on details. It opens Test Case Details page.

    2. Select the test case(s) and do actions:

      1. Run the test case(s) clicking on run or run on buttons:

        • Run uses static links to test objects defined in the test case.

        • Run on resolves these links dynamically at the moment of test run. It is possible to see how these links are resolved through Current Mapping Configuration button - it opens 'Mapping Configuration' dialog box:

          mapping configuration

          There are Agent Objects (parent nodes) and Test Objects (child nodes) in the opened dialog box.

    3. Click on table’s menu full button and choose:

      1. Create Test Case (PRO) opens Test case creation page.

      2. Merge Test Cases merges the messages of two selected test cases. It removes the previous test case run results.

        You should select two test cases with the same Agent Object, Message and Partner.
      3. Add to test suite adds selected test case(s) to new or existing test suite. It opens 'Create Test Suite' dialog box:

        create test suite

        You can choose to create a new test suite or to use an existing one. If you enable the Open Test Suite’s page after creation option, the Test Suite page will be opened after saving.

      4. Trim tests removes messages from selected test case(s). It keeps the number of inbound messages equal to the value of the How many messages to keep in the test case? property of the 'Trim Test Cases' dialog box:

        trim test case
      5. Migrate to CPI (PRO) migrates PI test cases to CPI (see Migrate test cases from PI to CPI scenario).

        It opens the Migrate Test Cases dialog:

        migrate test cases dialog

        Here you can configure the test suite (a new or an existing one) and the CPI IFlow or IFlow Chain created manually or created during PI to CPI migration with the Pipeline All profile. If you want to open the Test Suite page, enable the Open Test Suite’s page after test cases migration option.

        If the selected IFlow was migrated from a PI agent using the PI to CPI migration feature, you are able to configure the mapping between platforms. Since 2208.1 it’s possible to configure IFlow element ids in the BPMN Model with viewer dialog by clicking on the configuration button.

        Once you configure all fields, click on Save. It opens a confirmation:

        migrate test cases confirmation

        As a result, a new test case will be created and attached to the test suite.

        During the copying of the messages, interface-related parameters (stepId, activity, branchId) of the last stage are initialized with the value ${determineDuringFirstRun}. For other stages of bridges and BE scenarios the value ${shouldBeMappedManually} is used. The original stage number is temporarily saved in the step number and is later replaced by the real value; this is needed for BE and bridges to distinguish different original stages.

      6. Delete removes selected test case(s).

      7. Clean removes all test run results and reports of selected test case(s).

  6. Linked Test Cases tab shows test cases which have current Integration Object as test object. This tab looks:

    io linked test cases

    On this tab you can:

    1. View test case details clicking on details. It opens Test Case Details page.

    2. Select the test case(s) and do actions:

      1. Run the test case(s) clicking on run or run on buttons:

        • Run uses static links to test objects defined in the test case.

        • Run on resolves these links dynamically at the moment of test run. It is possible to see how these links are resolved through Current Mapping Configuration button - it opens 'Mapping Configuration' dialog box:

          mapping configuration

          There are Agent Objects (parent nodes) and Test Objects (child nodes) in the opened dialog box.

      2. Detach link between current Integration Object and selected test case clicking on Detach button.

3. Test Suites page

Test Suites page shows all existing test suites. The page looks:

test suites

On this page you can:

  • view test suites unrelated or related to tickets (to see test suites related to tickets, click on the Hide Ticket related Test Suites switcher);

  • reset message interfaces to ${determineDuringFirstRun} for chosen test suites, so that they can be reinitialized at runtime;

  • delete selected test suite(s) clicking on Delete;

  • delete selected test suite(s) run results clicking on Clean;

  • run selected test suite(s) clicking on play blue;

  • view test suite details clicking on details button. It opens Test Suite page.

4. Test Suite page

Test Suite page includes several tabs with various data related to the test suite:

On each tab you are able to edit the test suite data (to start editing click on edit button):

  • Test Suite Title.

    Test Suite Title must be unique.
  • Scheduled - if true, the TestSuiteRunner task runs this test suite according to the configured cron expression (configure the TestSuiteRunner property on Application configuration page).

    Agent Objects used for testing should be licensed, otherwise, scheduled test suites won’t be run.
  • Send report by email - if true, TestSuiteResultNotifier task sends test suite excel report to email configured on Application configuration page (configure TestSuiteResultNotifier, SMTP host, SMTP port, SMTP username, SMTP password, Email Protocol, Email From, Email To properties on Application configuration page).

  • Use default test systems for running enables Run on behavior for scheduled test suite running - the test suite will be run on all Agents licensed as test systems, excluding those marked as production systems.

    You can check licensed Agents on License page. Production systems can be checked on Agent Configuration.
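As an illustration only, a schedule for TestSuiteRunner might look like the fragment below. This assumes the common Spring-style six-field cron syntax (second minute hour day-of-month month day-of-week); check the Application configuration page of your Figaf Tool version for the exact format it expects.

```
# Hypothetical example - run scheduled test suites every night at 02:00
TestSuiteRunner cron expression: 0 0 2 * * *
```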

To save changes click on save button, to reset - reset.

To run the test suite click on run or run on buttons:

  • Run uses static links to test objects defined in the test case.

  • Run on resolves these links dynamically at the moment of test run. It is possible to see how these links are resolved through Current Mapping Configuration button - it opens 'Mapping Configuration' dialog box:

    mapping configuration

    There are Agent Objects (parent nodes) and Test Objects (child nodes) in the opened dialog box.

You can’t run test suite without linked test cases.

To create recordings click on the Record messages button. It opens a dialog where you can select an integration object; then the Create recording requests dialog box will be opened.

  1. RECORDINGS tab looks:

    ts recordings

    Here you can:

    1. View Agent Object clicking on Agent Object Title (it opens Integration Object Details page).

    2. Delete recordings (see corresponding actions here).

    3. View recording configuration clicking on dots icon and then on Config button (see settings here).

    4. Copy recording clicking on dots icon and then on Copy button (see settings here).

    5. Poll remote messages for selected recordings.

    6. Create test cases for selected recordings clicking on play with plus.

  2. LINKED TEST CASES tab shows test cases linked to current test suite and looks:

    ts linked test cases

    Here you can:

    1. View Agent Object clicking on Agent Object Title (it opens Integration Object Details page).

    2. Add test cases clicking on plus button. It opens 'Add Test Cases to the test suite' dialog box:

      ts add test cases dialog

      Just select the Agent, the object and required test cases and click on 'OK'.

    3. Select test case(s) and do actions:

      1. Detach test case(s) from test suite clicking on red trash button. It deletes the link between current test suite and selected test case(s).

    4. View test case details clicking on details button. It opens Test Case Details page.

  3. LAST RESULT tab shows last test suite run result and looks:

    ts last result

    On this tab you can:

    1. View Agent and Test Objects clicking on Agent Object Title or Test Object Title. It opens Integration Object Details page.

    2. Poll remote results clicking on Poll remote results.

      Since Figaf Tool 2.15 automatic polling is added for SAP CPI Agents. Once the messages have been run successfully, automatic polling is started.

      Since Figaf Tool 2.15.2 active polling statistics can be viewed. If there is polling in progress for the integration objects and you open another poll remote data dialog, then:

      • recordings/test runs which are in both polling requests will be marked as SKIPPED_AS_DUPLICATE, and statistics will be copied from the original item and shown in the UI in the new one;

      • recordings/test runs which are not in the original polling request will be SKIPPED in the new one.

      Since Figaf Tool 2305 polling will not be started if there is already a started polling request with the same objects in progress, or if polling has already been completed for all selected objects.

      If you have polled almost all messages and only need to poll a small number of messages to finish testing, you can enable the Ignore messages cache during polling option on the Application configuration page. When this option is enabled, already polled messages won’t be skipped. Note that this affects performance when the number of messages is large and you only need to poll a few to finish testing.

    3. View last polling result clicking on Open Last Polling Result.

      Open Last Polling Result button is available after polling remote data and until page is refreshed.
    4. Export as a test case to PIT (only for PRO) clicking on Export as a Test Case to PIT.

      Export as a Test Case to PIT is available only in the licensed version, after the test suite has been run.
    5. Compare testing results and build reports clicking on check button.

      The option reprocesses comparison and report generation asynchronously. It makes sense to execute the option once you have configured a new "ignore rule".
    6. Do the following actions clicking on menu and choosing corresponding option:

      1. Rerun reruns required test case.

      2. View testcase opens 'Test Case Details' page.

      3. Download report.

        Report type can be one of 2 types:

        1. xlsx report:

          • Contains information about test run/test suite run results.

          • By default only failed results are added to xlsx report. If you want to change this behavior, go to Configuration → Application page and enable Show successful messages in the test suite’s report property.

        2. zip archive with 3 csv reports:

          • test-run-metadata.csv - contains common information about the test run/test suite run (e.g. test object, counters for success/error/unexpected/not compared/unfinished test run/test suite run results).

          • diff-report.csv - contains the diff report for the test run/test suite run.

          • processed-messages-report.csv - contains the processed messages report for the test run/test suite run.

        The report type is defined by the Use bundled CSV report generation strategy property configured on the Application configuration page. If it’s true, the report type is zip; otherwise, it’s xlsx.

      4. View results opens Test Run Details page.
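The bundled zip report can be processed with ordinary CSV tooling. A minimal Python sketch that reads test-run-metadata.csv from such an archive follows; only the three file names come from this documentation, while the column names, separator and sample rows are invented for illustration:

```python
# Sketch of reading the bundled zip report. A tiny in-memory archive
# stands in for a downloaded report; real column layouts may differ.
import csv
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("test-run-metadata.csv",
                "test_object;success;error\nDemoIFlow;4;1\n")
    zf.writestr("diff-report.csv", "message;diff_state\nmsg-1;CHANGED\n")
    zf.writestr("processed-messages-report.csv", "message;status\nmsg-1;ERROR\n")

buf.seek(0)
with zipfile.ZipFile(buf) as zf:
    with zf.open("test-run-metadata.csv") as f:
        rows = list(csv.DictReader(io.TextIOWrapper(f, "utf-8"), delimiter=";"))

print(rows[0]["test_object"], rows[0]["error"])  # DemoIFlow 1
```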

  4. RESULTS HISTORY tab shows results of all test suite runs and looks:

    ts results history

    On this tab you can:

    1. Configure Running Date filter to hide some test suite run results.

    2. Select the result and do actions:

      1. Compare testing results and build reports clicking on check button.

        The option reprocesses comparison and report generation asynchronously. It makes sense to execute the option once you have configured a new "ignore rule".
      2. Poll remote results for unfinished runs clicking on Poll remote results.

        Since Figaf Tool 2.15 automatic polling is added for SAP CPI Agents. Once the messages have been run successfully, automatic polling is started.

        Since Figaf Tool 2.15.2 active polling statistics can be viewed. If there is polling in progress for the integration objects and you open another poll remote data dialog, then:

        • recordings/test runs which are in both polling requests will be marked as SKIPPED_AS_DUPLICATE, and statistics will be copied from the original item and shown in the UI in the new one;

        • recordings/test runs which are not in the original polling request will be SKIPPED in the new one.

        Since Figaf Tool 2305 polling will not be started if there is already a started polling request with the same objects in progress, or if polling has already been completed for all selected objects.

        If you have polled almost all messages and only need to poll a small number of messages to finish testing, you can enable the Ignore messages cache during polling option on the Application configuration page. When this option is enabled, already polled messages won’t be skipped. Note that this affects performance when the number of messages is large and you only need to poll a few to finish testing.

      3. View last polling result clicking on Open Last Polling Result.

        Open Last Polling Result button is available after polling remote data and until page is refreshed.
      4. Export as a test case to PIT (only for PRO) clicking on Export as a Test Case to PIT.

      5. Delete testing results clicking on red trash button.

    3. Download report of the test suite run result clicking on download button in the table row.

      Report type can be one of 2 types:

      1. xlsx report:

        • Contains information about test run/test suite run results.

        • By default only failed results are added to xlsx report. If you want to change this behavior, go to Configuration → Application page and enable Show successful messages in the test suite’s report property.

      2. zip archive with 3 csv reports:

        • test-run-metadata.csv - contains common information about the test run/test suite run (e.g. test object, counters for success/error/unexpected/not compared/unfinished test run/test suite run results).

        • diff-report.csv - contains the diff report for the test run/test suite run.

        • processed-messages-report.csv - contains the processed messages report for the test run/test suite run.

      The report type is defined by the Use bundled CSV report generation strategy property configured on the Application configuration page. If it’s true, the report type is zip; otherwise, it’s xlsx.

    4. View Test Suite Run details clicking on details. It opens Test Suite Run page.

5. Test Suite Runs page

Test Suite Runs page shows the test suites that were run in the period defined in the Running Date filter and looks:

ts runs

The following actions are available on this page:

  1. Viewing test suite details: click on Test Suite and it opens Test Suite page.

  2. Testing results comparison and reports building clicking on check button.

    The option reprocesses comparison and report generation asynchronously. It makes sense to execute the option once you have configured a new "ignore rule".
  3. Remote results polling. Select unfinished test suite run(s) and click on Poll remote results button.

    Since Figaf Tool 2.15 automatic polling is added for SAP CPI Agents. Once the messages have been run successfully, automatic polling is started.

    Since Figaf Tool 2.15.2 active polling statistics can be viewed. If there is polling in progress for the integration objects and you open another poll remote data dialog, then:

    • recordings/test runs which are in both polling requests will be marked as SKIPPED_AS_DUPLICATE, and statistics will be copied from the original item and shown in the UI in the new one;

    • recordings/test runs which are not in the original polling request will be SKIPPED in the new one.

    Since Figaf Tool 2305 polling will not be started if there is already a started polling request with the same objects in progress, or if polling has already been completed for all selected objects.

    If you have polled almost all messages and only need to poll a small number of messages to finish testing, you can enable the Ignore messages cache during polling option on the Application configuration page. When this option is enabled, already polled messages won’t be skipped. Note that this affects performance when the number of messages is large and you only need to poll a few to finish testing.

  4. Downloading consolidated diff report for test suite runs. Select test suite run(s) and click on Diff report button. It will take diff info from all related test runs/test run results with ERROR status and build a CSV file (with semicolon as a separator) with the following information:

    consolidated diff report

    The combination of the fields Message, Partner, Diff State, Old Value and New Value forms a unique key; Affected Test Suites and Affected Messages count are aggregated. Affected Messages count is the count of failed test run results (outbound messages) which contain the error determined by the key. If an error occurred several times in one message, that message is counted only once.

    Correlation between old and new values is currently supported properly only for EDI types (EDIFACT and X12).
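The aggregation described above can be sketched as follows; the grouping key and the once-per-message counting rule come from this documentation, while the row layout and sample data are invented for illustration:

```python
# Sketch: group diff rows by (Message, Partner, Diff State, Old Value,
# New Value), collect affected test suites, and count each failing
# message at most once per key.
from collections import defaultdict

rows = [
    {"message": "Invoice", "partner": "ACME", "diff_state": "CHANGED",
     "old": "10", "new": "12", "suite": "Regression", "msg_id": "m1"},
    {"message": "Invoice", "partner": "ACME", "diff_state": "CHANGED",
     "old": "10", "new": "12", "suite": "Regression", "msg_id": "m1"},  # same message twice
    {"message": "Invoice", "partner": "ACME", "diff_state": "CHANGED",
     "old": "10", "new": "12", "suite": "Nightly", "msg_id": "m2"},
]

suites = defaultdict(set)
messages = defaultdict(set)
for r in rows:
    key = (r["message"], r["partner"], r["diff_state"], r["old"], r["new"])
    suites[key].add(r["suite"])
    messages[key].add(r["msg_id"])  # sets count each message only once per key

for key in suites:
    print(key, sorted(suites[key]), len(messages[key]))
# ('Invoice', 'ACME', 'CHANGED', '10', '12') ['Nightly', 'Regression'] 2
```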

  5. Export as a test case to PIT.

  6. Deleting selected testing results.

  7. Downloading report.

    Report type can be one of 2 types:

    1. xlsx report:

      • Contains information about test run/test suite run results.

      • By default only failed results are added to xlsx report. If you want to change this behavior, go to Configuration → Application page and enable Show successful messages in the test suite’s report property.

    2. zip archive with 3 csv reports:

      • test-run-metadata.csv - contains common information about the test run/test suite run (e.g. test object, counters for success/error/unexpected/not compared/unfinished test run/test suite run results).

      • diff-report.csv - contains the diff report for the test run/test suite run.

      • processed-messages-report.csv - contains the processed messages report for the test run/test suite run.

    The report type is defined by the Use bundled CSV report generation strategy property configured on the Application configuration page. If it’s true, the report type is zip; otherwise, it’s xlsx.

  8. Viewing Test Suite Run details clicking on details. It opens Test Suite Run page.

6. Test Suite Run page

Test Suite Run page shows a historical test suite run and lets you browse related test runs:

ts run page

On this page you can:

  1. View linked test suite. It opens Test Suite page.

  2. Work with test runs:

    1. View Agent and Test Objects clicking on Agent Object Title or Test Object Title. It opens Integration Object Details page.

    2. Poll remote results clicking on Poll remote results.

      Since Figaf Tool 2.15 automatic polling is added for SAP CPI Agents. Once the messages have been run successfully, automatic polling is started.

      Since Figaf Tool 2.15.2 active polling statistics can be viewed. If there is polling in progress for the integration objects and you open another poll remote data dialog, then:

      • recordings/test runs which are in both polling requests will be marked as SKIPPED_AS_DUPLICATE, and statistics will be copied from the original item and shown in the UI in the new one;

      • recordings/test runs which are not in the original polling request will be SKIPPED in the new one.

      Since Figaf Tool 2305 polling will not be started if there is already a started polling request with the same objects in progress, or if polling has already been completed for all selected objects.

      If you have polled almost all messages and only need to poll a small number of messages to finish testing, you can enable the Ignore messages cache during polling option on the Application configuration page. When this option is enabled, already polled messages won’t be skipped. Note that this affects performance when the number of messages is large and you only need to poll a few to finish testing.

    3. View last polling result clicking on Open Last Polling Result.

      Open Last Polling Result button is available after polling remote data and until page is refreshed.
    4. Do the following actions clicking on menu and choosing corresponding option:

      1. Rerun reruns required test case.

      2. View testcase opens 'Test Case Details' page.

      3. Download report.

        Report type can be one of 2 types:

        1. xlsx report:

          • Contains information about test run/test suite run results.

          • By default only failed results are added to xlsx report. If you want to change this behavior, go to Configuration → Application page and enable Show successful messages in the test suite’s report property.

        2. zip archive with 3 csv reports:

          • test-run-metadata.csv - contains common information about the test run/test suite run (e.g. test object, counters for success/error/unexpected/not compared/unfinished test run/test suite run results).

          • diff-report.csv - contains the diff report for the test run/test suite run.

          • processed-messages-report.csv - contains the processed messages report for the test run/test suite run.

        The report type is defined by the Use bundled CSV report generation strategy property configured on the Application configuration page. If it’s true, the report type is zip; otherwise, it’s xlsx.

      4. View results opens Test Run Details page.

  3. Menu options:

    1. Compare results for selected test runs.

    2. Open Used Mapping Configuration opens used mapping configuration.

    3. Test suite run report downloads test suite run report.

      Report type can be one of 2 types:

      1. xlsx report:

        • Contains information about test run/test suite run results.

        • By default only failed results are added to xlsx report. If you want to change this behavior, go to Configuration → Application page and enable Show successful messages in the test suite’s report property.

      2. zip archive with 3 csv reports:

        • test-run-metadata.csv - contains common information about the test run/test suite run (e.g. test object, counters for success/error/unexpected/not compared/unfinished test run/test suite run results).

        • diff-report.csv - contains the diff report for the test run/test suite run.

        • processed-messages-report.csv - contains the processed messages report for the test run/test suite run.

      The report type is defined by the Use bundled CSV report generation strategy property configured on the Application configuration page. If it’s true, the report type is zip; otherwise, it’s xlsx.

    4. Export to CSV downloads test runs CSV report.

    5. Extract related source data as a test case copies message groups related to filtered (not successful or not compared) test run results from all test runs to a new test case.

    6. Rerun reruns selected test runs.

7. Test Cases page

Test Cases page shows all existing test cases and looks:

test cases

On this page you can:

  1. View Agent Object clicking on Agent object. It opens Integration Object Details page.

  2. View Test Objects clicking on corresponding menu full button and selecting the object. It opens Integration Object Details page.

  3. View Test Suites clicking on corresponding menu full button and selecting the test suite. It opens Test Suite page.

  4. Run selected test case(s) clicking on run or run on buttons:

    • Run uses static links to test objects defined in the test case.

    • Run on resolves these links dynamically at the moment of test run. It is possible to see how these links are resolved through Current Mapping Configuration button - it opens 'Mapping Configuration' dialog box:

      mapping configuration

      There are Agent Objects (parent nodes) and Test Objects (child nodes) in the opened dialog box.

  5. Click on table’s menu full button and choose:

    1. Create Test Case opens Test case creation page.

    2. Merge Test Cases merges the messages of two selected test cases. It removes the previous test case run results.

      You should select two test cases with the same Agent Object, Message and Partner.
    3. Add to test suite adds selected test case(s) to new or existing test suite. It opens 'Create Test Suite' dialog box:

      create test suite

      You can choose to create a new test suite or to use an existing one. If you enable the Open Test Suite’s page after creation option, the Test Suite page will be opened after saving.

    4. Trim tests removes messages from selected test case(s). It keeps the number of inbound messages equal to the value of the How many messages to keep in the test case? property of the 'Trim Test Cases' dialog box:

      trim test case
    5. Delete removes selected test case(s).

    6. Clean removes all test run results and reports of selected test case(s).

  6. View Details of the test case clicking on its details button. It opens Test Case Details page.

8. Test Case page

Test Case Details page includes several tabs with various data related to the test case:

  1. Test Case Information tab shows general information about the test case, linked test objects and test suites. This tab looks:

    tc info

    On this tab you can:

    1. Edit the test case title, message or partner (to start editing click on edit button). To save changes click on save button, to reset - reset.

    2. View Agent Object clicking on Agent Object Title. It opens Integration Object Details page.

    3. View Test Objects clicking on required Test Object Title. It opens Integration Object Details page.

    4. View Test Suites that the test case is linked to clicking on required Test Suite Title. It opens Test Suite page.

    5. Add or remove Test Object(s) clicking on plus or decline buttons respectively.

      Only Test Objects from the same platform as the Agent Object can be added.
    6. Add the test case to new or existing Test Suite or remove the test case from Test Suite(s) clicking on plus or decline buttons respectively.

  2. Messages tab shows inbound/outbound messages in the test case and looks differently for PRO and CPI Agent Objects:

    1. For PRO Agent Objects:

      tc messages pro

      The following actions can be executed for both PRO and CPI Agent Objects:

      1. Calculate ordering numbers calculates ordering numbers based on configuration on Recording Configuration tab of related integration object or suitable Shared Configuration.

      2. Calculate file type and encoding calculates the messages' file types and encodings based on the Encoding Determination configuration if encoding determination rules exist, or a standard algorithm if they don’t.

      3. Options available in the message’s menu button:

        1. View/Edit message opens 'Message Details' page.

        2. Delete message group removes all messages with the same 'Inb. Group' as the message the option was selected for. It’s possible to delete only the messages, or to delete the messages and related test run results (reports will be rebuilt in this case).

        3. Copy message group copies all messages (with the same 'Inb. Group' as the message the option was selected for) to the selected test case(s).

        4. Change interface metadata opens 'Change message interface metadata' dialog box:

          change message metadata

          You can configure the required changes here. Change messages globally applies the changes to all messages with the same 'Source Message Interface Metadata'.

        5. Download downloads the message as an xml file for PRO and as a file without extension for CPI.

    2. For CPI Agent Objects:

      tc messages cpi

      The following actions can be executed for CPI Agent Objects:

      1. Generate groovy scripts test data generates groovy scripts (see Groovy scripts unit testing).

      2. Options available in the message’s menu button:

        1. Build groovy script generates groovy script (see Groovy scripts unit testing).

        2. Generate test data for current message generates json.

  3. Messages Anonymization tab helps to configure an anonymized test case based on the current test case and looks:

    tc message anonym
    It works only for XML messages.

    On this tab you can:

    1. Add message mappings by selecting a Message group id and double-clicking on the element you want to anonymize:

      message anonymization create
      You can also add message mappings by clicking on the plus button in the Applied mappings table. In this case you have to define the Element path yourself.

      It opens 'Create mapping' dialog box:

      create mapping

      Select desired function:

      • Multiply by random value from range multiplies source values by a random number from the range defined by the values of the From and to properties. If you want to get integers, set the Count of number after a decimal point property value to 0; otherwise, define the desired value. Example with filled properties:

        create mapping random ex

        To test the function, enter a Test value and click on Test function.

      • Generate value from template generates values from the Template property value. You can define the template using special characters ('#' for a number, '?' for an uppercased character, '*' for a lowercased character, e.g. ### - a template for 3 random numbers) or anonymization variables defined in Anonymization variables, e.g. ${firstname} ${lastname}. Example with filled properties:

        create mapping template ex

        To test the function, click on Test function.

    2. Edit message mapping clicking on edit button in Applied mappings table.

    3. Delete message mapping clicking on decline button in Applied mappings table.

    4. Create new test case with anonymized data clicking on Clone anonymized test case button. It opens 'Clone anonymized test case' dialog box:

      clone anonymized tc

      After submitting the dialog, a new test case will be created.

  4. Testing Results tab shows the test case run results and looks:

    tc results

    On this tab you can:

    1. View Test Object clicking on 'Test Object'. It opens Integration Object Details page.

    2. Run the test case clicking on run or run on buttons:

      • Run uses static links to test objects defined in the test case.

      • Run on resolves these links dynamically at the moment of test run. It is possible to see how these links are resolved through Current Mapping Configuration button - it opens 'Mapping Configuration' dialog box:

        mapping configuration

        There are Agent Objects (parent nodes) and Test Objects (child nodes) in the opened dialog box.

    3. Select test case run result(s) and:

      1. Poll remote messages and Open Last Polling Result.

        Since Figaf Tool 2.15, automatic polling is available for SAP CPI Agents. Once the messages have run successfully, polling starts automatically.

        Since Figaf Tool 2.15.2, active polling statistics can be viewed. If polling is in progress for the integration objects and you open another poll remote data dialog, then

        • recordings/test runs which are in both polling requests will be marked as SKIPPED_AS_DUPLICATE, and their statistics will be copied from the original item and shown in the UI in the new one.

        • recordings/test runs which are not in the original polling request will be SKIPPED in the new one.

        Since Figaf Tool 2305, polling is not started if there is already a polling request with the same objects in progress, or if polling has already been completed for all selected objects.

        If you have polled almost all messages and only need to poll a small number of messages to finish testing, you can enable the Ignore messages cache during polling option on the Application configuration page. When this option is enabled, already polled messages won’t be skipped. Note that this affects performance when the number of messages is large and you only need to poll a few to finish testing.

        Open Last Polling Result button is available after polling remote data and until page is refreshed.
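The effect of the caching option can be sketched as follows (names are illustrative, not the tool's API):

```python
def select_messages_to_poll(message_ids, polled_cache, ignore_cache=False):
    """Return the messages that should actually be polled: already polled
    messages are skipped unless the 'Ignore messages cache during
    polling' option is enabled."""
    if ignore_cache:
        return list(message_ids)  # re-poll everything, even cached messages
    return [m for m in message_ids if m not in polled_cache]
```

With the option disabled, only the uncached remainder is polled; with it enabled, every selected message is polled again.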

      2. Compare results clicking on check.

        The option reprocesses comparison and report generation asynchronously. It makes sense to execute the option once you have configured a new "ignore rule".
      3. Delete the result(s) clicking on red trash.

    4. The options available in result’s menu button:

      1. Download report.

        Report type can be one of 2 types:

        1. xlsx report:

          • Contains information about test run/test suite run results.

          • By default only failed results are added to xlsx report. If you want to change this behavior, go to Configuration → Application page and enable Show successful messages in the test suite’s report property.

        2. zip archive with 3 csv reports:

          • test-run-metadata.csv - contains common information about test run/test suite run (e.g. test object, counters for success/error/unexpected/not compared/unfinished test run/test suite run results).

          • diff-report.csv - contains diff report for test run/test suite run.

          • processed-messages-report.csv - contains processed messages report for test run/test suite run.

        The report type is determined by the Use bundled CSV report generation strategy property configured on the Application configuration page. If it’s true, the report type is zip; otherwise, it’s xlsx.

      2. View results opens Test Run Details page.

      3. Mark as Test Case creates new test case from chosen groups of testing results. Testing results become expected messages in the new test case.

      4. Update Test Case using Test Run Results. It replaces corresponding messages in the current test case by messages of testing result. There are 2 strategies available:

        1. Drop all old messages - all old messages will be deleted from the test case and messages of testing result will be copied.

        2. Replace only messages from the test run - only corresponding messages in the current test case will be replaced by messages of testing result. So if the test case has some messages which aren’t used during comparison, these messages will not be deleted.

9. Test Runs page

Test Runs page shows the test cases that were run in the period defined in the Running Date filter and looks:

test case runs

On this page you can:

  1. Poll remote messages.

    Open Last Polling Result button is available after polling remote data and until page is refreshed.

  2. Compare results clicking on check.

    The option reprocesses comparison and report generation asynchronously. It makes sense to execute the option once you have configured a new "ignore rule".
  3. View Agent Object clicking on 'Agent Object'. It opens Integration Object Details page.

  4. View Test Object clicking on 'Test Object'. It opens Integration Object Details page.

  5. The options available in run’s menu button:

    1. Download report.

      Report type can be one of 2 types:

      1. xlsx report:

        • Contains information about test run/test suite run results.

        • By default only failed results are added to xlsx report. If you want to change this behavior, go to Configuration → Application page and enable Show successful messages in the test suite’s report property.

      2. zip archive with 3 csv reports:

        • test-run-metadata.csv - contains common information about test run/test suite run (e.g. test object, counters for success/error/unexpected/not compared/unfinished test run/test suite run results).

        • diff-report.csv - contains diff report for test run/test suite run.

        • processed-messages-report.csv - contains processed messages report for test run/test suite run.

      The report type is determined by the Use bundled CSV report generation strategy property configured on the Application configuration page. If it’s true, the report type is zip; otherwise, it’s xlsx.

    2. View results opens Test Run Details page.

10. Test Run Details page

Test Run Details page looks:

test run details page

On this page you can:

  1. Select data Perspective for the table:

    1. Tree Table is the default value for all regression test cases except aggregation ones (for baseline and aggregation test cases, Tree Table mode is disabled). It merges the functionality of the Message runs tab and the Results tab: Tree Table combines 2 tables, the old Test Run Results table (i.e. Flat Table in the new terms) and the Message Runs table from the other tab.

      test run details tree table

      It has full pagination, filtering and sorting. Sorting is a bit tricky because of pagination: if you sort by a column belonging to a Message Run, it works naturally, just like in all other tables. But if you sort by Test Run Result columns, Test Run Results are sorted within the scope of each Message Run.
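The per-Message-Run sorting can be sketched as follows (illustrative only):

```python
def sort_results_within_runs(message_runs, results_by_run, key, reverse=False):
    """Sort Test Run Results in the scope of each Message Run, leaving
    the order of the Message Runs themselves untouched."""
    return [
        (run, sorted(results_by_run.get(run, []), key=key, reverse=reverse))
        for run in message_runs
    ]
```

So a sort by a result column reorders rows inside each Message Run group, never across groups.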

    2. Flat Table is the old Test Run Results table:

      test run details flat table
  2. Poll remote messages.

    Open Last Polling Result button is available after polling remote data and until page is refreshed.

  3. Rerun test case if it’s run in scope of test suite run.

  4. Compare results clicking on check.

    The option reprocesses comparison and report generation asynchronously. It makes sense to execute the option once you have configured a new "ignore rule".
    If comparison fails for a particular test run result, you can view error details clicking on warning.
  5. Open related test case. It opens Test Case Details page.

  6. Open related test suite run. It opens Test Suite Run page. The link is shown only if you run test suite linked with related test case.

  7. Open Agent and Test Objects.

  8. View ignore lists (Applied items ignore expressions and Applied Message properties ignore expressions). It shows the ignore lists configured on the Test Configuration tab of the Integration Object page.

  9. Open actual test configuration. It opens Test Configuration tab of Integration Object page.

  10. View used comparison configurations. It opens Comparison Configuration dialog.

  11. Execute actions available in run’s menu full button:

    1. Download report.

      Report type can be one of 2 types:

      1. xlsx report:

        • Contains information about test run/test suite run results.

        • By default only failed results are added to xlsx report. If you want to change this behavior, go to Configuration → Application page and enable Show successful messages in the test suite’s report property.

      2. zip archive with 3 csv reports:

        • test-run-metadata.csv - contains common information about test run/test suite run (e.g. test object, counters for success/error/unexpected/not compared/unfinished test run/test suite run results).

        • diff-report.csv - contains diff report for test run/test suite run.

        • processed-messages-report.csv - contains processed messages report for test run/test suite run.

      The report type is determined by the Use bundled CSV report generation strategy property configured on the Application configuration page. If it’s true, the report type is zip; otherwise, it’s xlsx.

    2. Mark selected message group as Test Case. It creates new test case from chosen groups of testing results. Testing results become expected messages in the new test case.

    3. Update Test Case using Test Run Results. It replaces corresponding messages in the current test case by chosen messages of testing result. There are 2 strategies available:

      1. Drop all old messages - all old messages will be deleted from the test case and chosen messages of testing result will be copied.

      2. Replace only selected messages - only corresponding messages in the current test case will be replaced by chosen messages of testing result.

  12. Execute actions available in message’s menu button:

    1. View result. It opens Test Run Result page.

    2. Download inbound message, expected result, actual result.

    3. Build groovy script (see Groovy scripts unit testing).

    4. Diffs opens Differences dialog box. Here you can add items (double-click on the required item) and message properties (click on the required message property) to the ignore list.

      Items and message properties can be added to the ignore list only once.

      differences add to ignore list
    5. Binary diffs opens Differences dialog box (see above) with binary comparison. Current configurations are used for this comparison type.

    6. Diff2Html diffs opens Diff2Html comparison dialog. Current configurations are used for this comparison type.

    7. Ignore step (CPI) adds current step to ignore list.

    8. Browse related messages (PRO) is a runtime viewer for related message monitoring entries. It looks for message monitoring entries on the PI system by the currently persisted PI message id (used in the id and refId filters). For entries that have not been received yet, only the id of the sent message is known; once the message is received, it is overwritten by the related AFD message id.

      test run related messages

      From the Related messages dialog it’s possible to open PI message page on SAP system and browse message log in the separate dialog:

      test run message log

      Use these options to analyze the root cause of the UNFINISHED state of the result and to check what’s going on on the PI system.

    9. Open message monitor (PRO).

    10. Open message monitor via dispatcher (PRO).

  13. Check messages statuses (whether messages have been sent successfully or some error occurred):

    1. Go to Message runs tab

    2. View data in the table.

      test run details page message runs tab

11. Shared Configuration page

Shared Configuration page contains shared configurations and looks:

comparison configuration

On this page you can:

  1. Create shared configuration clicking on plus button. It opens the 'Create shared configuration' dialog box:

    create comparison configuration dialog
    1. Title of the shared configuration. It should be unique.

    2. Platform for which the shared configuration is configured.

    3. Use defines how the applicable shared configuration is determined: by Messages/Partners or by Test Cases.

      1. Messages is the list of messages for which the shared configuration is applied.

      2. Partners is the list of partners for which the shared configuration is applied.

      3. Test Cases is the list of test cases. This configuration can be used if you want to run different test cases with different configurations.

    4. Integration Objects is the list of integration objects. All integration objects are from the same agent.

      Shared configuration is applied to any combinations of Messages, Partners, Test Cases, and Integration Objects values during comparison.
    5. Comparison Type defines used comparison strategy. Most of the time AUTO is fine.

    6. EDI functional group correlation key path defines the path to a data element or component element which can be used as a group identifier during comparison. Define a value if there are several functional groups in EDI outbound messages. The path format is described below.

    7. EDI message correlation key path defines the path to a data element or component element which can be used as a message identifier during comparison. Define a value if there are several messages in EDI outbound messages. Don’t define a value if there are several groups with 1 message in each group.

      EDI functional group correlation key path and EDI message correlation key path must be configured in the special format:

      <segment prefix><data element index and options><component element index (optional)>

      where:

      • <segment prefix> - shouldn’t contain [ or ], for example, UNE, DTM+137.

      • <data element index and options>: [<standalone index>] or [<standalone index>|<option>].

      • <component element index>: for example, 1.

        • <standalone index>: for example, 1

        • Available option:

          • substring - the option defines a substring rule for the group/message correlation id path in EDI comparison. It has the following syntax: substring(<characterIndex>|<characterIndex>), where characterIndex is the index of the first or last character of the substring that should be used as an identifier; it can be positive or negative:

            • substring(-2|), substring(-2), and substring(-2|0) use the last 2 characters, e.g. somestriNG (the uppercase characters mark the selected substring).

            • substring(-5|-1) uses 4 characters starting from the fifth character from the end, e.g. somesTRINg.

            • substring(1|-1) uses all characters starting from the second and ending with penultimate, e.g. sOMESTRINg.

            • substring(-1|1) uses the first and the last characters, e.g. SomestrinG.

      Examples:

      • Only segment prefix: UNE, DTM+137.

      • Segment prefix with data element index: UNH[0], IMD[1].

      • Segment prefix with data element index and option: UNE[1|substring(-2|)].

      • Segment prefix with data element index and component element index: UNB[2][4].
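One plausible implementation of the substring rules, reproducing the documented examples (the wrap-around behaviour for substring(-1|1) and the exact parsing are assumptions, not the tool's actual code):

```python
import re

def parse_substring_option(option):
    """Parse 'substring(<a>|<b>)' (the second index is optional) into a
    pair of integer bounds; a missing second index is treated as 0."""
    m = re.fullmatch(r'substring\((-?\d+)\|?(-?\d+)?\)', option)
    if not m:
        raise ValueError('not a substring option: ' + option)
    return int(m.group(1)), int(m.group(2) or 0)

def edi_substring(value, start, end=0):
    """Apply a substring rule to a correlation value. Indices behave like
    slice bounds (negative counts from the end, 0 means end-of-string);
    when the normalized start lies after the end, the selection wraps
    around to cover the last and first characters (assumed semantics)."""
    n = len(value)
    lo = start if start >= 0 else n + start
    hi = n if end == 0 else (end if end >= 0 else n + end)
    if lo > hi:
        return value[lo:] + value[:hi]  # e.g. substring(-1|1): last + first
    return value[lo:hi]
```

For instance, `edi_substring('somestring', -5, -1)` returns `trin`, matching the somesTRINg example above.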

    8. Item Ignore defines XPath expressions (divided by semicolon) to ignore, e.g. '/Invoice/Date'.

      For EDI type you can configure items in the special format

      <segment prefix><data element indexes and options><component element indexes (optional)>+<data element indexes and options><component element indexes (optional)>+<data element indexes and options><component element indexes (optional)>+...

      where:

      • <segment prefix> - shouldn’t contain [ or ], for example, UNE, DTM+137.

      • <data element indexes and options>: [<standalone/ranged index>, …​, <standalone/ranged index>|<option1>, <option2>]

      • <component element indexes>: [<standalone/ranged index>, …​, <standalone/ranged index>]

        • <standalone index>: for example, 1

        • <ranged index>: for example, 0-4

        • Available options:

          • empty-missing

          • java formatting which matches regex %[\\d.]*([dfeg]{1})$, for example: %.3f, %.6e.

      Examples:

      • Only segment prefix: UNE, DTM+137.

      • Segment prefix with data element indexes: UNH[0], IMD[0-2], IMD[0,2].

      • Segment prefix with data element indexes and options and component element indexes: UNB[0][1|empty-missing][0][2-5,7-8,9-10][4,5].
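A sketch of how the index part of this format could be expanded (illustrative, not the tool's actual parser):

```python
def parse_element_indexes(spec):
    """Expand an index specification such as '2-5,7-8,9-10' into a flat
    list of integer indexes; anything after '|' is treated as options
    (e.g. 'empty-missing' or a java-formatting pattern)."""
    index_part, _, option_part = spec.partition('|')
    indexes = []
    for token in index_part.split(','):
        if '-' in token:                      # ranged index, e.g. '0-4'
            lo, hi = token.split('-')
            indexes.extend(range(int(lo), int(hi) + 1))
        else:                                 # standalone index
            indexes.append(int(token))
    options = option_part.split(',') if option_part else []
    return indexes, options
```

So `IMD[0-2]` covers data elements 0, 1, and 2, while `1|empty-missing` covers element 1 with the empty-missing option.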

      For XML type you can configure items in the special format

      <XPath><options>

      where:

      • <XPath> - standard XPath expression, for example, //targetElem.

      • <options>: {<option1>,<option2>}

        • Available options:

          • empty-missing

          • java formatting which matches regex %[\\d.]*([dfeg]{1})$, for example: %.3f, %.6e.

          • @<xml_attribute> which configures ignoring of some particular xml attribute differences.

      Examples:

      • Only XPath expression: //targetElem.

      • XPath expression with additional ignoring options: /targetElem{empty-missing,%.3f}.

      • XML attribute ignoring: /targetElem/@id.
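A rough sketch of splitting such items and recognizing the java-formatting option (the brace handling follows the examples above; this is not the tool's actual parser):

```python
import re

# Matches java formatting options such as %.3f or %.6e, per the regex above.
JAVA_FORMAT_RE = re.compile(r'%[\d.]*([dfeg]{1})$')

def parse_xml_ignore_item(item):
    """Split an ignore item like '/targetElem{empty-missing,%.3f}' into
    its XPath part and a list of options (empty if there are no braces)."""
    m = re.fullmatch(r'(.*?)\{(.*)\}', item)
    if not m:
        return item, []
    return m.group(1), m.group(2).split(',')
```

Plain items such as `/targetElem/@id` pass through unchanged with an empty option list.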

    9. Message Properties Ignore defines dynamic properties (divided by semicolon) to ignore, e.g. 'X-Vcap-Request-Id'. If you have a lot of dynamic properties with the same prefix (e.g. DynamicProperty1 and DynamicProperty2), you can define just the common prefix (DynamicProperty) to ignore all dynamic properties with this prefix during comparison.

      Item Ignore and Message Properties Ignore properties can be configured during result comparison in Differences dialog that can be opened from Test Run Details page.
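The prefix matching described above amounts to the following (illustrative sketch):

```python
def filter_ignored_properties(properties, ignore_prefixes):
    """Drop dynamic properties whose name starts with any configured
    ignore entry, so a single prefix covers a whole family of properties."""
    return {
        name: value
        for name, value in properties.items()
        if not any(name.startswith(p) for p in ignore_prefixes)
    }
```

Configuring `DynamicProperty` thus ignores `DynamicProperty1`, `DynamicProperty2`, and any other property sharing that prefix.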

    10. Order Expressions defines how to order messages.

    11. Collection Item Identification Expressions (only for XML type) defines rules to compare a collection with the different order of elements in the expected and actual messages. This setting should be configured in the following format:

      <path to the collection>-><relative path to the identification element>

      If the path to your collection is /example/collection and the path to the element identifying the collection item is /example/collection/object/id, then Collection Item Identification Expressions is /example/collection->object/id.
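A sketch of how such an expression can be used to index collection items by their identification element, so expected and actual collections can be matched regardless of order (the ElementTree path handling and the `<item>/<id>` shape of the relative path are assumptions):

```python
import xml.etree.ElementTree as ET

def index_collection(xml_text, expression):
    """Index the items of a collection by the value of their
    identification element, given an expression of the form
    '<path to collection>-><relative path to id>'."""
    coll_path, id_path = expression.split('->')
    root = ET.fromstring(xml_text)
    # ElementTree paths are relative to the root, so drop '/<root>'.
    rel_coll = coll_path.lstrip('/').split('/', 1)[1]
    collection = root.find(rel_coll)
    item_tag, _, id_rel = id_path.partition('/')
    return {item.findtext(id_rel): item for item in collection.findall(item_tag)}
```

Two messages whose collections contain the same items in a different order then produce identical indexes and compare equal item by item.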

    12. Use alphanumeric order number enables calculation of order number as a string instead of a number.

    13. Don’t calculate order number for a single entry - if true, order number is set to 1 (if there are 1 inbound and 1 outbound messages).

    14. Use expected message encoding during comparison. Enable it if you want to forcibly use the encoding from the expected message instead of the dynamically determined encoding from the actual result. It has effect only for the following message types: EDIFACT, X12, TEXT.

    15. Ignore XML attributes configures XML attributes ignoring during comparison.

    16. XML attributes comparison configures separate XML attributes comparison.

    17. Trim whitespaces before XML comparison enables ignoring whitespaces during comparison of XML payloads. By default, the option is enabled.

    18. Validate receiver integration (PRO only) - if true, the AFD status will be compared with the expected value success. If they are not equal, it is shown in the Differences dialog as the dynamic configuration property http://figaf.com/testing/virtual-property|MessageStatus. The feature works only for the SAP Log module and ICO Logging.

    19. Define additional dynamic properties to send with the test message (PRO only) enables configuration of additional dynamic properties.

    20. Test with mock data (CPI only) enables testing with mock data. Linked test cases are run with real receivers’ responses replaced by previously recorded responses (from the corresponding receivers) on the Agent Object. Figaf Tool clones the original test object (IFlow), replacing the sender and receiver channels with HTTPS channels whose receiver endpoints point to a special Figaf Tool API. To use this mocking service, Figaf Tool must be configured to use the cloud connector (see the cloud connector properties configured in the run command).

    21. Use only finishing run steps (CPI only) enables functionality when only the last messages in a test case will be compared. If true, Run steps selection strategy and Run steps filter can’t be set.

    22. Run steps selection strategy (CPI only) and Run steps filter (CPI only) configure the steps which should be tested. Since 2208.1 it’s possible to configure the steps in the BPMN Model viewer dialog by clicking on the configuration button.

      If it’s IFlow Chain object, it’s possible to configure Run steps filter for each IFlow used in IFlow Chain separately.
  2. Edit/View shared configuration clicking on its edit or view button.

  3. Delete selected shared configurations.

  4. View messages, partners, test cases, and integration objects lists clicking on corresponding menu full button.