Comparing Figaf IRT with SAP PIT

SAP has recently published SAP PIT, a testing tool for SAP PI/PO. Customers have asked me about the difference between SAP PIT and Figaf IRT. Does it actually matter which tool you use? The answer is: yes, it does. In this blog post, I will give you an overview to help you understand the differences.

The Figaf tool is a lot faster for creating and running test cases. It also has much better integration with a DevOps/Continuous Delivery process, so you can deliver your integrations a lot faster.

| Feature | Figaf IRT | SAP PIT |
| --- | --- | --- |
| Test the following | Sender modules, routing, mappings, receiver modules | Routing, mappings |
| Test case creation | Automatic: process multiple recordings at one time | Set up one recording at a time |
| System requirement for recording | All 7.31, 7.4 and 7.5: add a Figaf module or use SAP logging, no patches required | 7.31, 7.4, 7.5 patched after February 2019; fetch data from dual-stack systems |
| PI version to run tests on | All | 7.5 SP14 |
| Location of testing application and data | Separate Java server and database | The SAP PI system |
| Can test your upgrade to SP14 | Yes | No |
| Running test cases | Run full test suites automated | Run one interface at a time |
| Comparison formats | XML, JSON, text, EDIFACT, X12, binary | XML, text |
| Patterns supported | Async, sync, EDISeparator, bridges (sync-async, async-sync) | Async, sync |
| Visual display of differences | Yes | Yes (with SP15) |
| Data anonymization | Yes | No |
| Export of test data to PIT | Yes | No |
| Integrated with release and change management tooling | Yes | No |
| Mapping of business systems | Automatic | Manual |
| Add test cases from failed messages | Yes | No |
| Manual test case creation | Yes | No |
| Releases per year | 12 | 3-4 |
| Price | 10,000+ EUR/year | Included in your license |

This comparison is based on 7.5 SP15. SAP has a roadmap to improve the tool, but I do not know its details.

In the Figaf tool we are adding options to also create documentation of your scenarios, your changes and the tests you are running. It also has a component that monitors the system for anomalies and reports custom errors to users. The next release will also feature integration with CTS+, enabling you to run tests and configuration on the objects that are changed. So it is much more than just a test suite.

Do you want to know more about Figaf IRT? Then visit

Lessons learned while implementing SAP CPI and API management: Webinar

I held a webinar where I talked about my experiences with SAP CPI and API Management projects, and what you should remember when implementing them.

I cover some of the lessons I learned on the project, like how the flexibility of the platform allows you to be more agile in your development: you can always add new paths to the processing to send results to other areas. We also touch on the use of the ProcessDirect adapter, which allows you to refactor your integrations for reuse. I cover my latest blog post about calculating Fibonacci numbers in CPI; not recommended, by the way, but it gives some lessons.

I have an SAP CPI course that you can take to get started with CPI. Learn more

There are also some lessons for API Management. It is really great for exposing OData from your SAP Gateway and securing it pretty easily. There are some challenges with calling other scripts from JavaScript, so what you can do in it is somewhat limited. API Management also has some problems with regard to transport and documentation of changes.

With the Figaf tool you can:

  • Get a better understanding of what is developed and changed
  • Transport and document individual iFlows
  • Monitor your CPI and set up integration with Slack, Jira
  • Speed up your development, because you have the option to run tests automatically
  • Manage your SAP PI/PO, CPI and API Management

Try the Figaf IRT tool here.

Here you can watch the replay:

As you can see, Figaf IRT can help you to optimize your workflow. If you have any questions, please contact me.

You can view the presentation here.

Documenting your SAP PI/PO scenarios

Documentation is part of all IT practices. It is there to ensure we record what has been created, so that people will later be able to support the process. This is also true for SAP PI/PO projects. Documentation costs a lot to produce, so you need to get value from it and automate it.

I have a problem with the way we normally document scenarios. In many cases, the requirements for documentation have been small adjustments based on what people from other departments wished for and the types of objects used, and it has been a manual process to update all of this documentation. Since I started with XI 3.0 in 2004, one of the things I have always considered and worked on was how to make the documentation process much better. It has resulted in a host of tools that can create the documentation.

One of the big problems has always been that it was almost impossible to create and maintain documentation in a way that keeps the previous history. This is the same whether you auto-create the documentation or make it manually. The Word document will be placed in some repository and then never be touched. I reckon you have seen something like the following change history, which will never be changed.

A change history with only an initial entry

I wanted to keep the history and link it with the business requirement for the change. We have added the Ticket concept in Figaf IRT, where you can create an object that looks like a service request, a request for change or a Jira ticket. The function allows you to handle all your processing in just one application and assign changes to the objects affected by a change. We are working to make this even more connected with your CTS+ transport system, so you can register all objects in the ticket more easily.

When you then generate the documentation after some time, you will see what was changed on the ICO or any linked objects. So if somebody has changed a message mapping used by the ICO, you will see it in the list. That way you know when it was changed. If you later want to drill into an object, you can open the link and see the full ticket information. All information in the document will only reflect its current values.

What has changed for an integration scenario

An example of what the full file will look like:

We still have some way to go with it. We can go into all repository objects, fetch and show the most interesting values from the channels, and get documentation from the different objects. We are looking for customer requirements and feedback to see what would make sense in the process.

You can try this out on your own system with the free part of the Figaf IRT tool, though it will not create the full documentation for you; for that you will need a licensed version.

Monitor your SAP API Management easily

For a customer project, we have been using SAP API Management to secure our APIs. It is a good way to expose OData from an SAP Gateway in a way that cloud applications can communicate with. It is fairly simple to set up OAuth to give users valid authentication.

We did run into a problem with how to monitor the application. What happens if an unauthorized request arrives, a spike arrest triggers, or other requests fail? In the logging, you will be able to see that there is an error, but you will not be able to pinpoint the data beyond a few error codes. We have added an option to log some users on the backend, but we also want to be notified if something unexpected happens and be able to drill into the data.

Check how easy it is to find errors with the solution

How to log errors in API Management

The standard approach is to add logging to Loggly or another service where you have a syslog listener. You will then put the logging in a place where it makes sense. If, for instance, you have a spike arrest problem because somebody is trying to take down your service, then you don't want to log all the events to your logging service.

The ideal place to set the logging could be in the post flow, after the request has been delivered to the client. This is okay as long as no errors occur. If there are errors and you want some special logging for them, there are the default error flow or standard error flows. This setup is described in the blog, where it states that you will need to edit the policy files manually, because the UI is not up to date yet.

I have opted to use the KeyValueMap (KVM) policy to store alerts in. It is the only local storage on the API server that you can access; it is clustered and high performance. You also have the option to save each error only once: with an error like spike arrest, you only really need to know that it has occurred at some point in time. So we can de-duplicate entries and then read them later. It is not the best solution, but it is pretty simple to implement and you can use it to get started fast. You may consider other solutions later for reporting, if you want to be able to drill deeper into the problem.
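The de-duplication idea can be sketched in isolation. This is a plain JavaScript illustration, where a Map stands in for the KVM; the names and values are invented for the example:

```javascript
// Build the same style of error key the policy uses: fault name plus proxy name.
function buildErrorKey(faultName, apiProxyName) {
    return faultName + "-" + apiProxyName;
}

// A Map standing in for the KVM. Putting with override means last write wins,
// so a flood of identical SpikeArrest faults stays a single record.
var kvm = new Map();

function putWithOverride(key, value) {
    kvm.set(key, value); // override=true semantics
}

putWithOverride(buildErrorKey("SpikeArrestViolation", "demo-proxy"), "first occurrence");
putWithOverride(buildErrorKey("SpikeArrestViolation", "demo-proxy"), "later occurrence");
putWithOverride(buildErrorKey("RaiseFault", "demo-proxy"), "different fault");
```

However many times the same fault fires, the map keeps one entry per fault/proxy pair, which is exactly the behaviour we want for alerting.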

We have added the following script to create a JSON payload message containing the information:

var apiProxyFigafPoliciesVersion = '1';

var logdata = {
    messageId: context.getVariable("messageid"),
    currentSystemTime: context.getVariable("system.time"),
    clientReceivedStartTime: context.getVariable("client.received.start.time"),
    timePassedAfterClientReceivedStartTime: context.getVariable("system.timestamp") - context.getVariable("client.received.start.timestamp"),
    messageQueryString: context.getVariable("message.querystring"),
    requestUri: context.getVariable("request.uri"),

    apiProxyName: context.getVariable("apiproxy.name"),
    apiProxyRevision: context.getVariable("apiproxy.revision"),

    faultName: context.getVariable("fault.name"),
    errorContent: context.getVariable("error.content"),
    errorMessage: context.getVariable("error.message"),
    errorStatusCode: context.getVariable("error.status.code")
};

// serialize the payload so the KVM policy can store it
context.setVariable("figaf.irt.apim.logmsg", JSON.stringify(logdata));
// key used to de-duplicate recurring errors per proxy
context.setVariable("figaf.irt.apim.proxyerrorkey", logdata.faultName + "-" + logdata.apiProxyName);

Then we can log the value to our KVM figafIrtErrorKVM with the following policy:

<!-- apiProxyFigafPoliciesVersion="1" -->
<KeyValueMapOperations mapIdentifier="figafIrtErrorKVM" async="true" continueOnError="true" enabled="true" xmlns="">
    <!-- PUT stores the key value pair mentioned inside the element -->
    <Put override="true">
        <Key><Parameter ref="messageid"/></Key>
        <Value ref="figaf.irt.apim.logmsg"/>
    </Put>
    <!-- the scope of the key value map. Valid values are environment, organization, apiproxy and policy -->
    <Scope>environment</Scope>
</KeyValueMapOperations>

We then add the two JavaScript policies and the KVM policy to the default error flow.

You will need to read the entries in the KVM to see the errors that have occurred. There is an API you can call that will give you that information. We did find that, since the KVM is clustered in some way, you sometimes need to force it to update.
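Reading the entries can be sketched with a small helper. The URL shape below follows the Apigee-style management API underneath SAP API Management, but treat the exact path, host and authentication as assumptions to verify against your own tenant; the sample response is invented to show the parsing:

```javascript
// Build a management-API style URL for reading a key value map.
// host, org and env are placeholders for your tenant's values.
function kvmUrl(host, org, env, mapName) {
    return "https://" + host + "/v1/organizations/" + encodeURIComponent(org) +
           "/environments/" + encodeURIComponent(env) +
           "/keyvaluemaps/" + encodeURIComponent(mapName);
}

// Parse a KVM JSON response into an object keyed by entry name.
// Each value was stored as a JSON string by the logging script, so parse it again.
function parseKvmEntries(payload) {
    const doc = JSON.parse(payload);
    const result = {};
    for (const e of doc.entry || []) {
        result[e.name] = JSON.parse(e.value);
    }
    return result;
}

// A sample response in the shape such an API typically returns for figafIrtErrorKVM.
const sample = JSON.stringify({
    name: "figafIrtErrorKVM",
    entry: [{
        name: "SpikeArrestViolation-demo-proxy",
        value: JSON.stringify({ faultName: "SpikeArrestViolation", errorStatusCode: "429" })
    }]
});

const errors = parseKvmEntries(sample);
```

With the entries parsed into plain objects, it is straightforward to feed them into whatever notification or reporting step comes next.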


Once we have read the entries, we can delete each of them.

After getting some errors, we get a KVM that looks like the following. There are individual messages with errors and also some global errors, like spike arrest for one API.

Rule processing

Once you are downloading the messages, you want an easy way to send a notification when errors like these occur. We have rewritten our rule processing to handle more complex rules and process them more efficiently. This means you will be able to send email notifications, or send webhooks that deliver the notification to Slack, Jira or wherever your support team is listening.
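A rule of this kind boils down to a predicate over the logged entry plus a notification payload. Here is a hypothetical sketch; the rule shape is my own illustration, not Figaf's actual rule engine, though the `{"text": ...}` payload matches what Slack incoming webhooks accept:

```javascript
// A rule matches logged errors by fault name and turns them into a webhook payload.
const rules = [
    {
        name: "spike-arrest-alert",
        matches: (entry) => entry.faultName === "SpikeArrestViolation",
        // Slack incoming-webhook payloads are simply {"text": "..."}
        toPayload: (entry) => ({
            text: `SpikeArrest on proxy ${entry.apiProxyName} (status ${entry.errorStatusCode})`
        })
    }
];

// Run every matching rule against one logged entry.
function processEntry(entry) {
    return rules.filter(r => r.matches(entry)).map(r => r.toPayload(entry));
}

const payloads = processEntry({
    faultName: "SpikeArrestViolation",
    apiProxyName: "demo-proxy",
    errorStatusCode: "429"
});
// In production each payload would be POSTed to the configured webhook URL.
```

Keeping the match and the payload separate makes it easy to add more channels (email, Jira) without touching the matching logic.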

I do have some ideas on how we can make regression testing possible. As more customers adopt the Figaf API Management process, it could be possible to add this to support them.

If you want to try it out on your own system, see below.

Figaf also supports change tracking and transport for SAP API Management.

What is the test coverage of your SAP PI message mappings?

In a customer demo I was asked what the test coverage of the tool is. We do show in the UI how many ICOs you have tests for, but what about message mappings, and the modules used in your landscape, and how many times they were run?

We had already built a report that showed how many times a message mapping was run in a given period, based on the data in the PI monitor. So it was just a matter of combining the two sources to give users a good view of what is going on.

For each integration flow we added the number of test cases created with the IRT tool. That number is then propagated down to each message mapping, so we can show how many message mappings are tested and, more importantly, which are not.
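The propagation step can be sketched as follows; this is a simplified model, and the ICO names, mappings and counts are invented for illustration:

```javascript
// Each ICO lists the message mappings it uses; test cases are counted per ICO.
const icoMappings = {
    "Order_ICO":   ["MM_Order", "MM_OrderResponse"],
    "Invoice_ICO": ["MM_Invoice"],
    "Status_ICO":  ["MM_Status"]
};
const testCasesPerIco = { "Order_ICO": 3, "Invoice_ICO": 1 }; // Status_ICO has no tests

// Propagate ICO-level test counts down to each message mapping.
function mappingCoverage(icoMappings, testCasesPerIco) {
    const counts = {};
    for (const [ico, mappings] of Object.entries(icoMappings)) {
        const tests = testCasesPerIco[ico] || 0;
        for (const mm of mappings) {
            counts[mm] = (counts[mm] || 0) + tests;
        }
    }
    return counts;
}

const coverage = mappingCoverage(icoMappings, testCasesPerIco);
// Mappings with a count of zero are the ones with no test coverage at all.
const untested = Object.keys(coverage).filter(mm => coverage[mm] === 0);
```

The interesting output is the `untested` list: those are the mappings you would prioritize when creating new test cases.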

We also added a tab with the modules used in your landscape and how they performed. We probably also need to add the IRT test cases to this, but that depends on how people want to work with modules.

Check out the demo and then try it for free on your own system.

You can try it out on your own system. Just download the Figaf IRT tool, run it on your laptop, and you will be able to see the data after the landscape has been downloaded.