Documenting your SAP PI/PO scenarios

Documentation is part of all IT practices. It exists to record what has been created, so that other people can later support the process. This is also true for SAP PI/PO projects. Documentation costs a lot to produce, so you need to get value from it, and automating it helps.

I have a problem with the way we normally document our scenarios. In many cases, the documentation requirements have been adjusted slightly based on what people from other departments wished for and on the types of objects used, and it has been a manual process to keep all of this documentation up to date. Since I started with XI 3.0 in 2004, one of the things I have always considered and worked on is how to make the documentation process much better. It has resulted in a host of tools that can create the documentation.

One of the big problems has always been that it was almost impossible to create and maintain documentation in a way that preserves its history. This is the same whether you auto-create the documentation or write it manually. The Word document gets placed in some repository and is then never touched again. I reckon you have seen something like the following change history, which never changes.

The change history with only an initial entry

I wanted to keep the history and link it with a business requirement for the change. We have added the Ticket concept in Figaf IRT: here you can create an object that looks like a Service Request, a Request for Change or a Jira ticket. The function allows you to handle all your processing in just one application and assign changes to the objects affected by a change. We are working to connect this even better with your CTS+ transport system, so you can register all objects in the ticket more easily.

When you then generate the documentation after some time, you will see what was changed on the ICO or any linked objects. So if somebody has changed a message mapping used by the ICO, you will see it in the list. That way you know when it was changed. If you later want to drill into an object, you can open the link and see the full ticket information. All information in the document will only reflect its current values.

What is changed for an Integration Scenario

An example of what the full file will look like:

We still have some way to go with it. We can go into all Repository Objects, fetch and show the most interesting values from the channels, and get documentation from the different objects. We are looking for customer requirements and feedback to see what makes sense in the process.

You can try this out on your own system with the free part of the Figaf IRT tool, though it will not create the full documentation for you. For that you will need a licensed version.

Monitor your SAP API Management easily

For a customer project, we have been using SAP API Management to secure our APIs. It is a good way to expose OData from an SAP Gateway so that cloud applications can communicate with it. It is fairly simple to set up OAuth to give users valid authentication.

We did run into a problem with how to monitor the application. What happens if an unauthorized request is found, a spike arrest triggers, or other requests fail? In the logging, you will be able to see that there is an error, but you will not be able to pinpoint the data beyond a few error codes. We have added an option to log some users on the backend, but we also want to be notified if something unexpected happens and be able to drill into the data.

Check how easy it is to find errors with the solution

How to log errors in API Management

The standard approach is to add logging to Loggly or another service where you have a syslog listener. You will then put the logging in a place where it makes sense. If, for instance, you have a spike arrest problem because somebody is trying to take down your service, then you don't want to log all the events to your logging service.

The ideal place to set the logging would be in the post flow, after the response has been delivered to the client. This is fine as long as no errors occur. If there are errors and you want some special logging for them, there are the default error flow or standard fault flows. This setting is described in the blog; it states that you will need to edit the policy files manually, because the UI is not up to date yet.

I have opted to use the KeyValueMap (KVM) policy to store alerts in. It is the only local storage on the API server that you can access, and it is clustered and high performance. You also have the option to save an error only once: if you have an error like spike arrest, you really only need to know that it has occurred at some point in time, not see every occurrence. So we can deduplicate entries and then read them later. It is not the best solution, but it is simple to implement and you can get started with it fast. You may consider other solutions later for reporting if you want to drill deeper into the problems.

We have added the following JavaScript policy to create a JSON payload with the relevant information.

var apiProxyFigafPoliciesVersion = '1';

var logdata = {
    messageId: context.getVariable("messageid"),
    currentSystemTime: context.getVariable("system.time"),
    clientReceivedStartTime: context.getVariable("client.received.start.time"),
    timePassedAfterClientReceivedStartTime: context.getVariable("system.timestamp") - context.getVariable("client.received.start.timestamp"),
    messageQueryString: context.getVariable("message.querystring"),
    requestUri: context.getVariable("request.uri"),

    apiProxyName: context.getVariable("apiproxy.name"),
    apiProxyRevision: context.getVariable("apiproxy.revision"),

    faultName: context.getVariable("fault.name"),
    errorContent: context.getVariable("error.content"),
    errorMessage: context.getVariable("error.message"),
    errorStatusCode: context.getVariable("error.status.code")
};

// Store the payload for the KVM policy, and build a key that is unique per
// fault type and proxy so repeated errors overwrite the same entry.
context.setVariable("figaf.irt.apim.logmsg", JSON.stringify(logdata));
context.setVariable("figaf.irt.apim.proxyerrorkey", logdata.faultName + "-" + logdata.apiProxyName);

Then we can log the value with the following policy to our KVM figafIrtErrorKVM.

<!-- apiProxyFigafPoliciesVersion="1" -->
<KeyValueMapOperations mapIdentifier="figafIrtErrorKVM" async="true" continueOnError="true" enabled="true" xmlns="">
    <!-- PUT stores the key value pair mentioned inside the element -->
    <Put override="true">
        <Key><Parameter ref="messageid"/></Key>
        <Value ref="figaf.irt.apim.logmsg"></Value>
    </Put>
    <!-- the scope of the key value map. Valid values are environment, organization, apiproxy and policy -->
    <Scope>environment</Scope>
</KeyValueMapOperations>

We then add the two policies, the JavaScript policy and the KVM policy, to the default fault flow for the proxy.

You will need to read the entries in the KVM to see the errors that have occurred. There is an API you can call that will give you that information. We did find that, since the KVM is clustered in some way, you needed to force it to refresh.
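To illustrate what reading the map back might look like, here is a minimal sketch that parses a KVM response into usable error objects. The response shape (`{ name, entry: [{ name, value }] }`) is an assumption based on the classic Apigee Edge KVM management API that SAP API Management builds on; verify it against your tenant.

```javascript
// Sketch: turn a key-value-map API response into a list of error objects.
// Assumes the entry values hold the JSON payloads our policy stored.
function parseKvmEntries(kvmResponse) {
  return (kvmResponse.entry || []).map(function (e) {
    var parsed;
    try {
      parsed = JSON.parse(e.value); // our policy stored a JSON string
    } catch (err) {
      parsed = { raw: e.value };    // keep unparsable values for inspection
    }
    return { key: e.name, data: parsed };
  });
}

// Example response, as it might come back from something like
// GET /v1/organizations/{org}/environments/{env}/keyvaluemaps/figafIrtErrorKVM
var sample = {
  name: "figafIrtErrorKVM",
  entry: [
    { name: "msg-1", value: '{"faultName":"SpikeArrestViolation","apiProxyName":"orders"}' }
  ]
};
var entries = parseKvmEntries(sample);
```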


Once we have read the entries we can delete each of them.

After getting some errors, we get a KVM that looks like the following. There are individual messages with errors and also some global errors, like spike arrest for one API.

Rule processing

Once you are downloading the messages, you want an easy way to send a notification when errors like these occur. We have rewritten our rule processing to let us handle more complex rules and process them more efficiently. This means you will be able to send email notifications, or send webhooks that deliver the notification to Slack, Jira or wherever your support team is listening.
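The idea behind such rule processing can be sketched as matching each error event against a list of rules and producing one notification payload per match. The rule and event shapes below are illustrative, not the Figaf API.

```javascript
// Sketch: evaluate simple rules against an error event and build
// notification payloads (e.g. the text for a Slack webhook).
function evaluateRules(event, rules) {
  return rules
    .filter(function (rule) { return rule.match(event); })
    .map(function (rule) {
      return {
        channel: rule.channel, // e.g. "slack", "email", "jira"
        text: rule.name + ": " + event.faultName + " on proxy " + event.apiProxyName
      };
    });
}

var rules = [
  { name: "Spike arrest alert", channel: "slack",
    match: function (e) { return e.faultName === "SpikeArrestViolation"; } }
];
var notifications = evaluateRules(
  { faultName: "SpikeArrestViolation", apiProxyName: "orders" }, rules);
```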

I do have some ideas on how we can make regression testing possible. As more customers adopt the API Management process from Figaf, it could be possible to add it to support customers.

If you want to try it out on your own system, see below.

Figaf also supports change tracking and transport of SAP API Management.

What is the test coverage of your SAP PI message mappings

On a customer demo I was asked what the test coverage of the tool is. We do show in the UI how many ICOs you have tests for, but how about message mappings, and the modules used in your landscape and how many times they were run?

We had already built a report that showed how many times a message mapping was run in a given period, based on the data in the PI monitor. So it was just a matter of combining the two sources to give users a good view of what is going on.

For each integration flow we added the number of test cases created with the IRT tool. That number is then propagated down to each message mapping, so we can show how many message mappings are tested and, more importantly, which are not.
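The propagation step can be sketched as a simple aggregation: each ICO contributes its test-case count to every mapping it uses, so mappings with a count of zero stand out as untested. The data shapes here are made up for the example; IRT's internal model differs.

```javascript
// Sketch: propagate test-case counts from integration flows (ICOs) down
// to the message mappings they use.
function mappingCoverage(icos) {
  var coverage = {}; // mapping name -> number of test cases covering it
  icos.forEach(function (ico) {
    ico.mappings.forEach(function (m) {
      coverage[m] = (coverage[m] || 0) + ico.testCases;
    });
  });
  return coverage;
}

var coverage = mappingCoverage([
  { name: "Order_ICO", testCases: 3, mappings: ["MM_Order", "MM_Ack"] },
  { name: "Invoice_ICO", testCases: 0, mappings: ["MM_Invoice"] }
]);
// A count of 0 (e.g. MM_Invoice) flags an untested mapping.
```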

We also added a tab with the modules used in your landscape and how they performed. We probably also need to add the IRT test cases to this, but it depends on how people want to work with our modules.

Check out the demo and then try it for free on your own system

You can try it out on your own system. Just download the Figaf IRT tool, run it on your laptop, and you will be able to see the data after the landscape has been downloaded.

Monitor SAP CPI system performance

Many of the customers I talk with are starting to see SAP Cloud Platform Integration (CPI) as a critical tool for their integration needs. From that perspective, it makes sense to be proactive and monitor how it performs.

There are two pages you can use to check the status reported by SAP, as I wrote about on my blog: the Cloud Status page and the Status page. It may be relevant to monitor both, to warn about what is happening in your landscape.

But what about monitoring your own tenant and how it is performing? That is the question most people want answered, because it is what matters to them, and the sooner they know about an issue, the sooner they can set up manual processing or figure out how to do something else. It also provides a way to measure whether SAP is living up to the SLA of CPI.

How you can measure your CPI tenant

You can set up logging in a number of different tools that will give you some indication of how the system is responding. You can also use the Figaf IRT tool, which now allows you to monitor your CPI system. It is pretty simple and does not require any coding from your side.

You will have a dashboard that looks like the following to see the performance of your system. You can see on the diagram that the system was down for maintenance for a period over the weekend; for that period we could not access the management and runtime engine. This is a partner tenant, so the performance and number of nodes may differ from a real system. The system also calls an integration flow every 5 minutes to measure how the runtime is performing. This returns a response, but from time to time it takes almost 2 seconds. Also notice that during the maintenance period we still get a response in the range of 400 ms. If it is possible to process messages even during a maintenance window, that is pretty cool.
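The measurement idea above can be sketched as follows: collect one latency sample per 5-minute poll of a monitoring iflow (with a missing sample when the tenant does not respond, e.g. during maintenance), then compute availability and average latency from them. The function and field names are illustrative, not the Figaf implementation.

```javascript
// Sketch: summarize latency samples from periodic iflow polls.
// null means the poll got no response (e.g. during a maintenance window).
function summarize(samplesMs) {
  var ok = samplesMs.filter(function (s) { return s !== null; });
  var avg = ok.reduce(function (a, b) { return a + b; }, 0) / (ok.length || 1);
  return {
    availabilityPct: Math.round((ok.length / samplesMs.length) * 100),
    avgLatencyMs: Math.round(avg)
  };
}

// Five polls: one missed, one slow outlier near 2 seconds.
var stats = summarize([400, 420, null, 1900, 380]);
```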

Current view of the SAP CPI monitoring

We are now collecting all the metrics for this. Next, we will set up alerts on this with our alert engine. Then you can get a notification if you are seeing latencies above, e.g., 2 seconds, or if the CPU load is above 80%. Or create a Jira ticket with a simple webhook. We will also be adding the status of your JMS queues and other relevant metrics asked for by users.
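The alert check itself is a simple threshold test over each metric sample. This is a minimal sketch; the thresholds mirror the examples in the text (2 seconds latency, 80% CPU), and the field names are assumptions for illustration.

```javascript
// Sketch: flag a metric sample that crosses the alert thresholds.
function checkThresholds(sample) {
  var alerts = [];
  if (sample.latencyMs > 2000) alerts.push("latency above 2s");
  if (sample.cpuPct > 80) alerts.push("CPU above 80%");
  return alerts;
}

// A slow response with normal CPU load triggers only the latency alert.
var alerts = checkThresholds({ latencyMs: 2300, cpuPct: 55 });
```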

We have monitoring packages starting at only 150 EUR/month, and then you can also monitor other iflows.

Try it out at IRTCloud, or try it on your own deployment of the IRT server, which you can run in your own cloud. It also has quite a lot of other options for monitoring SAP CPI: trigger alerts for failed messages, restart CPI messages, and give users access to monitoring data themselves.

Automated SAP PIT Test case creation

A month ago SAP released a testing tool for SAP PI/PO. One thing I have been contemplating a lot is how to test the first upgrade. One challenge with the SAP PIT testing tool is being able to run a test on your first upgrade with the tool. If you accept the premise that we need to test all upgrades of SAP PI/PO, and therefore need a tool for it, then the first upgrade or two present some challenges.

The first upgrade

The system needs you to implement OSS Note 2650265, which is an update to MESSAGING SYSTEM SERVICE, released February 2019 and applying to many support packs for 7.31 and above. If you want to implement this update, you will need to patch all components, because they are linked in SAP NetWeaver Java. This means you need to implement quite a number of changes for it to work. I see the process as the following:

  1. Implement the patch according to the note on your development system.
  2. Since you don’t know if anything is affected, you will need to run a full test of your SAP PI
  3. Implement the note in production
  4. Once the note is in production you can upgrade your development system to 7.5 SP 14.
  5. Then you can start to create test cases on 7.5 SP14 and validate that nothing has changed. You will then be comparing production data with data on SP14, so you cannot verify how the upgrade itself behaved.
  6. Once you are done with creating the test cases you can implement SP 14 in your landscape
  7. Next upgrade can be made a bit easier because you have the test data

To me it sounds like a lot of extra manual tests and changes to your landscape, limiting how much you can do in the process.

Figaf IRT

The Figaf tools allow you to test all SAP PI 7.31, 7.4 and 7.5 systems without installing new support packs. We have a number of options to record messages: either as SAP does, by looking at the logged messages, except we can use a patch that is 2 years old to enrich your monitoring web services; or we even have a web scraping option that is older still. But we do have a better solution, which is to add a module to your processing chain. That way you can test much faster on your systems, with a lower impact on them.

Screenshot of the new button to export data to PIT

SAP customers prefer to use standard tools in the places where it makes sense and they can find enough value. We are therefore working on a way to let you export your Figaf IRT test cases to SAP PIT, so you can stop using the Figaf tools for testing. This way you can use Figaf to help test the first upgrades and then use SAP PIT in the future.

We do hope that we are providing enough value, together with ways to make your SAP testing better and faster, and also giving you tight integration to test the interfaces that you are changing.

How does the migration work

It is a pretty simple process. Once you have a Testing Template in Figaf IRT and have run it, you will see the Export to PIT button. The messages then exist on your SAP PI system. Figaf IRT will create a Test Suite in PIT with the same messages. IRT knows which messages should be added to the test cases, so it will request them. Then it is just up to PIT to fetch the messages.

It is a licensed feature of Figaf IRT, so you will need to purchase a license, but it can save you some time on your testing.

You can see a demo of it here.

There will be a few more changes to make for the process to work, like:

  • Set a name for the template
  • Send the ignored elements over to the test case
  • Update the process if PIT’s API changes, since it is not published.

There may also be future developments in the PIT tool that we can use to run this process better. Let’s see in the next support packs. It is possible to add running of test cases and integrate it with our DevOps approach, so you can test the interfaces that are changed by a mapping.

If you want to see how fast you can create test cases in Figaf and run your tests in it, then download the tool for free and get started in an hour. We also have many extra features that you will not find in SAP PIT.