Posts

What is the test coverage of your SAP PI message mappings?

During a customer demo I was asked what test coverage the tool provides. We already show in the UI how many ICOs you have tests for, but what about the message mappings and the modules used in your landscape, and how many times they were run?

We had already built a report that showed how many times a message mapping was run in a given period, based on the data in the PI message monitor. So it was just a matter of combining the two sources to give users a good view of what is going on.

For each integration flow we added the number of test cases created with the IRT tool. That number is then propagated down to each message mapping, so we can show how many message mappings are tested and, more importantly, which are not.
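To illustrate the idea (this is a simplified sketch, not the actual IRT code), the coverage calculation boils down to joining two data sources: run counts per mapping from the PI message monitor, and test case counts per ICO propagated down to the mappings each ICO uses. All names and numbers below are made up.

```java
import java.util.*;

// Minimal sketch of the coverage idea; class, field and mapping names are illustrative.
public class MappingCoverageReport {

    // Runtime statistics from the PI message monitor: mapping name -> number of executions
    static Map<String, Integer> runsPerMapping = new HashMap<>();

    // Test cases created in the test tool, keyed by integration flow (ICO) name
    static Map<String, Integer> testCasesPerIco = new HashMap<>();

    // Design-time relation: which message mappings each ICO uses
    static Map<String, List<String>> mappingsPerIco = new HashMap<>();

    public static void main(String[] args) {
        // Sample data for illustration only
        runsPerMapping.put("MM_Order_to_IDoc", 1250);
        runsPerMapping.put("MM_Invoice_to_XML", 430);
        testCasesPerIco.put("ICO_Order", 12);
        mappingsPerIco.put("ICO_Order", List.of("MM_Order_to_IDoc"));
        mappingsPerIco.put("ICO_Invoice", List.of("MM_Invoice_to_XML"));

        // Propagate the ICO test-case count down to each mapping it uses
        Map<String, Integer> testsPerMapping = new HashMap<>();
        mappingsPerIco.forEach((ico, mappings) -> {
            int tests = testCasesPerIco.getOrDefault(ico, 0);
            for (String mapping : mappings) {
                testsPerMapping.merge(mapping, tests, Integer::sum);
            }
        });

        // Report: heavily used but untested mappings are the interesting ones
        runsPerMapping.forEach((mapping, runs) -> {
            int tests = testsPerMapping.getOrDefault(mapping, 0);
            System.out.printf("%-20s runs=%-6d testCases=%-3d %s%n",
                    mapping, runs, tests, tests == 0 ? "<-- NOT COVERED" : "");
        });
    }
}
```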

We also added a tab with the modules used in your landscape and how they performed. We probably also need to add the IRT test cases to this, but that depends on how people want to work with their modules.

Check out the demo and then try it for free on your own system.

You can try it out on your own system. Just download the Figaf IRT tool, run it on your laptop, and you will be able to see the data once the landscape has been downloaded.

Anonymize your SAP PI Test data

With GDPR it is increasingly important to be able to mask or anonymize your test data. If I'm a customer at your company, it is probably not a good idea that you are testing with my order. But if you can anonymize it so nobody will know that I ordered it, you can do all the testing that you want. Normally this is something that takes a lot of time, so it does not get done. We want to make testing easy so you will actually use it.

We can see at customers that this is a problem, and we have now created a feature that lets you anonymize your test case data in 10 minutes. Check the video below, where I create new test data from my SAP PI system, convert it, and then walk through the options.

After this conversion, you can run the data on all your systems when you have upgraded or when someone is updating the mappings used.

We will add some more functionality to it, like lists of first names, last names, and street names. These will be lists that you can edit, so you can add names that make sense in your organisation.
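To give an idea of what such list-based anonymization looks like, here is a minimal sketch (not the actual IRT feature): values of elements like FirstName, LastName and Street in a test payload are swapped with entries from editable lists, while business keys are left alone. The element names and replacement lists are just examples.

```java
import java.util.*;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.*;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.*;

// Minimal sketch of list-based anonymization of an XML test payload.
public class PayloadAnonymizer {

    // Editable replacement lists; these names are placeholders
    static final Map<String, List<String>> REPLACEMENTS = Map.of(
            "FirstName", List.of("Anna", "Peter", "Maria"),
            "LastName",  List.of("Hansen", "Jensen", "Nielsen"),
            "Street",    List.of("Main Street 1", "Park Road 7"));

    public static void main(String[] args) throws Exception {
        String payload = "<Order><Customer><FirstName>Daniel</FirstName>"
                + "<LastName>Graversen</LastName><Street>Real Street 42</Street>"
                + "</Customer><Material>MAT-100</Material></Order>";

        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new org.xml.sax.InputSource(new java.io.StringReader(payload)));

        // Replace the text of every sensitive element with a random entry from its list
        Random random = new Random();
        for (Map.Entry<String, List<String>> entry : REPLACEMENTS.entrySet()) {
            NodeList nodes = doc.getElementsByTagName(entry.getKey());
            for (int i = 0; i < nodes.getLength(); i++) {
                List<String> values = entry.getValue();
                nodes.item(i).setTextContent(values.get(random.nextInt(values.size())));
            }
        }

        // Write the anonymized payload back out; business keys like Material are kept as-is
        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        transformer.transform(new DOMSource(doc), new StreamResult(System.out));
    }
}
```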

I'm also considering how to handle Service Interfaces marked as containing sensitive data. One option is to set a flag so that you cannot test with the original data and need to create an anonymized test case for it. There could probably also be a user role that is allowed to view the un-anonymized data and use the studio.

The flow of developing the test case could also be improved, but I guess we will tackle that once we have some real scenarios.

You can start using Figaf IRT for free. There is a free package that gives you access to create and run SAP PI/CPI test cases.

 

Why our testing is different from most test systems for SAP PI

In a recent podcast about testing SAP PI/PO with one of Figaf IRT's customers, Mark Oshifeso from Anadarko Petroleum, we covered how they use the tool. We got a lot of praise from Mark for what we are able to do with the testing.

One of the things we cover is Workflow vs. Record/Replay testing.

Mark created the following slide to explain the difference. You can view the full slide deck at SlideShare.

The Workflow approach to testing is to create programming that allows users to set up all their test cases. This is a really powerful approach: you can set up all kinds of integrations. It allows you to program everything, like placing a file in a given location or logging in to the SAP application and triggering a sales order. You can also perform all kinds of validation of the results.

The downside of the Workflow approach is that it requires a lot of work to set up test cases. You need to be able to call the adapter in the correct way, or to check that a file is placed correctly. Some validations can be pretty difficult, for example if you need to call a third-party system to get a status. You also need to create a separate test case for every variation you want to cover, which adds to the complexity and makes the setup labour intensive. So most of the time you will end up with a limited number of test cases.

Record/Replay: This approach is the new way of creating tests. It requires that you set some boundaries to limit what you are able to test in an integration, so you will only be able to test a subset of it. The boundaries enable you to collect test data automatically, provided the system allows some sort of extraction of the data. The extraction gives you an input file and an expected output. The input file can then be processed again, and the new output should be close to the expected output. With this approach, you can set up 100 test cases for an ICO in the time it takes to collect the test data. With IRT you can select many different ICOs at the same time.

It also means that the learning curve for picking up the tool is much shorter.
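To make the record/replay principle concrete, here is a minimal sketch of the comparison step, assuming the replayed output is compared field by field against the recorded expected output while fields that legitimately change between runs (like message IDs or timestamps) are ignored. It only illustrates the idea and is not IRT's actual comparison engine.

```java
import java.util.*;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

// Minimal sketch: compare a replayed output against the recorded expected output.
public class ReplayComparator {

    // Fields that are allowed to differ between runs (illustrative)
    static final Set<String> IGNORED = Set.of("MessageId", "Timestamp");

    public static void main(String[] args) throws Exception {
        String expected = "<Invoice><MessageId>1</MessageId><Amount>100.00</Amount></Invoice>";
        String actual   = "<Invoice><MessageId>2</MessageId><Amount>100.00</Amount></Invoice>";

        List<String> differences = new ArrayList<>();
        compare(parse(expected).getDocumentElement(),
                parse(actual).getDocumentElement(), differences);

        System.out.println(differences.isEmpty()
                ? "PASS: replayed output matches the recorded output"
                : "FAIL: " + differences);
    }

    static Document parse(String xml) throws Exception {
        return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new org.xml.sax.InputSource(new java.io.StringReader(xml)));
    }

    static void compare(Element exp, Element act, List<String> diffs) {
        if (IGNORED.contains(exp.getTagName())) {
            return; // this field is expected to change on every run
        }
        NodeList expChildren = exp.getChildNodes();
        NodeList actChildren = act.getChildNodes();
        if (expChildren.getLength() == 1
                && expChildren.item(0).getNodeType() == Node.TEXT_NODE) {
            // Leaf element: compare the text values
            if (!exp.getTextContent().equals(act.getTextContent())) {
                diffs.add(exp.getTagName() + ": expected '" + exp.getTextContent()
                        + "' but was '" + act.getTextContent() + "'");
            }
            return;
        }
        // Compare element children pairwise (assumes the same structure, for brevity)
        for (int i = 0; i < expChildren.getLength(); i++) {
            if (expChildren.item(i).getNodeType() == Node.ELEMENT_NODE
                    && i < actChildren.getLength()
                    && actChildren.item(i).getNodeType() == Node.ELEMENT_NODE) {
                compare((Element) expChildren.item(i), (Element) actChildren.item(i), diffs);
            }
        }
    }
}
```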