Creating unit tests for SAP CPI

Cross-Post: This blog has also been posted on blogs.sap.com

The best way to be able to fix any of your integrations is to be able to run tests really fast. If you are doing SAP CPI development in, for instance, Groovy, it can be a little challenging to set up test cases for your SAP CPI scripts.

There are a number of blogs on how to make this possible; Eng Swee Yeoh, for one, has written several on how to test SAP CPI.

You will need some boilerplate code to set up the test cases, run them and do your verifications. We have now added a way to automate the process.
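To give an idea of the boilerplate this replaces, a hand-rolled harness could look roughly like the sketch below. It is only a sketch: it assumes the SAP CPI script API jar is on the classpath, that the script implements the usual processData(Message) function, and that the script only calls the Message methods stubbed here; the payload and the property name in the assert are just examples.

import com.sap.gateway.ip.core.customdev.util.Message

// Back the CPI Message interface with plain maps via Groovy map coercion.
// Methods that are not stubbed here will fail at runtime if the script calls them.
def headers = [:]
def properties = [:]
def body = '<Order/>'   // example payload

Message message = [
        getBody       : { Object... args -> body },
        getHeaders    : { -> headers },
        setHeader     : { String name, Object value -> headers[name] = value },
        getProperties : { -> properties },
        setProperty   : { String name, Object value -> properties[name] = value }
] as Message

// Load the script under test and call its processData function directly.
def script = new GroovyShell().parse(new File('src/main/resources/script/setHeaders.groovy'))
script.invokeMethod('processData', message)

// Verify whatever the script is supposed to set; the key is only an example.
assert properties['newError3'] != null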

Last week we showed how we could expose your SAP CPI and API Management content as a Git repository. In the Git repository, we have added mock services for all the necessary implementations, so it is possible to run the scripts without connecting to the real services.

Since these are mocks, some parts may not behave 100% like the real thing if you use exotic functions. That is why you should still test in the real CPI tenant.

In one of the coming releases, we will also be adding a messageFactory so you can test the attachments that you have created.

The only thing you need to do is add the SAP CPI jars, which you can download here.
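If the test project is built with Gradle, wiring in the downloaded jars could look roughly like this; the file name below is a placeholder for whatever the downloaded archive actually contains:

// build.gradle (sketch) - the jar file name is a placeholder for the downloaded SAP CPI library
dependencies {
    // The script API is only needed to compile and run the scripts locally,
    // it is never packaged with the iflow itself.
    compileOnly files('libs/cpi-script-api.jar')
    testImplementation files('libs/cpi-script-api.jar')
}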

Testing with Figaf

Last week we released a version of Figaf IRT that enabled you to expose your SAP CPI content via Git. With this integration, we now have another nice option. We can create unit tests based on your scripts.

If you have created test cases for an iflow in Figaf IRT, you have the option to “Generate Groovy Scripts Test data”. This will create test cases for all the scripts in that iflow.

The program will save all input and output messages in a JSON format like the following. This allows Figaf IRT to create automated assertions on all the parameters in the message.

{
  "input" : {
    "headers" : {
      "SAP_MplCorrelationId" : "AF1eZ3x-sdvOIJSgilALr8Lldude",
      "CamelHttpResponseCode" : "200",
      "apikey" : "124234523",
 //  ....
    },
    "properties" : {
      "CamelStreamCacheUnitOfWork" : "DefaultUnitOfWork",
      "CamelSplitComplete" : "false",
      "CamelSplitIndex" : "1",
      "CamelCorrelationId" : "ID-vsa6859459-36905-1565339707446-5304-2",
      "CamelMessageHistory" : "[DefaultMessageHistory[routeId=Process_1, node=CallActivity_27_1566467962530], DefaultMessageHistory[routeId=Process_17, node=CallActivity_39_1566467962450], DefaultMessageHistory[routeId=Process_17, node=MessageFlow_34_1566467962454], DefaultMessageHistory[routeId=Process_17, node=setHeader13227], DefaultMessageHistory[routeId=Process_17, node=setHeader13228], DefaultMessageHistory[routeId=Process_17, node=to14489], DefaultMessageHistory[routeId=Process_17, node=removeHeader3781], DefaultMessageHistory[routeId=Process_17, node=removeHeader3782], DefaultMessageHistory[routeId=Process_17, node=removeHeader3783], DefaultMessageHistory[routeId=Process_17, node=to14490], DefaultMessageHistory[routeId=Process_17, node=CallActivity_41_1566467962463]]",
      ".hasMoreRecords" : "false",
   // ....
    },
    "body" : "{\r\n\"d\" : {\r\n\"__metadata\": {\r\n\"uri\": \"https://services.odata.org/V2/Northwind/Northwind.svc/\"type\": \"NorthwindModel.Customer\"\r\n}, \"CustomerID\": \"TOMSP\", \"CompanyName\": \"Toms Spezialit\\u00e4ten\", \"ContactName\": \"Karin Josephs\", \"ContactTitle\": \"Marketing Manager\", \"Address\": \"Luisenstr. 48\", \"City\": \"M\\u00fcnster\", \"Region\": null, \"PostalCode\": \"44087\", \"Country\": \"Germany\", \"Phone\": \"0251-031259\", \"Fax\": \"0251-035695\", \"Orders\": {\r\n\"__deferred\": {\r\n\"uri\": \"https://services.odata.org/V2/Northwind/Northwind.svc/Customers('TOMSP')/Orders\"\r\n}\r\n}, \"CustomerDemographics\": {\r\n\"__deferred\": {\r\n\"uri\": \"https://services.odata.org/V2/Northwind/Northwind.svc/Customers('TOMSP')/CustomerDemographics\"\r\n}\r\n}\r\n}\r\n}"
  },
  "output" : {
    
    //same as input
    
     }
}

It will also create a test class like the following, with a test method for each script in the iflow and links to all the test messages in the repository. You can remove any scripts that you do not want to test.

Every time you run the generation, it will update the file and add more test data with an increasing number. You can run the generation as many times as you want, but it is wise to add each test case only once; otherwise you are not adding any value to the testing.

The generated test class looks like the following and contains tests for all your scripts.

package com.figaf

import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.ValueSource

class GroovyScriptsTest extends AbstractGroovyTest {

    @ParameterizedTest
    @ValueSource(strings = [
            "src/test/resources/test-data-files/SetHeaders2/processData/test-data-1.json"
    ])
    void test_SetHeaders2Groovy(String testDataFile) {
        String groovyScriptPath = "src/main/resources/script/SetHeaders2.groovy"
        basicGroovyScriptTest(groovyScriptPath, testDataFile, "processData", getIgnoredKeysPrefixes(), getIgnoredKeys())
    }

    @ParameterizedTest
    @ValueSource(strings = [
            "src/test/resources/test-data-files/setHeaders/processData/test-data-2.json",
            "src/test/resources/test-data-files/setHeaders/processData/test-data-3.json"
    ])
    void test_setHeadersGroovy(String testDataFile) {
        String groovyScriptPath = "src/main/resources/script/setHeaders.groovy"
        basicGroovyScriptTest(groovyScriptPath, testDataFile, "processData", getIgnoredKeysPrefixes(), getIgnoredKeys())
    }


    @Override
    List<String> getIgnoredKeys() {
        List<String> keys = super.getIgnoredKeys()
        keys.addAll(Arrays.asList())
        return keys
    }

}

I think this approach will make it a lot easier to set up testing and run a set of assertions, as you don’t need to do anything yourself and all the required information is added to the project. If there are parameters that you want to exclude from the comparison, you can add them in getIgnoredKeys.
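For example, excluding values that change on every run from the comparison could look like this in the generated override; the key names are only examples taken from the message shown above:

    @Override
    List<String> getIgnoredKeys() {
        List<String> keys = super.getIgnoredKeys()
        // Ignore values that differ on every execution, so the assertions stay stable.
        keys.addAll(Arrays.asList("SAP_MplCorrelationId", "CamelMessageHistory"))
        return keys
    }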

Custom testing

The standard way only lets you compare an input against the expected output. That may not be what you want in all cases; some test cases need to be evaluated differently.

You then have the option to just send a message through the processing and add your own assertions afterwards. Here you can do all your custom assertions and get errors if anything does not match.

An example of such a custom validation could look like the following.

package com.figaf

import org.assertj.core.api.Assertions
import org.assertj.core.api.SoftAssertions
import org.junit.jupiter.api.Test

class SetHeadersTest extends AbstractGroovyTest {

    @Test
    void customTest() {
        String groovyScriptPath = "src/main/resources/script/setHeaders.groovy"
        String testDataFilePath = "src/test/resources/test-data-files/setHeaders/processData/test-data-1.json"
        def (MessageTestData messageDataExpected, MessageTestData messageDataActual) =
        processMessageData(groovyScriptPath, testDataFilePath, "processData")
        String actualModeValue = messageDataActual.getProperties().get("newError3")
        Assertions.assertThat(actualModeValue).isNotNull()
        Assertions.assertThat(actualModeValue)
                .endsWith("Test3")
    }

    @Test
    void customTestSoftAssertions() {
        String groovyScriptPath = "src/main/resources/script/setHeaders.groovy"
        String testDataFilePath = "src/test/resources/test-data-files/setHeaders/processData/test-data-1.json"
        def (MessageTestData messageDataExpected, MessageTestData messageDataActual) =
        processMessageData(groovyScriptPath, testDataFilePath, "processData")
        String actualPropValue = messageDataActual.getProperties().get("newError3")
        String expectedPropValue = messageDataExpected.getProperties().get("newError3")

        SoftAssertions softly = new SoftAssertions()

        softly.assertThat(actualPropValue).isNotNull()
        softly.assertThat(actualPropValue).endsWith("Test3")
        softly.assertThat(actualPropValue).isEqualTo(expectedPropValue)

        softly.assertAll()
    }
}

In the examples you can see two different ways to run the assertions: plain assertions that stop at the first failure, and SoftAssertions that collect all failures before reporting them.

Try Figaf IRT

There is a community version of Figaf IRT that allows you to run the application in single-user mode with some limitations, but all of this should be working. It is free.

I do hope you try the functions, see how much they can improve your work with SAP CPI, and want to buy the full package.

We are improving the functionality over time, so it will be easier to manage your SAP CPI system.

Speed up your SAP CPI development process

On Thursday, I hosted the webinar “How to speed up SAP CPI development”. It is an important topic, because developers need to find ways to improve the speed at which they can deliver integrations.

I have talked about the topic earlier, but this time we have added an important component: Figaf IRT can now make your development process much simpler. So now we really can speed up both development and change management.

There are two areas where the application helps users deliver integrations faster:

  • Development in your favorite IDE; this is where we have improved a lot lately.
  • Change management; this is where you document changes and transport individual iflows.

You can see the presentation here.

Development process

When you are working with SAP CPI, you will often end up having to edit an XSLT or a Groovy script. That is normally not easy to do in the built-in editor. Sure, simple changes can be made there, but if you want to build any complicated logic you need a better setup.

We have been adding functions to make it a lot easier over the last few months. The biggest impact has been our Git integration.

We have also added a way to share scripts and other resources between iflows. The challenge with sharing scripts between iflows is keeping them synchronized and ensuring a change does not break other iflows.

Git integration

The Git integration allows you to take your SAP CPI iflows and expose all their content to a Git repository. We have also added configuration file templates that you can use as they are. With this, you get plugins that allow users to deploy scripts directly from their IDE.

We have also added a unit testing component that allows users to test with real payloads. This speeds up development and ensures that you have correct data to start your Groovy script with. From your IDE you also have the option to test how the created iflows work.

Change management

We have built a solution that makes it possible for users to document what they have done. I believe that all changes in an integration environment must have a business reason behind them, like a service request or a ticket, or whatever it is called in your methodology.

We have made it simple to link the service request with the changes that happen. This speeds up the development process.

It is also possible to transport individual iflows across your landscape. You can even use a virtual environment on your CPI system. This means that you are able to develop and test on the same tenant, so you can save an instance. Figaf also allows you to manage configuration a lot more easily across your environments.

Testing is also a part of your release. You can run tests on all your iflows really easily, which makes it possible to see if your changes have a negative impact on your processes. It is also possible to test with mock services, so you can test integrations even when they depend on third-party apps that do not support being called with the same data multiple times. It also enables you to verify that your flows and scripts still work the same way after an upgrade.

Try IRT yourself

You can try out all the Git and change management functions with our free community edition. If you like it, you can upgrade to any of our solutions, which offer a lot more integration development options.

Download IRT now or run it in our cloud

Figaf IRT 2.10: Making SAP CPI development faster with Git

It has been a long time since our last release. We did have a summer vacation and had a big new function to release. 

I have been teasing that we have Git integration for SAP CPI, and we have also added it for SAP API Management.

Why Git:

The thing that I like about our Git integration is that it allows users to access all the sources of a project in one simple repository. It speeds up development because you get easier access to all Groovy scripts and XSLTs, so you can reuse them. We have tried GitHub, SAP's Git service on Cloud Platform and also Bitbucket; others will probably work as well. Just be sure it is a private repository.

It gives you the opportunity to edit the code in your favorite IDE, like Eclipse or IntelliJ. They have much better code completion, which allows you to write code faster.

It is our first Git release, so we are looking for feedback about how the workflow should be and if there are any ways we can improve it. 

Gradle Plugins:

From an SAP developer perspective, we are not really used to being able to use plugins in our IDE. SAP released the NetWeaver Developer Studio for PI and the old Eclipse plugin for CPI development. There you had some functions you could use, but they would only work in that IDE. A better way to deliver integration tools is to deliver Gradle plugins. Java developers have a lot of Gradle plugins they can use to bundle applications or run them in a specific way. So it makes sense to put this functionality into Gradle plugins that allow you to upload, deploy and test your iflows directly from your IDE. That way you don’t need to leave your IDE to validate that your code is running correctly. We have created three plugins, all open source: one for CPI, one for API Management and one to handle testing with Figaf IRT. The plugins can be found in the Gradle plugin directory or on our GitHub page.
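Just to show the general shape, applying such a plugin in a build.gradle and running one of its tasks is all a developer has to do. The plugin id, version and task name below are placeholders; the real ones are listed in the plugin documentation on the Gradle plugin portal and our GitHub page:

// build.gradle (sketch) - placeholder plugin id, version and task name
plugins {
    id 'com.example.figaf-cpi' version '1.0.0'
}

// Connection details would typically go into gradle.properties rather than the build file.
// Running e.g. `./gradlew uploadIflow` from the IDE terminal would then push the sources
// to the tenant without leaving the IDE.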

Simple workflow

One of the big challenges with setting up your own infrastructure is that you need to configure all the plugins. That is the pain we are removing. You don’t need to be a Git expert or know a lot about Java configuration; all configuration is delivered as templates that you can just reuse. It makes your developers much more productive.

Other improvements:

We have made some more improvements in the application and fixed some bugs. Among them: when you use Figaf IRT to transport individual iflows, it now uses the unofficial update-metadata API. This allows you to keep the version history of your changes and do rollbacks. It is not ideal, but we will use it until SAP supports updating an iflow with the official API.

What is next:

The next big challenge that we would like to tackle is making it easier for the user to create test data and assertions for CPI development. Half a year ago we created an option to generate a Groovy script that sets up a message with test data. That approach did make things easier, but you had to make some manual changes. That is what we are trying to change. We want to make it possible to generate both the input and the expected output, so you can run the test cases a lot more easily. This will speed up your development even more.

Try it out now and see how it works

There is a free version of Figaf IRT that you can use. It will enable you to expose your CPI system using Git. You can run Figaf IRT on your laptop, and it takes 10 minutes to get started.

We hope to find a way to license the feature while making it easy to try. But for now, there is only a restriction on the number of agents/CPI systems you can connect with.

Test EDIFACT/X12 on SAP PI/PO

Yesterday I gave a demonstration of the Seeburger Migration Tool to a client who was migrating their EDIFACT/X12 message mappings from Seeburger to the B2B Add-on format. I wanted to show how it is possible to test that the mappings were correctly migrated and converted.

I do think this could also be useful for other SAP PI/PO customers who are either considering a migration project or just want better control over what is happening to their EDI messages.

It is crucial to test all your EDI documents because of the volume of business going through them. For some of the customers I have worked with, EDI accounted for 20-50% of their turnover, so it is the part that is too big to fail. It is pretty difficult to test normally, because you need a good understanding of what can change in your mappings, both when it comes to upgrades and when it comes to changes made by developers.

As you can see in the video, it is pretty simple to set up a test case based on data from your productive system, because that is where you have the most reliable amount of data and variance. It is just too complex to test everything based on manual test case creation, which is why we believe it should be automated.

We will set up EDI documents in both directions, so you can see the validation of EDI documents and also that it works with the EDISeparator. We only show EDIFACT, but X12 and EANCOM work the same way.

We do not currently have an integration with dual-stack systems. We did implement it in an old release of IRT, so it will be possible to bring it forward to the current version. We currently do not have access to a dual-stack system, so if this is something you want, we hope you will work with us to make it possible.

How about change management

Figaf IRT is a lot more than just a testing application. We started with testing of SAP PI/PO but have since expanded the service to handle the full change management process. Just link the CTS transport with the business request and you are able to document all changes, and also test the changes that you make. It reuses all your EDI test cases in the process.

Try IRT

You can try out the Figaf IRT tool. It takes just 15 minutes to set up on your laptop, so you can see how easy it is and what it can do for your SAP PI/PO and CPI development. The solution can scale to run for your full enterprise.

Virtual SAP CPI landscapes

Some SAP CPI customers do not have a full three-tier system landscape for their SAP CPI. But you may still want the experience of having all the systems connected and be able to test the flow. You can do it manually using the copy functionality, but it requires a lot of governance to keep the QA environment updated.

We originally created the virtual landscape just to test how our transport system worked, because we only had one CPI system. Now we have improved the functionality, because it enables customers to take advantage of our transports while using just one CPI system. Earlier, we changed the sender system by adding a prefix with the system name.

We have now improved the functionality so the system name is also applied to the ProcessDirect adapter. Here we add the prefix to the address URL on both sender and receiver channels. You thereby have the option to run the iflow in a “QA environment”.
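As a rough illustration (the exact prefix format is an assumption here, not documented behaviour), the rewrite of a ProcessDirect address could look along these lines when an iflow is transported into the virtual QA environment:

Address on the original iflow:        /OrderToInvoice
Address in the virtual QA landscape:  /QA_OrderToInvoice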

You can see a demonstration of how this works in the video.

Update: if you import a package, it will be deployed automatically.

The transport system also provides an option to create iflow documentation, and you can test the impact of a change. You can also modify externalized parameters across the landscape, so you can specify, for instance, what the target URL on QA should be. IRT will then handle the configuration once it deploys.

Try out the Figaf IRT tool on-premise or use our cloud for it.