Creating unit tests for SAP CPI

Cross-Post: This blog has also been posted on blogs.sap.com

The best way to be able to fix any of your integrations is to be able to run tests really fast. If you are doing SAP CPI development in, for example, Groovy, it can be a little challenging to set up test cases for your SAP CPI scripts.

There are a number of blogs on how to make this possible; Eng Swee Yeoh, for instance, has written several on how to test SAP CPI.

You will need some boilerplate code to set up test cases, run them and do your verifications. We have now added a way to automate that process.

Last week we showed how we could expose your SAP CPI and API management as a Git repository. In the Git repository, we have added mock services for all the necessary implementations, so it is possible to run the scripts without connecting to the services.

Since these are mocks, some parts of the run may not behave exactly like the real thing if you use some exotic functions. That is why you should still test in the real CPI.

In one of the coming releases, we will also be adding a messageFactory so you can test the attachments that you have created.

The only thing you need to do is to add the SAP CPI jars, which you can download here.

Testing with Figaf

Last week we released a version of Figaf IRT that enabled you to expose your SAP CPI content via Git. With this integration, we now have another nice option. We can create unit tests based on your scripts.

If you have created test cases for an iflow in Figaf IRT then you have the option to “Generate Groovy Scripts Test data”. This will create test cases for all the scripts that you have in that iflow.

The program will create all input and output messages in a JSON format like the following. This allows Figaf IRT to create automated assertions on all the parameters in the message.

{
  "input" : {
    "headers" : {
      "SAP_MplCorrelationId" : "AF1eZ3x-sdvOIJSgilALr8Lldude",
      "CamelHttpResponseCode" : "200",
      "apikey" : "124234523",
 //  ....
    },
    "properties" : {
      "CamelStreamCacheUnitOfWork" : "DefaultUnitOfWork",
      "CamelSplitComplete" : "false",
      "CamelSplitIndex" : "1",
      "CamelCorrelationId" : "ID-vsa6859459-36905-1565339707446-5304-2",
      "CamelMessageHistory" : "[DefaultMessageHistory[routeId=Process_1, node=CallActivity_27_1566467962530], DefaultMessageHistory[routeId=Process_17, node=CallActivity_39_1566467962450], DefaultMessageHistory[routeId=Process_17, node=MessageFlow_34_1566467962454], DefaultMessageHistory[routeId=Process_17, node=setHeader13227], DefaultMessageHistory[routeId=Process_17, node=setHeader13228], DefaultMessageHistory[routeId=Process_17, node=to14489], DefaultMessageHistory[routeId=Process_17, node=removeHeader3781], DefaultMessageHistory[routeId=Process_17, node=removeHeader3782], DefaultMessageHistory[routeId=Process_17, node=removeHeader3783], DefaultMessageHistory[routeId=Process_17, node=to14490], DefaultMessageHistory[routeId=Process_17, node=CallActivity_41_1566467962463]]",
      ".hasMoreRecords" : "false",
   // ....
    },
    "body" : "{\r\n\"d\" : {\r\n\"__metadata\": {\r\n\"uri\": \"https://services.odata.org/V2/Northwind/Northwind.svc/\"type\": \"NorthwindModel.Customer\"\r\n}, \"CustomerID\": \"TOMSP\", \"CompanyName\": \"Toms Spezialit\\u00e4ten\", \"ContactName\": \"Karin Josephs\", \"ContactTitle\": \"Marketing Manager\", \"Address\": \"Luisenstr. 48\", \"City\": \"M\\u00fcnster\", \"Region\": null, \"PostalCode\": \"44087\", \"Country\": \"Germany\", \"Phone\": \"0251-031259\", \"Fax\": \"0251-035695\", \"Orders\": {\r\n\"__deferred\": {\r\n\"uri\": \"https://services.odata.org/V2/Northwind/Northwind.svc/Customers('TOMSP')/Orders\"\r\n}\r\n}, \"CustomerDemographics\": {\r\n\"__deferred\": {\r\n\"uri\": \"https://services.odata.org/V2/Northwind/Northwind.svc/Customers('TOMSP')/CustomerDemographics\"\r\n}\r\n}\r\n}\r\n}"
  },
  "output" : {
    
    //same as input
    
     }
}

It will also create a test class like the following that covers all the scripts in the flow and links to all the test messages that you have in the repository. You can remove any scripts that you do not want to test.

Every time you run the generation it will update the file and add more test data files with an incremented number. You can run the generation as many times as you want, but it would be wise to add each test case only once; otherwise you are not adding any value to the testing.

The generated test case looks like the following and contains tests for all your scripts.

package com.figaf

import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.ValueSource

class GroovyScriptsTest extends AbstractGroovyTest {

    @ParameterizedTest
    @ValueSource(strings = [
            "src/test/resources/test-data-files/SetHeaders2/processData/test-data-1.json"
    ])
    void test_SetHeaders2Groovy(String testDataFile) {
        String groovyScriptPath = "src/main/resources/script/SetHeaders2.groovy"
        // Runs the script with the recorded input and compares the resulting headers,
        // properties and body against the recorded output, skipping the ignored keys
        basicGroovyScriptTest(groovyScriptPath, testDataFile, "processData", getIgnoredKeysPrefixes(), getIgnoredKeys())
    }

    @ParameterizedTest
    @ValueSource(strings = [
            "src/test/resources/test-data-files/setHeaders/processData/test-data-2.json",
            "src/test/resources/test-data-files/setHeaders/processData/test-data-3.json"
    ])
    void test_setHeadersGroovy(String testDataFile) {
        String groovyScriptPath = "src/main/resources/script/setHeaders.groovy"
        basicGroovyScriptTest(groovyScriptPath, testDataFile, "processData", getIgnoredKeysPrefixes(), getIgnoredKeys())
    }


    @Override
    List<String> getIgnoredKeys() {
        // Keys listed here are excluded from the automatic comparison of headers and properties
        List<String> keys = super.getIgnoredKeys()
        keys.addAll(Arrays.asList())
        return keys
    }

}

I think this approach will make it a lot easier to set up testing and run a set of assertions, since you don't need to do anything yourself and all required information is added to the environment. If there are parameters that you want to exclude from the comparison, you can add them to getIgnoredKeys, as shown below.
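For instance, an adjusted override could look like the following; the two key names are only examples taken from the recorded message shown earlier, so pick whatever is volatile in your own scenario.

    @Override
    List<String> getIgnoredKeys() {
        // Example only: exclude a couple of volatile runtime keys from the automatic comparison
        List<String> keys = super.getIgnoredKeys()
        keys.addAll(Arrays.asList("SAP_MplCorrelationId", "CamelMessageHistory"))
        return keys
    }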

Custom testing

The standard way only allows you to compare the input with the expected output. That may not be what you want in all cases; some test cases may need to be evaluated differently.

You then have the option to just send a message through the processing and insert your own assertions afterwards. Here you can do all your custom assertions and get errors if anything does not match.

An example of such a custom validation could look like the following.

package com.figaf

import org.assertj.core.api.Assertions
import org.assertj.core.api.SoftAssertions
import org.junit.jupiter.api.Test

class SetHeadersTest extends AbstractGroovyTest {

    @Test
    void customTest() {
        String groovyScriptPath = "src/main/resources/script/setHeaders.groovy"
        String testDataFilePath = "src/test/resources/test-data-files/setHeaders/processData/test-data-1.json"
        // Process the recorded test data: returns both the recorded (expected) and the
        // freshly produced (actual) message data for the script
        def (MessageTestData messageDataExpected, MessageTestData messageDataActual) =
        processMessageData(groovyScriptPath, testDataFilePath, "processData")
        String actualModeValue = messageDataActual.getProperties().get("newError3")
        Assertions.assertThat(actualModeValue).isNotNull()
        Assertions.assertThat(actualModeValue)
                .endsWith("Test3")
    }

    @Test
    void customTestSoftAssertions() {
        String groovyScriptPath = "src/main/resources/script/setHeaders.groovy"
        String testDataFilePath = "src/test/resources/test-data-files/setHeaders/processData/test-data-1.json"
        def (MessageTestData messageDataExpected, MessageTestData messageDataActual) =
        processMessageData(groovyScriptPath, testDataFilePath, "processData")
        String actualPropValue = messageDataActual.getProperties().get("newError3")
        String expectedPropValue = messageDataExpected.getProperties().get("newError3")

        SoftAssertions softly = new SoftAssertions()

        softly.assertThat(actualPropValue).isNotNull()
        softly.assertThat(actualPropValue).endsWith("Test3")
        softly.assertThat(actualPropValue).isEqualTo(expectedPropValue)

        softly.assertAll()
    }
}

In the examples you can see different ways to run the assertions; the second test uses soft assertions, so all failures are collected and reported together.

Try Figaf IRT

There is a community version of Figaf IRT that allows you to run the application in single-user mode with some limitations, but all of this should work. It is free.

I do hope that you try the functions, see how much they can improve your usage of SAP CPI, and want to buy the full package.

We are improving the functionality over time, so it will be easier to manage your SAP CPI system.

Speed up your SAP CPI development process

Thursday, I hosted the webinar “How to speed up SAP CPI development”. It is an important topic because developers need to find ways to improve the speed at which they can deliver integrations.

I have talked about the topic earlier, but this time we have added an important component: Figaf IRT can now make your development process much simpler. So now we really can speed up development and also change management.

There are two areas where the application allows users to create integrations faster.

  • Development in your favorite IDE; this is where we have improved a lot lately.
  • Change management; this is where you document changes and transport individual iflows.

You can see the presentation here.

Development process

When you are working with SAP CPI you will often end up having to edit an XSLT or a Groovy script. That is normally not easy to do in the built-in editor. Sure, there may be simple changes where you can get away with it, but if you want to implement any complicated logic you need a better setup.

We have been adding functions to make it a lot easier over the last few months. The biggest impact has been our Git integration.

We have also added a way to share scripts and other resources between iflows. The challenge with sharing scripts between flows is to keep them synchronized and ensure a change does not break other iflows.

Git integration

The Git integration allows you to take your SAP CPI iflow and expose all its content to a Git repository. We have also added configuration file templates so you can use them right away. With this, you get plugins that allow users to deploy scripts from their IDE.

We have also added a unit testing component that allows users to test with real payloads. This speeds up the process and ensures that you have the correct data to start your Groovy script with. From your IDE you also have the option to test how the created iflows work.

Change management

We have built a solution that makes it possible for users to document what they have done. I believe that all changes in an integration environment must be driven by business reasons like a service request, a ticket, or whatever it is called in your methodology.

We have made it simple to link the service request and the changes that happen. This speeds up the development process.

It is also possible to transport individual iflows across your landscape. You can even use the virtual environment on your CPI system. This means that you are able to develop and test on the same tenant, so you can save an instance. Figaf also allows you to manage configuration a lot more easily across your environments.

Testing is also a part of your release. You can run tests really simply on all your iflows. This makes it possible to see if your changes have a negative impact on your process. It is also possible to test with mock services, so you can test integrations even if they require access to third-party apps that do not support being called with the same data multiple times. This also enables you to verify that your flows and scripts work the same way after an upgrade.

Try IRT yourself

You can try out all the Git and change management functions with our free community edition. If you enjoy it, you can upgrade to any of our solutions that offer a lot more integration development options.

Download IRT now or run it in our cloud

SAP CPI and Git integration

If you want to leverage the SAP CPI capabilities and improve the speed of delivery, you need a better way than using the web front end of SAP CPI. Many users will create a Git repository to add files to, so the team gets access to all files for the project and can use their favorite IDE. It does, however, require a lot of effort to keep everything synchronized.

We have started to improve the functionality for users to be able to use SAP CPI from their own IDE. For this, we have created a few new features.

  • From Figaf IRT we have added an option to synchronize all your integration flows to a Git repository. You can use GitHub or SAP’s Git service, so your whole team can get access to the same source.
  • We have created a Gradle plugin that allows you to download, upload and deploy integration flows from your IDE. This is a free plugin you can use even if you are not using Figaf IRT.
  • We have created a Gradle plugin that enables you to run a test case for one or more iflows as a part of a change.

Our design principle has been to make it as easy as possible for developers to start using Git, and to deliver as much as possible preconfigured. You can add more plugins if you want to.

This is an initial version of the Git integration, and we will probably see some changes and ways to improve it over time. There are more things we could add, like test scripts for local testing, and we need some improvements around commits. It will depend on customer input.

You can sign up to try out the Figaf IRT tool on your own CPI system now, and once the new feature is added you will be able to run it.

Virtual SAP CPI landscapes

Some SAP CPI customers do not have a full three-tier system landscape for their SAP CPI. But you may still want to have all the systems connected and be able to test the flow. You can do it manually, using the copy functionality, but it requires a lot of governance to keep the QA tenant updated.

We originally created the virtual landscape just to test how our transport system worked, because we only had one CPI system. Now we have improved the functionality, because it enables customers to take advantage of our transport while using only one CPI system. Earlier we changed the sender system by adding a prefix with the system name.

We have now improved the functionality so the system name is also added to the ProcessDirect adapter. Here we are adding the prefix to the URL of both sender and receiver channels. You thereby have the option to run the iflow in a “QA environment”.
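As a simple illustration of the idea (treat it as a hypothetical sketch, since the exact prefixing scheme Figaf uses may differ), a ProcessDirect address could be rewritten per virtual environment like this:

// Hypothetical sketch: prefix ProcessDirect addresses with the virtual environment name,
// so one tenant can host both the "DEV" and the "QA" copy of an iflow.
String prefixAddress(String environment, String address) {
    return "/" + environment + address
}

assert prefixAddress("QA", "/OrderFlow") == "/QA/OrderFlow"   // applied to both sender and receiver channels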

You can see a demonstration of how this works in the video.

Updated: If you import a package, it will be auto-deployed.

The transport system also provides an option to create iflow documentation, and you can test the impact of a change. You can also modify externalized parameters across the landscape, so you can specify what, for example, the target URL should be on QA. IRT will then handle the configuration once it deploys.

Try out the Figaf IRT tool on-premise or use our cloud for it.

Testing SAP CPI with Mock data service

If you start making changes to your SAP CPI system, you will need to be able to test your integration flows whenever you modify your scripts. In most cases, this requires access to the same backend system and the ability to create the same requests.

Figaf has been supporting testing of SAP CPI for about a year. We were able to do it in the following way.

  1. Switch the integration flow into trace mode.
  2. Let the user send some messages.
  3. Figaf IRT could then pick up all the trace data and use it as a baseline.

To test an integration flow it did the following.

  1. Switch the integration flow into trace mode.
  2. Figaf would send the input data.
  3. Figaf IRT could then pick up all the trace data.
  4. Compare the data with the original data.

It provided an easy way to test your iflows and enabled users to compare the data to see if anything had changed.

Why we needed a new way of testing

The old way is good if the flows do not connect to many backend services and those services are able to reply with the same data every time. If you have SuccessFactors, it may be difficult to create or fire the same employee multiple times. And if your interface expects a delta of data, you will not be able to fetch the correct data without making modifications.

We also want to be able to run the tests more often, so we can be sure nothing is impacted. One example is our new approach of adding shared resources to SAP CPI, where a modification to a script updates all the other iflows that use it.

Or, if possible, to be able to run the tests on the next release of SAP CPI to see that nothing is affected.

How testing SAP CPI with mock data works

The testing works a little differently when you have receiver adapters. Figaf will create a copy of the iflow and replace all receiver channels with an HTTP call to the Figaf IRT server. The call sends information about the current test case and which part of the process it is in. The API exposed by the Figaf server then returns the HTTP headers that were changed in that step, as well as the body.
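Conceptually, the call that replaces a receiver channel could look something like the sketch below. Note that the endpoint, parameters and response fields are assumptions made for illustration only, not the actual Figaf IRT API.

import groovy.json.JsonSlurper

// Hypothetical sketch: the mocked receiver channel calls the Figaf IRT server with the
// current test case and process step, and gets back the recorded headers and body so the
// rest of the iflow can continue as if the real backend had answered.
def mockUrl = "https://figaf-irt.example.com/mock?testCaseId=4711&step=CallActivity_27"   // made-up identifiers
def recorded = new JsonSlurper().parse(new URL(mockUrl).openStream())

recorded.headers.each { name, value -> println "header ${name} = ${value}" }
println recorded.body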

As a user, the only thing you need to do is select that the test should be performed with mock data.

If there is a change in the iflow, Figaf will automatically synchronize the testing copy and deploy it.

You can see the demo of how the service works.

As you can see, it is pretty simple to get working: just one tick mark on the testing template. The solution also works with Cloud Connector if you have a local installation of IRT; you will just need some settings to specify the URL and Cloud Connector information when configuring the iflows.

What are the limits of mock data

With the mock data there are some limits. Some of them we can probably solve with more development and feedback from customers.

  • You will not be able to test your adapter.
  • You will not be able to test binary objects in the headers, since it is not possible to recover them.
  • It does not support Enrich scenarios at the moment.

Try it yourself

You can download the Figaf tool and run it in your own landscape, or you can try it out in our cloud deployment.