
Change Management can be simple

In the old days, you had the SAP R/3 ABAP system, which came with a pretty good tool for change management. There was a limited number of objects you could change, and they could all be managed with one way of handling transports. In the new world, it is a lot more complicated.

Now the landscape is completely different. You have many different services that you need to find a way to manage, and if you start using SAP cloud solutions, many of them have their own way of handling transports.

I have created a video to show how I see the concept in some more detail.

You have two options for change management: a generic approach, or a specialized approach for your most important systems.

Generic change management

No matter what you need to change, everything is handled in the same way. All developers then know how to do change management for everything, which makes it much faster for other developers to pick up the tooling.

The downside is that you will need to make a process that can encompass all systems. You will, therefore, get too little information or spend too much time creating the documents that you need for your project.

You may also miss an important part: the ability to link changes both ways.

Tool-specific change management

If you have a lot of development or highly critical tools, it does make sense to have a better way to handle it.

When we are working on our Java application for the Figaf tool, we use Jira to create tickets. Developers then use the ticket number in all commits for a given ticket (e.g. prefixing each commit message with a hypothetical ticket key like "FIG-123"). Changes to our unit tests are also linked with the ticket, which allows us to trace all changes back to a ticket. That is a hyper-specialized way of doing change management for just one case. It can then be combined with Jenkins for build and test automation.

This type of specialization of your change management saves you time in your development, because developers don't need to note down all the different places they are making modifications. It is handled automatically by the tool and captured as a part of the change.

Sure, it will cost you more to buy or create such an environment; it will cost money, time and effort to get it working. Hopefully, this investment will be returned with better documentation, easier change management and faster delivery of code.

If you are using SAP integration products like SAP PI/PO, CPI or API Management, then we have a tool that will allow you to optimize your delivery process. Check out the tools at https://figaf.com/IRT; there are even free versions that allow you to get started.

With our approach, we handle all the tool-specific requirements in our application and let your original system contain all the organizational data.

If you want to see how our tool works to make DevOps possible, check this video.

Creating unit tests for SAP CPI

Cross-Post: This blog has also been posted on blogs.sap.com

The best way to be able to fix any of your integrations is to be able to run tests really fast. If you are doing SAP CPI development in, for example, Groovy, it can be a little challenging to set up test cases for your SAP CPI scripts.

There are a number of different blogs on how to make this possible; Eng Swee Yeoh, for instance, has created a number of blogs on how to test SAP CPI.

You will need some boilerplate code to set up test cases, run them and do your verifications. We have now added a way to automate the process.
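For reference, the kind of boilerplate a hand-written test would otherwise need might look roughly like this (an illustration only, assuming the Message class from the SAP CPI jars can be constructed directly; in practice you may need a small stub implementation):

import com.sap.gateway.ip.core.customdev.util.Message

// parse the iflow script and call its processData method with a hand-built message
def script = new GroovyShell().parse(new File("src/main/resources/script/setHeaders.groovy"))
def message = new Message()  // assumption: the real class may require a stub or factory
message.setBody('{"CustomerID": "TOMSP"}')
message = script.invokeMethod("processData", message)
assert message.getHeaders() != null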

Last week we showed how we could expose your SAP CPI and API management as a Git repository. In the Git repository, we have added mock services for all the necessary implementations, so it is possible to run the scripts without connecting to the services.

Since these are mocks, some parts of a run may not be 100% accurate if you use exotic functions. That is why you should still be testing in the real CPI.

In one of the coming releases, we will also be adding a messageFactory to test the attachments that you have created.

The only thing you need to do is to add the SAP CPI jars, which you can download here.
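If your test project is built with Gradle, wiring in those jars could look something like this (a minimal sketch; the jar file name is a placeholder for whatever the downloaded archive contains):

dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter-params:5.5.2'
    testImplementation 'org.assertj:assertj-core:3.13.2'
    // the downloaded SAP CPI jars, placed in a local libs folder
    testImplementation files('libs/cpi-script-api.jar')
}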

Testing with Figaf

Last week we released a version of Figaf IRT that enabled you to expose your SAP CPI content via Git. With this integration, we now have another nice option. We can create unit tests based on your scripts.

If you have created test cases for an iflow in Figaf IRT, then you have the option to "Generate Groovy Scripts Test data". This will create test cases for all the scripts that you have in that iflow.

The program will create all input and output messages in a JSON format like the following. This allows Figaf IRT to create automated assertions on all the parameters in the message.

{
  "input" : {
    "headers" : {
      "SAP_MplCorrelationId" : "AF1eZ3x-sdvOIJSgilALr8Lldude",
      "CamelHttpResponseCode" : "200",
      "apikey" : "124234523",
 //  ....
    },
    "properties" : {
      "CamelStreamCacheUnitOfWork" : "DefaultUnitOfWork",
      "CamelSplitComplete" : "false",
      "CamelSplitIndex" : "1",
      "CamelCorrelationId" : "ID-vsa6859459-36905-1565339707446-5304-2",
      "CamelMessageHistory" : "[DefaultMessageHistory[routeId=Process_1, node=CallActivity_27_1566467962530], DefaultMessageHistory[routeId=Process_17, node=CallActivity_39_1566467962450], DefaultMessageHistory[routeId=Process_17, node=MessageFlow_34_1566467962454], DefaultMessageHistory[routeId=Process_17, node=setHeader13227], DefaultMessageHistory[routeId=Process_17, node=setHeader13228], DefaultMessageHistory[routeId=Process_17, node=to14489], DefaultMessageHistory[routeId=Process_17, node=removeHeader3781], DefaultMessageHistory[routeId=Process_17, node=removeHeader3782], DefaultMessageHistory[routeId=Process_17, node=removeHeader3783], DefaultMessageHistory[routeId=Process_17, node=to14490], DefaultMessageHistory[routeId=Process_17, node=CallActivity_41_1566467962463]]",
      ".hasMoreRecords" : "false",
   // ....
    },
    "body" : "{\r\n\"d\" : {\r\n\"__metadata\": {\r\n\"uri\": \"https://services.odata.org/V2/Northwind/Northwind.svc/\"type\": \"NorthwindModel.Customer\"\r\n}, \"CustomerID\": \"TOMSP\", \"CompanyName\": \"Toms Spezialit\\u00e4ten\", \"ContactName\": \"Karin Josephs\", \"ContactTitle\": \"Marketing Manager\", \"Address\": \"Luisenstr. 48\", \"City\": \"M\\u00fcnster\", \"Region\": null, \"PostalCode\": \"44087\", \"Country\": \"Germany\", \"Phone\": \"0251-031259\", \"Fax\": \"0251-035695\", \"Orders\": {\r\n\"__deferred\": {\r\n\"uri\": \"https://services.odata.org/V2/Northwind/Northwind.svc/Customers('TOMSP')/Orders\"\r\n}\r\n}, \"CustomerDemographics\": {\r\n\"__deferred\": {\r\n\"uri\": \"https://services.odata.org/V2/Northwind/Northwind.svc/Customers('TOMSP')/CustomerDemographics\"\r\n}\r\n}\r\n}\r\n}"
  },
  "output" : {
    
    //same as input
    
     }
}

It will also create a testing method like the following, which covers all the scripts in the iflow and links to all the test messages that you have in the repository. You can remove any scripts that you do not want to test.

Every time you run the test case generation, it will update the file and add more test data with an incremented number. You can run the generation as many times as you want, but it would be wise to only add each test case once; otherwise you are not adding any value to the testing.

The generated test case looks like the following and contains tests for all your scripts.

package com.figaf

import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.ValueSource

class GroovyScriptsTest extends AbstractGroovyTest {

    @ParameterizedTest
    @ValueSource(strings = [
            "src/test/resources/test-data-files/SetHeaders2/processData/test-data-1.json"
    ])
    void test_SetHeaders2Groovy(String testDataFile) {
        String groovyScriptPath = "src/main/resources/script/SetHeaders2.groovy"
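        // runs processData from the script against the recorded test data and asserts on headers, properties and body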
        basicGroovyScriptTest(groovyScriptPath, testDataFile, "processData", getIgnoredKeysPrefixes(), getIgnoredKeys())
    }

    @ParameterizedTest
    @ValueSource(strings = [
            "src/test/resources/test-data-files/setHeaders/processData/test-data-2.json",
            "src/test/resources/test-data-files/setHeaders/processData/test-data-3.json"
    ])
    void test_setHeadersGroovy(String testDataFile) {
        String groovyScriptPath = "src/main/resources/script/setHeaders.groovy"
        basicGroovyScriptTest(groovyScriptPath, testDataFile, "processData", getIgnoredKeysPrefixes(), getIgnoredKeys())
    }


    @Override
    List<String> getIgnoredKeys() {
        List<String> keys = super.getIgnoredKeys()
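        // add header or property keys here to exclude them from the automated assertions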
        keys.addAll(Arrays.asList())
        return keys
    }

}

I think this approach will make it a lot easier to set up testing and run a set of assertions, as you don't need to do anything yourself and all the required information is added to the environment. If there are parameters that you want to exclude from the comparison, you can add them in getIgnoredKeys.
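For example, if some generated values differ between runs, the override could look like this (the key names are simply taken from the sample data above, as an illustration):

    @Override
    List<String> getIgnoredKeys() {
        List<String> keys = super.getIgnoredKeys()
        // illustration: exclude values that change on every run
        keys.addAll(Arrays.asList("CamelCorrelationId", "SAP_MplCorrelationId"))
        return keys
    }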

Custom testing

The standard way will only allow you to test an input against the expected output. That may not be what you want to do in all cases; there may be test cases you are creating that need to be evaluated differently.

You then have the option to just send a message through the processing and then run your own assertions on the result. Here you can do all your custom assertions and get errors if anything does not match.

An example of such a custom validation could look like the following.

package com.figaf

import org.assertj.core.api.Assertions
import org.assertj.core.api.SoftAssertions
import org.junit.jupiter.api.Test

class SetHeadersTest extends AbstractGroovyTest {

    @Test
    void customTest() {
        String groovyScriptPath = "src/main/resources/script/setHeaders.groovy"
        String testDataFilePath = "src/test/resources/test-data-files/setHeaders/processData/test-data-1.json"
        def (MessageTestData messageDataExpected, MessageTestData messageDataActual) =
        processMessageData(groovyScriptPath, testDataFilePath, "processData")
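        // messageDataExpected is the recorded output; messageDataActual is the result of running the script now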
        String actualModeValue = messageDataActual.getProperties().get("newError3")
        Assertions.assertThat(actualModeValue).isNotNull()
        Assertions.assertThat(actualModeValue)
                .endsWith("Test3")
    }

    @Test
    void customTestSoftAssertions() {
        String groovyScriptPath = "src/main/resources/script/setHeaders.groovy"
        String testDataFilePath = "src/test/resources/test-data-files/setHeaders/processData/test-data-1.json"
        def (MessageTestData messageDataExpected, MessageTestData messageDataActual) =
        processMessageData(groovyScriptPath, testDataFilePath, "processData")
        String actualPropValue = messageDataActual.getProperties().get("newError3")
        String expectedPropValue = messageDataExpected.getProperties().get("newError3")

        SoftAssertions softly = new SoftAssertions()
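        // soft assertions collect all failures and report them together when assertAll() is called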

        softly.assertThat(actualPropValue).isNotNull()
        softly.assertThat(actualPropValue).endsWith("Test3")
        softly.assertThat(actualPropValue).isEqualTo(expectedPropValue)

        softly.assertAll()
    }
}

In the examples you can see different ways to run the assertions: the first test stops at the first failing assertion, while the soft assertions collect all failures and report them together.

Try Figaf IRT

There is a community version of Figaf IRT that allows you to run the application in single-user mode and with some limitations, but all of this should be working. It is all free.

I do hope that you will see how much these functions can improve your usage of SAP CPI, and that you will want to buy the full package.

We are improving the functionality over time, so it will become easier to manage your SAP CPI system.

How do you Document your SAP CPI Iflows

Documentation of integrations has always been a strange thing. It has been pretty difficult to make sure we documented the details correctly and included enough information. Normally you just get some document designed to document something else and then try to adapt it to your integration. That will never give a good result: you may be compliant, but it is just a waste of time if the documentation is not useful.

For SAP PI/PO we have for ages been using a Word template for the documentation of each interface. We support that with the Figaf IRT tool, so you can generate it fast. You can see an example of the SAP PI documentation here. The inspiration was to avoid documentation that was never updated and always stayed at the initial version.

SAP best practice

I did find an example of an SAP CPI template in the best practice guide. I did not like it, for a few reasons:

  • It was focusing on the wrong things, at once too detailed and too generic. For a file conversion or a mapping, it just has empty tabs that users need to fill in.
  • It could not be automated very well, which meant it required a lot of user governance to maintain the information.
  • It was juggling 3 different adapters and not showing the required details.

I wanted to improve the process and make it even easier to capture the most important parts of a CPI iflow. It ended up with the following information:

  • Overview of the iflow header information, like name and description
  • History of changes, logged with business requirements
  • Sender and receiver connections, together with their information
  • Test cases you have created for the tool
  • Flow description, which is a table representation of the iflow; it gives an overview of how the different steps are connected
  • Configuration parameters configured in the full landscape, so you get some information on what is being used and the relevant resources
  • Resources and the last change date for them

You can download an example of the document here for the SF to AD iflow.

There are ways to improve the document with more information. If you have anything that you think will provide value and make the documentation easier to understand, do let us know.

Automated documentation does have some limits, such as updating it in different places or adding the extra information that makes sense.

You can try it out on your own iflows and see how it performs in your landscape.

Run Figaf IRT in your own Cloud

We have added the Figaf Cloud as a way to allow users to run our tool to manage SAP CPI and API management. It is a great way to scale and deploy the application.

We have it hosted on Google Cloud Platform in a Kubernetes cluster. It is pretty easy to do in GCP, and it allows scaling the application quite easily. It did take some time to understand how to build and secure the application.

For customers, it is a really good way to get started using Figaf IRT because they don't even need to install anything. All their partners can get access to the network and run the applications, so a customer can run the whole application to perform DevOps on the SAP CPI and API management systems. The DevOps tasks include:

  • Monitoring and alerting
  • Change tracking and versioning, so you can link a business requirement with a change
  • Documentation
  • Testing
  • Transport of CPI/API flows

With all the focus on data protection, it may prove difficult to control all the data that exists in the Figaf IRT solution. You want to have control of where it is located.

We have an on-premise installation of the Figaf IRT tool, and it contains the same features as the cloud, though there can be a bit of delay because of the release cycles of the applications. One problem that we often hear is: where should we install the IRT application? It can sometimes be difficult to find a database and a virtual server where the Java application can run.

Cloud

For some customers it is much easier to buy/start a server in the cloud from GCP, AWS or Azure. You can have a direct connection from the cloud environment to your on-premise system and thereby get access. The integration department can then get the server themselves and run it there.

Our installation is pretty simple: you can basically just start the application with the click of a button. It does require a bit more effort if you want to run it with a specific database and set up SSL. It takes about 30 minutes to install IRT if you have the database installed. The good thing about the cloud environments is that they all offer the PostgreSQL server we use as a managed service. That eliminates some of the normal DB operations you would otherwise be doing and handles them with a few clicks in the web configuration.

It also makes it fairly simple to provide upgrades. We have made a Docker image that you can use.

We have added a Docker image to DockerHub, so you can get started running IRT pretty easily. You will need to make a few configurations to get it running. We have a guide on running it on Azure with the hosted database.

Running the IRT Docker image in one of these places starts at 200 USD a month, depending on how you want it to scale and perform.

Try it

You can also quickly try Figaf IRT out on your own laptop.

Figaf IRT now supports API management

Over the last couple of weeks, we have been making transports easier for SAP CPI. Now the turn has come to SAP API management. As I understand it, the normal way with Apigee is to have a build server that allows you to transport and test scripts; it does require some time and skill to set up such a server. From the last roadmap session, SAP was planning to release some CTS+ integration in Q3, if I recall correctly.

We are using the same approach to document changes to API proxies. You can see which proxy part has been updated, or which script was changed. This makes it a lot easier to document what is going on and why something was changed. You will therefore know what has changed in each release.

We also allow users to make transports based on business requirements. It is therefore easier to document changes made to the proxy, and to link all changes to a business reason.

You can try it out yourself at Figaf IRT Cloud and see how it performs in your landscape.

With Figaf IRT you have one environment that allows you to document all your SAP integrations in just one application.

We are going to add other functions to monitor your API management, so you can get some more detailed alerts about what is going on in your landscape.