Using ITX 9.0.0.1 and Open API: Producer Scenario

My last article (using-itx9.0.0.1-with-open-api) showed a way to use ITX to consume OpenAPI data, but did not show the other scenario, where ITX is the producer of such data. We'll have a look at this option and, again, I'll provide the necessary objects to replicate the scenario.

Scenario

A map is developed to receive OpenAPI calls over HTTP and is integrated into a TX system that is triggered by calls to the API. The Launcher is then used to run the system and serve the incoming calls.

As a sample, we'll write some information to a target MongoDB database, based on the MongoDB examples IBM recently provided.

Prerequisites

It is assumed you are familiar enough with ITX to accomplish normal ITX development tasks without detailed instructions.

ITX environment: Design Studio

Make sure you have version 9.0.0.1 or higher installed, as this is the version that introduces the OpenAPI importer functionality you'll need to implement the scenarios. The version is available via Fix Central. You can check the installed version by clicking "Help / About IBM Transformation Extender Design Studio".

Chap3_CheckDSVersion_1.png

The pop-up window below shows that version 9.0.0.1 is installed. The number between parentheses is the build number, which can be used to identify a more specific version.

Chap3_CheckDSVersion_2.png

Note that, unlike earlier versions, 9.0.0.1 and above require you to remove the previous version before installing, as installing "on top of" an existing 9.x version is not supported.

You also need to have a Launcher installation available. For demo purposes, I'm using a local Windows installation on the same PC as my Design Studio installation.

Checking the version can be done using the dtxver or dtxinfo commands, which should give a result similar to the one below:

CheckLauncherVersion.png

(Note that the relevant information is the line about the "EventServerService"; the rest shows the versions of the different adapters, and in my case the Design Studio also appears.)
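For reference, running the check from a command prompt might look like the sketch below; the installation path is just an example and will vary with your setup.

cd C:\IBM\ITX        (example installation directory)
dtxver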

MongoDB

I used MongoDB version 3.6, installed locally on my laptop. It was downloaded from the MongoDB download center and installed using the default options.

Once installed, MongoDB can be run either as a service/daemon or from a shell environment. I chose the latter, and stored the database on a "data" disk rather than on my C: drive.

 

Start MongoDB
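A minimal way to start it from a shell is shown below; the data path is just an example, so point --dbpath at wherever you keep your database files.

mongod --dbpath D:\data\mongodb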

 When properly started, MongoDB shows a "waiting for connections" message.

MongoDB waiting for connection

 

Downloads

The API definition is stored in a JSON file in the Swagger format. The contents of the file are shown below, or you can download the file by clicking here.

{
  "swagger" : "2.0",
  "info" : {
    "version" : "0.0.1",
    "title" : "ITX fishing"
  },
  "host" : "itx.api.satisco.com",
  "basePath" : "/fishing",
  "schemes" : [ "http" ],
  "consumes" : [ "application/json" ],
  "produces" : [ "application/json" ],
  "paths" : {
    "/event" : {
      "get" : {
        "description" : "Gets `Fish` objects.\n",
        "parameters" : [ {
          "name" : "id",
          "in" : "query",
          "description" : "Fish Id",
          "required" : true,
          "type" : "string"
        } ],
        "responses" : {
          "200" : {
            "description" : "Successful response",
            "schema" : {
              "type" : "array",
              "title" : "Fish",
              "items" : {
                "$ref" : "#/definitions/Event"
              }
            }
          },
          "404" : {
            "description" : "Error response",
            "schema" : {
              "type" : "array",
              "title" : "Fish",
              "items" : {
                "$ref" : "#/definitions/Event_1"
              }
            }
          }
        }
      },
      "post" : {
        "description" : "Adds `Fish` objects.\n",
        "parameters" : [ {
          "name" : "species",
          "in" : "query",
          "description" : "Fish species",
          "required" : true,
          "type" : "string"
        }, {
          "name" : "Weight",
          "in" : "query",
          "description" : "Fish weight",
          "required" : true,
          "type" : "string"
        }, {
          "name" : "UoM",
          "in" : "query",
          "description" : "Fish weight uom",
          "required" : true,
          "type" : "string"
        } ],
        "responses" : {
          "200" : {
            "description" : "Successful response",
            "schema" : {
              "type" : "array",
              "title" : "Fish",
              "items" : {
                "$ref" : "#/definitions/Event"
              }
            }
          },
          "400" : {
            "description" : "Error response",
            "schema" : {
              "type" : "array",
              "title" : "Fish",
              "items" : {
                "$ref" : "#/definitions/Event"
              }
            }
          }
        }
      }
    }
  },
  "definitions" : {
    "Event" : {
      "properties" : {
        "status" : { "type" : "string" },
        "message" : { "type" : "string" }
      }
    }
  }
}

The JSON file for the database collection structure is the fishing.json file, as provided with the MongoDB sample for ITX. Its contents are quite simple. Click here to download this file.

[
{ "fish" : "Red Snapper", "weight" : 20, "uom" : "pounds" },
{ "fish" : "Whitebait", "weight" : 9, "uom" : "pounds" },
{ "fish" : "Blue Snapper", "weight" : 20, "uom" : "pounds" },
{ "fish" : "Sea Bass", "weight" : 35, "uom" : "pounds" }
]
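If you want to preload these sample records into MongoDB, mongoimport can do it; the database and collection names below are assumptions, so use whatever names your adapter settings will point to.

mongoimport --db fishing --collection fish --jsonArray --file fishing.json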

Additional tools

I used Postman as a test tool to call the API: Download from here

To check the results in the database, a tool like Studio 3T can be very convenient: Download from here

 

 Scenario implementation

In this "exercise", we will:

  • Import the Swagger specifications into a type tree and a sample map.
  • Generate the Integration Flow Designer system to host the map and trigger its execution based on REST calls.
  • Run the system and check the results.

Importing the Swagger file

The JSON file describes the API's format and will be the basis for generating a type tree.

In the Design Studio, create a new project and import the ITXFishing_SWAGGER.json file into it, so that it can be used by the importer. You can also take the opportunity to import the fishing.json file at the same time, since we'll use it later to write to MongoDB.

Next, start the importer (right-click the project and select the "Import..." option) and choose the "OpenAPI" importer from ITX's importers.

Choose OpenAPI importer

Click Next, select the ITXFishing_SWAGGER.json file, then click OK.

OpenAPI_Import2.png

Click Next again, then select the POST operation.

OpenAPI_Import3.png

Click Next, then choose the "Use IBM Transformation Extender Launcher to host the REST api" option.

OpenAPI_Import4.png

 

Click Next, then check the settings before clicking Next again. In my case, I changed the port, as my Launcher coexists with other versions and I therefore cannot use the default ports.

OpenAPI_Import5.png

 

The next screen shows the proposed names for the artefacts to be created. I kept the default names, but you're obviously free to use your own. Select the "Generate sample map source" and "Generate sample integration flow designer library (XML format)" options.

OpenAPI_Import6.png

 

The importer generates the needed objects and displays the results in the next window. There could be warnings about the objects' distinguishability, which you can safely ignore. For some reason, the messages in my Studio remained in French, even though I configured the Studio to use the en_US locale.

OpenAPI_Import7.png

When you click "Finish", the Studio asks whether you want to open the type tree. I suggest you do, so that you have a chance to review the structure. This structure is quite simple and very similar to the one obtained for the "consumer" scenario, so I'll refer you to the previous article should you need directions there.

 

Map structure

 

The ITXFishing_SWAGGER-POSTSampleMap.mms map (the default name is quite long!) can be found in the project. Let's open it to review its structure.

OpenAPI_MapReview1.png

 

The first thing we can notice is that the second output card has no rules, and therefore the map won't even compile right out of the box.

OpenAPI_MapReview2.png

Before going further, I suggest you edit the map to always send a "200 / success" response. This will make the map compile, and we'll then have a chance to deploy a first version of the system to make sure everything works as expected.

Edit the map as shown below, and don't forget to hit Enter to validate the rules after editing them.

OpenAPI_MapReview3.png

Save the map, then open Integration Flow Designer.

 

Editing the system

 

In Integration Flow Designer, select the "Import" option to import the XML file the importer generated.

OpenAPI_SystemEdit2.png

You are then asked for a source file name. Choose any name you want, then click "Save".

OpenAPI_SystemEdit3.png

The generated system should look similar to the one below.

OpenAPI_SystemEdit4.png

Notice the "POST" map is triggered by the incoming query.

You should edit the settings for output card #2, as the use of multiple wildcards will create an issue when analyzing the system.

Deploy your system to your systems directory, and proceed to the Launcher  configuration step.

 

Launcher configuration

 

In the Launcher Administration tool, in the "Advanced" tab, add a listener, set it to the right ports, and make sure it is enabled.

OpenAPI_LauncherConfig1.png

 

The HTTP port is the port on which the listener will listen for queries. The Launcher port is the port to which the queries are forwarded to the Launcher, so that it can decide which map to trigger. There can be several ports if needed, but the Launcher is also able to route queries to different maps based on the URL it is watching.

Make sure the "Mode" setting is set to "enabled", otherwise the listener will be useless.

In the "Deployment directories" section, you should find the directory to which you deployed the system. Obviously, if this is not the case, you need to either deploy to a configured directory or add your directory to the list of deployment directories.

OpenAPI_LauncherConfig2.png

You also need to make sure the Launcher is properly configured, with at least one access setting to administer the systems, and separate processes for execution. This is not mandatory, but it makes control easier. I also tend to avoid automatic startup, at least for non-production environments, for the same reason.

OpenAPI_LauncherConfig3.png

 

When this is done, it is time to start the daemon/service and connect to it to check that everything works as expected.

First test

 

After you have started the Launcher, start the Management Console and connect it to your Launcher. If the system starts correctly, you should see a display similar to the one below in the Management Console.

OpenAPI_FirstTest1.png

Notice there should be exactly one active connection and one active listener, which correspond to the HTTP connection and the associated listener. If this is not the case, your system won't work and you'll have to find out why. Debugging such issues is well beyond the purpose of this article.

If the system does not start at all, then the usual analysis techniques apply, and again discussing them is well beyond our scope.

 Start your testing tool, and create a query to invoke the API. The sample below uses Postman, but many tools could do the job.

OpenAPI_FirstTest2.png

The "500 Internal Server Error" shown above is a "normal" status if you did everything right. This is due to the map failing with an error code of 76, which indicates the adapter failed to put data on the output. The Management Console can confirm this.

OpenAPI_FirstTest3.png

If you get a 503 "Service unavailable" error, check the URL you use, as it's likely to be wrong.

To fix the 500 error, we need to fix the system, deploy it again, and restart it. I suggest you first pause the system, then stop it (this is the recommended way to properly stop a system) before going back to Integration Flow Designer.

In Integration Flow Designer, edit the output card #2 settings and remove the "-HDR+" option, which is right for the input card but not for the output one, unless you specify header values.

OpenAPI_FirstTest4.png

Save the edited system, deploy it again, then restart and test again. This time you should see a success message and a 200 status code.

OpenAPI_FirstTest5.png
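For reference, the same call can also be made from the command line with curl; the host and port below are placeholders for whatever you configured on the HTTP listener, and the parameter values are just sample data.

curl -X POST "http://localhost:8080/fishing/event?species=Trout&Weight=12&UoM=pounds"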

 

Improving the map

Our sample, as it works now, only serves as a proof of concept and does not achieve any useful purpose. The next step is to add a card that uses the input data to update the MongoDB database, so that we can check that the API can be used for a real purpose.

Go back to the map and add a third output card. The card uses the fishing.json file as its "type tree", with the JSON type.

OpenAPI_MongoDB_Out1.png

 

Note that, since we're using a "native" file instead of a type tree, the structure is not the usual type tree one, and you need to pick the top-level object:

 

OpenAPI_MongoDB_Out1_1.png

 

You can directly select the MongoDB adapter and set the right settings to write to the database. Make sure you reorder the output cards so that this card is the second one and the success response is only sent once the map has completed all its operations.

The mapping is quite simple, as the query is simple too. Simply take the three query parameters from the first output card and map them to their respective field equivalents. Be careful: the weight is text in the query and a number in the JSON file.

OpenAPI_MongoDB_Out2.png

You can also notice that we used an index, since the JSON file structure allows multiple records while the API call only carries one.
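As an illustration, assuming a call with species=Trout, Weight=12 and UoM=pounds, the document written to the collection would look something like the following (the values are sample data; the field names come from fishing.json):

{ "fish" : "Trout", "weight" : 12, "uom" : "pounds" }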

Build the map, and save it so that the system can use it.

If everything went fine, the system should reflect the addition of a third card, and it should show the right settings for the second card. It is highly recommended to check all other settings to make sure they did not get altered by the map update.

OpenAPI_MongoDB_Out3.png

If everything is correct, save the system, then deploy it and restart it. Should the system not start properly, the most likely cause is a mismatch between the map and the system. Make sure you compiled the map and deployed the system based on the latest versions. If the issue persists, use the usual debugging techniques to find the cause.

When the system is started, submit a query with easily identifiable values and check that the return code is 200.

OpenAPI_MongoDB_Out4.png

 

The desired outcome is obviously that MongoDB receives and stores the data!

 

OpenAPI_MongoDB_Out5.png

If this does not happen, the trace from the MongoDB adapter is your best friend for understanding the issue. I found that, at least with the settings I used, the map still completed successfully when the DB refused the data.
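If you prefer the mongo shell to Studio 3T, a quick check could look like the sketch below; the database and collection names are assumptions, so use whatever your adapter settings point to.

mongo
> use fishing
> db.fish.find().pretty()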

 

Conclusion

We've demonstrated that ITX can be an OpenAPI provider, and that the implementation is quite simple for somebody familiar with ITX in general.

As with the preceding article, a complete working solution can be downloaded as a zip file that imports directly into the Studio.

Click here to download ITX9001_OpenAPI_Producer.zip

 
