How IBM® z/Transaction Processing Facility makes use of DFDL
IBM® z/Transaction Processing Facility (z/TPF) is a high-volume, high-throughput transaction processor that can handle large, continuous loads of transactions across geographically dispersed networks. z/TPF is also an operating system and a unique database, all designed to work together as one system. The z/TPF database and other data formats are highly optimized for transactional workloads, but this can make it challenging for other platforms to consume z/TPF data. Data Format Description Language (DFDL) has proven to be a robust way to describe data formats so that the data can be easily consumed by other platforms, or consumed from other platforms, and it has also proven very useful in solving data transformation problems. As such, z/TPF has made extensive use of IBM's implementation of DFDL to solve several challenges in an open and extensible manner. Further, z/TPF has created a high-performance DFDL parser that is specialized for the data formats used by z/TPF.
Rapid REST Service Enablement
DFDL enables z/TPF customers to rapidly REST-enable their existing services without making application changes. Prior to DFDL support, customers had to write code for each service to manually parse XML and JSON documents and transform the data into the format the existing application requires. That code is time consuming to write and test for each service, and is often error prone and brittle. With DFDL support in C/C++ code, the customer uses IBM DFDL tooling to create a DFDL schema for the service, and the transformation from XML/JSON to the native application data format is then done automatically by the z/TPF system. In the same fashion, DFDL is used to create XML/JSON responses from native application data formats. This has enabled z/TPF customers to dramatically reduce the time and effort required to create and deploy REST services. Further, the z/TPF DFDL parser has been extended to support transformations to and from native application memory structures composed of blocks of memory linked by pointers.
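As an illustration, the sketch below shows the kind of native C structure a DFDL schema might describe for a REST-enabled service. The structure, field names, and JSON shape are hypothetical, and the schema itself is omitted; the point is that the same description drives both directions of the transformation.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical native application structure for a fare-quote service.
 * A DFDL schema describing this exact layout (field order, widths,
 * encodings) lets the z/TPF system populate it automatically from an
 * inbound JSON/XML REST request and serialize the response back out:
 *
 *   { "origin": "JFK", "destination": "LHR", "fareClass": "Y", "amount": 123450 }
 */
struct fare_quote_req {
    char     origin[3];       /* fixed-width text field               */
    char     destination[3];
    char     fare_class;
    uint32_t amount;          /* binary integer, amount in cents      */
};

int main(void) {
    struct fare_quote_req req;
    memcpy(req.origin, "JFK", 3);
    memcpy(req.destination, "LHR", 3);
    req.fare_class = 'Y';
    req.amount = 123450;

    /* With DFDL, no hand-written parsing code is needed: one schema
     * drives both parse (JSON -> struct) and unparse (struct -> JSON). */
    printf("%.3s -> %.3s class %c, %u cents\n",
           req.origin, req.destination, req.fare_class, (unsigned)req.amount);
    return 0;
}
```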
Making Data Easily Consumable by Other Platforms (Push Model)
z/TPF customers often send transactional data in real time to other platforms for multiple business purposes. Transactional data can include data from the transaction itself, database records that were updated by the transaction, and application-enriched context data that explains why the database updates were made. The data can be put into a stream to feed an event-driven architecture, processed by an analytics platform, or placed in a read-only database (SQL or NoSQL) to be used by other parts of the customer's business. Prior to DFDL support, this data had to be transformed from various z/TPF formats into a form consumable by the other platform in a process known as Extract, Transform, and Load (ETL). As part of this ETL processing, fields may need to be hidden, transformed, and so on, based upon the use case. A further challenge in these scenarios is keeping the various systems in sync regarding the format of the data as the z/TPF database grows and changes over time. This ultimately results in a lot of custom code being written and maintained. DFDL substantially simplifies this by automatically transforming the z/TPF data, omitting fields, handling different data format versions, and more. DFDL can format the transferred data as XML or JSON depending upon the use case. Leveraging DFDL has substantially decreased the IBM and customer investment required to make z/TPF data available for processing by other systems.
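A minimal sketch of the idea, using a hypothetical record layout: a DFDL description of such a record can hide sensitive fields and branch on a version indicator, so the JSON pushed to other platforms stays stable as the z/TPF database evolves.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical z/TPF database record layout. A DFDL schema describing this
 * record can mark fields as hidden (for example, an internal card token)
 * and can branch on the version field, so older and newer layouts are both
 * rendered into the same consumable JSON, for example:
 *
 *   { "version": 2, "custId": 12345, "loyaltyTier": "GOLD" }
 */
struct customer_rec_v2 {
    uint8_t  version;          /* record layout version                    */
    uint32_t cust_id;
    char     loyalty_tier[4];
    char     card_token[16];   /* hidden: never emitted to other platforms */
};

int main(void) {
    struct customer_rec_v2 rec = { 2, 12345, "GOLD", "0000111122223333" };

    /* The JSON a DFDL-driven transform would emit, with the hidden field omitted. */
    printf("{ \"version\": %u, \"custId\": %u, \"loyaltyTier\": \"%.4s\" }\n",
           (unsigned)rec.version, (unsigned)rec.cust_id, rec.loyalty_tier);
    return 0;
}
```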
Making Data Easily Consumable by Other Platforms (Pull Model)
The z/TPF MongoDB interface allows z/TPF customers to access data in z/TPF databases using a standard MongoDB client. Instead of writing a custom data handler, the DFDL parser on z/TPF was modified to operate on BSON data to transform the data to and from the z/TPF database format. This MongoDB interface provides a standard NoSQL interface for reading and writing proprietary formatted data on z/TPF: the application developer does not have to know anything about z/TPF, and no user code is required on z/TPF. This reduces development costs and time to market for cloud-based applications that need to access and update data residing on z/TPF.
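From the consuming side, the access looks like any other MongoDB workload. The sketch below assumes the MongoDB C driver (mongoc) and uses hypothetical host, database, collection, and field names; the z/TPF side requires no user code.

```c
#include <stdio.h>
#include <mongoc/mongoc.h>

int main(void) {
    mongoc_init();

    /* Hypothetical connection string for the z/TPF MongoDB interface. */
    mongoc_client_t *client = mongoc_client_new("mongodb://ztpf-host:27017");
    mongoc_collection_t *coll =
        mongoc_client_get_collection(client, "custdb", "profiles");

    /* Query by a hypothetical field; on z/TPF, DFDL maps the BSON query
     * and result documents to and from the native database format. */
    bson_t *filter = BCON_NEW("custId", BCON_UTF8("12345"));
    mongoc_cursor_t *cursor =
        mongoc_collection_find_with_opts(coll, filter, NULL, NULL);

    const bson_t *doc;
    while (mongoc_cursor_next(cursor, &doc)) {
        char *json = bson_as_canonical_extended_json(doc, NULL);
        printf("%s\n", json);
        bson_free(json);
    }

    mongoc_cursor_destroy(cursor);
    bson_destroy(filter);
    mongoc_collection_destroy(coll);
    mongoc_client_destroy(client);
    mongoc_cleanup();
    return 0;
}
```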
Enabling Real-time Signal Events
Often, customers need their z/TPF applications to conditionally send a signal and data to an event consumer system in their enterprise. For example, if an airplane's inventory is exhausted for a given route, the z/TPF system may need to send various information to another system in the enterprise. Signal events support provides a transparent, configurable infrastructure that application code can call to send the message off platform. DFDL is used to transform the outgoing message into XML or JSON. In this way, z/TPF customers can quickly code, test, and deploy signaling mechanisms that can be easily consumed by their existing event consumers.
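Conceptually, the application only decides when to raise the signal; the infrastructure and DFDL handle delivery and formatting. The fragment below is a hypothetical illustration (emit_signal is a stand-in, not the actual z/TPF API) of what the conditional call and the resulting JSON might look like.

```c
#include <stdio.h>

/* Hypothetical stand-in for the z/TPF signal events call. The real
 * infrastructure sends the message off platform, and DFDL renders it as
 * JSON or XML for the configured event consumer. */
static void emit_signal(const char *event, const char *flight, const char *route)
{
    printf("{ \"event\": \"%s\", \"flight\": \"%s\", \"route\": \"%s\" }\n",
           event, flight, route);
}

/* Application logic only decides *when* to signal; the format of the
 * outgoing message is handled by the infrastructure. */
static void on_inventory_change(int seats_remaining, const char *flight,
                                const char *route)
{
    if (seats_remaining == 0) {
        emit_signal("INVENTORY_EXHAUSTED", flight, route);
    }
}

int main(void)
{
    on_inventory_change(0, "XX123", "JFK-LHR");
    return 0;
}
```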
Enabling Cross-Programming Language Application Service Calls
z/TPF customers want to progressively modernize their applications by adding new business logic written in Java that needs to call, or be invoked by, existing application code written in C/C++. Instead of writing custom connectors and transformations for every possible call, DFDL is used to map C structures to Java objects, and vice versa. This provides a seamless interface that allows programmers to mix new (Java) and old (C/C++) services within an application, making it easy to extend existing applications and to build new Java applications that leverage existing assets.
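As a sketch, with hypothetical names on both sides: a single DFDL description of a C structure is enough for the infrastructure to present it to Java code as an ordinary object, so neither side writes marshalling code.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical C structure passed from existing C/C++ business logic to new
 * Java business logic. A DFDL schema describing this layout lets the z/TPF
 * infrastructure surface it to Java roughly as:
 *
 *   public class BookingRequest {
 *       private String recordLocator;
 *       private int    partySize;
 *       // accessors generated from the same description
 *   }
 */
struct booking_request {
    char     record_locator[6];
    uint16_t party_size;
};

int main(void) {
    struct booking_request req = { "ABC123", 4 };
    printf("locator %.6s, party of %u\n",
           req.record_locator, (unsigned)req.party_size);
    return 0;
}
```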
Quickly Creating New Performance Monitoring Tooling
IBM has implemented various z/TPF performance monitoring tools to help customers understand how their z/TPF systems are running, and some of this tooling now leverages DFDL. For performance reasons, the z/TPF system streams the data in a raw binary format to some media for post-processing. This raw data is then consumed by a remote system that leverages IBM DFDL and JAXB to convert the binary data directly into Java objects for further analysis. This is accomplished without writing any parsing code, which substantially reduces the time required to code, test, and deliver complex solutions that rely on binary z/TPF data.
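For contrast, the sketch below shows the kind of hand-written binary post-processing that the DFDL-plus-JAXB approach eliminates. The record layout and file name are hypothetical, and byte-order handling is ignored for brevity.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical raw performance record as streamed by z/TPF. On the remote
 * side, a DFDL schema plus JAXB turns each record directly into a Java
 * object; this hand-rolled equivalent just shows the binary-reading code
 * that no longer has to be written. */
struct perf_record {
    uint32_t timestamp;     /* seconds since epoch      */
    uint16_t cpu_busy_pct;  /* scaled: 10000 == 100.00% */
    uint32_t msg_rate;      /* messages per second      */
} __attribute__((packed));  /* GCC/Clang-style packing for a fixed layout */

int main(void) {
    FILE *f = fopen("perf.bin", "rb");   /* hypothetical capture file */
    if (!f) { perror("perf.bin"); return 1; }

    struct perf_record rec;
    while (fread(&rec, sizeof rec, 1, f) == 1) {
        printf("t=%u cpu=%.2f%% rate=%u\n",
               (unsigned)rec.timestamp, rec.cpu_busy_pct / 100.0,
               (unsigned)rec.msg_rate);
    }
    fclose(f);
    return 0;
}
```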
Easy Conversion to Standard Data Formats
z/TPF has found other ways of using DFDL on the system. For example, a tool was written very quickly to dump in-memory tables to a CSV file simply by providing a DFDL schema that described the in-memory table. This provides an easily produced set of diagnostic data in a standard format. DFDL can also be used to format memory for other tooling uses. And as already mentioned, DFDL has a breadth of uses for transforming or serializing binary data to and from JSON, XML, and other formats.
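The CSV case shows how little is involved: the DFDL schema captures the table layout declaratively, and the dump tool stays generic. The sketch below, with a hypothetical table, simply shows the mapping such a schema would encode.

```c
#include <stdio.h>

/* Hypothetical in-memory configuration table. A DFDL schema describing the
 * table lets a generic tool dump it straight to CSV; the loop below shows
 * the mapping that the schema captures declaratively. */
struct route_entry {
    char city_pair[8];
    int  max_connections;
};

static const struct route_entry route_table[] = {
    { "JFK-LHR", 2 },
    { "ORD-FRA", 3 },
};

int main(void) {
    printf("cityPair,maxConnections\n");
    for (size_t i = 0; i < sizeof route_table / sizeof route_table[0]; i++) {
        printf("%s,%d\n", route_table[i].city_pair, route_table[i].max_connections);
    }
    return 0;
}
```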
Summary
DFDL has become a cornerstone of z/TPF to consume, exchange, and transform data. The DFDL and z/TPF infrastructure provides tremendous development and maintenance efficiencies along with quality improvements for IBM and our customers.
Bradd Kadlecik (IBM z/TPF)