When developing applications and data processing workflows in today’s IBM Z environment, we may find ourselves facing a dilemma: how can we move data seamlessly between z/OS applications and applications running on Linux on Z and LinuxONE systems?

How can we make data residing in Cloud Object Storage (COS) available to z/OS applications and enable those same applications to store data in the cloud? And how can we do all of this while minimizing the consumption of CPU, storage, and I/O resources?

This cross-platform data integration has never been easier, thanks to the innovative Alebra Parallel Data Mover (PDM) SubSystem Interface (SSI). With the PDM SSI, data and applications residing on Linux or COS can be accessed directly from the input and output DD statements of a z/OS program.

You can move data between these platforms with no need for a costly interim stop on disk. PDM is able to start a process on a Linux system and send its output directly to the input of a z/OS application, or send the output of a z/OS application straight to the input of a Linux process.
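As a sketch of how this looks in practice, the step below uses IEBGENER to copy the output of a Linux program directly into a z/OS dataset. This is illustrative only: the host, profile, and program path are placeholder values, and it assumes that “FILETYPE=X” identifies the PATH as an executable on the Linux side, as in the HPU example shown later.

//LNX2ZOS  EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT1   DD  DCB=(RECFM=VB,LRECL=1024,BLKSIZE=27998),
//         SUBSYS=(DMES,'PROFILE=LNXPROF',
//         'HOST=linuxapp1',
//         'PATH=/usr/local/bin/extract',
//         'FILETYPE=X')
//SYSUT2   DD  DISP=(NEW,CATLG),DSN=PROD.EXTRACT.DATA,
//         SPACE=(CYL,(10,10),RLSE),UNIT=3390,
//         LRECL=1024,BLKSIZE=27998,RECFM=VB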

Data can be compressed and/or encrypted in transit. PDM will take advantage of zEnterprise Data Compression (zEDC) when available, maximizing performance while minimizing CPU cycle consumption on z/OS. Fast and secure encryption is achieved using state-of-the-art Elliptic Curve Cryptography algorithms.

Where available, processing for SSI data transfers is performed on zIIP processors, further reducing consumption of costly general-purpose CPU capacity on z/OS.

Conversion of data between EBCDIC and ASCII and between z/OS record formats is handled automatically by PDM, ensuring that data is always in the format expected by a given application. This eliminates the need for an application to perform these transformations on its own. All of this work may be offloaded to the Linux system, keeping CPU consumption on critical z/OS systems to an absolute minimum.
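For instance, a job can write a print-format report directly to a text file on a Linux server and let PDM handle the EBCDIC-to-ASCII translation. Again, this is only a sketch: the host, path, and profile names are placeholders, and it assumes translation is performed when the “DATATYPE=E” option (shown in the SMF examples below) is not specified.

//SENDRPT  EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT1   DD  DISP=SHR,DSN=PROD.DAILY.REPORT
//SYSUT2   DD  DISP=(NEW,CATLG),
//             DSN='/data/reports/daily.txt',
//             SUBSYS=(DMES,'HOST=linuxapp1','FILEDISP=NEW',
//             'PROFILE=LNXCRED'),
//             LRECL=133,BLKSIZE=27930,RECFM=FBA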

Clearly, the possibilities for efficient cross-platform integration are endless, but a few successful use cases deserve highlighting.

Streamline Unloading of DB2 Data for Processing on Linux

When you need to make DB2 data available to Linux applications, it may appear that you are stuck with two subpar options.

Option 1: Use the SQL interface to query the database, write that output to a dataset, and somehow move that dataset over to the Linux system for processing. Querying an active DB2 database comes with significant overhead, and sending the output to a dataset means allocating storage and performing disk I/O.

Option 2: Use IBM’s DB2 High Performance Unload Utility (HPU) to dump the database to a dataset, and again move that dataset over to the Linux system for processing. While HPU is a very efficient way to dump a database to disk, this method still consumes valuable storage to land the data on both the source and target systems and delays data availability, since it is a three-step process in which each step must complete before the next can start.

The PDM SSI used in combination with IBM DB2 HPU solves this problem. IBM HPU bypasses the SQL interface and efficiently reads the database from the underlying VSAM linear datasets. By calling the SSI on HPU’s output DD statement, the database can be passed directly to the input of the Linux program that will process it, such as a database load utility.

This example (modified from the IBM HPU documentation) performs an unload and sends the output directly to the input of a Linux application called “dbloader”:

//STEP1    EXEC PGM=INZUTILB,REGION=0M,DYNAMNBR=99,
//         PARM='DB2P,DB2UNLOAD'
//STEPLIB  DD  DSN=DB2UNLOAD.LOAD,DISP=SHR
//         DD  DSN=PRODDB2.DSNEXIT,DISP=SHR
//         DD  DSN=PRODDB2.DSNLOAD,DISP=SHR
//*
//SYSIN    DD  *
     UNLOAD TABLESPACE DBNAME1.TSNAME1
      DB2 YES
      QUIESCE YES
      OPTIONS DATE DATE_A
       SELECT COL1,COL2 FROM USER01.TABLE01
        ORDER BY 1 , COL2 DESC
        OUTDDN (UNLDDN1)
        FORMAT VARIABLE ALL
/*
//SYSPRINT DD  SYSOUT=*
//*
//********* DDNAMES USED BY THE SELECT STATEMENT **********
//*
//UNLDDN1  DD  DISP=SHR,
//         SUBSYS=(DMES,'PROFILE=DB2PROF',
//         'PATH=/usr/bin/dbloader',
//         'FILETYPE=X')

Extract SMF Data and Send It to Another LPAR or Linux System for Analysis

SAS is a popular solution for analyzing SMF data for capacity planning. However, licensing constraints might mean SAS can only be run on a particular LPAR, or only the Linux version may be available. With the PDM SSI, the output of the SMF data dump utility (IFASMFDP) can be easily sent to a dataset residing on another LPAR or to a file on a Linux system where it can then be processed.

In this example, SMF data from one LPAR is dumped to a dataset (SMF92.NCP.D210526) residing on another LPAR (NCS2), where it can be analyzed by a copy of SAS that’s licensed there.

//SMFEXTR   EXEC PGM=IFASMFDP                                        
//DUMPIN     DD  DISP=SHR,DSN=SMF.ROLLUP.TAPE.DAILY(0)      
//ADUPRINT   DD  SYSOUT=*                                            
//SYSPRINT   DD  SYSOUT=*                                            
//SMFOUT     DD  DISP=(NEW,CATLG),                                  
//            DSN=SMF92.NCP.D210526,                        
//            SUBSYS=(DMES,'HOST=NCS2','FILEDISP=NEW','DATATYPE=E'),
//            SPACE=(CYL,(100,100),RLSE),                            
//            UNIT=3390,                                            
//            LRECL=32760,BLKSIZE=32760,RECFM=VBS         
//SYSIN      DD  *                                                  
    INDD(DUMPIN,OPTIONS(DUMP))                                      
    OUTDD(SMFOUT,TYPE(92))                                          
    DATE(2021146,2021146)                                            
    START(0900)                                                      
    END(1200)                                                        
//

With the “DATATYPE” parameter set to “E” for EBCDIC, the SSI preserves the record descriptor words (RDWs) and block descriptor words (BDWs) present in the VBS-formatted data produced by IFASMFDP.

The data may also be dumped to a Linux system in this manner:

//SMFEXTR   EXEC PGM=IFASMFDP                                        
//DUMPIN     DD  DISP=SHR,DSN=SMF.ROLLUP.TAPE.DAILY(0)      
//ADUPRINT   DD  SYSOUT=*                                            
//SYSPRINT   DD  SYSOUT=*                                            
//SMFOUT     DD  DISP=(NEW,CATLG),                                  
//            DSN='/data/smf/D210526',
//            SUBSYS=(DMES,'HOST=linuxsas1','FILEDISP=NEW','DATATYPE=E',
//            'PROFILE=LNXCRED'),
//            LRECL=32760,BLKSIZE=32760,RECFM=VBS         
//SYSIN      DD  *                                                  
    INDD(DUMPIN,OPTIONS(DUMP))                                      
    OUTDD(SMFOUT,TYPE(92))                                          
    DATE(2021146,2021146)                                            
    START(0900)                                                      
    END(1200)                                                        
//

Rather than sending the dump to another z/OS LPAR, this job writes the data to a file (/data/smf/D210526) on a Linux server (called linuxsas1 in this example). The PROFILE parameter points to a PDS member called LNXCRED holding the login credentials for the Linux server.

Access Cloud Object Storage via the PDM SSI

PDM is now able to read and write data residing on several Cloud Object Storage platforms, including Amazon S3, Google Cloud Platform, and Apache Hadoop. Here, the DB2 HPU example above is modified to write its output directly to a cloud storage object:

//STEP1    EXEC PGM=INZUTILB,REGION=0M,DYNAMNBR=99,
//         PARM='DB2P,DB2UNLOAD'
//STEPLIB  DD  DSN=DB2UNLOAD.LOAD,DISP=SHR
//         DD  DSN=PRODDB2.DSNEXIT,DISP=SHR
//         DD  DSN=PRODDB2.DSNLOAD,DISP=SHR
//*
//SYSIN    DD  *
     UNLOAD TABLESPACE DBNAME1.TSNAME1
      DB2 YES
      QUIESCE YES
      OPTIONS DATE DATE_A
       SELECT COL1,COL2 FROM USER01.TABLE01
        ORDER BY 1 , COL2 DESC
        OUTDDN (UNLDDN1)
        FORMAT VARIABLE ALL
/*
//SYSPRINT DD  SYSOUT=*
//*
//********* DDNAMES USED BY THE SELECT STATEMENT **********
//*
//UNLDDN1  DD  DISP=SHR,
//         SUBSYS=(DMES,'PROFILE=S3PROF',
//         'PATH=s3://example/path/dbName1')

Data may also be read from Cloud Object Storage and sent to the input of a batch program.

This simple example reads two objects residing in Amazon S3 and compares them with IEBCOMPR:

//COMP     EXEC PGM=IEBCOMPR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920),
//         SUBSYS=(DMES,
//         'PATH=s3://example/path/file1',
//         'PROFILE=S3PROF',
//         'HOST=s3EdgeNode')
//SYSUT2   DD DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920),
//         SUBSYS=(DMES,
//         'PATH=s3://example/path/file2',
//         'HOST=s3EdgeNode',
//         'PROFILE=S3PROF')
//SYSIN    DD DUMMY

The configuration and credentials for accessing S3 are contained in the PDS member called S3PROF. The host is a Linux server with a network connection to Amazon S3.

Integrate Cross-Platform Data With the PDM SSI

The PDM SubSystem Interface is a powerful tool for integrating data and applications across z/OS, Linux, Unix, Windows, and Cloud Object Storage platforms. Alebra’s Engineering Support team is available to help develop jobs and processes to ensure your organization takes full advantage of the SSI and the IBM Z platform. Reach out to us at alebra.com/contact for more information.

Chris Pittsley | Engineering and Support Manager | chris.pittsley@alebra.com