Connect IBM Z to Cloud Object Storage

Parallel Data Mover (PDM) now includes built-in support for Amazon S3, Google Cloud Storage, and Apache Hadoop. With PDM, your IBM Z applications can access these Cloud Object Storage services directly, without first landing the data on disk or manually calling each service's API.

Accessing Cloud Object Storage via the PDM Extended Client (DMEXCLI)

IBM Z datasets residing on disk or tape can be moved to and from Object Storage services via the PDM Extended Client batch interface (DMEXCLI).

The following example copies a dataset from a z/OS system named “zos1” to a file on a node named “hadoopnode”, which has access to the Hadoop Distributed File System (HDFS).

Nodedef Id=aaa Node=hadoopnode Userid=userx Password=passxxxxx
Nodedef Id=bbb Node=zos1 Userid=userx Password=passxxxxx
copy from(file=qual1.qual2.dataset id=bbb) +
 to(file=hdfs:/usr/userav/received_files/hadoopfile id=aaa)

Another example copies a file from a Hadoop node named “hadoopnode” to a dataset on a z/OS system named “zos1”.

Nodedef Id=aaa Node=hadoopnode Userid=userx Password=passxxxxx
Nodedef Id=bbb Node=zos1 Userid=userx Password=passxxxxx
copy from(file=hdfs:/usr/userav/source_files/hadoopfile id=aaa) +
 to(file=MY.DATASET.QUAL(MEMNAME) id=bbb)
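Although the examples above target HDFS, the same batch interface reaches the other supported object stores. The following sketch copies a dataset to Amazon S3; the s3: path form and the edge-node Nodedef are assumptions drawn by analogy with the hdfs: examples above and the SUBSYS examples below, so consult the PDM documentation for the exact syntax.

Nodedef Id=ccc Node=s3EdgeNode Userid=userx Password=passxxxxx
Nodedef Id=bbb Node=zos1 Userid=userx Password=passxxxxx
copy from(file=qual1.qual2.dataset id=bbb) +
 to(file=s3://example/path/objectname id=ccc)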

Accessing Cloud Object Storage Directly via BSAM and QSAM

Object Storage can be accessed directly by any application that uses BSAM or QSAM to read and write data via the PDM SubSystem interface.

The following example, adapted from the IBM Db2 High Performance Unload (HPU) documentation, runs a table unload and writes the output directly to Amazon S3:

//STEP1 EXEC PGM=INZUTILB,REGION=0M,DYNAMNBR=99,
// PARM='DB2P,DB2UNLOAD'
//STEPLIB DD DSN=DB2UNLOAD.LOAD,DISP=SHR
// DD DSN=PRODDB2.DSNEXIT,DISP=SHR
// DD DSN=PRODDB2.DSNLOAD,DISP=SHR
//*
//SYSIN DD *
 UNLOAD TABLESPACE DBNAME1.TSNAME1
 DB2 YES
 QUIESCE YES
 OPTIONS DATE DATE_A
 SELECT COL1,COL2 FROM USER01.TABLE01
 ORDER BY 1 , COL2 DESC
 OUTDDN (UNLDDN1)
 FORMAT VARIABLE ALL
/*
//SYSPRINT DD SYSOUT=*
//*
//********* DDNAMES USED BY THE SELECT STATEMENT **********
//*
//UNLDDN1 DD DISP=SHR,
// SUBSYS=(DMES,'PROFILE=S3PROF',
// 'PATH=s3://example/path/dbName1')
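As a simpler illustration of the same subsystem interface, any utility that writes through QSAM, such as IEBGENER, could route its output to S3 the same way. The job below is a sketch that reuses the SUBSYS syntax shown above; the input dataset name, bucket path, and object name are placeholders.

//COPY2S3 EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN DD DUMMY
//SYSUT1 DD DISP=SHR,DSN=MY.INPUT.DATASET
//SYSUT2 DD DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920),
// SUBSYS=(DMES,'PROFILE=S3PROF',
// 'PATH=s3://example/path/myobject')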

Data may also be read from Cloud Object Storage and supplied as input to a batch program. This simple example reads two objects residing in Amazon S3 and compares them with IEBCOMPR:

//COMP EXEC PGM=IEBCOMPR
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920),
// SUBSYS=(DMES,
// 'PATH=s3://example/path/file1',
// 'PROFILE=S3PROF',
// 'HOST=s3EdgeNode')
//SYSUT2 DD DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920),
// SUBSYS=(DMES,
// 'PATH=s3://example/path/file2',
// 'HOST=s3EdgeNode',
// 'PROFILE=S3PROF')

The configuration and credentials for accessing S3 are contained in a PDS member named S3PROF. The HOST parameter identifies a Linux server with a network connection to Amazon S3.
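The exact keywords accepted in a profile member are defined by the PDM documentation. Purely as a hypothetical illustration, such a member might carry entries along these lines; every keyword and value below is an assumption, not confirmed PDM syntax.

* S3PROF - hypothetical illustration only
ENDPOINT=https://s3.us-east-1.amazonaws.com
REGION=us-east-1
ACCESSKEY=AKIAXXXXXXXXXXXXXXXX
SECRETKEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx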
