Intelligently Connected

3810 Pheasant Ridge Drive NE
Minneapolis, MN 55449

651.366.6140 / Toll-Free: 888.340.2727

When investing in a new data management system, like our PDM, one of the first agenda items should be to decide what to do with your legacy data.

Some companies want to sidestep this question, thinking their best choice is to only use the new system with current projects. Others think the best way forward is to put everything into the new system. Both of these choices are based on overly simplistic assumptions about the value of data.

Leaving out all legacy data would certainly make implementation easier and faster, but that assumes the older data is of little value and ignores the time and energy it will take to retrieve the data if it is needed later. Similarly, pulling it all in assumes that all of your legacy data is still valuable, when that may not be the case. The problem is that most companies view the movement of legacy data as an all-or-nothing exercise, when in fact our data integration platform makes migrating select legacy applications possible with minimal risk.

The flexible configuration and extremely fast data sharing of our z/OpenGate transport make it possible to retain the functionality of your legacy applications, all while keeping your data safe and secure on the mainframe, where it remains available for use by other applications.

If an all-or-nothing approach to moving your company’s legacy data doesn’t seem the most efficient method for your needs, and you’d like to learn more about how our data integration platform can migrate only select legacy data, contact us today.

January 4th, 2018


The rollout of the Hadoop Distributed File System (HDFS) was indeed a major technological innovation. Hadoop provided the ability to store massive amounts of data, while MapReduce provided a new distributed (parallel) processing capability to crunch that data rapidly. With the first rollout of Hadoop, however, Relational Database Management Systems (RDBMS) and their companion Structured Query Language (SQL), a pair of major innovations from three decades prior, were left behind.

Relational Database Management Systems & Structured Query Language

Within large enterprises, relational databases and SQL are key components of many mission-critical applications. There are dozens of database types and products, the most popular being IBM DB2, Oracle, MySQL, Microsoft SQL Server and PostgreSQL. To make Hadoop appeal to a broader set of industries, adding support for processing existing relational data was key to wider adoption. Enter Hive.


Hive provides a look, feel and function similar to the relational databases previously found on non-Hadoop systems. It allows enterprises to load existing data from these systems, keeping its relational format, while using the full power of HDFS and MapReduce. Programmers familiar with SQL have an easy transition to Hive Query Language (HiveQL).
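To illustrate how little changes for a SQL programmer, here is a sketch of a query that is valid both as ANSI SQL and as HiveQL. The table and column names are invented for illustration, and running the statement would of course require a live Hive cluster, so here we only build and inspect it:

```python
# Illustrative only: an aggregation query using nothing Hive-specific.
# Familiar SQL constructs (SELECT, GROUP BY, HAVING, ORDER BY) carry
# over to HiveQL unchanged; table and column names are hypothetical.

hiveql = """
SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
FROM sales
GROUP BY region
HAVING SUM(amount) > 10000
ORDER BY revenue DESC
""".strip()

print(hiveql.startswith("SELECT"))  # True: plain SQL, runnable as HiveQL
```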

While most large enterprises have no current plans to replace existing mission-critical systems with new Hadoop/Hive applications, they are becoming a popular addition to support the rapidly increasing demand for more and better data analytics.

To learn more about Hadoop, Hive or how our solutions can make them even more efficient, contact us today.

December 6th, 2017

A survey released by IT management software vendor BMC Software explains that mainframe computers still play a significant role in large data centers and that businesses are using them to run new applications as well as legacy software. In fact, enterprises are increasing the amount of work done on mainframes because of the platform’s advantages in security, reliability and cost-effectiveness. The bottom line is that mainframes are here to stay for the foreseeable future, so it is essential to ensure you’re getting the most out of yours.

When looking ahead to mainframe use in 2018, the majority of survey respondents listed cost reduction as their top priority, which makes perfect sense. Many of them have already seen a rise in their MIPS usage over the last few years, and that trend is expected to continue into the new year. Unfortunately, additional mainframe capacity is needed to support the increasing workloads, which will drive up operating costs.

The other key concerns of those surveyed were data security and application availability. Data privacy and security should always be a top concern for any enterprise. Application availability can be impacted by outdated and inefficient hardware, making the process of pulling data from a hard drive or network more difficult and time-consuming than it needs to be.

The results of the survey align with what we expected, as the concerns expressed (cost reduction, security and availability) are the same ones our clients so often ask us to help them solve. Luckily, we have a solution. Our Parallel Data Mover (PDM) is a unique parallel data streaming technology that tracks, accelerates, and automates data movement and data integration with applications across your enterprise, all while reducing MIPS consumption by 50-95% and improving efficiency.

Want to learn more or get started with PDM or any of our other solutions? Contact us today.

November 29th, 2017

Managed file transfer (MFT) eliminates human errors in tasks such as process automation, transactional integrations, and data extraction. MFT has become a crucial element for secure and effective management of data and is more important than ever in today’s ever-expanding digital world.

IBM cites three main reasons the managed file transfer trend will continue to grow:

Growing Data Volumes: Data volumes are growing while batch processing windows continue to shrink. Today’s MFT workloads tend to involve higher-frequency batching and larger, more varied files than in the past. There is also demand for innovative application processes that exploit MFT capabilities for streaming transfers.

Security: Cybersecurity requirements and concerns continue to heighten, leading to the adoption of newer security technologies. These solutions increase the security processing overhead of applications, including but not limited to MFT. MFT systems must compensate for this overhead where possible, through support for hardware accelerators, high-performance security software, and ongoing file transfer throughput improvements.

Big Data and IoT: Businesses are more frequently deploying MFT technology beyond its traditional role to enable bulk transaction file exchange, particularly in areas such as IoT data analysis or other big data analytics. These scenarios typically demand the use of current, not aging, data. This puts a premium on the speed of file transfers.
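The points above boil down to what "managed" adds over a plain file copy: integrity verification and an audit trail, on top of scheduling, retries and security that a full MFT product provides. The following is a minimal sketch of those two ingredients only; the file names are invented, and this is not how PDM-MFT is implemented:

```python
# Minimal sketch of a "managed" copy: verify a checksum after transfer
# and record an audit entry. Real MFT products add scheduling, retries,
# encryption and cross-platform transport on top of this idea.
import hashlib
import os
import shutil
import tempfile

def sha256(path):
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def managed_copy(src, dst, log):
    """Copy src to dst, verify integrity, and append an audit record."""
    shutil.copyfile(src, dst)
    ok = sha256(src) == sha256(dst)
    log.append({"src": src, "dst": dst, "verified": ok})
    if not ok:
        raise IOError("checksum mismatch after transfer")
    return ok

# Demo with a throwaway file in a temporary directory
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "batch.dat")
dst = os.path.join(tmp, "batch.copy")
with open(src, "wb") as f:
    f.write(b"sample payload" * 1000)

audit = []
print(managed_copy(src, dst, audit))  # True when the copy verifies
```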


When dealing with critical business transactions involving crucial information, choosing the right tool is essential. This is why managed file transfer is a core function of our Parallel Data Mover (PDM) suite of solutions. It provides highly managed, secure, fast and efficient transfer of data files between z/OS systems or between z/OS and Linux, UNIX, Windows (LUW) and Hadoop systems. In short, our PDM managed file transfer option can provide you with an outstanding means of managing many thousands of file transfers in a reliable, secure and cost-effective manner.


To learn more about our PDM-MFT, contact us today.

October 31st, 2017

Mainframes are the predominant platform for large-scale processing of mission-critical applications, making them a necessary part of any major corporation. Unfortunately, as the need for processing power grows, most corporations’ mainframe budgets tend to remain relatively flat. This problem is all too common across many industries. IBM recognized it and responded by creating the System z Integrated Information Processor (zIIP) to help reduce mainframe costs.

What is zIIP?

IBM describes zIIP as:

“a purpose-built processor designed to operate asynchronously with the general processors in the mainframe to help improve utilization of computing capacity and control costs. It is designed for select data and transaction processing workloads and for select network encryption workloads. zIIPs allow customers to purchase additional processing power without affecting the total million service units (MSU) rating or machine model designation.”

Basically, work from suitable tasks running in the z/OS mainframe operating system is allowed to run on the zIIP engine rather than the main processor. The capacity of the zIIP engine doesn’t count toward the overall MIPS rating of the mainframe, and the CPU usage incurred on the zIIP is not chargeable in typical workload-based software maintenance. This way, the mainframe’s transactional data stores and applications that run on distributed servers can work together, all at a lower cost.

zIIP & Alebra

Although there is always some overhead associated with processors, we are always seeking ways to minimize it. That is why we have enabled zIIP processing to eliminate as much of the remaining overhead as zIIP processing rules allow. This enablement applies to the most common file transfer operations. CPU usage on CP processors for zIIP-enabled transfers using TCP/IP is significantly reduced, and transfers using Alebra’s z/OpenGate transport leave almost no overhead on CP processors. Our competitors’ data movers use around 128 MIPS, compared with the 7 MIPS our z/OpenGate consumes. When z/OpenGate is paired with a zIIP processor, MIPS consumption drops to zero, saving you even more than a zIIP alone.
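Using the MIPS figures quoted above, the relative reduction is easy to work out. This is a back-of-the-envelope sketch only; the 128 and 7 MIPS numbers are the illustrative comparison from this post, not a benchmark guarantee:

```python
# Relative MIPS reduction implied by the figures in this post:
# a competing data mover at ~128 MIPS versus z/OpenGate at ~7 MIPS
# (and 0 MIPS when offloaded to a zIIP engine).
competitor_mips = 128
zopengate_mips = 7

reduction = (competitor_mips - zopengate_mips) / competitor_mips
print(round(reduction * 100, 1))  # 94.5 (% reduction before zIIP offload)
```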

Contact us to learn more about z/OpenGate, zIIP enablement or any of our other solutions.

September 28th, 2017

Minneapolis, MN (September 19, 2017)—Alebra Technologies Inc., the leader in high-performance cross-platform data movement and remote access, today announced new capabilities in its unrivaled mainframe data integration platform.

Alebra’s new Hadoop Loader features direct two-way transfer capability, providing a cost-effective way to move large amounts of data to the Hadoop environment without all the functionality of the company’s flagship Parallel Data Mover. Customers adopting this solution will, however, have an easy upgrade path to Alebra’s PDM r7 solution.

“While data is being collected, processed and stored on diverse platforms, what companies need are tools that leverage enterprise data and deliver cutting-edge performance for competitive advantage,” said Tom Lehn, CEO, Alebra Technologies. “The Hadoop Loader focuses on moving data between the mainframe and Hadoop platforms quickly, securely and directly while consuming far less overhead than other solutions.”

Alebra’s solution enables users to routinely populate Hadoop Distributed File Systems with huge volumes of corporate data.  The Hadoop Loader tracks, accelerates and automates direct two-way data transfers between a Hadoop cluster and mainframe, Linux, Unix, Windows and other Hadoop clusters.  The direct data transfer function reduces processing time and lowers operating costs by eliminating the need for interim file copies, thereby reducing storage requirements on both ends of the transfer.  Mainframe datasets and entire databases can be moved without landing first on the Hadoop server.

About Alebra Technologies, Inc.

Founded in 1998, Alebra Technologies Inc. is focused on developing software solutions that allow companies to move large amounts of data, in real time, outside of traditional networks. Alebra’s data integration platform seamlessly integrates its customers’ data across mainframe, Linux, UNIX, Windows and Hadoop platforms using alternatives to traditional data movement and sharing methods. The company’s solutions not only deliver data to any application, but do so in less time and at a significantly lower cost, regardless of the platform or location. To learn more, visit


For more information, contact:

Steve Schoonmaker (651) 247-9393

September 20th, 2017

Minneapolis, MN (September 6, 2017)—Alebra Technologies Inc., the leader in high-performance cross-platform data movement and remote access, today announced new capabilities in its unrivaled mainframe data integration platform.

Alebra’s Parallel Data Mover release 7 (PDM r7) functionality delivers anytime data transfers without impacting ongoing operations, and enables organizations to reduce operating costs by dramatically reducing mainframe MIPS consumption.

“Alebra has always been the leader in data movement performance and efficiency,” said Bill Yeager, chief technical officer of Alebra Technologies.  “With this release of PDM, we have taken a giant leap to maximize both these factors.  PDM is now capable of running in a zIIP engine, reducing even further the already low overhead you have come to expect from PDM.  In fact, our internal performance tests report literally zero MIPS consumption when transferring data.  The end result is that customers can move significant amounts of data in less time, at any time of the day or night and not worry about impact to production systems or rolling averages.”

The new PDM capabilities extend Alebra’s production deployments for improving infrastructure performance and integrating cross-platform environments to direct connections to Hadoop data lakes.  Mainframe datasets and entire databases can be moved quickly and securely without landing on the Hadoop server while consuming far less overhead than other solutions.

The key competitive difference of PDM is marked by its high availability, efficiency and scalability.  The new PDM capabilities combined with direct connectivity to Hadoop and other HDFS platforms deliver cutting-edge performance for competitive advantage.

For more information on the new PDM r7 capabilities, please visit our website at

About Alebra Technologies, Inc.

Founded in 1998, Alebra Technologies Inc. is focused on developing software solutions that allow companies to move large amounts of data, in real time, outside of traditional networks. Our data integration platform seamlessly integrates your data across mainframe, Linux, UNIX, Windows and Hadoop platforms using alternatives to traditional data movement and sharing methods. Our solutions not only deliver data to any application, but do so in less time and at a significantly lower cost, regardless of the platform or location. For more information, visit

For more information, contact:
Steve Schoonmaker (651) 247-9393

September 6th, 2017

Transferring Data to Hadoop

Many companies use Hadoop to analyze customer behavior on their websites, process call center activity, and mine social media data. Based on this data, companies can make decisions in real time to understand customer needs, mitigate problems, and ultimately gain an advantage over the competition. But other organizations simply use it to reduce their data storage costs.

The Process

Data is generated from a variety of devices, and this data can be both structured and unstructured. Structured data is stored in a Relational Database Management System (RDBMS) and unstructured data is stored in a file system. Data users may employ a mainframe, or use a distributed system like Hadoop. But to utilize Hadoop, mainframe data must be moved to the server where the Hadoop system lives.

Why use Hadoop rather than a different distributed system?

The Benefits of Hadoop

  1. Scalability: Unlike traditional relational database systems, which can’t easily scale to process large amounts of data, Hadoop enables companies to run applications on thousands of nodes involving thousands of terabytes of data.
  2. Cost Effectiveness: An issue with traditional relational database management systems is that they are extremely cost-prohibitive to scale when dealing with large amounts of data. To reduce costs, many companies in the past would down-sample data and classify it based on assumptions about which data was the most valuable. The raw data would be deleted, as it would be too cost-prohibitive to keep. This meant that when business priorities changed, the complete raw data set was no longer available. Hadoop is designed as a scale-out architecture that can affordably store all of a company’s data for later use, thereby avoiding this loss of data.
  3. Flexibility: It enables businesses to easily access new data sources and tap into different types of data (both structured and unstructured) to generate value. This means businesses can use Hadoop to derive valuable business insights from data sources such as social media, email conversations or clickstream data. In addition, it can be used for a wide variety of purposes, such as log processing, recommendation systems, data warehousing, market campaign analysis and fraud detection.
  4. Processing Speed: Its unique storage method is based on a distributed file system that basically ‘maps’ data wherever it is located on a cluster. The tools for data processing are often on the same servers where the data is located, resulting in much faster data processing. If you’re dealing with large volumes of unstructured data, Hadoop is able to efficiently process terabytes of data in just minutes, and petabytes in hours.
  5. Fault Tolerance: Another key advantage is its fault tolerance. When data is sent to an individual node, that data is also replicated to other nodes in the cluster, which means that in the event of failure, there is another copy available for use.
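The fault-tolerance point can be illustrated with a toy model of block replication. This is not actual HDFS code; the node names, the placement scheme, and the replication factor of 3 are invented for the sketch (HDFS uses rack-aware placement), but it shows why losing one node does not lose the data:

```python
# Toy model of block replication: each block is placed on several
# distinct nodes, so a single node failure leaves readable copies.
REPLICATION = 3
nodes = ["node1", "node2", "node3", "node4"]

def place_replicas(block_id, nodes, replication=REPLICATION):
    """Assign a block to `replication` distinct nodes, round-robin
    from a deterministic starting point derived from the block id."""
    start = sum(ord(c) for c in block_id) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replication)]

replicas = place_replicas("block-0001", nodes)

# Simulate losing the first node holding this block: copies remain
# on the surviving nodes, so the block is still readable.
failed = replicas[0]
survivors = [n for n in replicas if n != failed]
print(len(survivors))  # 2
```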

There are several different ways your organization can utilize the features of Hadoop, but first you need a safe and secure way to move your data there. We are experts at this type of data transfer. We will not only deliver your data to any application, but do so in less time and at a significantly lower cost, regardless of the platform or location. Contact us to learn how we can help you!

August 9th, 2017

Data Transfers

Bulk data movement, access, copying and sharing require a lot of bandwidth and can wreak havoc with interactive applications and other normal network traffic on the same system. Ten to fifteen years ago, people might have expected a slight delay when trying to access data, but in today’s fast-paced world people expect information within seconds, at any time, from anywhere, on any device. This means network delays caused by bulk data transfers have a negative ripple effect on the internal productivity of your company and on your ability to provide the quality of service needed to attract and retain customers. This could cost you sales and hurt your bottom line.

What’s the Solution?

This is precisely why we created the z/OpenGate. z/OpenGate creates a channel-to-channel connection that integrates mainframes to Linux, Unix, and Windows (LUW), mainframe to mainframe, LUW to LUW, and mainframes to Hadoop and other HDFS platforms. It’s not only an order of magnitude faster, more secure and reliable than traditional networks, it will also shorten batch run times and lower your mainframe’s MIPS consumption.

Network vs. Channel MIPS

z/OpenGate far exceeds the efficiency of conventional TCP/IP networks.

z/OpenGate was designed to help you improve your customer’s experience by providing anytime, anywhere information access. Improve your bottom line with a cross-platform data integration solution that doesn’t make you wait. Don’t wait. Contact us today.

July 20th, 2017

When moving databases from one system to another, temporary copies of the unloaded data are often created. These are called interim copies. Even though they are temporary, they can influence the speed and functionality of your overall systems and affect the job functions of some members of your IT team.

Who Do Interim Copies Affect?

The three main job functions affected by interim copies are all located in the IT department:
• The Storage Architect, who is responsible for understanding how IT operations impact storage requirements.
• The Application Architect, who is responsible for the overall availability and performance of the end-user application.
• The Capacity Planner, who is responsible for determining the IT system capacity needed to support current and forecasted workloads that run on the mainframe and open systems servers.

How Do Interim Copies Affect Your Systems?

There are three main ways interim copies can affect your systems, listed below in order of importance:

1. Creating interim copies requires extra processing steps, which take time. In an age where people expect real-time answers and information, reducing the elapsed time of a unit of work is crucial to providing a real-time response to user requests of all types from any device. Put simply, reducing the time to accomplish a task means a shorter response time from a user’s perspective.

2. They can raise your MIPS (Millions of Instructions Per Second) consumption, which will raise your overall operating costs. Consider this: the average annual cost per MIPS is $4,500, so if interim copies are consuming an additional 100 MIPS daily, they could cost you an additional $450,000 annually.

3. When interim copies are created, they need to be stored. This means you will need extra space on a disk drive, which again raises your overall operating costs.
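The cost arithmetic in point 2 can be reproduced directly. The $4,500 figure is the average quoted above; your actual rate will vary by contract:

```python
# Reproducing the interim-copy cost estimate from point 2 above.
cost_per_mips = 4500      # average annual cost per MIPS (dollars)
interim_copy_mips = 100   # additional MIPS consumed by interim copies

extra_cost = interim_copy_mips * cost_per_mips
print(extra_cost)  # 450000 dollars per year
```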

Addressing the three points above will greatly reduce the costs and time your organization wastes dealing with interim copies. Alebra’s unique solutions lower operating overhead and enhance enterprise data integration with an extremely fast transport.

Contact us today to learn more about our solutions for interim copies.

July 12th, 2017
