Managed file transfer (MFT) helps eliminate human error in tasks such as process automation, transactional integration, and data extraction. MFT has become a crucial element of secure and effective data management, and it is more important than ever in today’s ever-expanding digital world.

IBM cites three main reasons the managed file transfer trend will continue to grow:

Growing Data Volumes: Data volumes are growing while batch processing windows continue to shrink. Today’s MFT workloads tend to involve higher-frequency batching and larger, more varied files than in the past. There is also demand for innovative application processes that exploit MFT capabilities for streaming transfers.

Security: Cybersecurity requirements and concerns continue to heighten, driving the adoption of newer security technologies. These technologies increase the processing overhead of applications, including but not limited to MFT. Where possible, MFT systems must compensate for this overhead through support for hardware accelerators, high-performance security software, and ongoing file transfer throughput improvements.

Big Data and IoT: Businesses are increasingly deploying MFT technology beyond its traditional role of bulk transaction file exchange, particularly in areas such as IoT data analysis and other big data analytics. These scenarios typically demand current, not aging, data, which puts a premium on the speed of file transfers.

 

When dealing with critical business transactions involving crucial information, choosing the right tool is essential. This is why managed file transfer is a core function of our Parallel Data Mover (PDM) suite of solutions. It provides highly managed, secure, fast and efficient transfer of data files between z/OS systems or between z/OS and Linux, UNIX, Windows (LUW) and Hadoop systems. In short, our PDM managed file transfer option can provide you with an outstanding means of managing many thousands of file transfers in a reliable, secure and cost-effective manner.

 

To learn more about our PDM-MFT, contact us today.

October 31st, 2017


Mainframes are the predominant platform for large-scale processing of mission-critical applications, so they are a necessary part of any major corporation. Unfortunately, as the need for processing power grows, most corporations’ mainframe budgets tend to remain relatively flat. This problem is all too common across many industries. IBM recognized it and responded by creating its System z Integrated Information Processor (zIIP) to help reduce mainframe costs.

What is zIIP?

IBM describes the zIIP as:

“a purpose-built processor designed to operate asynchronously with the general processors in the mainframe to help improve utilization of computing capacity and control costs. It is designed for select data and transaction processing workloads and for select network encryption workloads. zIIPs allow customers to purchase additional processing power without affecting the total million service units (MSU) rating or machine model designation.”

Essentially, work from eligible tasks running under the z/OS operating system is dispatched to the zIIP engine rather than to the general-purpose processors. The capacity of the zIIP engine doesn’t count toward the overall MIPS rating of the mainframe, and the CPU usage incurred on the zIIP is not chargeable under typical workload-based software maintenance pricing. This way, the mainframe’s transactional data stores and applications that run on distributed servers can work together, all at a lower cost.

zIIP & Alebra

Although there is always some overhead associated with processing, we are always seeking ways to minimize it. That is why we have enabled zIIP processing to offload as much of this remaining overhead as zIIP processing rules allow. This enablement applies to the most common file transfer operations. For zIIP-enabled transfers using TCP/IP, CPU usage on general-purpose (CP) processors is significantly reduced, and transfers using Alebra’s z/OpenGate transport leave almost no overhead on CP processors at all. Competitors’ data movers use around 128 MIPS, compared to the 7 MIPS our z/OpenGate consumes. And when z/OpenGate is paired with a zIIP processor, MIPS consumption drops to zero, saving you even more than using a zIIP alone.
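To put those MIPS figures in rough dollar terms, here is a back-of-the-envelope sketch. The per-MIPS cost is the $4,500 annual average quoted in our interim copies post further down this page; actual software pricing varies by contract, so treat the output as illustrative only.

```python
# Rough annual cost of the MIPS figures quoted above.
# Assumes an average of $4,500 per MIPS per year (see the interim copies
# post below); real pricing varies by contract and workload.
COST_PER_MIPS_PER_YEAR = 4_500

transports = {
    "Typical competitor data mover": 128,  # MIPS consumed on CP processors
    "z/OpenGate alone": 7,
    "z/OpenGate + zIIP": 0,
}

for name, mips in transports.items():
    print(f"{name:30s} {mips:>4d} MIPS  ~${mips * COST_PER_MIPS_PER_YEAR:>9,}/year")
```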

Contact us to learn more about z/OpenGate, zIIP enablement or any of our other solutions.

September 28th, 2017

Minneapolis, MN (September 19, 2017)—Alebra Technologies Inc., the leader in high-performance cross-platform data movement and remote access, today announced new capabilities in its unrivaled mainframe data integration platform.

Alebra’s new Hadoop Loader features direct two-way transfer capability, providing a cost-effective way to move large amounts of data to the Hadoop environment without all the functionality of the company’s flagship Parallel Data Mover. Customers adopting this solution will, however, have an easy upgrade path to Alebra’s PDM r7 solution.

“While data is being collected, processed and stored on diverse platforms, what companies need are tools that leverage enterprise data and deliver cutting-edge performance for competitive advantage,” said Tom Lehn, CEO, Alebra Technologies. “The Hadoop Loader focuses on moving data between the mainframe and Hadoop platforms quickly, securely and directly while consuming far less overhead than other solutions.”

Alebra’s solution enables users to routinely populate Hadoop Distributed File Systems with huge volumes of corporate data.  The Hadoop Loader tracks, accelerates and automates direct two-way data transfers between a Hadoop cluster and mainframe, Linux, Unix, Windows and other Hadoop clusters.  The direct data transfer function reduces processing time and lowers operating costs by eliminating the need for interim file copies, thereby reducing storage requirements on both ends of the transfer.  Mainframe datasets and entire databases can be moved without landing first on the Hadoop server.

About Alebra Technologies, Inc.

Founded in 1998, Alebra Technologies Inc. is focused on developing software solutions that allow companies to move large amounts of data, in real-time, outside of traditional networks. Alebra’s data integration platform seamlessly integrates its customers’ data across mainframe, Linux, UNIX, Windows and Hadoop platforms using alternatives to traditional data movement and sharing methods. The company’s solutions not only deliver data to any application, but do so in less time and at a significantly lower cost, regardless of the platform or location. To learn more, visit www.alebra.com.

 

For more information, contact:

Steve Schoonmaker (651) 247-9393

Steve.Schoonmaker@alebra.com

September 20th, 2017

Minneapolis, MN (September 6, 2017)—Alebra Technologies Inc., the leader in high-performance cross-platform data movement and remote access, today announced new capabilities in its unrivaled mainframe data integration platform.

Alebra’s Parallel Data Mover release 7 (PDM r7) functionality delivers anytime data transfers without impacting ongoing operations, and enables organizations to reduce operating costs by dramatically reducing mainframe MIPS consumption.

“Alebra has always been the leader in data movement performance and efficiency,” said Bill Yeager, chief technical officer of Alebra Technologies.  “With this release of PDM, we have taken a giant leap to maximize both these factors.  PDM is now capable of running in a zIIP engine, reducing even further the already low overhead you have come to expect from PDM.  In fact, our internal performance tests report literally zero MIPS consumption when transferring data.  The end result is that customers can move significant amounts of data in less time, at any time of the day or night and not worry about impact to production systems or rolling averages.”

The new PDM capabilities extend Alebra’s production deployments for improving infrastructure performance and integrating cross-platform environments to direct connections to Hadoop data lakes.  Mainframe datasets and entire databases can be moved quickly and securely without landing on the Hadoop server while consuming far less overhead than other solutions.

PDM’s key competitive difference is its high availability, efficiency and scalability. The new PDM capabilities, combined with direct connectivity to Hadoop and other HDFS platforms, deliver cutting-edge performance for competitive advantage.

For more information on the new PDM r7 capabilities, please visit our website at www.alebra.com.

About Alebra Technologies, Inc.

Founded in 1998, Alebra Technologies Inc. is focused on developing software solutions that allow companies to move large amounts of data, in real-time, outside of traditional networks. Our data integration platform seamlessly integrates your data across mainframe, Linux, UNIX, Windows and Hadoop platforms using alternatives to traditional data movement and sharing methods. Our solutions not only deliver data to any application, but do so in less time and at a significantly lower cost, regardless of the platform or location.   For more information, visit www.alebra.com.

For more information, contact:
Steve Schoonmaker (651) 247-9393
Steve.Schoonmaker@alebra.com

September 6th, 2017

Transferring Data to Hadoop

Many companies use Hadoop to analyze customer behavior on their websites, process call center activity, and mine social media data. Based on this data, companies can make decisions in real time to understand customer needs, mitigate problems, and ultimately gain an advantage over the competition. But other organizations simply use it to reduce their data storage costs.

The Process

Data is generated from a variety of devices, and this data can be both structured and unstructured. Structured data is stored in a Relational Database Management System (RDBMS) and unstructured data is stored in a file system. Data users may employ a mainframe, or use a distributed system like Hadoop. But to utilize Hadoop, mainframe data must be moved to the server where the Hadoop system lives.
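As a simple illustration, assume a dataset has already been extracted from the mainframe and landed on a Hadoop edge node; the sketch below pushes it into HDFS using the standard Hadoop command line (hdfs dfs). The file and directory names are hypothetical.

```python
import subprocess

# Hypothetical paths: a dataset already extracted from the mainframe to an
# edge node, and the HDFS directory it should land in.
LOCAL_FILE = "/data/extracts/claims_2017_08.csv"
HDFS_DIR = "/warehouse/claims/"

# Push the file into HDFS; 'hdfs dfs -put <local> <hdfs>' copies a local
# file into the distributed file system.
subprocess.run(["hdfs", "dfs", "-put", LOCAL_FILE, HDFS_DIR], check=True)

# Verify the upload by listing the target directory.
subprocess.run(["hdfs", "dfs", "-ls", HDFS_DIR], check=True)
```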

Why use Hadoop rather than a different distributed system?

The Benefits of Hadoop

  1. Scalability: Unlike traditional relational database management systems (RDBMS), which can’t easily scale to process large amounts of data, Hadoop enables companies to run applications on thousands of nodes involving thousands of terabytes of data.
  2. Cost Effectiveness: An issue with traditional relational database management systems is they are extremely cost prohibitive to scale when dealing with large amounts of data. In an effort to reduce costs, many companies in the past would have had to down-sample data and classify it based on certain assumptions as to which data was the most valuable. The raw data would be deleted, as it would be too cost-prohibitive to keep. This meant that when business priorities changed, the complete raw data set was no longer available. Hadoop is designed as a scale-out architecture that can affordably store all of a company’s data for later use, therefore avoiding this issue of lost data.
  3. Flexibility: It enables businesses to easily access new data sources and tap into different types of data (both structured and unstructured) to generate value. This means businesses can use Hadoop to derive valuable business insights from data sources such as social media, email conversations or clickstream data. In addition, it can be used for a wide variety of purposes, such as log processing, recommendation systems, data warehousing, market campaign analysis and fraud detection.
  4. Processing Speed: Hadoop’s storage model is based on a distributed file system that essentially ‘maps’ data wherever it is located on a cluster. The tools for data processing often run on the same servers where the data is stored, resulting in much faster data processing. If you’re dealing with large volumes of unstructured data, Hadoop can efficiently process terabytes of data in just minutes, and petabytes in hours (a minimal sketch of this processing model follows this list).
  5. Fault Tolerance: Another key advantage is its fault tolerance. When data is sent to an individual node, that data is also replicated to other nodes in the cluster, which means that in the event of failure, there is another copy available for use.
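To make point 4 concrete, here is a minimal Hadoop Streaming word-count sketch in Python. It is illustrative only: the input path and the streaming jar location are assumptions that vary by cluster. The key idea is that Hadoop ships these small scripts to the nodes that hold the data blocks instead of pulling the data to a central server.

```python
#!/usr/bin/env python3
"""Minimal Hadoop Streaming word count (illustrative).

Submit with something like (paths and jar location are assumptions):
    hadoop jar hadoop-streaming.jar \
        -files wordcount.py \
        -input /warehouse/claims/ -output /tmp/wordcount \
        -mapper "python3 wordcount.py map" \
        -reducer "python3 wordcount.py reduce"
"""
import sys


def map_phase():
    # Runs on the nodes that store the input blocks: emit (word, 1) pairs.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")


def reduce_phase():
    # Input arrives grouped and sorted by key, so counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")


if __name__ == "__main__":
    map_phase() if sys.argv[-1] == "map" else reduce_phase()
```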

There are several different ways your organization can utilize the features of Hadoop, but first you need a safe and secure way to move your data there. We are experts at this type of data transfer. We will not only deliver your data to any application, but do it in less time and at a significantly lower cost, regardless of the platform or location. Contact us to learn how we can help you!

August 9th, 2017

Data Transfers

Bulk data movement, access, copying and sharing require a lot of bandwidth and can wreak havoc with interactive applications and other normal network traffic on the same system. Ten to fifteen years ago, people might have expected a slight delay when trying to access data, but in today’s fast-paced world people expect information within seconds, at any time, from anywhere, on any device. That means network delays caused by bulk data transfers have a negative ripple effect on the internal productivity of your company and on your ability to provide the quality of service needed to attract and retain customers. This could cost you sales and hurt your bottom line.

What’s the Solution?

This is precisely why we created z/OpenGate. z/OpenGate creates a channel-to-channel connection that integrates mainframes with Linux, Unix and Windows (LUW) systems, mainframes with other mainframes, LUW with LUW, and mainframes with Hadoop and other HDFS platforms. Not only is it an order of magnitude faster, more secure and more reliable than traditional networks, it will also shorten batch run times and lower your mainframe’s MIPS consumption.

Network vs. Channel MIPS

z/OpenGate far exceeds the efficiency of conventional TCP/IP networks.

z/OpenGate was designed to help you improve your customers’ experience by providing anytime, anywhere information access. Improve your bottom line with a cross-platform data integration solution that doesn’t make you wait. Contact us today.

July 20th, 2017

When moving databases from one system to another, often temporary copies of the unloaded data are created. These copies are called interim copies. Even though these copies are temporary, they can influence the speed and functionality of your overall systems and affect the job functions of some members of your IT team.

Who Do Interim Copies Affect?

The three main job functions affected by interim copies are all located in the IT department:
• The Storage Architect who’s responsible for understanding how IT operations impact storage requirements.
• The Application Architect who is responsible for the overall availability and performance of the end user application.
• The Capacity Planner who is responsible for determining the IT system capacity needed to support current and forecasted workloads that run on the mainframe and open systems servers.

How Do Interim Copies Affect Your Systems?

There are three main ways interim copies can affect your systems, listed below in order of importance:

1. Creating interim copies requires extra processing steps, which take time. In an age where people expect real-time answers and information, reducing the elapsed time of a unit of work is crucial to providing a real-time response to user requests of all types from any device. Put simply, reducing the time needed to accomplish a task means a shorter response time from a user’s perspective.

2. They can raise your MIPS (Millions of Instructions Per Second) consumption, which raises your overall operating costs. Consider this: the average cost per MIPS annually is $4,500, so if interim copies are consuming an additional 100 MIPS daily, they could cost you an additional $450,000 annually (see the quick calculation after this list).

3. When interim copies are created, they need to be stored. This means you will need extra storage space on a disk drive, which again raises your overall operating costs.
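The arithmetic behind point 2 is simple multiplication; this short sketch just makes the assumptions explicit (an average of $4,500 per MIPS per year and an extra 100 MIPS, as quoted above) so you can plug in your own numbers.

```python
# Estimated annual cost of the extra MIPS consumed by interim copies,
# using the averages quoted in point 2 above; substitute your own figures.
COST_PER_MIPS_PER_YEAR = 4_500
EXTRA_MIPS_FROM_INTERIM_COPIES = 100

annual_cost = EXTRA_MIPS_FROM_INTERIM_COPIES * COST_PER_MIPS_PER_YEAR
print(f"Estimated extra annual cost: ${annual_cost:,}")  # -> $450,000
```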

Addressing the three points above will greatly reduce the cost and time your organization wastes dealing with interim copies. Alebra’s unique solutions can lower operating overhead and enhance enterprise data integration with an extremely fast transport.

Contact us today to learn more about our solutions for interim copies.

July 12th, 2017

Encryption

With cyber-attacks on both personal and public computer systems increasing in frequency, protecting your data has never been more important. Attack types have diversified, and ransomware attacks, in which sensitive data is held hostage or even exposed to public viewing, have become an area of particular concern for modern web users.

As the unfortunate experiences of many have made clear, simple password protection, anti-viral software and firewalls are not always sufficient to repel an attack. This is especially true for businesses and other organizations, which have to utilize high volumes of sensitive data on a daily basis. For information of critical sensitivity, many businesses decide to use encryption for protection.

Encryption: we’ve all heard the term, but most of us have only a vague idea of what it really is.

What is Encryption?

In general terms, encryption is the process of encoding information so that only authorized parties can access it. The concept is almost as old as communication itself, and coded messages have been used to protect classified information for thousands of years. In the world of computing, encryption converts electronic data into a seemingly incomprehensible form called ciphertext. The encryption algorithm uses a unique key, and only a party holding the corresponding key can return the information to its original form.
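As a minimal illustration (not any particular product’s implementation), the sketch below uses the widely used Python cryptography library to generate a key, turn plaintext into ciphertext, and recover the original data with the same key.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; whoever holds it can encrypt and decrypt.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"Quarterly claims extract - confidential"

# Encrypt: the result (ciphertext) is unreadable without the key.
ciphertext = f.encrypt(plaintext)
print(ciphertext[:40], "...")

# Decrypt: the same key returns the data to its original form.
assert f.decrypt(ciphertext) == plaintext
```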

Why is it Used?

Many organizations will collect terabytes (or more) of data throughout their lifespans. Much of this information will be sensitive and may be stored without access or monitoring for long periods of time, making it susceptible to unauthorized access. Here, encryption is advisable.

In fact, the encryption of data has become exponentially more important in recent years, due to the connected nature of our world today. Every day, massive volumes of data are transmitted via the Internet and other networks (e.g. WLANs or Wireless Local Area Networks). Naturally, this transmission further opens data up to theft, destruction and other unwanted access. Deploying encryption can help to eliminate this vulnerability.

However, confidentiality of data is not encryption’s only advantage. Combined with the right algorithms, encryption can also provide authentication of the data’s origin and confirmation that the data has not been modified. On the legal side of things, a digital signature on encrypted data also serves as proof of sender and recipient.

What Is the Cost of Encryption?

Most organizations (and certainly the ones that have had their data compromised) will agree: you cannot put a financial price on the protection of your data and that of your customers. The inability to protect data can irreversibly destroy a company’s reputation – no one wants to work with an organization that may allow sensitive information to fall into the wrong hands – and may even land you in legal hot water.

However, one must also consider the other main cost of encryption: time. While many factors affect this (the type of data, the volume of information, processor speed, etc.), generally the stronger the encryption, the longer it takes.

It must also be remembered that this is a two-step process. First, there is the initial encryption before the transmission or storage of the data. Then, when a second party has received or retrieved the data, one must allow time for the decryption process. This amount of time varies widely – from a few seconds for a small document to hours or even days for massive troves of data.

One must also consider the computer cycles that will be used on each end. ‘Cycles per byte’ is a measurement of the number of clock cycles a microprocessor performs for each byte of data processed. Due to the complexity of encryption, this number can be quite high – meaning that other tasks you may want to complete on your system are slowed down – or must be postponed until the encryption process has ended.
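To get a rough feel for this cost on a given machine, you can time a bulk encryption and convert the throughput into an approximate cycles-per-byte figure. This is a sketch only: it uses AES-GCM from the Python cryptography library, and the CPU clock rate is an assumption you would replace with your own hardware’s.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

ASSUMED_CPU_HZ = 3.0e9                   # assumption: a 3 GHz core; use your own
payload = os.urandom(64 * 1024 * 1024)   # 64 MB of test data

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)

start = time.perf_counter()
ciphertext = aead.encrypt(nonce, payload, None)
elapsed = time.perf_counter() - start

throughput = len(payload) / elapsed             # bytes per second
cycles_per_byte = ASSUMED_CPU_HZ / throughput   # rough, single-core estimate
print(f"{throughput / 1e6:.0f} MB/s  ~{cycles_per_byte:.1f} cycles/byte")
```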

Experienced data security experts, however, can readily identify an encryption option that suits your organization’s needs, delivering a cost-effective solution that still adequately protects your data from threats. If certain basic rules are followed to ensure maximum protection (for example, replacing easily compromised copper wire with fiber optic and using trusted encryption standards such as AES), encryption can provide real peace of mind for individuals and organizations alike with respect to their data.

If you’d like to learn more or find out which encryption option would best suit your business, contact us today!

July 12th, 2017

Data Networks

When considering data networks, companies usually raise the same four concerns. First is volume: large shops typically do thousands, if not tens of thousands, of data transfers a day. That is a lot of information to track, and if something goes wrong, the time and cost to correct it can be more than the company wants to handle. The overall speed of data transfers is another concern we hear, especially now that companies are trying to move more data in less time. The third concern is overhead: the fear that the system they need will cost too much on a monthly basis. Last is security: if all of their systems are connected together, it is easier for someone to hack them, which typically means everything has to be encrypted, requiring more time and overhead.

We have designed our data networks to address these concerns.

Volume

Our solution can manage thousands of operations because it is scalable. Hardware and software can be added at any point to expand the solution without impacting your normal business operations. This means no loss of revenue due to business disruptions.

Speed

In today’s fast-paced world, people want information when and where they want it, on any device. This cannot be accomplished without a high-speed infrastructure. We understand this, which is why all of our solutions operate at very high speed.

Overhead

We don’t think you should have to incur more overhead to do more work, so our solution uses 1/20th the overhead of conventional networks. This lowers your monthly software license charges.

Security

Our solutions offer a point-to-point connection, so we can connect your mainframe to a server over a dedicated line that has no outside accessibility. This means there is no need for encryption, which saves you time and improves the overall transfer speed. However, if you decide that encryption is still a must, our solution is easily configurable to support the highest level of encryption.

Our data networks are faster, less expensive and more secure than conventional networks. If you would like to learn more or hear about our other solutions, contact us today: http://alebra.com/contact/

June 22nd, 2017

Application

Application redesign and reprogramming are typically large and risky efforts.  In the meantime, existing systems must be maintained and modified as the business changes to meet evolving needs. These changes must be replicated in the new systems and developers often find themselves aiming at a moving target.

This leaves leaders with two options: buy a new solution or migrate their legacy application.

Buying a new solution introduces another set of technical and operational risks. These risks can be greater than those of rewriting existing applications, not to mention the risk of business disruption and internal confusion when new applications are implemented. Legacy migration is often a more attractive option because the functionality of legacy applications is already familiar internally and, in most cases, requires a much smaller investment.

Our data integration platform makes legacy migration possible and offers:

  • Keeping data safe and secure on the mainframe
  • Flexible configurations that allow full or partial workload migration
  • Cross-platform, high-speed data sharing
  • Full historical data conversion with no disruption to your employees or customers
  • Increased application performance and speed
  • Shorter batch windows
  • Lower mainframe MIPS consumption
  • Leaving data on the mainframe where it is available for use by other applications

Replatforming workloads used to be a challenge.  Now it’s a matter of which one goes first.  Contact us.  We’ll show you how.

May 25th, 2017
