Intelligently Connected

3810 Pheasant Ridge Drive NE
Minneapolis, MN 55449

651.366.6140 / Toll-Free:888.340.2727


Application redesign and reprogramming are typically large and risky efforts.  In the meantime, existing systems must be maintained and modified as the business changes to meet evolving needs. These changes must be replicated in the new systems and developers often find themselves aiming at a moving target.

This leaves leaders with two options: buy a new solution or migrate their legacy application.

Buying a new solution introduces another set of technical and operational risks. These risks can be greater than rewriting existing applications, not to mention the risk of business disruption and internal confusion when new applications are implemented. Legacy migration is often a more attractive option because the functionality of legacy applications is already familiar internally and requires a much smaller investment in most cases.

Our data integration platform makes legacy migration possible and offers:

  • Data kept safe and secure on the mainframe
  • Flexible configurations that allow full or partial workload migration
  • Cross-platform, high-speed data sharing
  • Full historical data conversion with no disruption to your employees or customers
  • Faster application performance
  • Shorter batch windows
  • Lower mainframe MIPS consumption
  • Data left on the mainframe, where it remains available to other applications

Replatforming workloads used to be a challenge.  Now it’s a matter of which one goes first.  Contact us.  We’ll show you how.

May 25th, 2017


Mainframes are not just relics of a former computer era. They're massive machines that process enormous volumes of small, simple transactions every day. Some of the world's top corporations entrust their data to mainframe computers. One of the biggest reasons they keep this platform at the center of their technology strategies is that mainframes offer unmatched functionality and reliability. No other platform can handle the demands of major financial institutions and large corporations.

Did you know that a large percentage of the world's transaction data passes through mainframes? Data published by Datatrain suggests that 1.1 million high-volume consumer transactions occur every second on mainframe computers. It also discloses that more than 70 percent of Global Fortune 500 companies use mainframes to handle their core business functions. So why are companies growing their use of mainframes?


The number of users transacting online is expected to grow well beyond 38.5 billion over the next few years. To deal with the growing demand for online transaction processing, financial institutions, governments, and universities need mainframe computers. Mainframes can keep up with increasing demand by processing millions of transactions per second. These machines are ideal for hosting the most important, mission-critical applications, and they're designed to allow a large number of customers to access the same data simultaneously without any issues. They use large CPUs and System Assistance Processors to transfer data rapidly and with precision.


Mainframes retain performance levels when software and hardware changes are made. They efficiently adapt to new workloads of varying complexity.  As companies grow in employees and customers, additional capacity can be added to support business growth without disrupting normal business processes.


Modern mainframe computers exhibit RAS (reliability, availability and serviceability) characteristics. They're designed to remain in service at all times. Financial institutions, for example, can't afford system outages. They need robust, highly reliable systems with no unplanned downtime. Mainframes meet this requirement. IBM System z models have a mean time to failure of 40 years, meaning these machines run continuously, on average, for 40 years with zero downtime. The architecture also allows software and hardware upgrades without any downtime, which is a huge plus for banks and other large organizations.

High Security

Large-scale enterprises like banks, insurance companies, and government organizations want to keep user and internal data secure and retrievable. This is a challenging task when hundreds of thousands of online transaction requests must be processed, encrypted, and decrypted simultaneously, in real time, at speeds users won't notice. Mainframes are undoubtedly more secure than commercial desktop operating systems. They have extensive capabilities to share critical data with the users authorized to see it, while still protecting that data through built-in cryptographic hardware acceleration.


Mainframes can be customized to fit an individual user’s needs. Processors can be turned on and off, depending on the needs of the business. Operating parameters can be customized as well. Some of the latest mainframes, like the IBM System z13, run much faster than previous models. They’re designed to process more than 30,000 transactions per second and up to 2.5 billion transactions per day. These machines have an incredible capacity and outstanding processing power.

Modern mainframes are designed to deliver more value than ordinary computers and they’re built to handle more workloads. They’re also perfectly positioned to support new technologies and offer tremendous efficiency as the digital transformation continues.

Mainframes have been here for more than 50 years. Rather than losing relevancy, they’re only getting faster and more reliable with each passing year.

Contact us today to learn more about our mainframe capabilities:

May 9th, 2017


4-Hour Rolling Average

The 4-hour rolling average plays a significant role in determining your mainframe operating costs. It's just a metric, but a crucially important one. IBM uses it to determine your company's software monthly license charges. So, when you find a way to lower your 4-hour rolling average (4HRA), you directly save your IT department money.

This sounds simple enough, but improving this metric without reducing your workload capacity is easier said than done. However, once you take the time to understand how the 4-hour rolling average works, how it's calculated, and how it can save money, being 4HRA-sensitive can dramatically reduce your company's monthly software expenses.

How Software Pricing Works

Generally, software monthly license charge (MLC) is priced based on peak MSU (millions of service units) usage per month, not on actual machine capacity. To provide a fair pricing model, IBM supports the use of sub-capacity licensing and bases its price on the 4-hour rolling average of your mainframe's MSU consumption. Basically, the tech giant looks at the average MSU used for every 4-hour period within a month and uses the peak 4HRA to determine your monthly license charges. Where you may be losing out is when batch jobs run during the hours your mainframe is already heavily used for high-priority workloads, driving that peak up.

IBM also allows you to implement soft capping, a technique that lets you set a capacity limit for your system. You are then charged based on whichever is lower: your peak 4HRA or your defined capacity. Soft capping combined with active capacity management can lead to significant IT cost reduction.
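To make the billing mechanics concrete, here is a minimal sketch of the calculation described above: average every 4-hour window of MSU consumption, take the highest average as the peak 4HRA, and bill on the lower of that peak and the soft cap. The hourly sample data and the 240-MSU defined capacity are hypothetical illustrations, not figures from any IBM pricing table.

```python
# Sketch: peak 4-hour rolling average (4HRA) from hourly MSU samples,
# then a soft-capping limit applied. All numbers are hypothetical.

def peak_4hra(hourly_msu):
    """Average each 4-hour window and return the highest average."""
    windows = [hourly_msu[i:i + 4] for i in range(len(hourly_msu) - 3)]
    return max(sum(w) / 4 for w in windows)

def billable_msu(hourly_msu, defined_capacity):
    """Sub-capacity billing: the lower of peak 4HRA and the soft cap."""
    return min(peak_4hra(hourly_msu), defined_capacity)

# One hypothetical day of hourly MSU consumption, with a busy morning:
samples = [120, 130, 150, 300, 310, 290, 160, 140] + [100] * 16

print(peak_4hra(samples))          # → 265.0, the peak 4-hour average
print(billable_msu(samples, 240))  # → 240, capped by defined capacity
```

In this toy day, the morning spike pushes the peak 4HRA to 265 MSU, but with a 240-MSU defined capacity the bill is based on 240.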

Managing Your Resources

While workload does impact the 4-hour rolling average, how a system is tuned will also contribute to this metric. The quality of your code, system parameters, and data structures will all impact the 4HRA.

When a developer identifies the quiet hours for your LPARs (the logical partitions that divide your mainframe into sets of resources), it becomes possible to fine-tune your workloads to run during those times. Jobs that can be moved should be scheduled to run during these lean periods, not during peak processing hours. It's also possible to make improvements by streamlining the workloads themselves.

This will force your mainframe to consume only MSUs that are essential for the necessary workloads, rather than overconsuming in some periods. Even something as simple as moving a job by half an hour can reduce the peak 4HRA.
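The effect of rescheduling can be sketched numerically. In this hypothetical example, an 80-MSU job that originally runs during the busy window is shifted to a quiet hour; the total work is unchanged, but the peak 4-hour average drops.

```python
# Sketch: moving one job out of the peak window lowers the peak
# 4-hour rolling average. All numbers are hypothetical.

def peak_4hra(hourly_msu):
    """Average each 4-hour window and return the highest average."""
    windows = [hourly_msu[i:i + 4] for i in range(len(hourly_msu) - 3)]
    return max(sum(w) / 4 for w in windows)

baseline = [200, 220, 260, 280, 270, 210, 120, 110]  # job runs in hour 3
# Same total workload, with the 80-MSU job shifted to quiet hour 7:
shifted  = [200, 220, 260, 200, 270, 210, 120, 190]

print(peak_4hra(baseline))  # → 257.5
print(peak_4hra(shifted))   # → 237.5
```

Total MSU consumption is identical in both schedules; only the timing changed, yet the billable peak falls by 20 MSU.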

Getting the Right Insights for Accurate Tuning

It’s essential to adjust the right workloads to reduce costs. Modeling technology can be used to get a clear picture into your mainframe’s resource consumption and how it changes over time. Your IT team can get the necessary insight to identify the best options for tuning your workloads.

Lowering your software MLC bill can make a dramatic difference to your company’s bottom line. By simply tuning your workloads and actively managing your MSU consumption, it is possible to see a significant expense reduction.

Ready to learn about other options to lower your company’s 4-hour rolling average and see how much you can save? We’ll introduce you to a unique technology innovation that helps lower your 4HRA. Contact us today:

IT Solutions

April 26th, 2017

If you are like me, you might feel misinformed. This article is not about soda but about the consumption of mainframe MIPS (millions of instructions per second) required to move data around the enterprise. Practically every IT budget places a priority on how many mainframe MIPS are being consumed. If there's a disconnect between what is planned for and what is actually used, enterprises may find themselves with unanticipated costs.

According to The Hidden Cost of TCP/IP on Z/OS, a white paper prepared by Bill Yeager, Chief Technical Officer at Alebra Technologies, many mainframe systems use TCP/IP to move data to and from the mainframe. He adds that it is natural to expect that as the volume of transfers increases, so will the processor resources consumed. What did surprise Mr. Yeager, however, was that customers told him the system usage reported under their job accounting did not match what they were seeing in actual resource consumption.

He set out to explain this phenomenon by setting up benchmark runs in standalone processor environments with no other workloads present. What he found is that the job accounting for the FTP clients captured only 50% of what was actually being consumed. He believes the differential is significant enough that either or both of the following should be considered:

  • A charge-back system should be put in place to account for this time.
  • Alternative methods should be considered.
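The arithmetic behind that finding is worth spelling out. If job accounting captures only 50% of the CPU an FTP transfer actually consumes, the true consumption can be estimated by scaling the reported figure. The 50% capture rate comes from the white paper; the reported CPU figure below is a hypothetical illustration.

```python
# Sketch: estimating true CPU consumption when job accounting captures
# only a fraction of it. The 0.50 capture rate is from the white paper;
# the 1200-second reported figure is hypothetical.

CAPTURE_RATE = 0.50  # share of FTP CPU time visible in job accounting

def actual_cpu_seconds(reported_cpu_seconds, capture_rate=CAPTURE_RATE):
    """Scale reported CPU time up to the estimated true consumption."""
    return reported_cpu_seconds / capture_rate

reported = 1200.0                       # hypothetical reported CPU seconds
actual = actual_cpu_seconds(reported)   # 2400.0 estimated true consumption
hidden = actual - reported              # 1200.0 seconds missing from charge-back

print(actual, hidden)
```

At a 50% capture rate, every reported CPU second hides another unreported one, which is exactly why a charge-back adjustment or an alternative transfer method is worth considering.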

Want to see just how much this may be impacting your business costs? Contact us today:

April 21st, 2017

Read the interview with Tom Lehn, Alebra Technologies' CEO.

December 21st, 2016