
Data Transfers

Bulk data movement, access, copying, and sharing require a lot of bandwidth and can wreak havoc with interactive applications and other normal network traffic on the same system. Ten to fifteen years ago, people might have expected a slight delay when trying to access data, but in today's fast-paced world they expect information within seconds, at any time, from anywhere, on any device. This means network delays caused by bulk data transfers have a negative ripple effect on your company's internal productivity and on your ability to provide the quality of service needed to attract and retain customers. That can cost you sales and hurt your bottom line.

What’s the Solution?

This is precisely why we created z/OpenGate. z/OpenGate creates a channel-to-channel connection that links mainframes to Linux, Unix, and Windows (LUW) systems, mainframe to mainframe, LUW to LUW, and mainframes to Hadoop and other HDFS platforms. It's not only an order of magnitude faster, more secure, and more reliable than traditional networks; it also shortens batch run times and lowers your mainframe's MIPS consumption.

Network vs. Channel MIPS

z/OpenGate far exceeds the efficiency of conventional TCP/IP networks.

z/OpenGate was designed to help you improve your customers' experience by providing anytime, anywhere information access. Improve your bottom line with a cross-platform data integration solution that doesn't make you wait. Don't wait. Contact us today.

July 20th, 2017


When moving databases from one system to another, temporary copies of the unloaded data are often created. These copies are called interim copies. Even though they are temporary, they can affect the speed and functionality of your overall systems and the job functions of some members of your IT team.

Who Do Interim Copies Affect?

The three main job functions affected by interim copies are all located in the IT department:
• The Storage Architect, who is responsible for understanding how IT operations impact storage requirements.
• The Application Architect, who is responsible for the overall availability and performance of the end-user application.
• The Capacity Planner, who is responsible for determining the IT system capacity needed to support current and forecasted workloads on the mainframe and open systems servers.

How Do Interim Copies Affect Your Systems?

There are three main ways interim copies can affect your systems, listed below in order of importance:

1. Creating interim copies requires extra processing steps, which take time. In an age where people expect real-time answers and information, reducing the elapsed time of a unit of work is crucial to providing a real-time response to user requests of all types from any device. Put simply, reducing the time it takes to accomplish a task means a shorter response time from the user's perspective.

2. They can raise your MIPS (millions of instructions per second) consumption, which raises your overall operating costs. Consider this: at an average cost of roughly $4,500 per MIPS per year, interim copies that consume an additional 100 MIPS could cost you an extra $450,000 annually (a rough calculation is sketched after this list).

3. When interim copies are created, they need to be stored. This means you will need extra space on disk, which again raises your overall operating costs.
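To make the arithmetic in point 2 concrete, here is a minimal sketch of the cost calculation; the $4,500-per-MIPS rate and the 100 extra MIPS are the example figures above, not measurements from any particular environment.

```python
# Rough annual cost of the extra MIPS consumed by interim copies.
# Example values only: adjust the rate and the extra MIPS to match
# your own contract and measurements.

COST_PER_MIPS_PER_YEAR = 4_500  # assumed average cost per MIPS (USD/year)

def annual_interim_copy_cost(extra_mips: float) -> float:
    """Estimate the yearly cost of additional MIPS consumption."""
    return extra_mips * COST_PER_MIPS_PER_YEAR

print(annual_interim_copy_cost(100))  # -> 450000
```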

Addressing the three points above will greatly reduce the cost and time your organization spends dealing with interim copies. Alebra's unique solutions can lower operating overhead and enhance enterprise data integration with an extremely fast transport.

Contact us today to learn more about our solutions for interim copies.

July 12th, 2017



With cyber-attacks on both personal and public computer systems increasing in frequency, protecting your data has never been more important. Attack types have diversified, and ransomware attacks, in which sensitive data is held hostage or even exposed to public viewing, have become an area of particular concern for modern web users.

As the unfortunate experiences of many have made clear, simple password protection, antivirus software, and firewalls are not always sufficient to repel an attack. This is especially true for businesses and other organizations, which handle high volumes of sensitive data on a daily basis. For information of critical sensitivity, many businesses decide to use encryption for protection.

Encryption: we've all heard the term, but most of us have only a vague idea of what it really is.

What is Encryption?

In general terms, it's the process of encoding information so that only authorized parties can access it. The concept is almost as old as communication itself, and coded messages have been used to protect classified information for thousands of years. In the world of computing, encryption converts electronic data into a seemingly incomprehensible form called ciphertext, and only the corresponding encryption key can return the information to its original form.
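As a minimal sketch of that idea, the snippet below uses the Python cryptography library's Fernet recipe (an AES-based scheme) to turn a short message into ciphertext and back; the library choice and the sample message are illustrative assumptions, not part of any specific product mentioned here.

```python
# Minimal symmetric-encryption sketch using the "cryptography" package
# (pip install cryptography). Fernet is an AES-based authenticated scheme.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the key that can unlock the data
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"Quarterly customer records")
print(ciphertext)                    # seemingly incomprehensible bytes

plaintext = cipher.decrypt(ciphertext)
print(plaintext)                     # b'Quarterly customer records'
```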

Why is it Used?

Many organizations will collect terabytes (or more) of data throughout their lifespans. Much of this information will be sensitive and may be stored without access or monitoring for long periods of time, making it susceptible to unauthorized access. Here, encryption is advisable.

In fact, the encryption of data has become exponentially more important in recent years, due to the connected nature of our world today. Every day, massive volumes of data are transmitted via the Internet and other networks (e.g. WLANs or Wireless Local Area Networks). Naturally, this transmission further opens data up to theft, destruction and other unwanted access. Deploying encryption can help to eliminate this vulnerability.

However, confidentiality is not encryption's only advantage. Due to the design of the algorithms, encryption can also provide authentication of the data's origin and confirmation that the data has not been modified. On the legal side of things, a digital signature on encrypted data also serves as proof of sender and recipient.

What is the Cost of Encryption?

Most organizations (and certainly the ones that have had their data compromised) will agree: you cannot put a financial price on the protection of your data and that of your customers. The inability to protect data can irreversibly destroy a company’s reputation – no one wants to work with an organization that may allow sensitive information to fall into the wrong hands – and may even land you in legal hot water.

However, one must also consider the other main cost of encryption: time. While many factors affect this (the type of data, the volume of information, processor speed, etc.), generally the stronger the encryption, the longer it takes.

It must also be remembered that this is a two-step process. First, there is the initial encryption before the transmission or storage of the data. Then, when a second party has received or retrieved the data, one must allow time for the decryption process. This amount of time varies widely – from a few seconds for a small document to hours or even days for massive troves of data.

One must also consider the computer cycles that will be used on each end. 'Cycles per byte' measures the number of clock cycles a microprocessor performs for each byte of data processed. Due to the complexity of encryption, this number can be quite high, meaning that other tasks on your system are slowed down or must be postponed until the encryption process has ended.
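To make the cycles-per-byte idea concrete, here is a rough back-of-the-envelope sketch; the 10-cycles-per-byte figure, the 3 GHz clock, and the single-core assumption are illustrative guesses rather than benchmarks of any particular cipher or machine.

```python
# Back-of-the-envelope estimate of encryption time from cycles per byte.
# All numbers below are assumptions chosen only for illustration.

CYCLES_PER_BYTE = 10           # assumed cipher cost in CPU cycles per byte
CLOCK_HZ = 3_000_000_000       # assumed 3 GHz processor, single core

def encryption_seconds(num_bytes: float) -> float:
    """Estimate seconds of CPU time needed to encrypt num_bytes."""
    return num_bytes * CYCLES_PER_BYTE / CLOCK_HZ

print(encryption_seconds(1_000_000))            # ~0.003 s for a 1 MB document
print(encryption_seconds(10 * 1024**4) / 3600)  # ~10 hours for a 10 TiB trove
```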

Experienced data security experts, however, can readily identify an encryption option that suits your organization's needs, delivering a cost-effective solution that still adequately protects your data from threats. If certain basic rules are followed to ensure strong protection (e.g. replacing easily compromised copper wire with fiber optic and using trusted encryption standards like AES), encryption can provide real peace of mind for individuals and organizations alike.

If you’d like to learn more or find out which encryption option would best suit your business, contact us today!

July 12th, 2017



When considering data networks, companies usually raise the same four concerns. The first is volume: large shops typically run thousands, if not tens of thousands, of data transfers a day. That is a lot of information to track, and if something goes wrong, the time and cost to correct it can be more than the company wants to absorb. The overall speed of data transfers is another concern we hear, especially now that companies are trying to move more data in less time. The third concern is overhead: the fear that the system they need will cost too much on a monthly basis. The last is security: if all of their systems are connected together, it would be easier for someone to hack them, which typically means everything would have to be encrypted, requiring still more time and overhead.

We have designed our data networks to address these concerns.

Volume

Our solution can manage thousands of operations because it is scalable. Hardware and software can be added at any point to expand the solution without impacting your normal business operations. That means no loss of revenue due to business disruptions.

Speed

In today's fast-paced world, people want the information they want, when and where they want it, on any device. That cannot be accomplished without a high-speed infrastructure. We understand this, which is why all of our solutions operate at very high speed.

Overhead

We don't think you should have to incur more overhead to do more work, so our solution uses 1/20th the overhead of conventional networks. This lowers your monthly software license charges.

Security

Our solutions offer a point-to-point connection, so we can connect your mainframe to a server over a dedicated line with no outside accessibility. This means there is no need for encryption, which saves time and improves overall transfer speed. However, if you decide that encryption is still a must, our solution is easily configured to support the highest levels of encryption.

Our data networks are faster, less expensive, and more secure than conventional networks. If you would like to learn more or hear about our other solutions, contact us today: http://alebra.com/contact/

June 22nd, 2017


Application redesign and reprogramming are typically large and risky efforts. In the meantime, existing systems must be maintained and modified as the business changes to meet evolving needs. Those changes must be replicated in the new systems, and developers often find themselves aiming at a moving target.

This leaves leaders with two options: buy a new solution or migrate their legacy application.

Buying a new solution introduces another set of technical and operational risks. These risks can be greater than rewriting existing applications, not to mention the risk of business disruption and internal confusion when new applications are implemented. Legacy migration is often a more attractive option because the functionality of legacy applications is already familiar internally and requires a much smaller investment in most cases.

Our data integration platform makes legacy migration possible and offers:

  • Data kept safe and secure on the mainframe
  • Flexible configurations that allow full or partial workload migration
  • Cross-platform, high-speed data sharing
  • Full historical data conversion with no disruption to your employees or customers
  • Faster application performance
  • Shorter batch windows
  • Lower mainframe MIPS consumption
  • Data left on the mainframe, where it remains available to other applications

Replatforming workloads used to be a challenge.  Now it’s a matter of which one goes first.  Contact us.  We’ll show you how.

May 25th, 2017

Mainframes are not just relics of a former computer era. They're massive machines that typically process tons of small and simple daily transactions. Some of the world's top corporations entrust their data to mainframe computers. One of the biggest reasons why they prefer keeping this platform at the center of their technology strategies is that mainframe computers offer unmatched functionality and reliability. No other platform can handle the demands of major financial institutions and large corporations.

Did you know that a large share of the world's transactions take place on mainframes? Data published by the Datatrain suggests that 1.1 million high-volume consumer transactions occur every second on mainframe computers. It also reports that more than 70 percent of Global Fortune 500 companies use mainframes to handle their core business functions. So why are companies growing their use of mainframes?

Throughput

The number of connected devices transacting online is expected to exceed 38.5 billion over the next few years. To deal with the growing demand for online transaction processing, financial institutions, governments, and universities need mainframe computers. Mainframes can keep up with increasing demand by processing millions of transactions per second. These machines are ideal for hosting the most important, mission-critical applications, and they're designed to allow a large number of customers to access the same data simultaneously without any issues. They use large CPUs and System Assistance Processors to transfer data rapidly and with precision.

Scalability

Mainframes retain performance levels when software and hardware changes are made, and they efficiently adapt to new workloads of varying complexity. As companies add employees and customers, additional capacity can be added to support business growth without disrupting normal business processes.

Reliability

Modern mainframe computers exhibit RAS (reliability, availability, and serviceability) characteristics. They're designed to remain in service at all times. Financial institutions, for example, can't afford system outages; they need robust, highly reliable systems with no unplanned downtime. Mainframes meet this requirement. IBM System z models have a mean time to failure of 40 years, meaning these machines can run continuously for decades with essentially zero downtime. The architecture also allows software and hardware upgrades without downtime, which is a huge plus for banks and other large organizations.

High Security

Large-scale enterprises like banks, insurance companies, and government organizations need to keep user and internal data secure and retrievable. That is a challenging task when hundreds of thousands of online transaction requests must be processed, encrypted, and decrypted simultaneously, in real time, at speeds that won't be noticeable to users. Mainframes are undoubtedly more secure than commercial desktop operating systems. They have extensive capabilities to share critical data with authorized users while still protecting it through built-in cryptographic hardware acceleration.

Customization

Mainframes can be customized to fit an individual user’s needs. Processors can be turned on and off, depending on the needs of the business. Operating parameters can be customized as well. Some of the latest mainframes, like the IBM System z13, run much faster than previous models. They’re designed to process more than 30,000 transactions per second and up to 2.5 billion transactions per day. These machines have an incredible capacity and outstanding processing power.

Modern mainframes are designed to deliver more value than ordinary computers and they’re built to handle more workloads. They’re also perfectly positioned to support new technologies and offer tremendous efficiency as the digital transformation continues.

Mainframes have been here for more than 50 years. Rather than losing relevancy, they’re only getting faster and more reliable with each passing year.

Contact us today to learn more about our mainframe capabilities: http://alebra.com/contact/

May 9th, 2017

4-Hour Rolling Average

The 4-hour rolling average plays a significant role in determining your mainframe operating costs. It's just a metric, but a crucially important one: IBM uses it to determine your company's monthly software license charges. So, when you find a way to lower your 4-hour rolling average (4HRA), you directly save your IT department money.

This sounds simple enough, but improving the metric without reducing your workload capacity is easier said than done. However, once you take the time to understand how the 4-hour rolling average works, how it's calculated, and how it can save money, being 4HRA-sensitive can transform your company's monthly software expenses.

How Software Pricing Works

Generally, the software monthly license charge (MLC) is priced based on peak MSU (millions of service units) usage per month, not on actual machine capacity. To provide a fairer pricing model, IBM supports sub-capacity licensing and bases its price on the 4-hour rolling average of your mainframe's MSU consumption. Essentially, IBM looks at the average MSU consumed over every 4-hour period within the month and uses the peak 4HRA to determine your monthly license charges. Where you may lose out is when utilization spikes because movable jobs run while your mainframe is already heavily used for high-priority workloads.

IBM allows you to implement soft capping, a technique that lets you set a defined capacity limit for your system. You are then charged based on whichever is lower: your peak 4HRA or your defined capacity. Soft capping combined with active capacity management can lead to significant IT cost reduction.
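As a rough sketch of the billing logic described above, the snippet below computes a peak 4HRA from hourly MSU samples and applies a soft cap; the sample readings, the 550-MSU defined capacity, and the helper names are illustrative assumptions, not IBM's actual SCRT calculation.

```python
# Rough sketch of peak 4-hour rolling average (4HRA) and soft capping.
# hourly_msu holds one average MSU reading per hour; all values are made up.

hourly_msu = [310, 295, 420, 610, 655, 640, 500, 380, 300, 280]

def peak_4hra(samples):
    """Peak of the rolling 4-hour average MSU consumption."""
    windows = [samples[i:i + 4] for i in range(len(samples) - 3)]
    return max(sum(w) / 4 for w in windows)

peak = peak_4hra(hourly_msu)
defined_capacity = 550                      # soft-cap limit set by the site
billable_msu = min(peak, defined_capacity)  # charged on whichever is lower

print(round(peak, 2), billable_msu)         # -> 601.25 550
```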

Managing Your Resources

While workload does impact the 4-hour rolling average, how a system is tuned will also contribute to this metric. The quality of your code, system parameters, and data structures will all impact the 4HRA.

When a developer looks at the quiet hours for your LPARs (the logical partitions of your mainframe's resources), it becomes possible to fine-tune your workloads to run during those times. Jobs that can be moved should be scheduled to run during these lean periods, not during peak processing hours. It's also possible to make improvements by streamlining the workloads themselves.

This keeps your mainframe consuming only the MSUs essential for the necessary workloads, rather than overconsuming in some periods. Even something as simple as moving a job by half an hour can reduce the peak 4HRA.

Getting the Right Insights for Accurate Tuning

It's essential to adjust the right workloads to reduce costs. Modeling technology can give you a clear picture of your mainframe's resource consumption and how it changes over time, so your IT team has the insight needed to identify the best options for tuning your workloads.

Lowering your software MLC bill can make a dramatic difference to your company’s bottom line. By simply tuning your workloads and actively managing your MSU consumption, it is possible to see a significant expense reduction.

Ready to learn about other options to lower your company’s 4-hour rolling average and see how much you can save? We’ll introduce you to a unique technology innovation that helps lower your 4HRA. Contact us today: http://alebra.com/contact/


April 26th, 2017

If you're like me, you might feel misinformed: this article is not about soda but about the consumption of mainframe MIPS (millions of instructions per second) required to move data around the enterprise. Practically every IT budget places a priority on how many mainframe MIPS are being consumed. If there's a disconnect between what is planned for and what is actually used, enterprises may find themselves with unanticipated costs.

According to The Hidden Cost of TCP/IP on z/OS, a white paper prepared by Bill Yeager, Chief Technical Officer at Alebra Technologies, many mainframe shops use TCP/IP to move data to and from the mainframe. He notes that it is natural to expect processor expense to rise as the volume of transfers increases. What did surprise Mr. Yeager, however, was that customers told him the system usage reported under their job accounting did not match the resource consumption they were actually seeing.

He set out to explain this phenomenon by setting up benchmark runs in standalone processor environments with no other workloads present. What he found is that job accounting for the FTP clients captured only 50% of what was actually being consumed. He believes the differential is significant enough that either or both of the following should be considered (a rough charge-back sketch follows the list):

• A charge-back system should be put in place to account for this time.
• Alternative methods should be considered.
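As an illustration of the charge-back idea, the sketch below grosses up the CPU time reported by job accounting using a capture ratio; the 50% ratio comes from the white paper's benchmark, while the sample reported value and the function name are assumptions for the example.

```python
# Gross up reported FTP CPU time to estimate true consumption.
# CAPTURE_RATIO reflects the white paper's finding that job accounting
# captured only about 50% of the CPU actually consumed.

CAPTURE_RATIO = 0.50

def estimated_true_cpu_seconds(reported_cpu_seconds: float) -> float:
    """Estimate actual CPU consumed from the job-accounting figure."""
    return reported_cpu_seconds / CAPTURE_RATIO

print(estimated_true_cpu_seconds(120.0))  # reported 120 s -> ~240 s consumed
```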

Want to see just how much this may be impacting your business costs? Contact us today: http://alebra.com/contact/

April 21st, 2017

Read the interview with Tom Lehn, Alebra Technologies' CEO.

December 21st, 2016