Friday, May 17, 2013

IBM System z: The Lowest-Cost Database Server Solution

by Cal Braunstein in Enterprise Executive
Enterprises that trusted the mainframe myths and moved their corporate databases to distributed platforms are spending 100 percent more than necessary on database servers, creating data integrity issues and increasing data risk by squeezing an already shrinking backup window. By moving databases from shared-nothing distributed data servers to the shared storage environment of IBM zEnterprise systems, and by putting the applications on Integrated Facility for Linux (IFL) processors and an IBM zEnterprise BladeCenter Extension (zBX), IT executives can reduce their ecosystem costs by more than 50 percent per year.
At Robert Frances Group (RFG), we completed a Total Cost of Ownership (TCO) analysis comparing the traditional distributed Linux and Microsoft Windows environment with a zEnterprise-plus-zBX environment that consolidates the databases on the mainframe, and found the distributed environment to be twice as expensive. Our study used the standard three-year zEnterprise leasing and refresh strategy and a traditional five-year purchase plan for the distributed x86 scale-out scenario. IT executives should evaluate the shared zEnterprise database server alternative to lower costs, improve productivity and reduce data risk. Additionally, IT executives should work with IBM or a third-party lessor to structure a package that best meets current and future business, financial and IT objectives.
We had several ideas and hypotheses in mind before conducting our analysis. Specifically:
• The scale-out distributed server model using shared-nothing databases is costly and inefficient, creates data integrity and operational exposures, and falls short of best practice. A switch to using the mainframe as a database server eliminates the need for database duplication and synchronization, since the mainframe uses a shared-everything architecture. While the acquisition cost of the zEnterprise and zBX servers collectively runs higher than that of distributed x86-based servers, this is more than compensated for by the drastic reduction in database arrays and their associated costs. IT executives should assess the platform options holistically rather than piecemeal to identify the optimal solution.
• A zEnterprise environment can place Linux applications on IFLs and Windows applications on a zBX. Using this tightly knit, workload-optimized solution reduces the number of processors required, improves application and system management, and uses a high-speed interconnect so performance isn’t diminished when shifting to a shared-everything database engine. A zEnterprise solution enables enterprises to improve automation, control, security and visibility into their applications and databases without degrading performance. IT executives should determine which applications and databases should move to a zEnterprise environment and perform a TCO analysis to gain executive buy-in for the shift.
• Several non-financial gains accrue when moving to a shared-everything storage environment, and these should also be factored into the decision-making process. Having a single copy of data means there’s only one version of the truth, all outputs and reports will be consistent, and keeping things in sync won’t require error-prone manual manipulation. Most enterprises today spend between 25 and 45 percent of their administrators’ time synchronizing the many database copies. The time consumed by duplication also creates a backup exposure; some backups don’t occur when administrators are pressed for time. Business and IT executives should consider these data integrity and risk exposures.
• Most IT executives have blindly accepted as fact the premise that distributed processing is the least expensive solution. This belief has gained ground because of a focus on Total Cost of Acquisition (TCA). If the only valid cost analysis were the TCA of the servers, the premise might hold water. However, when the entire ecosystem is analyzed—including administrator costs, application and middleware software license and maintenance fees, cabling, networking, servers, storage, floor space, and power and cooling—it falls apart.
• When the zEnterprise is used as a database server and IFLs and zBX are fully leveraged—and the analysis occurs holistically—a different picture emerges. The zEnterprise environment costs more than 50 percent less than that of a distributed x86 ecosystem, mostly due to the savings on storage, administrator, warranty and software costs.
• The mainframe architecture supports shared-everything storage while all distributed operating system platforms use a shared-nothing architecture. The mainframe architecture is unique in that multiple workloads share processors, cache, memory, I/O and storage. Moreover, zEnterprise systems provide data, IT and storage management practices and processes that facilitate and simplify the centralized, shared environment and enable application and database multitenancy. This means mainframe applications can share a single instance of a database, such as customer data, while distributed systems force the creation of a copy for each application’s use.
• Often, companies have between seven and 50 copies of the same database in use, and every terabyte of data stored is expanded by requirements for archiving, backup, mirroring, snapshots, test systems and more (see Figure 1). This data store expansion is then multiplied by the number of copies the distributed systems require. Thus, 1 TB of data in a distributed environment could grow to more than 100 TB—more than 10 times the amount needed when databases are shared using a zEnterprise (see the sketch after this list). Software clustering solutions exist to work around this distributed duplication phenomenon and some of the storage sprawl, but they’re partial fixes and only address certain data sets.
• Mainframe storage capacity requirements are a fraction of what’s required for distributed systems. Annual acquisition costs for additional storage on a mainframe will be far less than those for distributed storage solutions. The capital expenditure (CAPEX) savings from the differential in storage costs when mainframes are used as a database engine far exceed the added expense of the mainframe hardware. The mainframe’s smaller storage footprint also reduces operational expenditures (OPEX) and lowers the TCO.
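A rough illustration of this copy multiplication, sketched in Python with multipliers that are our own illustrative assumptions rather than figures from the study:

# Rough sketch of shared vs. shared-nothing storage expansion.
# The multipliers are illustrative assumptions, not study data.
production_tb = 1.0

# Overhead applied to every database instance (archive, backup,
# mirror, snapshot and test copies): assume roughly 5x the base data.
protection_multiplier = 5

# Shared-everything (mainframe): a single database instance.
shared_footprint_tb = production_tb * protection_multiplier

# Shared-nothing (distributed): one instance per consuming application.
application_copies = 20            # the article cites seven to 50 copies in practice
distributed_footprint_tb = production_tb * application_copies * protection_multiplier

print(f"Shared storage footprint:      {shared_footprint_tb:.0f} TB")
print(f"Distributed storage footprint: {distributed_footprint_tb:.0f} TB")

With 20 application copies and a 5x protection overhead, a single production terabyte grows to 100 TB in the distributed model, consistent with the order of magnitude described above.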


The Methodology

We hypothesized that a large Small to Midsized Business (SMB) with revenues between $750 million and $1 billion might operate a more economical data center environment if it used the new zEnterprise architecture and the mainframe as a database server. Most SMBs run their applications on Windows and/or Linux on x86-architected servers that don’t offer the advantages of a scale-up architecture. Let’s assume AB Co. (ABCo) runs 500 applications, with 75 percent of them (375) executing on top of the Windows operating system. The remaining applications (125) run on Red Hat Enterprise Linux. Additionally, 10 percent are CPU-intensive and require their own blade servers. All other applications operate under either VMware or KVM, depending on whether they’re Windows or Linux applications, respectively. The application workload grows at 20 percent per year.
We also assumed a Storwize V7000 Unified Storage System houses the databases for both the mainframe and distributed environments. To keep the analysis from becoming too complex, only two database sizes are used (1 TB and 2 TB) and each application accesses 10 databases, half of each size. The storage growth rate is 25 percent. There are a total of 70 unique databases, half of each size. For the purposes of the study, only the production servers and storage are included; the archive, backup and snapshot copies of data are excluded. Because a Storwize storage solution is used, we assume 60 percent utilization is achieved in all environments.
We further assumed that 126 TB of storage is required to handle storage needs for the first 12 months of operation. This includes an additional 20 percent for duplicate databases for the mission-critical applications. On the x86 side, since this is a shared-nothing framework, a minimum of seven copies of each database would be needed. This results in a total initial storage capacity of 770 TB to support the first year’s operation. Finally, note that DB2 10 for z/OS is the database software used to access all databases.
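The sizing above can be roughly reconstructed as follows. This Python sketch is our approximation of the arithmetic; the exact rounding and the way the 20 percent mission-critical duplication interacts with the seven shared-nothing copies aren’t spelled out in the study, so treat the intermediate factors as assumptions:

# Approximate reconstruction of the first-year storage sizing.
# Rounding and duplication rules are assumptions, not study data.
unique_dbs = 70
unique_tb = (unique_dbs // 2) * 1 + (unique_dbs // 2) * 2   # half 1 TB, half 2 TB -> 105 TB

mission_critical_dup = 1.20                     # +20% duplicates for mission-critical apps
mainframe_usable_tb = unique_tb * mission_critical_dup      # ~126 TB, as stated above

copies_shared_nothing = 7                       # minimum copies in the x86 scenario
x86_usable_tb = 110 * copies_shared_nothing     # 770 TB; the ~110 TB per-copy base is our back-calculation

utilization = 0.60                              # Storwize utilization assumption
mainframe_raw_tb = mainframe_usable_tb / utilization   # ~210, in line with the 214 raw TB cited later
x86_raw_tb = x86_usable_tb / utilization               # ~1,283, in line with the 1,285 raw TB cited later

print(f"Mainframe: {mainframe_usable_tb:.0f} TB usable, {mainframe_raw_tb:.0f} TB raw")
print(f"x86:       {x86_usable_tb:.0f} TB usable, {x86_raw_tb:.0f} TB raw")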
The x86 server scenario uses all IBM 16-core HX5 blade servers for application and database processing. The zEnterprise uses the Central Processing (CP) environment to handle all the database interactions, exploits IFLs for all of the Linux workloads and the zBX for the Windows workloads. In this way, each workload is allocated to the server platform best-suited to perform the task. We further assumed the x86 servers were purchased and kept in operation throughout the five-year analysis period while the zEnterprise boxes were leased and refreshed at the end of three years.

The Distributed Approach

We assumed the distributed environment used 24 16-core HX5 blade servers to handle the 500 Linux and Windows applications. Since these environments require shared-nothing storage, the Storwize solution ends up requiring 126 enclosures and 1,285 raw TB of storage. All the hardware was purchased, with the purchase price financed over the five-year period. To meet the additional capacity demands year over year, new servers or storage arrays were purchased using the same methodology.

The Mainframe Solution
We configured a zEnterprise z196 model 501 to handle the database management, along with 13 IFLs and a zBX containing 14 16-core HX5 blade servers. The only application in the CP is the DB2 database management package. None of the distributed applications are rewritten to run on the CP. The Linux applications are relocated to IFLs, where there’s better memory management, allowing for greater utilization (up to 60 percent) and performance. We assume that 10 Linux applications can run on each IFL. Due to the improved management capabilities of a zBX, we assume a 10 to 15 percent performance improvement per HX5 on the zBX compared to a standard distributed environment.
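As a quick check on the configuration, the IFL count follows directly from the stated consolidation ratio. A minimal sketch; the ceiling rounding is our assumption:

import math

linux_apps = 125          # Linux applications in the ABCo scenario
apps_per_ifl = 10         # consolidation ratio assumed above
ifls_needed = math.ceil(linux_apps / apps_per_ifl)
print(ifls_needed)        # 13, matching the configured IFL count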
In the zEnterprise environment, the data I/O requests start from the applications in the IFLs and zBX blades and are relayed to the DB2 application in the CP for handling. Only the DB2 application in the CP interfaces with the Storwize storage arrays. This environment initially requires 21 shared-storage Storwize enclosures and 214 raw TBs of storage.
At the end of the three-year lease, the zEnterprise model 501 is upgraded to a model 601 so it can handle the database workload through the next three-year period. As is common when upgrading a mainframe, the IFLs are also upgraded. The cost to upgrade each IFL is $6,000 and is factored into the new lease. When the HX5 blade servers are upgraded at the end of the third year, the number of servers shrinks by two. We assumed that even though there are two fewer servers in use in year four, the licenses and associated software maintenance are continued. This way, when it’s necessary to add more servers in the last year of the analysis, only the software for two servers needs to be factored in instead of four.
Using the previous scenario, we find that, as expected, the cost of the mainframe environment exceeds that of the distributed x86 servers: $9.4 million vs. $5.3 million on a Net Present Value (NPV) basis. However, the $4.1 million differential is more than recouped on the storage side. The zEnterprise storage costs come in at $3.8 million on an NPV basis while the distributed storage costs exceed $21.7 million. This is a net savings in excess of $13.8 million. Moreover, this savings is more than the cost of the entire zEnterprise ecosystem.
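Reconciling those NPV figures is simple arithmetic; the sketch below just restates the numbers quoted above (all values in millions of dollars):

# Reconciliation of the NPV figures quoted above ($ millions).
z_servers, x86_servers = 9.4, 5.3
z_storage, x86_storage = 3.8, 21.7

server_premium = z_servers - x86_servers        # ~4.1 paid for the zEnterprise complex
storage_savings = x86_storage - z_storage       # ~17.9 avoided on storage
net_savings = storage_savings - server_premium  # ~13.8 net, as stated above

print(f"Server premium:  ${server_premium:.1f}M")
print(f"Storage savings: ${storage_savings:.1f}M")
print(f"Net savings:     ${net_savings:.1f}M")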

Analysis Considerations
The TCO analysis was done over a five-year period. On the leasing side, the original zEnterprise processors (CP, IFLs and HX5 blades) are returned after 36 months and replaced by the latest-generation servers. By swapping out the old hardware and moving to more powerful processors, CP growth is contained and excess capacity is minimized. IFL growth slows and tops out at 25, while the HX5 blades shrink initially upon replacement and then expand to a total of 22 blade servers. The Storwize arrays grow from the initial 128 TB (214 raw TB) to 314 TB (523 raw TB). However, the number of enclosures only grows from 21 to 28. This small expansion is the result of leasing the storage and replacing the units with denser storage at the end of the three-year lease period.
The purchase model assumes that all servers are kept in service for a full five-year cycle and that, whenever added capacity is required, additional servers are bought. Thus, in the purchase model, the 24 servers expand at 20 percent annually until they reach 48 servers by the end of the five-year cycle. The Storwize arrays expand from an initial 771 TB (1,285 raw TB) to 1.9 PB (3.14 raw PB) over the five-year period. The number of enclosures jumps from 126 to 216 in the same period, as none of the arrays or enclosures are swapped out.
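Both sets of growth figures are consistent with simple compound growth, as this sketch shows; the year-by-year arithmetic and the reuse of the 60 percent utilization assumption are ours:

# Compound-growth check of the capacity projections.
years_of_growth = 4                               # year-1 baseline plus four annual steps

# Purchase model: servers grow 20 percent per year, storage 25 percent.
servers_y5 = 24 * 1.20 ** years_of_growth         # ~49.8; the article rounds to 48
purchased_usable_y5 = 771 * 1.25 ** years_of_growth   # ~1,882 TB, i.e. ~1.9 PB
purchased_raw_y5 = purchased_usable_y5 / 0.60     # ~3,137 TB, i.e. ~3.14 PB raw

# Lease model: storage also grows ~25 percent per year.
leased_usable_y5 = 128 * 1.25 ** years_of_growth  # ~312 TB, close to the stated 314 TB

print(f"Purchased servers, year 5: ~{servers_y5:.1f}")
print(f"Purchased storage, year 5: ~{purchased_usable_y5:.0f} TB usable, ~{purchased_raw_y5:.0f} TB raw")
print(f"Leased storage, year 5:    ~{leased_usable_y5:.0f} TB usable")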
On the software side in the purchase model, we assumed that payments for all software licenses were financed over the five-year period. In the leasing model, the costs of software licenses were spread out over the term of the lease. The leasing model selected was a Fair Market Value (FMV) lease obtained from IBM Global Financing (IGF) at a reasonable, but not most favorable, lease rate. The cost of capital and the purchase financing rate were estimated at 6 percent.

Findings

We found it’s more than 100 percent more costly to spread database serving across distributed x86 servers than to consolidate the databases onto a common shared platform using the zEnterprise as a database server. This cost advantage holds on both a current-dollar and an NPV basis.
The primary inhibitor to selecting the zEnterprise database engine approach is that the zEnterprise server alternative is more than twice as expensive as the x86 servers. Business and IT executives see the price tag differential—$1.14 million for the x86-based servers vs. $3.88 million for the zEnterprise servers over the five-year period—and conclude mainframes aren’t the way to go. However, the server costs pale when the database environment is factored into the equation. The distributed shared-nothing x86 storage systems come in at $10.7 million while the mainframe storage system costs only $2 million over the five-year period. The $8.7 million savings in storage acquisition costs more than compensates for the $2.74 million in added zEnterprise acquisition expenses.
When all the TCO factors are examined, the purchased x86 solution runs almost $34.8 million, or on an NPV basis, just over $27 million. The leased zEnterprise solution comes in at more than 50 percent less—$16.5 million on a current dollar basis, or $13.2 million on an NPV basis.
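The gap between the current-dollar and NPV totals is simply the effect of discounting each year’s outlay at the study’s 6 percent cost of capital. A minimal sketch of that mechanic follows; the year-by-year cash flows are hypothetical placeholders (only the year-one and year-five x86 figures appear in the next paragraph), so the computed NPV approximates rather than reproduces the quoted $27 million:

# Discounting sketch at the study's 6 percent cost of capital.
# The yearly cash flows are hypothetical placeholders, not study data.
def npv(cash_flows, rate=0.06):
    """Present value of year-end cash flows discounted at `rate`."""
    return sum(cf / (1 + rate) ** (year + 1) for year, cf in enumerate(cash_flows))

x86_yearly_m = [4.8, 5.9, 7.1, 7.6, 9.4]   # $M; chosen to sum to the quoted $34.8M
print(f"Current-dollar total: ${sum(x86_yearly_m):.1f}M")
print(f"NPV at 6 percent:     ${npv(x86_yearly_m):.1f}M")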

The zEnterprise solution costs remained fairly flat over the five-year period, with most of the yearly expenses in the low $3 million range. There were two years when that didn’t occur—years three and four, when the expenses jumped to $4 million and then dropped to $2.7 million, respectively. The purchased x86 solution saw its total annual costs climb from $4.8 million in year one to $9.4 million at the end of the five years (see Figure 2).

Details

The out-of-pocket charges to install the zEnterprise alternative are a wash compared to the installation costs of the x86 solution. However, an $18.2 million savings is achieved by using the mainframe as a database server. Approximately one-third of that comes from hardware costs, while another 27 percent comes from administrator costs.
The purchased option required additional software licenses and maintenance fees and saw growth in energy consumption. The total additional software expenditures in the purchase model exceeded $3.4 million, with most of that being software license and warranty fees. Similarly, power and cooling charges increased by more than $811,000 over the five years in the purchased model, or about 4 percent of the added expenditures (see Figures 3 and 4).
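Expressed as shares of the roughly $18.2 million in savings, the breakdown works out approximately as follows. This is our arithmetic on the figures quoted above; the categories shown don’t sum to the total because storage and other OPEX items aren’t itemized here:

# Approximate share of the ~$18.2M savings by category ($ millions),
# using the figures quoted above; the shares are our arithmetic.
total_savings = 18.2
hardware = total_savings / 3            # "approximately one-third" -> ~6.1
administrators = 0.27 * total_savings   # ~4.9
software = 3.4                          # added license/maintenance in the purchase model
power_cooling = 0.811                   # added energy spend over five years

for name, value in [("hardware", hardware), ("administrators", administrators),
                    ("software", software), ("power/cooling", power_cooling)]:
    print(f"{name:15s} ${value:4.1f}M  ({value / total_savings:.0%})")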



Other Considerations

There are several other advantages the zEnterprise platform offers that weren’t included in the cost analysis. Some of these are server-related while others are tied to the compressed storage footprint. Having just one copy of data reduces the risk of data integrity exposures caused by application or timing errors. This eliminates the need for syncing copies, which can consume between 25 and 45 percent of administrator time. Most companies today are concerned about the shrinking backup window; eliminating synchronization frees up time for backups. Companies often are hard pressed to get all their backups done as scheduled and are exposed, should a backup run fail to complete. There’s little time for a rerun. If a recovery is necessary, the most recent recovery point may not have been captured, potentially causing data integrity problems, lost revenues and customer dissatisfaction.
zEnterprise processors are architected for maximizing throughput and system utilization when consolidating multiple workloads on a server complex. Mainframes can consistently handle utilization levels of 80 to 100 percent without freezing or failing. Moreover, mainframes are recognized as the best platform for continuous and high availability, investment protection, performance, reliability, scalability and security. Because of its unique scale-up architecture, the cost per unit of work on a mainframe goes down as the workload increases; that isn’t the case with the scale-out architecture (see Figure 5). The cost/performance gains are due to the need for fewer administrators per unit of workload and higher levels of utilization. Mainframes can achieve higher utilization levels because of memory and processor sharing. Under the covers, there are hundreds of I/O processors to handle the data movements, freeing the central and specialty processors to focus on the application and task workloads.

This analysis didn’t examine the added costs of development systems. Here, too, the zEnterprise environment can share databases while each of the x86 test systems would have its own copies of the data. Moreover, users archive and back up the various databases and create snapshots. As shown in Figure 1, these database duplicates increase the rate of storage growth in the distributed environment over that of the mainframe solution. If these additional costs were added to the TCO, the zEnterprise advantage would improve even more.

Conclusion

We found that the zEnterprise reduced costs in all the TCO factors considered. zEnterprise hardware costs were 33 percent less than the x86 ecosystem costs while administrator costs were 28 percent lower. Warranty costs were 16 percent less and the cost of software dropped by 12 percent when the mainframe alternative was used. For much smaller SMBs or departmental systems, mainframes aren’t the answer, but for midsize to large enterprises, the economies of scale provided by mainframe solutions make a compelling case for organizations to re-examine their assumptions and consider the zEnterprise as a target environment.
Mainframe myths have led to higher data center costs and suboptimization. Organizations running hundreds of applications and multiple terabytes of data should re-evaluate their architectural platform choices and consider whether a zEnterprise solution might provide a lower TCO. IT executives should insist on an evaluation that addresses the financial facts and ignores the religious platform wars. In today’s environment, IT must select and implement the best target platforms. The zEnterprise as a database server is a great choice.
 

Warning: Mainframe Data Leakage Poses Significant Risk

Denny Yost in Enterprise Executive
Enterprise Executive sat down with Rich Guski, who recently retired from IBM, to get his insight into current and future security trends. Rich was a key participant in RACF security development and the architect of several CICS security functions that shipped with RACF for z/OS 1.10 during his 27-year tenure with IBM. He’s also a Certified Information Systems Security Professional (CISSP), as defined by the International Information Systems Security Certification Consortium (ISC)².

Enterprise Executive:
Rich, what are you doing now that you’ve retired from IBM?
Rich Guski: I’m currently doing mainframe computer security consulting work.
EE: Do you continue to stay apprised of current security trends that would benefit mainframe professionals?
Guski: Yes. I still attend and speak occasionally at RACF User Group (RUG) meetings and Vanguard conferences.
EE: What current or future trends do you see in the realm of data security that affect the z/OS environment?
Guski: If you’re the manager of an IT organization, one of your responsibilities, as the custodian of your organization’s data, is to comply with requirements for the security and handling of sensitive data. For many years, the simplest way to demonstrate compliance was to use a well-known mainframe Access Control Product (ACP), such as IBM’s RACF, CA’s ACF2 or CA’s Top Secret, and use the associated ACP tools to generate reports to prove to auditors that you’re protecting the sensitive data. But lately, newly emerging standards for security of sensitive data are complicating this picture.
EE: Can you give us an example of such an emerging standard and what it means for the IT executive?
Guski: Yes. Certain sets of security requirements, the Payment Card Industry Data Security Standard (PCI DSS), for example, have evolved their own rules for the security and handling of sensitive data, such as the credit card numbers used by their industry. The PCI Security Standards Council has the responsibility of managing the PCI DSS standard. What makes PCI DSS unique is that, unlike many other regulations, it comes from private industry rather than the government.
EE: Are there other standards and regulations besides PCI DSS that IT executives should be concerned about?
Guski: Yes, there are other compliance frameworks that, while not exactly the same as PCI requirements, nevertheless result in managerial action items similar to those driven by PCI DSS. However, for the sake of brevity, allow me to focus on PCI DSS for now, but be mindful that my conclusions will apply to other sets of sensitive information that a typical IT executive might be responsible for. Look at it this way: PCI DSS could be viewed as a “Standard of Due Care” in case a data breach ever goes to litigation.

EE: Most mainframe shops use mainframe security products such as RACF, CA-ACF2 or CA Top Secret. Don’t these provide all the security required for mainframe data?
Guski: No. What I’m saying is that these new standards and regulations such as PCI DSS are effectively raising the bar of mainframe security beyond the current reach of these products as they’re used today.
EE: Can you explain this rather strong statement?
Guski: Sure. Most mainframe ACPs are configured to use Discretionary Access Control (DAC), an access architecture whereby security administrators or data owners decide how the data should be protected. Users, who must access the data in order to use it as part of their job function, are granted at least READ authority to the data. Any user who can read the data, in effect, becomes a “custodian” of the data with direct control over its disposition. As an example of how a custodian of data can change its security and disposition, consider the following: A user who’s authorized to READ certain data can make a copy of that data, giving the copy a different name with different access control rules. That user can then give READ authority to other users without regard to the data content. Responsible managers know where production confidential and sensitive information is located, but they don’t know when that information is copied to unknown data repositories.
EE: You mean to say that “unknown” sensitive data may have proliferated inside the mainframe environment in such a way that IT organizations don’t know exactly where it is and how it’s protected? Wow! Could you explain how this might happen and provide some examples?
Guski: Yes, of course. Consider the following common scenarios:
• Your production support team is under pressure to fix a program abnormal termination (Abend) and get production back on schedule. To test a required fix, a team member copies production data to his or her own “user-ID prefixed data sets.” To expedite problem resolution, no time is spent sanitizing confidential and sensitive information, which may exist within the data. After the problem is resolved, for various valid reasons, the copied data isn’t deleted from the system.
• Another scenario is when a system user uploads confidential and sensitive information from a distributed platform to the mainframe into a repository that may be protected differently and is unknown to the manager who’s the responsible custodian of the data.
• A user is assigned to produce a report for executives and must do queries of a database containing sensitive information. He stores the query results in data sets under his userid prefix and then produces the report. He never deletes the data sets containing the query results, which contain sensitive information.
In each example, the copied data is inappropriately protected and logging attributes may be incorrectly configured. The scenario continues downhill when the data isn’t promptly deleted, which is often the case. Additionally, some co-workers routinely have access to this data since it’s stored in user-ID prefixed data sets, compounding the problem. These and similar scenarios are referred to as “data leakage,” which IT auditors are now recognizing as a risk. While security experts are focusing on cyber security attacks, is anyone paying attention to the threat of insiders downloading improperly secured leaked data?
EE: How about companies that have outsourced the management of their mainframes and sensitive data to third-party service providers?
Guski: This is an interesting question. Managers who are responsible for the security and disposition of sensitive data sometimes assume that since the processing of the data has been moved outside their organization, they’re no longer responsible for its security and disposition, but this isn’t so. Again, using PCI as an example, the organization is still responsible for ensuring the service provider performs certain control functions to ensure compliance with the PCI requirements regarding proper handling and security of data, and that the output of these control procedures is presented to the requesting organization to be added to their records for later perusal by their auditors.
A secondary problem that often shows up when an IT organization turns PCI cardholder data over to a service provider occurs when copies of the cardholder data, either complete or partial, are mistakenly left on the organization’s mainframe. PCI requirements state that the organization must be able to show that no such data exists outside of “known data repositories.” These scenarios show how sensitive data can leak outside the scope of a known data repository and become a security and audit risk to the organization.

EE: OK, I can see how data leakage can occur, especially over a long period of time and with personnel changes. But how does data leakage add risk to an IT organization’s bottom line?
Guski: Risk assessments are fundamental requirements found in almost all regulations and standards, and they have long been a tool for mainframe security auditors. They’re important in determining how data should be protected, whether it’s stored, transmitted or archived. Since data leakage has only recently begun to be recognized as a threat, it is only now beginning to be included in mainframe risk assessments by auditors. Ignoring data leakage in mainframe risk assessments leaves an obvious loophole. If a mainframe risk assessment hasn’t been conducted at all, then it’s highly likely that little thought has been given to the mainframe data leakage problem.
Furthermore, if this risk wasn’t identified and included in a mainframe risk assessment, management isn’t positioned to make an intelligent decision regarding potential risk to the organization such as “accept the risk and associated consequences if a breach does occur,” or “demonstrate due diligence by initiating a data discovery project to scan and find all data repositories for unknown cardholder data.” Identifying and documenting mainframe data leakage in a risk assessment also removes the “plausible denial” factor.
To further expand on this point with PCI as the example, let’s consider an instance where all known cardholder data has been identified and is included in the scope of the Cardholder Data Environment (CDE), and any unknown cardholder data is considered to be outside the scope of the CDE.

The following excerpts are from the PCI DSS 2.0:
Scope of Assessment for Compliance with PCI DSS Requirements
The first step of a PCI DSS assessment is to accurately determine the scope of the review. At least annually and prior to the annual assessment, the assessed entity should confirm the accuracy of their PCI DSS scope by identifying all locations and flows of cardholder data and ensuring they are included in the PCI DSS scope. To confirm the accuracy and appropriateness of PCI DSS scope, perform the following:
• The assessed entity identifies and documents the existence of all cardholder data in their environment to verify that no cardholder data exists outside of the currently defined cardholder data environment (CDE).
• Once all locations of cardholder data are identified and documented, the entity uses the results to verify that PCI DSS scope is appropriate (for example, the results may be a diagram or an inventory of cardholder data locations).
• The entity considers any cardholder data found to be in scope of the PCI DSS assessment and part of the CDE unless such data is deleted or migrated/consolidated into the currently defined CDE.
• The entity retains documentation that shows how PCI DSS scope was confirmed and the results retained, for assessor review and/or for reference during the next annual PCI SCC scope confirmation activity.
To be a bit more specific, there are several PCI requirements that will be identified as “Not in Place” when “undiscovered” cardholder data exists outside the defined CDE on a mainframe. I will cite two of these requirements as examples, along with the risk associated with not knowing if and where all such data exists:
PCI Requirement 3.1.1.d: Verify that policies and procedures include at least one of the following: A programmatic process (automatic or manual) to remove, at least quarterly, stored cardholder data that exceeds requirements defined in the data retention policy. The risk associated with data leakage is that unknown cardholder data that leaks out of the confines of the known environment will be non-compliant with the PCI organization’s data retention policy.

PCI Requirement 9.10.2: Verify that cardholder data on electronic media is rendered unrecoverable via a secure wipe program in accordance with industry-accepted standards for secure deletion, or otherwise physically destroying the media (for example, degaussing). The risk associated with data leakage is that on mainframes, electronic media includes data repositories residing on both DASD and tape. Unknown cardholder data won’t be identified and therefore may not be rendered unrecoverable via a secure wipe program.
Although PCI data has been used repeatedly as examples in this discussion, this same thought process should also be applied to any confidential and sensitive information stored on the mainframe.
EE: OK, I see now how data leakage can translate into a mainframe audit compliance risk. So, are there any commercially supported tools available that can help assess and mitigate the risks associated with data leakage on mainframes?
Guski: Your question uncovers another problem with conducting a risk assessment for data leakage on mainframes. Although data leakage discovery tools are presently in use for distributed platforms, they’re only just beginning to become available for the mainframe.

An example of a comprehensive and commercially supported data leakage discovery and prevention tool that runs on the mainframe is DataSniff from XBridge Systems. This product provides the capability to search for and discover confidential and sensitive data so that appropriate protection can be applied. This protection may include deletion, migration to removable media, encryption or validation of the access controls for this data. This action will significantly reduce the data leakage risk to any organization.
DataSniff can also be used to support projects such as “encrypt all social security numbers.” The first step is to find all files that contain social security numbers, including those files associated with data leakage.
And after the encryption project is complete, running regular data vulnerability scans is important because social security numbers can creep back into the mainframe environment from external sources.
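To make the idea of a discovery scan concrete, here’s a minimal, hypothetical sketch of the kind of pattern scan such tools perform. This is not DataSniff code or its API; it’s a generic Python illustration that scans flat files for SSN-like strings and for card-number-like strings filtered with a Luhn check:

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # formatted SSN candidates
PAN_PATTERN = re.compile(r"\b\d{13,16}\b")           # candidate card numbers

def luhn_ok(digits: str) -> bool:
    """Luhn checksum used to weed out random digit strings."""
    total, alternate = 0, False
    for ch in reversed(digits):
        d = int(ch)
        if alternate:
            d *= 2
            if d > 9:
                d -= 9
        total += d
        alternate = not alternate
    return total % 10 == 0

def scan_file(path):
    """Yield (line_number, kind) for each suspected sensitive value."""
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            if SSN_PATTERN.search(line):
                yield lineno, "possible SSN"
            for candidate in PAN_PATTERN.findall(line):
                if luhn_ok(candidate):
                    yield lineno, "possible card number"

if __name__ == "__main__":
    for lineno, kind in scan_file("extract.txt"):    # hypothetical extract file name
        print(f"line {lineno}: {kind}")

A commercial discovery product would operate against mainframe data repositories (the article mentions both DASD and tape) and apply far more validation, but the principle is the same: enumerate repositories, pattern-match their contents and report candidates for protection or deletion.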
EE: Rich, in closing, can you summarize and possibly leave us with any additional suggestions for improving security for sensitive data for which we may be responsible?
Guski: Certainly. IT managers are responsible for the security of confidential and sensitive information that’s entrusted to their organization. Mainframe interaction with distributed environments and other factors, such as mergers and acquisitions, have added the new threat of data leakage to the existing responsibilities that IT management must address. The PCI standard, which is typical among recently emerged data security standards, implies that Data Leakage Prevention (DLP) must be addressed to prove compliance. Commercially supported discovery tools, such as DataSniff from XBridge Systems, have only recently become available for mainframes. IT organizations with mainframes should consider this tool in order to understand and significantly reduce the data leakage risk and to ensure audit compliance.
 

Thursday, May 16, 2013

Making mainframe technology hip again

http://searchdatacenter.techtarget.com/opinion/Making-mainframe-technology-hip-again?utm_medium=EM&asrc=EM_NLT_21716972&utm_campaign=20130516_Can%20IBM%20maintain%20the%20mainframe's%20momentum?_ewatkins&utm_source=NLT&track=NL-1811&ad=886656

Robert Crawford

The mainframe is a beautiful piece of technology, able to securely manage and meet the performance goals of disparate workloads for thousands of users in a relatively small, energy-efficient package. It maintains levels of compatibility and continuity that protect customers' investments in years of development and billions of lines of code. Despite rumors to the contrary, big iron continues to be the back-end system of record for large shops across a spectrum of industries. So, if technology isn't the problem, what's inhibiting the mainframe?

Bumps in the mainframe road

Every year, there seem to be fewer and fewer third-party software products specifically for the mainframe. Most off-the-shelf applications tend to be at a department level and favor smaller machines. As for system and utility software, the big fish continue to eat the small fish and popular tools are concentrated in fewer vendors' hands. This shrinking developer community tends to deflate the mainframe's ecosystem and limit customer choices.
Another issue casting doubt on the mainframe's future is the graying of technical support. The typical mainframe shop has a growing share of staff inching closer to retirement. As the oldsters go, they will take their skills and tribal knowledge of systems and applications with them. This alone may encourage some companies to drop the mainframe rather than risk depending on systems no one can operate.
A corollary to retiring technical support is the loss of mainframe skills. When I graduated in 1981, my fellow alumni knew how to program in PL/1 and COBOL. A few of us even knew our way around JCL. That isn't true anymore, as our universities crank out Java programmers familiar with Linux and Windows. Mainframe shops must spend time and money to train graduates before they are useful. Even worse, some highly technical skills, such as assembler coding and dump reading, get left out altogether, making the customer even more dependent on vendors for technical support.
Lastly, in my mind, the biggest threat to the mainframe's future is cost. Fairly or unfairly, the price of mainframe components makes it an easy target at budget meetings and the first victim when an IT department wants to cut expenses. The alternative -- managing expenses through resource throttling and careful systems management -- has drawbacks, as it leads to wasting hours watching CPU consumption instead of making enhancements.
Controlling costs also leads customers to build asymmetric configurations that are harder to maintain, more likely to fail and no longer aligned with IBM's best practices. IBM has several explanations for why the mainframe may "seem" more expensive, but when shareholders are restless and the CEO wants to cut expenses, "value" becomes less important.
And IBM isn't alone in this. Independent Software Vendors (ISVs) also drive expenses, with contracts based on processor capacity instead of actual usage. These types of contracts tend to balloon customers' software costs any time they upgrade.

The road to a brighter future?

The mainframe's shrinking ecosystem may be irreversible. In most cases, it makes sense to put departmental applications on distributed boxes. The smaller vendor base is explainable by the age of the platform and the fact that the operating system fills many of the ISV software gaps. The good news is that IBM offers the z Personal Development Tool (zPDT), which can run a mainframe image on a laptop and is especially valuable to vendors who can't afford to rent or own their own mainframe.
The aging of technical support is surmountable. IBM continues to reach out to colleges to provide mainframe training for interested students. Customers can ensure college hires have a chance to run the big machine as part of their introduction to IT. More importantly, these companies must also stress that the mainframe is still an important infrastructure, not something that will be retired at the first opportunity. Enterprises must also provide viable mainframe career paths that offer growth opportunities to anyone who sticks with it.
That leaves us with cost. IBM shows few signs of relinquishing its grip on the platform and remains the sole provider of processors and the major systems that make a mainframe worth running. While this is good for IBM's revenue stream now, it can make customers wary of entering or expanding the platform. In the long run, this reluctance will ultimately damage IBM's bottom line. Will IBM be able to make the right call when the time comes?

Mapping the right mainframe path

There are still a lot of things to look forward to despite these obstacles. IBM continues to invest in the platform; every generation of its mainframe processor gets faster, more compact, more fault tolerant and more hospitable for non-traditional mainframe workloads, such as Java. The operating system, z/OS, gets more sophisticated and easier to maintain with every release. The transaction processors and database management systems also remain on the cutting-edge of technology, especially as IBM finds ways to make customers' investments play well with modern systems.
The mainframe may not be the only game in town, but I'm convinced it's still one of the best and worth using in the future.
The opinions expressed in the above column belong solely to Robert Crawford and do not reflect those of his employer.
About the author:
Robert Crawford has been a systems programmer for 29 years. While specializing in CICS technical support, he has also worked with VSAM, DB2, IMS and other mainframe products. He has programmed in Assembler, Rexx, C, C++, PL/1 and COBOL. The latest phase in his career is as an operations architect responsible for establishing mainframe strategy and direction for a large insurance company. He works in south Texas, where he lives with his family.