Tuesday, December 30, 2014

Mainframe Futures: Reading the Tea Leaves

I’ve been getting a steady trickle of inquiries this year from our enterprise clients about the future of the mainframe. Most are more or less of the form: “I have a lot of stuff running on mainframes. Is this a viable platform for the next decade, or is IBM going to abandon it?” I think the answer is that the platform is secure, and in the majority of cases the large business-critical workloads currently on the mainframe probably should remain there. In the interests of transparency I’ve laid out my reasoning below so that you can see whether it applies to your own situation.
How Big is the Mainframe LOB?
It's hard to get exact figures for the mainframe contribution to the total revenues of IBM's STG (Systems & Technology Group), but the data IBM has shared shows that mainframe revenues seem to have recovered from the declines of previous quarters and at worst flattened. Because the business is inherently somewhat cyclical, I would expect the next generation of mainframes, rumored to be arriving next year, to give IBM a boost similar to the last major cycle, allowing it to show revenue growth next year.
My crude and conservative guesstimate for IBM's mainframe hardware and software revenue is somewhere around $3 - 4 billion, and based on the nearly constant litany of customer comments about pricing, it is very profitable. Adding in the service revenues that mainframes pull in, which are not accounted for in STG revenues, the overall mainframe business is probably in excess of $5 billion, and probably the most profitable portion of STG revenue, which also includes Power RISC servers, storage and networking. Financially-minded readers should note that 2014 STG revenues will also include most of the year's x86 server revenues, but as of October this line was transferred to Lenovo, reducing IBM's total revenues but almost certainly increasing its margins. So in terms of its underlying economics, IBM's mainframe business is inherently attractive, and my opinion is that IBM will not by choice abandon it for the foreseeable future and will continue a healthy investment in it, some of which is also leveraged by its Power RISC systems.
Customer Workloads and Intentions – The Dominant Variable
The question about the role of the mainframe then devolves to the underlying motivations of mainframe users - will they stay or will they migrate to nominally lower-cost platforms? I think the answer is a blended analysis against a rapidly changing technology and workload background. While many workloads have indeed migrated, primarily to RISC Unix and to a lesser extent to x86 Linux, much of the mainframe workload is still anchored to the mainframe by two underlying issues - software, and overall scalability and reliability. If you have what we can call, to simplify our analysis, a "legacy" workload dominated by COBOL, CICS, and database (mostly DB2, maybe some IMS), migration from the mainframe is difficult and fraught with project risk, and the inevitable changes to overall DR/HA architecture almost always push the end-state cost of the new environment much higher than proponents expected in the early stages. As for scale, the mainframe is still the highest-performing platform for OLTP, the undisputed king of batch, and the only platform that can gracefully handle mixed workloads - in short, all the things the mainframe has always done well.
At the same time, IBM has done an excellent job of enabling the mainframe as a platform for new workloads, notably Java and Linux, with special pricing for Linux/Java IFLs that places overall life-cycle costs much closer to x86 costs than ever before. While I remain a bit skeptical of these models until I can review them further, the overall growth of Linux workloads is impressive, and probably accounts for much of the growth in mainframe MIPS over the last few years. The growth seems to be concentrated in applications where access to mainframe-resident data is at the core of the application - environments where the reduced latency and high throughput obtained by Linux running in an LPAR on the same system as the database resources compensate for any cost differences. I expect that future mainframe product cycles will continue to push the mainframe as a hub for application services that depend on mainframe-resident data, even if the services are implemented in Linux/Java and other runtimes usually associated with distributed x86 systems.
Wrapping it Up
As I noted above, this train of analysis leads pretty directly to the conclusion that mainframes, while probably never breaking out of at best low single-digit growth, will not go away for the foreseeable future, and that unless you are willing to engage in radical, risky and expensive transformation, your current large mainframe workloads will be with you for the long term.
The Forrester Muse

Sunday, December 28, 2014

zEnterprise vs. Intel Server Farms

How many Intel x86 servers do you need to match the performance of a zEnterprise and at what cost for a given workload? That is the central question every IT manager has to answer.
It is a question that deserves some thought and analysis. Yet often IT managers jump to their decision based on a series of gut assumptions that on close analysis are wrong. And the resulting decision more often than not is for the Intel server, although an honest assessment of the data in many instances should point the other way. DancingDinosaur has periodically looked at comparative assessments done by IBM. You can find a previous one, lessons from Eagle studies, here.
The first assumption is that the Intel server is cheaper. But is it? IBM benchmarked a database workload on SQL Server running on Intel x86 and compared it to DB2 on z/OS. To support 23,000 users, the Intel system required 128 database cores on four HP servers. The hardware cost $0.34 million and the software cost $1.64 million, for a 3-year TCA of $1.98 million. The DB2 system required just 5 cores at a combined hardware/software 3-year TCA of $1.4 million.
What should have killed the Intel deal was the software cost, which has to be licensed based on the number of cores. Sure, the commodity hardware was cheap, but the cost of the database licensing drove up the Intel cost. Do IT managers wonder why they need so many Intel cores to support the same number of users they can support with far fewer z cores? Obviously many don’t.
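To make the licensing math concrete, here is a quick back-of-envelope sketch in Python. The dollar figures and core counts are the ones quoted above; the per-core breakdown is my own illustrative arithmetic, not vendor pricing.

```python
# Back-of-envelope version of the 3-year TCA comparison quoted above.
# Figures come from the post; the arithmetic is illustrative only.

intel_cores = 128
intel_hw = 0.34e6          # four HP servers
intel_sw = 1.64e6          # SQL Server, licensed per core
intel_tca = intel_hw + intel_sw

z_cores = 5
z_tca = 1.4e6              # combined hardware + software

print(f"Intel 3-yr TCA: ${intel_tca / 1e6:.2f}M "
      f"(software is {intel_sw / intel_tca:.0%} of the total)")
print(f"z/DB2 3-yr TCA: ${z_tca / 1e6:.2f}M on {z_cores} cores")
print(f"Intel cores per z core: {intel_cores / z_cores:.1f}")
```

The point the numbers make is that roughly four-fifths of the Intel configuration's cost is software licensed per core, which is exactly where core proliferation bites.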
Another area many IT managers overlook is I/O performance and its associated costs. This becomes particularly important as an organization deploys virtual machines.  Increasing the I/O demand on an Intel system uses more of the x86 core for I/O processing, effectively reducing the number of virtual machines that can be deployed per server and raising hardware costs.
The zEnterprise handles I/O differently. It provides 4-16 dedicated system assist processors for the offloading of I/O requests and an I/O subsystem bus speed of 8 GBps.
The z also does well with z/VM for Linux guest workloads. In this case IBM tested three OLTP database production workloads (4 server nodes per cluster), each supporting 6,000 trans/sec, on Oracle Enterprise Edition and Oracle Real Application Clusters (RAC) running on 12 HP DL580 servers (192 cores). This was compared to three Oracle RAC clusters of 4 nodes per cluster, with each node as a Linux guest under z/VM. The zEC12 had 27 IFLs. Here the Oracle HP system cost $13.2 million, about twice as much as the zEC12 at $5.7 million. Again, the biggest cost savings came from needing fewer Oracle licenses due to fewer cores.
The z also beats Intel servers when running mixed high- and low-priority workloads on the same box. In one example, IBM compared high-priority online banking transaction workloads with low-priority discretionary workloads. The workloads running across 3 Intel servers with 40 cores each (120 cores total) cost $13.7 million, compared to z/VM on a zEC12 running 32 IFLs, which cost $5.77 million (58% less).
Another comparison demonstrates that core proliferation between Intel and the z is the killer. One large workload test required sixteen 32-way HP Superdome application production/dev/test servers and eight 48-way HP Superdome database production/dev/test servers, for a total of 896 cores. The 5-year TCA came to $180 million. The comparable workload running on a zEC12 41-way production/dev/test system used 41 general-purpose processors (38,270 MIPS) with a 5-year TCA of $111 million.
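A quick sketch of the core-proliferation arithmetic in that Superdome example (the core counts and TCA figures are the ones quoted in the post; the ratios are my own illustration):

```python
# Core-proliferation arithmetic for the Superdome vs. zEC12 example above.

hp_cores, hp_tca = 896, 180e6     # 5-year TCA
z_cores, z_tca = 41, 111e6        # 5-year TCA, 38,270 MIPS

core_ratio = hp_cores / z_cores
savings = (hp_tca - z_tca) / hp_tca

print(f"HP configuration uses {core_ratio:.0f}x the processor cores")
print(f"5-year TCA savings on the zEC12: {savings:.0%}")
```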
When you look at the things a z can do to keep concurrent operations running that Intel cannot, you’d hope non-mainframe IT managers might start to worry. For example, the z handles core sparing transparently; Intel must bring the server down. The z handles microcode updates while running; Intel can update OS-level drivers but not firmware drivers. Similarly, the z handles memory and bus adapter replacements while running; Intel servers must be brought down to replace either.
Not sure what it will take for the current generation of IT managers to look beyond Intel. Maybe a new business class version of the zEC12 at a stunningly low price. You tell me.
BTW, are you planning to attend IBM Edge 2013 in Las Vegas, June 10-14? There will be much there to keep enterprise data center managers occupied. Overall, IBM Edge 2013 will offer over 140 storage sessions, over 50 PureSystems sessions, more than 50 client case studies, and sessions on big data and analytics along with a full cloud track. Look for me in the Social Media Lounge at the conference and in the sessions. You can follow me on Twitter for conference updates @Writer1225. I’ll be using hashtag #IBMEdge to post live Twitter comments from the conference.

6 Responses to “zEnterprise vs. Intel Server Farms”

  1. Luis Fernando Lopez Gonzalez Says:
    Some months ago, knowing that software licenses could be more expensive than hardware would have been a surprise for me.
    The TCA examples for the products you mentioned in the article are awesome and will hopefully make some IT managers think twice about what the best option is for their company.
  2. Infinity Systems Software Says:
    […] analyst Alan Radding published a positive blog post that highlights the technical and economic advantages of System z vs. Intel-based servers for […]
  3. Open to Open Systems Says:
    What about skills, and dependency on legacy technologies that are dying - who’s going to be around to maintain and convert old z/OS applications in the future? I have no doubt that well-established existing z/OS applications do a great job at a reasonable cost, but don’t kid yourself. No new systems or applications should be based on z/OS; it’s a step backwards. As long as I have any influence in the IT decisions made in my workplace, the tide will be firmly against z/OS as a platform for next-generation applications.
    • Not Your Grandpa's Mainframe Says:
      I sure hope you don’t work for a bank, credit card company or anywhere my personal assets and information are stored. It is quite foolish to dismiss the mainframe as an antiquated computing system that isn’t capable of being the most efficient, affordable, secure, reliable, available, scalable and flexible computing system for “next-generation” applications. You would actually be surprised how many legacy clients are now moving applications to Java and deploying new workloads on z/OS and Linux servers that run on the mainframe. There is a reason IBM is building and shipping more MIPS capacity and mainframe processors worldwide than ever before. I’d challenge you to spend a little time with someone who truly understands the full capability of today’s mainframe and you will start to see what I mean.
  4. System z Wins IBM Platform Financials Race | DancingDinosaur Says:
    […] has been telegraphing the arrival in 2013 of a new business-class version of the zEC12 here and here, a z114 equivalent for the zEC12.  If it arrives next quarter, it should give a kick to z […]
  5. Sethuraman R Says:
    Thanks for sharing this post with us…nice work…ROC Software

Friday, November 14, 2014

IBM to build two supercomputers for the U.S. Department of Energy

The Summit supercomputer will be installed at Oak Ridge National Laboratory and Sierra will be part of Lawrence Livermore National Laboratory. Both supercomputers are supposed to help the U.S. discover new ways to slow climate change, predict natural disasters, store nuclear waste and improve fuel efficiency.

IBM claimed that both supercomputers will be able to deliver in excess of 100 peak petaflops, which trumps the current reigning champions. Oak Ridge’s Titan supercomputer delivers 27 peak petaflops, and China’s Tianhe-2, currently the fastest supercomputer in the world, delivers 55 peak petaflops.
The supercomputers are based on IBM’s OpenPower technology, which is managed by the OpenPower Foundation. The OpenPower tech is part of IBM’s efforts to cater to the webscale crowd that needs custom architecture to handle the kind of heavy duty workloads necessary for big-data tasks.
In early October, IBM unveiled a new server that contains both a Power8 processor and Nvidia’s GPU accelerator that IBM wants to sell to the “Linux-scale out market,” said Brad McCredie, an IBM fellow and vice president with IBM’s systems and technology group.

Monday, November 3, 2014

Predicting the next decade of tech: From the cloud to disappearing computers and the rise of robots

Summary: Making short-term decisions about technology investment is relatively easy; trying to work out how IT will develop over the next decade is much harder.
For an industry run according to logic and rationality, at least outwardly, the tech world seems to have a surprising weakness for hype and the 'next big thing'.
Perhaps that's because, unlike — say — in sales or HR, where innovation is defined by new management strategies, tech investment is very product driven. Buying a new piece of hardware or software often carries the potential for a 'disruptive' breakthrough in productivity or some other essential business metric. Tech suppliers therefore have a vested interest in promoting their products as vigorously as possible: the level of spending on marketing and customer acquisition by some fast-growing tech companies would turn many consumer brands green with envy.
As a result, CIOs are tempted by an ever-changing array of tech buzzwords (cloud, wearables and the Internet of Things [IoT] are prominent in the recent crop) through which they must sift in order to find the concepts that are a good fit for their organisations, and that match their budgets, timescales and appetite for risk. Short-term decisions are relatively straightforward, but the further you look ahead, the harder it becomes to predict the winners.
Tech innovation in a one-to-three year timeframe
Despite all the temptations, the technologies that CIOs are looking at deploying in the near future are relatively uncontroversial — pretty safe bets, in fact. According to TechRepublic's own research, top CIO investment priorities over the next three years include security, mobile, big data and cloud. Fashionable technologies like 3D printing and wearables find themselves at the bottom of the list.
A separate survey from Deloitte reported similar findings: many of the technologies that CIOs are piloting and planning to implement in the near future are ones that have been around for quite some time — business analytics, mobile apps, social media and big data tools, for example. Augmented reality and gamification were seen as low-priority technologies.
This reflects the priorities of most CIOs, who tend to focus on reliability over disruption: in TechRepublic's research, 'protecting/securing networks and data' trumps 'changing business requirements' for understandably risk-wary tech chiefs.
Another major factor here is money: few CIOs have a big budget for bets on blue-skies innovation projects, even if they wanted to. (And many no doubt remember the excesses of the dotcom years, and are keen to avoid making that mistake again.)
According to the research by Deloitte, less than 10 percent of the tech budget is ring-fenced for technology innovation (and CIOs that do spend more on innovation tend to be in smaller, less conservative, companies). There's another complication in that CIOs increasingly don't control the budget dedicated to innovation, as this is handed over to other business units (such as marketing or digital) that are considered to have a more entrepreneurial outlook.
CIOs tend to blame their boss's conservative attitude to risk as the biggest constraint in making riskier IT investments for innovation and growth. Although CIOs claim to be willing to take risks with IT investments, this attitude does not appear to match up with their current project portfolios.
Another part of the problem is that it's very hard to measure the return on some of these technologies. Managers have been used to measuring the benefits of new technologies using a standard return-on-investment measure that tracks some very obvious costs — headcount or spending on new hardware, for example. But defining the return on a social media project or an IoT trial is much more slippery.
Tech investment: A medium-term view
If CIO investment plans remain conservative and hobbled by a limited budget in the short term, you have to look a little further out to see where the next big thing in tech might come from.
One place to look is in what's probably the best-known set of predictions about the future of IT: Gartner's Hype Cycle for Emerging Technologies, which tries to assess the potential of new technologies while taking into account the expectations surrounding them.
The chart grades technologies not only by how far they are from mainstream adoption, but also on the level of hype surrounding them, and as such it demonstrates what the analysts argue is a fundamental truth: that we can't help getting excited about new technology, but that we also rapidly get turned off when we realize how hard it can be to deploy successfully. The exotically-named Peak of Inflated Expectations is commonly followed by the Trough of Disillusionment, before technologies finally make it up the Slope of Enlightenment to the Plateau of Productivity.
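For readers who like their models explicit, the five phases can be sketched as a simple ordered type. The phase names are Gartner's; the code itself is purely an illustrative toy.

```python
from enum import IntEnum

class HypePhase(IntEnum):
    """Gartner's five hype-cycle phases, in chart order."""
    TECHNOLOGY_TRIGGER = 1
    PEAK_OF_INFLATED_EXPECTATIONS = 2
    TROUGH_OF_DISILLUSIONMENT = 3
    SLOPE_OF_ENLIGHTENMENT = 4
    PLATEAU_OF_PRODUCTIVITY = 5

def next_phase(phase: HypePhase) -> HypePhase:
    """Advance one phase; a technology on the plateau stays put."""
    return HypePhase(min(phase + 1, HypePhase.PLATEAU_OF_PRODUCTIVITY))

# Cloud, entering the trough in the 2014 chart, heads next for the slope:
print(next_phase(HypePhase.TROUGH_OF_DISILLUSIONMENT).name)
```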
"It was a pattern we were seeing with pretty much all technologies — that up-and-down of expectations, disillusionment and eventual productivity," says Jackie Fenn, vice-president and Gartner fellow, who has been working on the project since the first hype cycle was published 20 years ago, which she says is an example of the human reaction to any novelty.
"It's not really about the technologies themselves, it's about how we respond to anything new. You see it with management trends, you see it with projects. I've had people tell me it applies to their personal lives — that pattern of the initial wave of enthusiasm, then the realisation that this is much harder than we thought, and then eventually coming to terms with what it takes to make something work."
The 2014 Gartner Hype Cycle for Emerging Technologies. Image: Gartner
According to Gartner's 2014 list, the technologies expected to reach the Plateau of Productivity (where they become widely adopted) within the next two years include speech recognition and in-memory analytics.
Technologies that might take two to five years until mainstream adoption include 3D scanners, NFC and cloud computing. Cloud is currently entering Gartner's trough of disillusionment, where early enthusiasm is overtaken by the grim reality of making this stuff work: "there are many signs of fatigue, rampant cloudwashing and disillusionment (for example, highly visible failures)," Gartner notes.
When you look at a 5-10-year horizon, the predictions include virtual reality, cryptocurrencies and wearable user interfaces.
Working out when the technologies will make the grade, and thus how CIOs should time their investments, seems to be the biggest challenge. Several of the technologies on Gartner's first-ever hype curve back in 1995 — including speech recognition and virtual reality — are still on the 2014 hype curve without making it to primetime yet.
The original 1995 Hype Cycle for Emerging Technologies. Image: Gartner
These sorts of user interface technologies have taken a long time to mature, says Fenn. For example, voice recognition started to appear in very structured call centre applications, while the latest incarnation is something like Siri — "but it's still not a completely mainstream interface," she says.
Nearly all technologies go through the same rollercoaster ride, because our response to new concepts remains the same, says Fenn. "It's an innate psychological reaction — we get excited when there's something new. Partly it's the wiring of our brains that attracts us — we want to keep going around the first part of the cycle where new technologies are interesting and engaging; the second half tends to be the hard work, so it's easier to get distracted."
But even if they can't escape the hype cycle, CIOs can use concepts like this to manage their own impulses: if a company's investment strategy means it's consistently adopting new technologies when they are most hyped (remember a few years back when every CEO had to blog?) then it may be time to reassess, even if the CIO peer-pressure makes it difficult.
Says Fenn: "There is that pressure, that if you're not doing it you just don't get it — and it's a very real pressure. Look at where [new technology] adds value and if it really doesn't, then sometimes it's fine to be a later adopter and let others learn the hard lessons if it's something that's really not critical to you."
The trick, she says, is not to force-fit innovation, but to continually experiment and not always expect to be right.
Looking further out, the technologies labelled 'more than 10 years' to mainstream adoption on Gartner's hype cycle are the rather sci-fi-inflected ones: holographic displays, quantum computing and human augmentation. As such, it's a surprisingly entertaining romp through the relatively near future of technology, from the rather mundane to the completely exotic. "Employers will need to weigh the value of human augmentation against the growing capabilities of robot workers, particularly as robots may involve fewer ethical and legal minefields than augmentation," notes Gartner.
Where the futurists roam
Beyond the 10-year horizon, you're very much into the realm where the tech futurists roam.
Steve Brown, a futurist at chip-maker Intel, argues that three mega-trends will shape the future of computing over the next decade. "They are really simple — it's small, big and natural," he says.
'Small' is the consequence of Moore's Law, which will continue the trend towards small, low-power devices, making the rise of wearables and the IoT more likely. 'Big' refers to the ongoing growth in raw computing power, while 'natural' is the process by which everyday objects are imbued with some level of computing power.
"Computing was a destination: you had to go somewhere to compute — a room that had a giant whirring computer in it that you worshipped, and you were lucky to get in there. Then you had the era where you could carry computing with you," says Brown.
"The next era is where the computing just blends into the world around us, and once you can do that, and instrument the world, you can essentially make everything smart — you can turn anything into a computer. Once you do that, profoundly interesting things happen," argues Brown.
With this level of computing power comes a new set of problems for executives, says Brown. The challenge for CIOs and enterprise architects is that once they can make everything smart, what do they want to use it for? "In the future you have all these big philosophical questions that you have to answer before you make a deployment," he says.
Brown envisages a world of ubiquitous processing power, where robots are able to see and understand the world around them.
"Autonomous machines are going to change everything," he claims. "The challenge for enterprise is how humans will work alongside machines — whether that's a physical machine or an algorithm — and what's the best way to take a task and split it into the innately human piece and the bit that can be optimized in some way by being automated."
The pace of technological development is accelerating: where we used to have a decade to make these decisions, these things are going to hit us faster and faster, argues Brown. All of which means we need to make better decisions about how to use new technology — and will face harder questions about privacy and security.
"If we use this technology, will it make us better humans? Which means we all have to decide ahead of time what do we consider to be better humans? At the enterprise level, what do we stand for? How do we want to do business?".
Not just about the hardware and software
For many organizations there's a big stumbling block in the way of this bright future — their own staff and their ways of working. Figuring out what to invest in may be a lot easier than persuading staff, and whole organisations, to change how they operate.
"What we really need to figure out is the relationship between humans and technology, because right now humans get technology massively wrong," says Dave Coplin, chief envisioning officer for Microsoft (a firmly tongue-in-cheek job title, he assures me).
Coplin argues that most of us tend to use new technology to do things the way we've always done them, when the point of new technology is to enable us to do things fundamentally differently. The concept of productivity is a classic example: "We've got to pick apart what productivity means. Unfortunately most people think process is productivity — the better I can do the processes, the more productive I am. That leads us to focus on the wrong point, because actually productivity is about leading to better outcomes." Three-quarters of workers think a productive day in the office is clearing their inbox, he notes.
Developing a better relationship with technology is necessary because of the huge changes ahead, argues Coplin: "What happens when technology starts to disappear into the background; what happens when every surface has the capability to have contextual information displayed on it based on what's happening around it, and who is looking at it? This is the kind of world we're heading into — a world of predictive data that will throw up all sorts of ethical issues. If we don't get the humans ready for that change we'll never be able to make the most of it."
Nicola Millard, a futurologist at telecoms giant BT, echoes these ideas, arguing that CIOs have to consider not just changes to the technology ahead of them, but also changes to the workers: a longer working life requires workplace technologies that appeal to new recruits as well as staff into their 70s and older. It also means rethinking the workplace: "The open-plan office is a distraction machine," she says — but can you be innovative in a grey cubicle? Workers using tablets might prefer 'perch points' to desks, those using gesture control may need more space. Even the role of the manager itself may change — becoming less about traditional command and control, and more about being a 'party host', finding the right mix of skills to get the job done.
In the longer term, not only will the technology change profoundly, but the workers and managers themselves will also need to upgrade their thinking.
Further reading

Tuesday, October 28, 2014

Happy Birthday: Mainframe Security Celebrates 50 Years

•   April 8, 2014


You Use Mainframes Every Day and Might Not Know It

You may not realize it, but mainframes play a large part in your everyday activities. Did you visit your ATM? Make airline reservations? Swipe your credit card? Then you “touched” a mainframe today. Did you know that 80 percent of the world’s corporate data resides on or originates from mainframes?
Why? For one thing, mainframes are still the most trusted platform, with an EAL5+ security evaluation. Companies rely on mainframe security to provide industrial-strength protection. Mainframes are still the platform of choice for processing mission-critical applications and hosting essential corporate information for banks, health care, insurance, retail, government and other industries, largely because of mainframe security.

Furthermore, mainframes have become the true mother of reinvention: They have evolved and reinvented themselves with new technology, supporting cloud, mobile, big data and social innovation. Mainframes have transformed from isolated glass house systems to fully connected servers for Internet web applications, data analytics and private clouds. Security has also evolved to keep pace with innovation. It has been an interesting journey.

Mainframe in the Beginning

At first, System/360 security was very simple: To protect sensitive information, you created data set passwords specified on batch jobs with Job Control Language (JCL). While it was easy to share the password to allow data access, it was far more difficult to deny someone access later, which required changing the password and notifying all the other valid users.
The first step in the security journey was to establish user identification (user IDs) and authentication. Access control lists indicated who could access the data and how. This security information needed to be administered by authorized security managers in secure repositories. In 1976, IBM announced IBM Resource Access Control Facility for mainframes, with capabilities including:
  • User groups and privileged roles, such as auditors, operators and special administrators.
  • Resource protection for data sets, files, tapes, programs, applications and general resources.
  • Auditing of security events, including user log-on, data access and privileged operations.
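The core ideas here - user IDs, groups, per-resource access lists, and an audit trail - can be sketched in a few lines of Python. To be clear, the names and structures below are illustrative toys, not RACF's actual interfaces.

```python
# Toy model of the access-control scheme described above: users belong to
# groups, resources carry access lists, and every decision is audited.

GROUPS = {"AUDITOR": {"alice"}, "PAYROLL": {"bob", "carol"}}
ACL = {"PAYROLL.DATASET": {"PAYROLL": "UPDATE", "AUDITOR": "READ"}}
LEVELS = {"NONE": 0, "READ": 1, "UPDATE": 2}
audit_log = []

def check_access(user, resource, requested):
    """Grant the request if any of the user's groups holds a sufficient
    permission on the resource; record the decision either way."""
    granted = max(
        (LEVELS[perm] for group, perm in ACL.get(resource, {}).items()
         if user in GROUPS.get(group, set())),
        default=LEVELS["NONE"],
    ) >= LEVELS[requested]
    audit_log.append((user, resource, requested, granted))
    return granted

print(check_access("bob", "PAYROLL.DATASET", "UPDATE"))    # True
print(check_access("alice", "PAYROLL.DATASET", "UPDATE"))  # False: auditors read only
```

Note how revoking one user's access means editing a group or an access list, not changing a shared password and notifying everyone else - exactly the improvement over data set passwords described above.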

Mainframe Security and Applications Evolved

As the mainframe evolved to support new applications beyond batch processing, security evolved along with these applications:
  • IBM TSO (Time Sharing Option) allowed multiple interactive real-time users.
  • IBM DB2 offered field-level security controls that wouldn’t impede performance.
  • IBM IMS and CICS protected transaction applications.
  • IBM Security AppScan identified application vulnerabilities and generated reports with intelligent fix recommendations to ease remediation.

Communication Security Expanded to Internet and Mobile Access

Mainframes began to communicate outside their enterprises and across public networks, which required new encryption protocols and new security capabilities, including:
  • User directories that uniquely identified users across enterprises and domains.
  • Trusted authentication protocols that utilized certificates instead of passwords.
  • Secure communication protocols with distributed untrusted systems and mobile users.

Early “Cloud” Capabilities

Many people do not realize that the mainframe offered virtual machine capabilities long before today’s cloud options were available. Mainframes have provided a number of virtualization options over time:
  • Secure hypervisors that could run software virtual machines.
  • Logical partitions (LPARs) that run system images with hardware-enforced isolation.
  • Most recently, blade servers that run systems under the covers of the latest mainframes.

Growth of Database to Big Data Analytics

Mainframes provide robust information security, so it makes sense that mainframes have grown over time to host data warehouses, big data and data analytics. Mainframe data security has been enhanced with IBM Security zSecure and IBM InfoSphere Guardium security solutions. Big data by nature is enterprise-wide, so many other data sources connect with the mainframe. Guardium’s ubiquitous support for a wide variety of platforms and data sources ensures that any potential threats from within or outside the platform are detected, blocked and reported in virtually real time. InfoSphere Guardium Data Encryption for DB2 and IMS Databases provides additional protection of data at rest and in motion over communications at the column, row and segment levels. Security encryption key management solutions protect those keys from disclosure.

Security Intelligence and Compliance

As mainframes transformed, they were exposed to new threats. The overall volume of mainframe and enterprise-wide security events that requires analysis is staggering. Mainframes have new capabilities to obtain actionable insight with security intelligence using zSecure and QRadar SIEM to automate threat analysis, create alerts, monitor status and respond.
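The kind of automated threat analysis and alerting described above can be illustrated with a toy event-correlation rule in Python: flag any user ID that accumulates too many failed logons inside a sliding time window. The event format, threshold, and user IDs are all hypothetical; real SIEM products such as QRadar apply far richer correlation rules.

```python
from collections import deque
from datetime import datetime, timedelta

# Toy security-event correlation: alert when one user ID accumulates
# THRESHOLD failed logons within a sliding WINDOW of time.
WINDOW = timedelta(minutes=5)
THRESHOLD = 3

def correlate(events):
    """events: iterable of (timestamp, user, outcome) tuples, in time order.
    Returns the set of user IDs that tripped the failed-logon rule."""
    recent = {}        # user -> deque of recent failure timestamps
    alerts = set()
    for ts, user, outcome in events:
        if outcome != "FAIL":
            continue
        q = recent.setdefault(user, deque())
        q.append(ts)
        while q and ts - q[0] > WINDOW:   # drop failures outside the window
            q.popleft()
        if len(q) >= THRESHOLD:
            alerts.add(user)
    return alerts

t0 = datetime(2014, 1, 1, 9, 0)
events = [
    (t0, "IBMUSER", "FAIL"),
    (t0 + timedelta(minutes=1), "IBMUSER", "FAIL"),
    (t0 + timedelta(minutes=2), "IBMUSER", "FAIL"),
    (t0 + timedelta(minutes=2), "OPER1", "FAIL"),
]
print(correlate(events))  # {'IBMUSER'}
```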
As the occurrence of big data breaches grew, new security standards and compliance regulations have been adopted to help protect user payment card information, sensitive financial information, medical health care records and other vulnerable data. These regulations require privileged user monitoring, vigilant audit reporting, data encryption and other security controls that help safeguard information. zSecure offers new compliance framework reporting to demonstrate governance and compliance.

Continued Transformation

Over the past 50 years, the mainframe has evolved from a siloed system to supporting databases, applications, networking, virtual machines, Internet, cloud, mobile, big data and business analytics. Mainframe security has transformed to secure these new capabilities and help customers create the ultimate security platform for their mission-critical workloads. Mainframe security has stood the test of time for 50 years and is still going strong.

Tuesday, July 29, 2014

System z Takes BackOffice Role in IBM-Apple Deal

I didn’t have to cut short my vacation and race back last week to cover the IBM-Apple agreement. Yes, it’s a big deal, but as far as System z shops go it won’t have much impact on their data center operations until late this year or 2015 when new mobile enterprise applications apparently will begin to roll out.
The deal, announced last Tuesday, promises “a new class of made-for-business apps targeting specific industry issues or opportunities in retail, healthcare, banking, travel and transportation, telecommunications, and insurance among others,” according to IBM. The mainframe’s role will continue to be what it has been for decades, the backoffice processing workhorse. IBM is not porting iOS to the z or Power or i or any enterprise platform.
Rather, the z will handle transaction processing, security, and data management as it always has. With this deal, however, analytics appears to be assuming a larger role. IBM’s big data and analytics capability is one of the jewels it is bringing to the party, to be fused with Apple’s legendary consumer experience. IBM expects this combination—big data analytics and consumer experience—to produce apps that can transform specific aspects of how businesses and employees work using iPhone and iPad devices and ultimately, as IBM puts it, enable companies to achieve new levels of efficiency, effectiveness and customer satisfaction—faster and easier than ever before.
In case you missed the point, this deal, or alliance as IBM seems to prefer, is about software and services. If any hardware gets sold as a result, it will be iPhones and iPads. Of course, IBM’s MobileFirst constellation of products and services stand to gain. Mainframe shops have been reporting a steady uptick in transactions originating from mobile devices for several years. This deal won’t slow that trend and might even accelerate it. The IBM-Apple alliance also should streamline and simplify working with and managing Apple’s mobile devices on an enterprise-wide basis.

According to IBM its MobileFirst Platform for iOS will deliver the services required for an end-to-end enterprise capability, from analytics, workflow and cloud storage to enterprise-scale device management, security and integration. Enhanced mobile management includes a private app catalog, data and transaction security services, and a productivity suite for all IBM MobileFirst for iOS offerings. In addition to on premise software solutions, all these services will be available on Bluemix—IBM’s development platform available through the IBM Cloud Marketplace.
One hope from this deal is that IBM will learn from Apple how to design user-friendly software and apply those lessons to the software it subsequently develops for the z and Power Systems. It would be interesting to see what Apple software designers might do to simplify using CICS.
Given the increasing acceptance of BYOD when it comes to mobile, data centers will still have to cope with the proliferation of operating systems and devices in the mobile sphere. Nobody is predicting that Android, Amazon, Google, or Microsoft will be exiting the mobile arena as a result, at least not anytime soon.
Finally, a lot of commentators weighed in on who wins or loses in the mobile market. In terms of IBM’s primary enterprise IT competitors Oracle offers the Oracle Mobile Platform. This includes mobile versions of Siebel CRM, JD Edwards, PeopleSoft, and a few more. HP offers mobile app development and testing and a set of mobile application services that include planning, architecture, design, build, integration, and testing.
But if you are thinking in terms of enterprise platform winners and losers, IBM is the clear winner; the relationship with Apple is an IBM exclusive partnership. No matter how good HP, Oracle, or any of IBM’s other enterprise rivals might be at mobile computing, without the tight Apple connection they are at a distinct disadvantage. And that’s before you even consider Bluemix, SoftLayer, MobileFirst, and IBM’s other mobile assets.


http://enterprisesystemsmedia.com/article/system-z-takes-backoffice-role-in-ibm-apple-deal

Sunday, June 22, 2014

The Luckiest Man in the World


The 2014 Master the Mainframe World Championship winner conceived his entry right before the competition started

May 21, 2014
As part of its Mainframe50 celebration, IBM named Yong-Siang Shih of National Taiwan University the winner of the first Master the Mainframe World Championship at the April 8 event in New York. The 2009 winner of the Taiwan Master the Mainframe competition bested 42 other competitors from 23 countries to take the title.

Previous participants who had demonstrated superior programming skills on the mainframe were invited to the competition. While participating, the students sharpened their programming skills, worked with advanced development tools and learned how the mainframe platform supports cloud, big data and analytics, mobile and security initiatives. In March, competitors were given an already-built application to refine and improve upon, and were tasked with constructing an application that a real-world business would use.

On April 6, the competitors were brought to IBM in Poughkeepsie, N.Y., where they presented their results to a panel of judges. Scores were collected and tallied, and six finalists were named. On April 7, those six presented their projects to another set of judges. The top three winners were announced the next day during the Mainframe50 live event.

The top winners for the 2014 Master the Mainframe World Championship are:

First Place: Yong-Siang Shih of National Taiwan University, Taiwan
Second Place: Rijnard van Tonder of Stellenbosch University, South Africa
Third Place: Philipp Egli of University of Brighton, United Kingdom
Fourth Place: Mugdha Kadam of University of South Florida, United States
Fifth Place: Shahini Sengupta of RCC Institute of Information Technology, India
Sixth Place: Aaron Call Barreiro of Universitat Politècnica de Catalunya, Spain

“One of the most exciting parts of this competition was getting to meet these bright students in person, and getting to know them on a personal level,” says Troy Crutcher, Master the Mainframe project manager, IBM Academic Initiative, System z.

“One of the main goals of the IBM Academic Initiative Master the Mainframe contest is helping these students gain enterprise systems skills that they will be able to use well into the future. Employers are continually looking for these skills,” he explains. “This contest is a way to get them excited and on the System z platform in a fun and educational way.”

Competition Brings Out Confidence


As a student at National Chiao Tung University, World Championship winner Shih got involved in Master the Mainframe in the hopes of winning an external hard drive, the 2009 prize for winning the second stage of the three-part competition. Despite this win, he said he was surprised to be invited to the World Championship. However, he was excited to have the opportunity to continue with Master the Mainframe and visit the U.S. for the first time.

“The most rewarding part of this experience was that I got to meet so many people and talked a lot” with others who have similar interests, Shih says. “My favorite memory was presenting my work; I enjoyed the demonstration and I got some very nice feedback. I also felt more confident after this competition.”

Shih attributes his win to luck based on timing and circumstances before the competition. In addition to having technical skills—including having recently taken a course about cloud computing—and participating in the National Taiwan University English Debate Society, he came up with the idea of creating a mobile application for debit cards shortly before the competition. In his application, a card is enabled to make an online purchase and disabled immediately afterward so no other charges can be incurred online. That way, if any information gets in the hands of the wrong person, he or she cannot make an unauthorized charge.
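The enable-then-disable card logic described above can be modeled in a short Python sketch. The class and method names are hypothetical illustrations of the idea, not Shih's actual implementation: the card is locked by default, unlocked for a single purchase, and re-locked the moment a charge succeeds.

```python
# Sketch of a one-shot debit card: disabled by default, enabled for a
# single online purchase, and automatically disabled after one charge,
# so leaked card details cannot be reused.
class OneShotCard:
    def __init__(self):
        self.enabled = False

    def enable_for_purchase(self):
        self.enabled = True

    def charge(self, amount):
        if not self.enabled:
            return False            # stolen details are useless while locked
        self.enabled = False        # re-lock immediately after one charge
        return True

card = OneShotCard()
card.enable_for_purchase()
print(card.charge(25.00))   # True  - the authorized purchase succeeds
print(card.charge(25.00))   # False - any follow-on charge is rejected
```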

“If the third stage were not about an application, I might not have been able to think of another bright idea,” Shih notes. “Everything worked so perfectly together that this itself is incredible.”

Shih, in his first year of graduate school, has professional experience with an internship on IBM’s DataPower quality assurance team, mainly developing a test framework. After college, he said he hopes to work as a software engineer.

Students Benefit From Experiences

Although no concrete plans are in place for another World Championship, Crutcher says there are talks of such an event. He heard positive feedback, and the students had an amazing time in New York.

“Not only did the participants get to compete with some of the best minds around the world, they got to explore the city,” he says. “A few of them had never even been on a plane before, let alone left their home country. This was a huge event that they will continue to benefit from in the future.”

The Master the Mainframe competition is open to high school and university students who attend schools involved in the Academic Initiative. To learn more about the competition, visit the Master the Mainframe page here. Read Shih’s first-person account of the competition and his application in his blog here.

Valerie Dennis is site editor of Destinationz.org.
http://www.destinationz.org/Academia/Articles/The-Luckiest-Man-in-the-World.aspx