Sunday, September 29, 2013

What's Next for GXS

We bid for this company over ten years ago and Francisco Partners beat us because they overpaid. Great for GE, but not for the limited partners that invested with Francisco. You can always overpay and promise stupid returns of 30%, then deliver less than 4% and say it was market conditions, when in fact it was because you really had no idea what the business did or how the market operated. Aivars Lode Avantce

What's Next for GXS





Reuters' recent commentary about GXS and its owner Francisco Partners LP brings to light something everyone in the supply chain business knows but conveniently shoves out of sight. Based on Francisco Partners' intent to spin off the company, coupled with GXS' abysmal performance relative to profitability, the giant provider of EDI solutions is in big trouble. As Reuters reports, "The company has warned that it may continue to incur losses due to its debt and cannot guarantee that it will report a net profit in the future."
So what are its options? According to its financial reports, GXS is servicing debt of somewhere over $750 million. That's down (but not by much) from the initial $800 million that Francisco Partners gathered from its investors back in 2002, when General Electric decided to divest itself of the VAN business. Jack Welch is a pretty savvy guy, and since he left GE in 2001, I imagine one of his parting actions was to get that divestiture underway.
Fast forward to 2012, when the 10-year term of Francisco Partners' fund expired and it became clear to Francisco that the investment was not turning out the way it had intended. It had to get extensions from its investors and even stopped charging management fees.
All this is not good news for the supply chain as a whole, because so much of the data traffic that supports commerce passes through GXS at some point. This is, of course, no surprise to GXS as it takes aim at additional revenue sources by considering subscription fees for every cross-connect, or what it describes as "daisy-chaining". No doubt this would increase its income, but it may also hasten the search for other methods and avenues to transact electronic commerce.

What's a company to do when it has one sector of its business that seems to be growing and other sectors that may be worth something but are not gaining ground? Since GXS indicates that its managed services revenue grew just over 23% from 2009 through 2012, it may do well to make managed services its new core business and sell off its VAN business. IBM may be interested in consolidating that portion of the industry, or a company like TrueCommerce or SPS Commerce that relies on GXS' transfer connections could be a likely purchaser.

Either way, don't expect to see GXS' finances turn around over the long haul. The managed services business may be growing for now, but its version of managed services relies on single-installation technology that doesn't leverage the shared services found in more up-to-date cloud-based offerings. And it's only a matter of time until converging factors put an end to the installed software business; even Microsoft has learned that lesson.


So, since the investors seem to be tired of the continuing losses with no turn-around in sight, the only available option is to offer an IPO for GXS. Really? I know the IPO market has been making a bit of a recovery of late, but seriously, will the price per share find any kind of support when the company's financial prospects admittedly show negative trends for the foreseeable future?

Read more: http://www.ec-bp.com/index.php/articles/editors-blog/9993-what-s-next-for-gxs

Disk-pushers, get reel: Even GOOGLE relies on tape



What was old is new again: a tape storage revival. Aivars Lode Avantce

Disk-pushers, get reel: Even GOOGLE relies on tape
Prepare to be beaten by your old, cheap rival

By Chris Mellor

Comment: Tape has spent some time on the ropes, but now it's back in the ring. After five or more years of onslaught from pro-disk fanatics drunk on disk deduplication technologies, reality has struck home. Tape is cheaper than disk*. Tape is more reliable than disk and, the killer, tape's storage capacity can go on increasing for years.
In fact, IBM is currently preparing to demonstrate a near-40X improvement on LTO-6 capacity, while the rate of disk capacity increase is much slower.
Tape is the only storage medium that can confidently claim it will keep up with the growing volumes of transactions, images, music, compliance records, social media conversations and machine-to-machine messages being thrown at our IT systems. Sure, store data on flash and disk when it's hot, but bleed it off to tape when it's old and cold and can't be thrown away.
Tape, once the backup medium, is now reverting to an archive role. At the same time, the amount of data being archived is shooting up as data-generation rates increase.
There used to be more than a dozen tape and spool formats, and various methods of writing data to tape, but now there are just three main ones: from IBM, LTO and Oracle, with a fourth, HP's DAT, which is fading.
IBM's proprietary TS1140 format stores 4TB of raw data on a cartridge, and Big Blue also makes a range of tape libraries to use the stuff, including its high-end TS3500 with 15,000 slots for cartridges, robots to pick the cartridges and slip them into drives, and a total compressed capacity in excess of 2.7 exabytes. Mainframe use of tape gives IBM a near-captive market, although Oracle, with its acquired Sun/StorageTek technology, plays a strong role here too.
Oracle's is the second surviving proprietary tape format. Its T10000d holds 8.5TB of raw data, and, like IBM, Oracle makes a range of libraries with its StreamLine 8500 at the high end. This holds 10,000+ tapes in a single system and more than 100,000 in 10 linked systems. Its most famous customer for this is now Google.
LTO, the Linear Tape Open consortium, was an invention of HP, IBM and Seagate, with the first sale by IBM in August 2000.
The idea was to create an open tape format for Windows and Unix servers with media and drive products from and interchangeable between the three consortium members. The combination of this with good technology decimated the other, proprietary tape formats for those servers, including DLT, Super-AIT, VXA and many others.
One after the other they all folded, leaving LTO supreme. Quantum joined the LTO consortium by buying Certance, Seagate's former tape arm, in 2005. Nowadays only IBM and HP make drives, with Quantum selling LTO drives but not manufacturing them.
IBM, Oracle, HP and Quantum are four of the main tape library vendors, with SpectraLogic a fifth. Talking of a tape resurgence, Spectra sold 550PB of tape storage in the second half of 2012.
Tape is a sequential access medium, but a virtual file:folder access system for it – called LTFS (Linear Tape File System) – has been created by IBM. This provides a drag-and-drop, Windows file:folder-like access method for reading and writing files on tape.
It means ordinary users can, in theory, write files to tape and read files on tape without going through a backup software package, each of which has its own interface. This promises to revolutionise how tape is used, democratising, so to speak, its access.
That's the current tape technology state of play. But what's coming?
Each of the three main formats above has a roadmap out to one or two future generations which generally increase both capacity and data transfer speed. For example, LTO suppliers are currently shipping LTO-6, the latest generation. Sitting in the wings are LTO-7 and -8.
These increase both capacity and speed. LTO-7 provides 6.4TB of raw capacity (16TB compressed at a 2.5:1 ratio) and a 315MB/sec raw data transfer speed, which compares to LTO-6's 210MB/sec. LTO-8 should provide 12.8TB of capacity and a 427MB/sec transfer speed, both for raw data.
We might expect a new generation to become available 30 months or so after the preceding one; that's generally how it works. Historically, each LTO generation's drive can read and write the preceding generation's tapes and read tapes from the generation before that, easing migration to the latest format.
We predict future LTO generations such as LTO-9 and LTO-10, each doubling transfer speed and capacity over its predecessor, though the LTO consortium is not committing to these format futures.
Oracle has a similar roadmap for its T10000 format. A coming T10000e could offer 12-20TB capacity and transfer speeds in the 400-600MB/sec range, though 300-350MB/sec may be more realistic. We're sure Oracle is more precise in its numbers when talking to its tape customers.
IBM? The same pattern, we're sure, although Big Blue is coy about publicising it. El Reg's storage desk also predicts that TS1150 and TS1170 formats could follow on from the existing TS1140, again with a general doubling of both capacity and transfer speed.
IBM has demonstrated a 35TB raw capacity tape. It is now preparing a demonstration of a tape holding 125TB, using a refinement of today's barium ferrite tape media rather than a totally new recording technology. Assuming that works, a TS3500 library using such tapes could hold 84 exabytes of data, a fantastically humungous amount. Nothing else will come close on cost per GB of storage, nothing.
Assume LTO keeps on doubling its format every 30 months or so and we could see a 102TB capacity LTO-11 in 12 to 13 years, 2025-2026.
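To make that roadmap arithmetic concrete, here is a minimal sketch, ours rather than the LTO consortium's or the article's, that extrapolates from the quoted LTO-7 figures under the stated assumptions: raw capacity doubling each generation, the 2.5:1 compression ratio holding, and a new generation arriving roughly every 30 months after LTO-6. The generation names beyond LTO-8 and the ship years are illustrative assumptions.

```python
# A rough sketch (ours, not the LTO consortium's) of the extrapolation described
# above: start from the quoted LTO-7 figure of 6.4TB raw, double raw capacity each
# generation, apply the quoted 2.5:1 compression ratio, and assume a new generation
# arrives roughly every 30 months after LTO-6 (which is shipping in 2013).
# Generation names beyond LTO-8 and the ship years are illustrative assumptions.

COMPRESSION = 2.5     # compressed:raw ratio quoted for LTO-7 (6.4TB -> 16TB)
MONTHS_PER_GEN = 30   # approximate cadence between LTO generations
LTO6_YEAR = 2013      # LTO-6 is the current generation as this is written

def project_lto(last_gen=11):
    """Return (name, approx_year, raw_tb, compressed_tb) for LTO-7 .. LTO-<last_gen>."""
    rows, raw_tb = [], 6.4                     # quoted LTO-7 raw capacity in TB
    for gen in range(7, last_gen + 1):
        year = LTO6_YEAR + (gen - 6) * MONTHS_PER_GEN / 12
        rows.append((f"LTO-{gen}", int(year), raw_tb, raw_tb * COMPRESSION))
        raw_tb *= 2                            # capacity doubles per generation
    return rows

if __name__ == "__main__":
    for name, year, raw, comp in project_lto():
        print(f"{name}  circa {year}: {raw:g}TB raw, {comp:g}TB compressed")
    # LTO-11 works out to 102.4TB raw around 2025-2026, matching the estimate above.
```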
How can we be sure this is reasonable and not a technological pipe dream? Because the physical size of a bit on tape is, relative to a disk bit, gigantic. There is simply lots of room to shrink tape bits without prejudicing bit stability, which is exactly the problem now afflicting PMR disk recording and forcing a move to a new recording technology. The 125TB tape project involves increasing tape's areal density to 100Gbit/in2, which compares with the advanced disk areal densities now in use in the 620-690Gbit/in2 range.
DAT, meanwhile, is fading into insignificance as disk and cloud backup replace it.
Tape is the archive medium, and it is now being used by some of the biggest archivers of data around, such as Google and Amazon.

Why Facebook is betting its future on open hardware, and why it matters to you


How Facebook has significantly cut the cost of delivering computing capability. Aivars Lode Avantce

Why Facebook is betting its future on open hardware, and why it matters to you

By Nick Heath | September 24, 2013 -- 13:17 GMT (06:17 PDT)
For years, servers have been built to a set menu — with most customers offered boxes including the same hardware and software, whether they needed it or not.
Web giants such as Google and Facebook have chosen more of an a la carte approach for their datacentres, designing servers tailored to the computing workloads they will be carrying out and nothing more.
Under the Open Compute Project (OCP), Facebook and its partners have committed to developing novel designs for compute, storage and general datacentre infrastructure — not just the servers themselves, but the chassis and racks they sit in and their associated power and cooling, and then to sharing those designs so they can be refined and built upon.
For Facebook, the shift to DIY datacentres is paying off, in both the efficiency and the reliability of its datacentres. Facebook's datacentre in Lulea, Sweden, is kitted out with 100 percent OCP-designed equipment, and has a failure rate one third that of Facebook's datacentres running a mix of OCP and non-OCP hardware.
Similarly, Facebook credits OCP equipment for allowing it to run some of the most efficient datacentres in the world. Facebook's server farms have a power usage effectiveness (PUE) rating of 1.07, when the "gold standard" for the industry is 1.5 (PUE reflects how much of the total power delivered to a datacentre gets to a server).
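To put those figures in perspective: PUE is the ratio of total facility power to the power that reaches the IT equipment, so its reciprocal is the share of power doing useful computing. The short sketch below is a back-of-the-envelope illustration, not Facebook data; the 1,000kW facility figure is an arbitrary assumption.

```python
# Back-of-the-envelope PUE comparison. PUE = total facility power / IT equipment
# power, so 1/PUE is the share of power that actually reaches the servers.
# The 1,000kW facility figure below is an arbitrary assumption, not a Facebook number.

def it_share(pue: float) -> float:
    """Fraction of total facility power that reaches the IT equipment."""
    return 1.0 / pue

FACILITY_KW = 1000  # hypothetical total power drawn by a datacentre

for label, pue in [("Facebook OCP datacentre", 1.07), ("industry 'gold standard'", 1.5)]:
    print(f"{label}: PUE {pue} -> {it_share(pue):.0%} of power reaches the servers "
          f"({FACILITY_KW / pue:.0f}kW of {FACILITY_KW}kW)")
# PUE 1.07 -> roughly 93% useful power; PUE 1.5 -> roughly 67%.
```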

Ultimately, companies will be better served by kitting out their datacentres with infrastructure that has been stripped down to the core components needed to carry out specific computing workloads, an approach the OCP refers to as 'vanity-free design', said Frank Frankovsky, Facebook's VP of hardware design and supply chain operations.
"We [Facebook] remove anything that's not necessary, if it doesn't contribute to computing your News Feed or storing your photos then it's not in the design so there's fewer things that can break," he said, explaining why OCP servers have proven more reliable.
"You strip out as much as you can to make it as efficient as possible, and also there are not as many ancillary things that can break."
Frankovsky said another boost to quality comes from testing that is specific to the workloads being run. In contrast, the testing procedures used on traditional servers from OEMs are limited because those vendors have to do "very wide and shallow testing" to cover a large number of use cases.
Modularity is another core OCP design principle: a central idea behind the project is to break the datacentre, rack and server into components that can be swapped out as computing needs change. At the project's Santa Clara summit earlier this year, the 'Group Hug' slot architecture was revealed, a design that would allow server motherboards to accept ARM systems-on-a-chip as well as AMD or Intel chips.
While only a handful of large companies have publicly acknowledged using Open Compute designs in their datacentres, the other big firm being the hosting company Rackspace, it seems server OEMs such as IBM, HP and Dell will also have to move into a la carte servers if they want to win business from large web customers such as Facebook.
All Facebook hardware in its datacentres will be OCP designs, said Frankovsky, whether that's provided by ODMs (Original Design Manufacturers) — companies such as Hyve Solutions or Quanta that manufacture products for sale under another brand — or by OEMs, as both Dell and HP are members of the OCP.
The openness of the OCP means that contributors share the designs for the datacentre equipment they produce. Large companies like Rackspace can then take these designs and modify them to suit their specific needs and then get ODMs to manufacture this equipment.
Smaller companies, without the ability or desire to modify OCP designs, can buy equipment based on vanilla OCP specifications. This is the approach followed by video games publisher Riot Games, responsible for the popular online game League of Legends.
There are a number of OCP-approved equipment makers that manufacture servers and datacentre infrastructure based on OCP designs, such as Penguin Computing, which makes servers based on the Open Compute V2 and V3 systems.
"Those are two examples of how the community are engaging from an adoption perspective," Frankovsky, who is also chairman of the OCP, said.
Why open compute will become mainstream
Even if firms don't adopt OCP equipment directly, Frankovsky predicts the number of computing workloads carried out on OCP equipment will continue to grow as more work shifts from smaller datacentres to cloud computing providers.
"Cisco UCS or PowerEdge systems are excellent for SMEs who want a fully integrated stack of technology. Those customers don't want transparency, they want stuff to work," Frankovsky said.
"Over time those customers will procure less and less but will outask to cloud computing providers, which is why I'm so bullish about the impact that OCP will have on industry over the next three to five years. It's those large providers who are the main adopters of open compute."
Not all businesses have easily definable computing workloads that can be so easily matched to the underlying hardware, or are bound by regulatory or other constraints on how they carry out computing, which Frankovsky accepts could limit their adoption of OCP hardware.
Learning how OCP could work for these companies was part of the reason the OCP asked Don Duet, head of global technology at the financial firm Goldman Sachs, to sit on its board, he said.
"Goldman Sachs is an example of a company that is heavily burdened by regulatory requirements. Learning more about what are the challenges of achieving this level of efficiency based on the reality of the world they live is teaching us a lot," Frankovsky said. 
"One of the things we've come up with though is not everything is bound by those restrictions. Financial services have very large scale-out farms of computers that look a lot like ours, typically running Monte Carlo simulations. That's a pretty large part of their IT infrastructure that can immediately benefit from Open Compute.
"There are large and growing portions of everyone's computing infrastructure that look more like web scale architectures," he said, citing health and pharmaceutical research compute clusters and oil and gas server farms for processing seismic data.
"From a physical infrastructure perspective, they look a heck of a lot like a Google, Microsoft, Facebook or an Amazon."
One emerging server architecture that complements OCP's vanity-free and modular approach is microservers: energy-sipping servers — typically with a TDP below 15W — that can be used to carry out computationally simple tasks at scale.
"We're really excited about the potential for microservers [at Facebook]," said Frankovsky, adding that Facebook is yet to adopt them in any number because it is waiting for a suitable 64-bit microserver system on a chip. Intel recently released its second generation of Atom-based microserver SoCs — the Avoton chip — and the first 64-bit ARM microservers look unlikely to ship in any number until 2014.
"The first areas that we would utilise a technology like that is in the storage area," Frankovsky said.
He said Facebook is currently testing two OCP Open Vault storage arrays — one Avoton-based and the other a Calxeda ARM-based system — where the Serial Attached SCSI (SAS) controllers have been swapped for microservers.
"We've turned what's considered just a bunch of discs into a storage server. That enables you to eliminate the entire separate server chassis that used to sit above the disc enclosure, now the server is a microserver and sits in a card slot that is part of the storage device," he said.
The design for the storage array came from the OCP community building on the freely available designs for OCP hardware, and for Frankovsky it demonstrates why sharing OCP designs is worthwhile.
"That's an example of a community contribution, and one of those 'Aha' moments, it's like 'Gosh, why didn't we think about that'. We have a lot of great engineers but we don't have all the greatest engineers under one roof, nobody does. That's an example of the power of open source," Frankovsky said.

The Fatal Mistake That Doomed BlackBerry



BlackBerry missed the ultimate boat; we lived in Waterloo during that startup period in 1999. Aivars Lode Avantce

The Fatal Mistake That Doomed BlackBerry

Beleaguered gadgetmaker BlackBerry said on Monday that it’s signed a tentative agreement to be purchased by a group led by Canadian holding company Fairfax Financial in a $4.7 billion deal. The transaction, in which BlackBerry would become a private company, represents a turning point for a once high-flying tech giant that played a key role in the mobile-device revolution only to be eclipsed by Apple and Google.
Fairfax, which already owns 10% of BlackBerry, will pay $9 per share for the company, about 3% more than its closing price on Friday. BlackBerry still has the flexibility to accept a better offer in a maneuver known as a “go-shop” process, but it’s hard to imagine that a sweeter overture will be forthcoming.
On Friday, BlackBerry announced that it would cut 4,500 jobs as it prepares to absorb nearly $1 billion in losses related to unsold-device inventory, sending its stock price plunging by 20%. Since last month, BlackBerry’s “special committee” has been evaluating strategic alternatives (like a sale) for the company. BlackBerry and Fairfax are expected to complete their due diligence by Nov. 4. By going private, BlackBerry (until recently known as Research in Motion) can continue to attempt a turnaround without the Wall Street pressure that accompanies public companies.
(MORE: Why Apple vs. Google Is the Most Important Battle in Tech)
“The special committee is seeking the best available outcome for the company’s constituents, including for shareholders,” Barbara Stymiest, chair of BlackBerry’s board of directors, said in a statement. “Importantly, the go-shop process provides an opportunity to determine if there are alternatives superior to the present proposal from the Fairfax consortium.”
Prem Watsa, chairman and CEO of Fairfax, is often referred to as Canada’s Warren Buffett, the famed investor who runs the Omaha-based Berkshire Hathaway conglomerate. “We believe this transaction will open an exciting new private chapter for BlackBerry, its customers, carriers and employees,” Watsa said in a statement. “We can deliver immediate value to shareholders, while we continue the execution of a long-term strategy in a private company with a focus on delivering superior and secure enterprise solutions to BlackBerry customers around the world.”
It may seem like a distant memory now, but just a few years ago BlackBerry was the premier mobile gadget on the market. The device was so ubiquitous on Wall Street and Capitol Hill that it earned the nickname CrackBerry. As recently as 2009, BlackBerry was named by Fortune magazine as the fastest-growing company in the world, with earnings exploding by 84% a year. Times have changed. Since 2009, BlackBerry's stock price has collapsed by a vertigo-inducing 90%, falling to under $7 at its low point last summer.
Today, BlackBerry has fallen to the back of the smartphone pack — with a minuscule 3% of the market — as Apple’s iPhone and Google’s Android operating system have come to dominate the market. BlackBerry’s decline has become a case study about what happens when a tech giant fails to innovate in a consumer-technology market evolving at breakneck speed. In a sign of the times, Apple said on Monday it sold a record 9 million units of its latest iPhone devices during the first weekend they were on sale.
(MORE: BlackBerry CEO Could Face Testy Crowd at Annual Meeting)
BlackBerry’s failure to keep up with Apple and Google was a consequence of errors in its strategy and vision. First, after growing to dominate the corporate market, BlackBerry failed to anticipate that consumers — not business customers — would drive the smartphone revolution. Second, BlackBerry was blindsided by the emergence of the “app economy,” which drove massive adoption of iPhone and Android-based devices. Third, BlackBerry failed to realize that smartphones would evolve beyond mere communication devices to become full-fledged mobile entertainment hubs.
BlackBerry insisted on producing phones with full keyboards, even after it became clear that many users preferred touchscreens, which allowed for better video viewing and touchscreen navigation. When BlackBerry finally did launch a touchscreen device, it was seen as a poor imitation of the iPhone. BlackBerry saw its devices as fancy, e-mail-enabled mobile phones. Apple and Google envisioned powerful mobile computers and worked to make sending e-mail and browsing the Web as consumer-friendly as possible.
Founded in 1984 as a consulting business called Research in Motion in Waterloo, Ontario, the company introduced its first BlackBerry device in 1999. For e-mail-obsessed Wall Streeters and other corporate users, it was a godsend. BlackBerry pioneered "push e-mail," meaning that users simply received their messages when they were sent, instead of having to constantly check for new e-mails. BlackBerry's QWERTY keyboard was like an epiphany: no more pecking at a numeric keypad to eke out messages. In the years that followed, the BlackBerry keyboard spawned a whole generation of dual-thumb e-mail warriors.
As the BlackBerry exploded in popularity, especially among business customers, the company became Canada’s most valuable firm, leading some to dub Waterloo Canada’s Silicon Valley. But while BlackBerry was resting on its laurels atop the corporate mobile market, Apple and Google were laser-focused on the consumer market, which they correctly predicted would drive smartphone adoption. In January 2012, BlackBerry announced that its co-CEOs Jim Balsillie and Mike Lazaridis would step down and be replaced by Thorsten Heins, a German-born executive who joined the company in 2007. Nearly two years later, Heins has not yet been able to execute a turnaround.

Consulting on the Cusp of Disruption


Even the highest level of consulting is being disrupted. Aivars Lode Avantce


Consulting on the Cusp of Disruption
by Clayton M. Christensen, Dina Wang, and Derek van Bever

After years of debate and study, in 2007 McKinsey & Company initiated a series of business model innovations that could reshape the way the global consulting firm engages with clients. One of the most intriguing of these is McKinsey Solutions, software and technology-based analytics and tools that can be embedded at a client, providing ongoing engagement outside the traditional project-based model. McKinsey Solutions marked the first time the consultancy unbundled its offerings and focused so heavily on hard knowledge assets. Indeed, although McKinsey and other consulting firms have gone through many waves of change—from generalist to functional focus, from local to global structures, from tightly structured teams to spiderwebs of remote experts—the launch of McKinsey Solutions is dramatically different because it is not grounded in deploying human capital. Why would a firm whose primary value proposition is judgment-based and bespoke diagnoses invest in such a departure when its core business was thriving?
For starters, McKinsey Solutions might enable shorter projects that provide clearer ROI and protect revenue and market share during economic downturns. And embedding proprietary analytics at a client can help the firm stay “top of mind” between projects and generate leads for future engagements. While these commercial benefits were most likely factors in McKinsey’s decision, we believe that the driving force is almost certainly larger: McKinsey Solutions is intended to provide a strong hedge against potential disruption.
In our research and teaching at Harvard Business School, we emphasize the importance of looking at the world through the lens of theory—that is, of understanding the forces that bring about change and the circumstances in which those forces are operative: what causes what to happen, when and why. Disruption is one such theory, but we teach several others, encompassing such areas as customer behavior, industry development, and human motivation. Over the past year we have been studying the professional services, especially consulting and law, through the lens of these theories to understand how they are changing and why. We’ve spoken extensively with more than 50 leaders of incumbent and emerging firms, their clients, and academics and researchers who study them. In May 2013 we held a roundtable at HBS on the disruption of the professional services to encourage greater dialogue and debate on this subject.
We have come to the conclusion that the same forces that disrupted so many businesses, from steel to publishing, are starting to reshape the world of consulting. The implications for firms and their clients are significant. The pattern of industry disruption is familiar: New competitors with new business models arrive; incumbents choose to ignore the new players or to flee to higher-margin activities; a disrupter whose product was once barely good enough achieves a level of quality acceptable to the broad middle of the market, undermining the position of longtime leaders and often causing the “flip” to a new basis of competition.
Early signs of this pattern in the consulting industry include increasingly sophisticated competitors with nontraditional business models that are gaining acceptance. Although these upstarts are as yet nowhere near the size and influence of big-name consultancies like McKinsey, Bain, and Boston Consulting Group (BCG), the incumbents are showing vulnerability. For example, at traditional strategy-consulting firms, the share of work that is classic strategy has been steadily decreasing and is now about 20%, down from 60% to 70% some 30 years ago, according to Tom Rodenhauser, the managing director of advisory services at Kennedy Consulting Research & Advisory.
Big consulting is also questioning its sacred cows: We spoke to a partner at one large firm who anticipates that the percentage of projects employing value-based pricing instead of per diem billing will go from the high single digits to a third of the business within 20 years. Even McKinsey, as we have seen, is pursuing innovation with unusual speed and vigor. Though the full effects of disruption have yet to hit consulting, our observations suggest that it’s just a matter of time.
Management consulting’s fundamental business model has not changed in more than 100 years. It has always involved sending smart outsiders into organizations for a finite period of time and asking them to recommend solutions for the most difficult problems confronting their clients. Some experienced consultants we interviewed scoffed at the suggestion of disruption in their industry, noting that (life and change being what they are) clients will always face new challenges. Their reaction is understandable, because two factors—opacity and agility—have long made consulting immune to disruption.
Like most other professional services, consulting is highly opaque compared with manufacturing-based companies. The most prestigious firms have evolved into “solution shops” whose recommendations are created in the black box of the team room. (See the exhibit “Consulting: Three Business Models.”) It’s incredibly difficult for clients to judge a consultancy’s performance in advance, because they are usually hiring the firm for specialized knowledge and capability that they themselves lack. It’s even hard to judge after a project has been completed, because so many external factors, including quality of execution, management transition, and the passage of time, influence the outcome of the consultants’ recommendations. As a result, a critical mechanism of disruption is disabled.
Therefore, as Andrew von Nordenflycht, of Simon Fraser University, and other scholars have shown, clients rely on brand, reputation, and “social proof”—that is, the professionals’ educational pedigrees, eloquence, and demeanor—as substitutes for measurable results, giving incumbents an advantage. Price is often seen as a proxy for quality, buoying the premiums charged by name-brand firms. In industries where opacity is high, we’ve observed, new competitors typically enter the market by emulating incumbents’ business models rather than disrupting them.
The agility of top consulting firms—their practiced ability to move smoothly from big idea to big idea—allows them to respond flexibly to threats of disruption. Their primary assets are human capital and their fixed investments are minimal; they aren’t hamstrung by substantial resource allocation decisions. These big firms are the antithesis of the U.S. Steel of disruption lore. Consider how capably McKinsey and others were able to respond when BCG started to gain fame for its strategy frameworks. But, as we’ll see, opacity and agility are rapidly eroding in the current environment. For a glimpse of consulting’s future, it’s instructive to examine the legal industry.
The legal industry is grappling with legions of disgruntled but inventive clients and upstart competitors. The first significant blow to law’s opacity came about 25 years ago, when Ben Heineman, fresh from serving as a general partner of Sidley & Austin, responded to Jack Welch’s call to come to General Electric and essentially invent the modern corporate law function, greatly reducing corporations’ reliance on law firms. Today general counsel budgets account for about one-third of the $500 billion legal market in America.
No less significant was the introduction, around the same time, of the Am Law 100 ranking of firms by financial performance, which gave clients their first hint of the true costs and value of the services they were buying, along with a real basis for comparison among the top firms. By adding increasingly granular data, such as leverage and profits per partner, the Am Law rankings shone a light on the previously secretive operations of white-shoe firms.
By now corporate general counsel are well along in the process of disaggregating traditional law firms, taking advantage of new competitors such as Axiom and Lawyers on Demand, which reduce costs and increase efficiency through technology, streamlined workflow, and alternative staffing models. AdvanceLaw and other emerging businesses are helping general counsel move beyond cost and brand as proxies for quality through what Firoz Dattu, AdvanceLaw’s founder, calls the “Yelpification of law.” His business vets firms and independent practitioners for quality, efficiency, and client service and shares performance information with its membership of 90 general counsel from major global companies, including Google, Panasonic, Nike, and eBay.
“The legal market has historically lacked transparency, making it difficult for us to deviate from using incumbent, brand-name law firms,” says Bob Marin, the general counsel of Panasonic North America. “Things are changing now. This has greatly helped general counsel be much savvier about where to send different types of work and helped us serve our corporations better.” Marin’s sentiment reflects a broader trend that David Wilkins, of Harvard Law School, has noted: Today’s general counsel increasingly view their fellow corporate executives, rather than outside counsel, as their peer group. They are often hired to bring cost and quality advantages to corporations by working creatively with law firms.
Emerging law firms are innovating quickly to take business away from white-shoe firms. For example, LeClairRyan is a full-service U.S. firm that currently employs more than 300 (full-time and contract) lower-cost but highly trained lawyers in its Discovery Solutions Practice. Clients can unbundle litigation work and “right source” to the firm such projects as large-scale document and data review at a dramatically lower cost. LeClairRyan coordinates this discovery work with the higher-value services of lead counsel, who focus on the less routine aspects of litigation.
There will always be matters for which, as Wilkins says puckishly, “no amount of shareholder money is too much to spend,” but without doubt, the old-line firms are under pressure. An AdvanceLaw survey of general counsel found that 52% agree (and only 28% disagree) with the statement that general counsel “will make greater use of temporary contract attorneys,” and 79% agree that “unbundling of legal services...will rise.” The legal management consultancy Altman Weil, surveying law firm managing partners and chairs, found that in 2009 only 42% expected to see more price competition, whereas by 2012 that number had climbed to 92%. Similarly, in 2009 less than 30% thought fewer equity partners and more nonhourly billing were permanent trends; in 2012 their numbers had reached 68% and 80%, respectively.
In response, some white-shoe firms have begun to incubate new models. Freshfields Bruckhaus Deringer, one of the UK’s prestigious Magic Circle firms, launched Freshfields Continuum in mid-2012 after years of experimentation and debate. Freshfields Continuum employs alumni as attorneys on a temporary basis to meet fluctuations in demand as “a solution for efficient head count management,” according to the Freshfields partner Tim Jones.
Kennedy Research estimates that turnover at all levels in prestigious consulting firms averages 18% to 20% a year. McKinsey alone has 27,000 alumni today, up from 21,000 in 2007; the alumni of the Big Three combined are approaching 50,000. Precise data are not publicly available, but we know that many companies have hired small armies of former consultants for internal strategy groups and management functions, which contributes to the companies’ increasing sophistication about consulting services. Typically these people are, not surprisingly, demanding taskmasters who reduce the scope (and cost) of work they outsource to consultancies and adopt a more activist role in selecting and managing the resources assigned to their projects. They have moved more and more work in-house, such as average costing analysis, an exercise that once racked up billable hours.
Companies are also watching their professional services costs, a relatively new development that was triggered by the 2002 recession. Ashwin Srinivasan, an expert on procurement practices with CEB, says that C-suite executives are the “worst offenders of procurement best practices, but when spend is aggregated and they see the full impact of their individual decisions on the expense line, it wakes them up.” In other words, cost pressures force clients to abandon the easy assumption that price is a proxy for quality.
Their growing sophistication leads clients to disaggregate consulting services, reducing their reliance on solution-shop providers. They become savvy about assessing the jobs they need done and funnel work to the firms most appropriate for those jobs. We spoke to top managers of Fortune 500 and FTSE 100 companies who were once consultants themselves; they repeatedly described weighing a variety of factors in deciding whether the expensive services of a prestigious firm made sense. As one CEO (and former Big Three consultant) put it, "I may not know the answer to my problem, but I usually roughly know the 20 or so analyses that need to be done. When I'm less confident about the question and the work needed, I'm more tempted to use a big brand."
This disaggregation is also explained by a theory—one that describes the increased modularization of an industry as client needs evolve. As the theory would predict, we are seeing the beginnings of a shift in consulting’s competitive dynamic from the primacy of integrated solution shops, which are designed to conduct all aspects of the client engagement, to modular providers, which specialize in supplying one specific link in the value chain. The shift is generally triggered when customers realize that they are paying too much for features they don’t value and that they want greater speed, responsiveness, and control.
Examples of this shift are many. When Clay Christensen first started working at BCG, in the early 1980s, a big part of his job was assembling data on the market and competitors. Today that work is often outsourced to market research companies such as Gartner and Forrester, to facilitated networks that link users with industry experts such as Gerson Lehrman Group (GLG), and to database providers such as IMS Health. As access to knowledge is democratized, opacity fades and clients no longer have to pay the fees of big consulting firms. Some of these modular providers are moving upmarket by providing their own boutique consulting services, offering advice based on the research they specialize in gathering.
The rise of alternative professional services firms, such as Eden McCallum and Business Talent Group (BTG), is another chapter in the modularization story. These firms assemble leaner project teams of freelance consultants (mostly midlevel and senior alumni of top consultancies) for clients at a small fraction of the cost of traditional competitors. They can achieve these economies in large part because they do not carry the fixed costs of unstaffed time, expensive downtown real estate, recruiting, and training. They have also thus far chosen to rely on modular providers of research and data rather than invest in proprietary knowledge development.
Although these alternative firms may not be able to deliver the entire value proposition of traditional firms, they do have certain advantages, as our Harvard Business School colleague Heidi Gardner has learned through her close study of Eden McCallum. Their project teams are generally staffed with more-experienced consultants who can bring a greater degree of pragmatism and candor to the engagement, and their model assumes much more client control over the approach and outcome. We expect these attributes to be particularly compelling when projects are better defined and the value at risk is not great enough to justify the price of a prestigious consultancy. As BTG’s CEO, Jody Miller, puts it, “Democratization and access to data are taking out a huge chunk of value and differentiation from traditional consulting firms.”
Eden McCallum and BTG are growing quickly and zipping upmarket. While it’s fair to question whether they will need to take on some of the cost structure of incumbents as they expand, their steady growth suggests that they’ve been successful without doing so. For example, Eden McCallum launched in London in 2000 with a focus on smaller clients not traditionally served by the big firms. Today its client list includes Tesco, GSK, Lloyd’s, and Whitbread, among many other leading companies. In addition, some of its contacts at smaller companies have moved into more-senior positions at larger companies, taking the Eden McCallum relationship with them. That dynamic is one that the consulting majors have long used to drive growth.
Modularization has also fostered data- and analytics-enabled consulting, or what Daniel Krauss, a research director at Gartner, calls “asset-based consulting,” of which McKinsey Solutions is an example. This trend involves the packaging of ideas, processes, frameworks, analytics, and other intellectual property for optimal delivery through software or other technology. The amount of human intervention and customization varies, but in general it’s less than what the traditional consulting model requires, meaning lower expenses spread out over a longer period of time (usually through a subscription or license-based fee). Certain tools can be more quickly and efficiently leveraged by the client, and teams don’t have to reinvent the wheel with each successive client.
This approach is most pertinent for consulting jobs that have been routinized—that is, the process for uncovering a solution is well-known and the scope of the solution is fairly well defined. Often these jobs must be repeated regularly to be useful, and many of them deal with large quantities of data. For example, determining the pricing strategy for a portfolio of products is no small feat, but experienced consultants well understand what analytics are needed. The impact of such projects, which involve copious amounts of data, can erode quickly as circumstances change; the analysis must be updated constantly. In such projects a value-added process business model would be most appropriate.
Scores of start-ups and some incumbents are also exploring the possibility of using predictive technology and big data analytics to deliver value far faster than any traditional consulting team ever could. One example is Narrative Science, which uses artificial intelligence algorithms to run analytics and extract key insights that are then delivered to clients in easy-to-read form. Similar big data firms are growing explosively, fueled by private equity and venture capital eager to jump into the high-demand, high-margin market for such productized professional services.
Only a limited number of consulting jobs can currently be productized, but that will change as consultants develop new intellectual property. New IP leads to new tool kits and frameworks, which in turn lead to further automation and technology products. We expect that as artificial intelligence and big data capabilities improve, the pace of productization will increase.
As noted, we’re still early in the story of consulting’s disruption. No one can say for sure what will happen. Disruption is, after all, a process, not an event, and it does not necessarily mean all-out destruction. We believe that the theory has four implications for the industry:
1. A consolidation—a thinning of the ranks—will occur in the top tiers of the industry over time, strengthening some firms while toppling others. Winners will be differentiated from losers by their understanding of the evolving pressures on their clients and by their ability to bring clarity and skill to fulfilling clients’ new requirements.
However disruption unfolds, a core of critical work will survive, requiring custom solutions to complex, interdependent problems across industries and geographies. As in law, for clients facing “bet the business” strategic problems, paying top dollar for name-brand solution shops will make sense, if for no other reason than that board members won’t question the analytics produced by prestigious firms. Such firms will probably remain the only players that can crack enormous problems and facilitate the difficult change management required to address them, and they will continue to command a premium for their services.
But as disrupters march upmarket, armed with leaner business models and new technology, the range of problems requiring strategic solutions should shrink. To stay ahead of the wave of commoditization, firms will need human, brand, technological, and financial resources to deploy against new and increasingly complex problems and to develop new intellectual property. M&A activity, as difficult as that may be, will increase as some firms decide that they don’t have the resources or stamina to make necessary changes, and others realize the need to acquire fill-in capability.
2. Industry leaders and observers will be tempted to track the battle for market share by watching the largest, most coveted clients, but the real story will begin with smaller clients—both those that are already served by existing consultancies and those that are new to the industry. This is so because in consulting, as in every other industry, the unlocked entryway is in the basement of the established firms. While consulting’s core apparatus is focused on bigger and bigger client engagements, small customers are unguarded.
3. The traditional boundaries between professional services are blurring, and the new landscape will present novel opportunities. But a counterforce to modularity is creating many ill-defined interdependencies among the professional services. Thus the first firms to offer interdependent solutions to problems arising at these intersections stand to gain the lion’s share of the value.
IDEO, for example, bridges the disciplines of industrial design and innovation consulting. Its unique mix of talent and strength in solving interdependent problems makes it hard to imitate. The legal services provider Axiom has expanded far beyond its roots in contract lawyer staffing to advising general counsel on substantive ways to lower costs while maintaining quality. To that end, Axiom now deploys a mix of lawyers, management consultants, workflow specialists, and technologists. By spanning domains and creating models that are hard to pick apart, these companies are effectively fighting modularization. (We should note that IDEO is attempting self-disruption as well, with its online platform OpenIDEO, which uses crowdsourcing instead of traditional consultants to solve problems. Although the platform today is primarily focused on social issues, we can imagine its applicability in more-commercial settings.)
Another example is the Big Four accounting firms, which have moved into a diverse array of professional services; like IBM and Accenture, these firms aspire to be “total service providers.” According to a 2012 Economist article, Deloitte’s consulting business is growing far faster than its core accounting business and, if the pace continues, will be larger by 2017. The other firms in the Big Four divested their consulting services almost a decade ago, after the introduction of Sarbanes-Oxley legislation and other U.S. reforms, but are now catching up and starting to stake a claim in the higher-margin management consulting business. Whether they’ll attempt to create a disruptive business model or just copy the incumbents’ business model remains to be seen.
For leaders of incumbent organizations, this type of threat, which creeps in at the margins from an unexpected source, is particularly worrisome. More likely than not, alarms won’t sound until it’s already too late in the game.
4. The steady invasion of hard analytics and technology (big data) is a certainty in consulting, as it has been in so many other industries. It will continue to affect the activities of consultants and the value that they add. Average costing and pricing analysis have been automated and increasingly insourced; now Salesforce.com and others are automating customer relationship analysis. What’s next?
We believe that solutions featuring greater predictive technology and automation will only get better with time. What’s more, data analytics and big data radically level the playing field of any industry in which opacity is high. Their speed and quantifiable output help reduce, and perhaps even negate, brand-based barriers to growth; thus they might accelerate the success of emerging-market consulting firms such as Tata Consultancy Services and Infosys.
Consider the disruption that technology has already introduced. The big data company BeyondCore can automatically evaluate vast amounts of data, identify statistically relevant insights, and present them through an animated briefing, rendering the junior analyst role obsolete. And the marketing intelligence company Motista employs predictive models and software to deliver insights into customer emotion and motivation at a small fraction of the price of a top consulting firm. These start-ups, though they lack the brand and reputation of the incumbents, are already making inroads with Fortune 500 companies—and as partners to the incumbents.
Consulting firms that hope to incubate a technology-assisted model will want to revisit the lessons Christensen laid out in The Innovator’s Solution. (See the sidebar “A Checklist for Self-Disruption.”) As he has often said, self-disruption is extremely difficult. The day after you decide to set up the disruptive business as a separate unit, the illogic of the new business to the mainstream business is not magically turned off. Rather, second-guessing about the initiative persists, because the logic is embedded within the resource allocation process itself. That second-guessing must be overcome every day.
Indeed, whether McKinsey Solutions is ultimately successful will depend on how the partnership shapes and manages this new offering. Perhaps the knottiest issue it faces will involve dealing with the inevitable, and desirable, competition that arises between the core engagement business and its offspring. Will the partnership blink when a nontraditional client requests delivery of McKinsey Solutions without obligatory use of a full engagement team?
The consultants we spoke with who rejected the notion of disruption in their industry cited the difficulty of getting large partnerships to agree on revolutionary strategies. They pointed to the purported impermeability of their brands and reputations. They claimed that too many things could never be commoditized in consulting. Why try something new, they asked, when what they’ve been doing has worked so well for so long?
We are familiar with these objections—and not at all swayed by them. If our long study of disruption has led us to any universal conclusion, it is that every industry will eventually face it. The leaders of the legal services industry would once have held that the franchise of the top firms was virtually unassailable, enshrined in practice and tradition—and, in many countries, in law. And yet disruption of these firms is undeniably under way. In a recent survey by AdvanceLaw, 72% of general counsel said that they will be migrating a larger percentage of work away from white-shoe firms.
Furthermore, the pace of change being managed by the traditional clients of consulting firms will continue to accelerate, with devastating effects on providers that don’t keep up. If you are currently on the leadership team of a consultancy and you’re inclined to be sanguine about disruption, ask yourself: Is your firm changing (at least) as rapidly as your most demanding clients?
Finally, although we cannot forecast the exact progress of disruption in the consulting industry, we can say with utter confidence that whatever its pace, some incumbents will be caught by surprise. The temptation for market leaders to view the advent of new competitors with a mixture of disdain, denial, and rationalization is nearly irresistible. U.S. Steel posted record profit margins in the years prior to its unseating by the minimills; in many ways it was blind to its disruption. As we and others have observed, there may be nothing as vulnerable as entrenched success.





Make a plan as enterprises hollow out IT



It looks like the cloud is here to stay. Aivars Lode Avantce 

Make a plan as enterprises hollow out IT
Enterprises are losing their emphasis on the bread and butter of IT, and the need for server techs and sysadmins seems to be diminishing. How can an IT pro plan around this?
An IT pro recently expressed to me that IT was losing its emphasis on hardware deployment and application development, the traditional “bread and butter” of the field.

Server techs and sysadmins increasingly find their roles outsourced or displaced by cloud or packaged software, while demand for less technical roles like implementation specialists, project managers, and business analysts only seems to increase. In extreme cases, even these roles are provided by a third party, leaving only high-level IT management and expert “pipe fitters” who build and maintain robust networks designed to connect an organization reliably to a third-party service provider.
In a sense, IT is becoming “hollow” and losing many of these “middle of the stack” roles long associated with IT: programmers, hardware technicians, and support personnel. If you find yourself in one of these mid-stack roles, it’s easy to become disillusioned with your career as you see peers lose their jobs and watch your own role become increasingly marginalized. So, what’s an IT pro to do?
While the prospects of most traditional mid-stack roles look bleak at the average company, security presents one of the few areas that is unlikely to disappear anytime soon if you want to stay “corporate.”
IT security roles range from highly technical “internal hackers” to process- and accounting-driven audit roles. Meanwhile, as the mid-stack layer is pushed out of many companies, it is being absorbed by cloud providers and outsourcing firms. If you’re loath to leave this type of role, moving to a service provider or product company will likely put you in a high-volume, technically advanced environment.
The unfortunate thing is that cloud providers make their money through economies of scale. Where a hundred server technicians might have been required across a dozen companies, once those same companies move applications and infrastructure outside their walls, their cloud provider might only require a dozen techs.
For the technically minded who want to stay put, moving closer to the network layer might also seem like an attractive option. Networking technologies continue to be highly complex, and will likely remain required regardless of whether other components of corporate IT are outsourced or delivered by the cloud.
The major problem with networking technologies is that they’ve become almost too good and, much to the chagrin of the companies that make the devices, have far longer replacement cycles than most other IT equipment. Once installed and configured, “care and feeding” is relatively minimal. Similarly, management technologies have matured to the point where network staff can be counted on two hands at even large and complex companies.
The other path for IT pros is to move into a role where technology is not the sole focus, one that likely has more longevity than a purely technical role. The typical business analyst has content knowledge in an area like marketing or finance, and enough familiarity with systems to speak knowledgeably with internal experts or consultants.
With the push toward cloud and packaged software, hitching your wagon to a common back-office system like SAP or Oracle may keep you busy at your current organization and help you build a highly marketable skill. The main risk is that you will need to refresh your knowledge as vendors release new versions or change their product lines.
There’s also the increasingly over-applied title of “architect.” In the truest sense, this is someone who can take a business problem and design the overarching combination of process change and technical solutions to solve that problem.
Those who can truly perform this task are in high demand, although so many are adopting the title as to render it increasingly meaningless. But with a track record of successfully performing this difficult task, you rise above the vagaries of a single vendor to which many business analysts find themselves subject. Similarly, data-related roles are currently in vogue, as Big Data takes corporate IT by storm. Big Data offers sophisticated technical opportunities and analysis roles that require deep mathematical and statistical knowledge.
While multi-year career planning may bring back unpleasant memories of job interview questions or various government schemes, IT is undergoing a broad shift as most enterprises “hollow out” their IT organizations.
Consider whether you want technology to remain the primary focus of your career, and look at whether your current employer will have roles available to support this career path. Failing that, take an inventory of the primary vendors used by your company, and similar organizations, and consider hanging your hat with one of those service providers.
If you want to transition upward in the IT stack, find an area of the company that interests you, or focus your work around a particular department. Ask to participate in meetings with business users and stakeholders and begin to consider technology as a solution to a business problem rather than a monolithic end in itself. Either of these transitions takes several months to successfully pull off, so start your planning now, and you’ll be well positioned as corporate IT evolves.


The Mainframe: Let’s Drop the ‘Legacy’ Label and Look to the Future


Aivars Lode Avantce

The Mainframe: Let’s Drop the ‘Legacy’ Label and Look to the Future

In the world of computing, the IBM mainframe will always be associated with green-screen processing and with the misconception that anything that has been around for nearly 50 years can't be suitable for running any part of a modern IT workload. It gets saddled with the dreaded “legacy” label.
But the truth is that the only thing old about mainframes is the legacy label itself. Some of the mainframe’s architectural concepts have stood the test of time; they have been so successful that they have re-emerged, re-invented in modern computing parlance as virtualization, cloud computing and so on.
And the hardware that runs your modern mainframe today has nothing to do with the hardware that ran mainframes back in the 1960s, ’70s and ’80s—in the same way that people understand there’s no comparison in processing power and technological complexity between the original IBM PC and the cheapest of personal computers today. 
The reality is that mainframe systems aren’t going to go away, despite the rise of exciting new mobile technologies such as tablet and smartphone devices. Mainframes are still everywhere, and you and I (as members of the public) are rapidly becoming their biggest users. Put an item into your “shopping basket” on many retail sites, or ask for a quote on a comparison website, and you’re likely to be driving mainframe processing somewhere. 
Despite projecting an image of using only the latest technology, many companies in reality rely on hybrid application systems. Their sexy new Web interfaces are actually linked through to repackaged systems on the mainframe, which do the serious back-end processing, often using program logic that has been around for 30 years or more.
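To make the hybrid pattern concrete, here is a minimal sketch (not taken from the article) of how a web tier might hand a shopper's request off to back-end mainframe logic through a thin REST wrapper. It assumes a hypothetical gateway at mainframe-gateway.example.com sitting in front of a CICS pricing transaction; the URL, payload fields and transaction ID are illustrative assumptions, not a real API.

import requests

# Hypothetical REST "wrapper" assumed to sit in front of a CICS pricing transaction.
GATEWAY_URL = "https://mainframe-gateway.example.com/quotes"  # illustrative only

def get_quote(customer_id: str, product_code: str) -> dict:
    """Send the shopper's request to the back-end transaction and return its reply."""
    payload = {
        "customerId": customer_id,
        "productCode": product_code,
        "transaction": "QUOT",  # assumed ID for the long-lived pricing program
    }
    response = requests.post(GATEWAY_URL, json=payload, timeout=10)
    response.raise_for_status()
    # The wrapper is assumed to translate the transaction's fixed-format output into JSON.
    return response.json()

if __name__ == "__main__":
    print(get_quote("C12345", "HOME-INS-01"))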
People need to get over the notion that the mainframe is going to quietly fade away—and instead look to the future and focus on ways of encouraging the next generation of mainframe support and development staff, who are likely to be required long into the future.
One of the issues is how one works with and accesses the IBM mainframe today. For end customers of the new Web applications, the mainframe is hidden behind the Web interface so there is no problem. And even when applications are used internally within companies there’s often a certain amount of “wrappering” of the applications to protect the user from the realities of green-screen 3270 interfaces. 
That leaves those working in development, support and administration roles on the mainframe as the main group still using green-screen interfaces. For some—the more mature, generally—this isn’t a problem; and who are we to interfere with working practices they’re happy with? But for new users, the need to master this stuff before they can work with, and contribute to, the mainframe is a real obstacle.
As a result, many ISVs have spent a lot of development time producing new Eclipse-based interfaces for their mainframe product sets. Eclipse interfaces can make access to the mainframe easier and more meaningful to a new generation of IT professionals, and help to dispel the legacy tag and its “road to nowhere” connotations.
One of today's challenges is how to overcome the negative ideas associated with “legacy” and turn it into a heritage to be proud of. There’s no easy answer, because we are talking about changing perceptions. This is one of the main discussion points for managers of mainframe IT shops. We believe the mainframe user community needs to be able to demonstrate a valid mainframe-based career path in today's IT industry.
One thing is for sure, though: new interfaces that let us look beyond the surface and rekindle an appreciation of the mainframe's power, flexibility and reliability can only be a step in the right direction.