Sunday, March 2, 2014

Meet the 2013 enterprise superstars

As the maturation of SAP and Oracle deployments comes to a close, horizontal operational applications will grow. Aivars Lode avantce


2013 was the year that business software became “sexy.” It wasn’t just the sheer dollars that poured into the space, but the people and ideas that transformed the way we do business.
Young and energetic chief executives, such as Box's Aaron Levie and Dropbox's Drew Houston, made cloud storage seem accessible. Fast-growing startup RelateIQ got us thinking about what's next for customer relationship management (CRM). And "big data" companies like Quid, Kaggle, and Metamarkets rose to prominence by forming online communities of data scientists.
It’s about time that enterprise startups got their share of attention — and not just consumer apps. So VentureBeat’s enterprise team interviewed our network and compiled a list of the vendors and visionaries that impressed us most this year.
The chief executive: Aaron Levie
Aaron Levie, 28, was on a crusade this year to make enterprise cool, and it totally worked. At Boxworks, the company’s recent conference, Levie paced the stage, cracking some (actually funny) jokes and discussing plans for his billion-dollar cloud storage company. In interviews, Levie talks a mile a minute, always giving us the impression that he’s thinking about ten steps ahead. He usually is.
“The software you’ll see in the enterprise needs to be far easier to use,” he often says. Box is a threat to some legacy players like Microsoft, as it’s designed with ease of use in mind, unlike most business-focused software.
Box’s customers include 97 percent of the companies on the Fortune 500; the technology is used by an estimated 20 million people. But 2014 will be Levie’s year. An initial public offering is likely on the horizon; revenues will be sky high if the sales team’s strategy of pitching vertical sectors one-by-one (health, education, financial services, legal and so on) ultimately pays off. With Levie at the helm, we’re excited to see what’s in store.
Honorable mention: Roman Stanek, chief executive of GoodData; Anthony Goldbloom, chief executive of Kaggle; and Frederic Laluyaux, chief executive of Anaplan.
The venture firm: Emergence Capital
Emergence Capital's founder Gordon Ritter made a name for himself by investing early in Salesforce. But this year, Ritter's investment strategy shifted. Rather than funding startups with a broad approach, his firm is focused on software for specific industries, such as health care or finance. This vertical approach paid off when a little-known cloud company called Veeva Systems experienced a soaring initial public offering earlier this year.
Emergence, Veeva’s biggest investor, had a stake worth $1.2 billion, which is roughly a 300-fold return on the original investment. In an interview following the IPO, Ritter reiterated his view that the next wave of the cloud is “focused on industries and real customer successes.” He may well be right.
We were also intrigued this year by enterprise-focused funds Data Collective, Venrock Capital, and the CIA’s venture firm, In-Q-Tel.
And Bloomberg Beta, an early stage fund that Bloomberg L.P. launched this year, won points for publishing on GitHub its lengthy "operating manual," which contains questions the fund's investors like to ask, its funding process, and other characteristics of the fund that entrepreneurs otherwise probably wouldn't know.
But Emergence’s hilarious Christmas gift clinched the win. It showed us that they don’t take themselves too seriously, even after such a stellar year.
Honorable mention: Data Collective, Venrock Capital, and In-Q-Tel. 
The accelerator: Alchemist
Investors are on the hunt for revenue-generating companies that are building software for businesses, not consumers — and that’s Alchemist’s sweet spot. We exclusively covered the launch of the new Silicon Valley-based startup accelerator earlier this year, and have reported on the first batch of graduates, many of whom have gone on to bag a sizable seed round.
Alchemist is run by Ravi Belani, a highly personable and well-connected former venture capitalist. His accelerator has already generated a great deal of buzz, and interest from VCs.
“It’s fun to build stuff and make money,” said Belani. “And in the enterprise, your first customers will write million-dollar checks.”
Belani will only take on 10 teams per quarter. He has already raised enough money to fund 90 companies in the next two years.
Honorable mention: Citrix Startup Accelerator and Y Combinator.
The pivot: Docker
This year, observers of the cloud-computing industry started wondering about the real value of Platform as a Service (PaaS), a category of products on top of which developers can build and run web applications.
But one company has succeeded in casting aside its original identity as just another PaaS and taking up a whole new appearance as a provider of open-source technology that saves developers time and effort as they get their applications ready to go.
That company, Docker, was called dotCloud up until a couple of months ago, when it changed its name to better reflect its close relationship with the technology it had released under an open-source license earlier this year: Docker containers. Engineers at eBay and other companies have flocked to the technology, and cloud providers such as Google have added support for it.
The company plans to release commercial services based on Docker containers and provide commercial support to people who use them. Partnerships with vendors who embrace the technology could bring in money, too.
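The time saving comes from declaring an application's environment once and reproducing it identically everywhere it runs. As a rough sketch (the base image, package, and file names below are hypothetical illustrations, not anything Docker or dotCloud ships), a Dockerfile might look like:

```dockerfile
# Start from a public base image and layer the app's dependencies on top.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
# Copy the application code into the image (app.py is a placeholder name).
COPY app.py /srv/app.py
# Declare the command the container runs on start.
CMD ["python", "/srv/app.py"]
```

Building this once (`docker build -t myapp .`) yields an image that runs the same on a developer's laptop as on a production cluster, which is the "ready to go" convenience the article describes.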
The data scientist: Monica Rogati
Users share lots of information about themselves on social media sites like LinkedIn and Facebook, including their clicks. Those companies’ data scientists have plenty to mine for ideas on which features ought to get scrapped and what product managers could do to propel the companies forward. And they can also put together cool infographics full of trends they observe.
But more and more data is coming from machines. Sensors capturing temperature, sound levels, and other kinds of information are becoming more ubiquitous. Connected devices such as the Fitbits and Jawbones are among the sources of data in the fast-growing Internet of things.
That’s why it’s exciting to think about what Monica Rogati will be able to do in her new job as vice president of data at Jawbone. Rogati, formerly senior data scientist at LinkedIn, has already started whipping up digestible insights based on data from the UP wristbands her company sells. Expect her contributions to the company to go beyond that sort of output, though.
Speaking at VentureBeat’s DataBeat/Data Science Summit event earlier this month, she talked about her interest in “data products.” The term doesn’t simply mean interpreting people’s clickstreams on a website.
“It’s also about taking data and turning it into a product,” she said. She didn’t elaborate on the data products she wants to produce. But her work is worth watching.
So what if IT administrators won't be the only ones to see it? It could prove influential as more people start buying connected devices and more companies jump into the market. It might also give end users suggestions for improving their health. And that could well be more important in the long run than slight improvements in ad targeting.
Honorable mention: Rayid Ghani, cofounder of Edgeflip.
The startup worth watching: RelateIQ
Lots of companies release add-ons for Salesforce's software for tracking customer leads, showing how the product isn't perfect by itself. Fewer companies have the guts to call that whole category of software, known as customer relationship management (CRM), into question and offer a truly different solution for the same problem.
RelateIQ is one such company, pitching “relationship intelligence” as a smarter tool for salespeople to stay on top of conversations with all their important contacts.
Data from multiple applications, including LinkedIn, Office 365, and Gmail, can be pushed into RelateIQ, which provides desktop and mobile applications. And the tool just pulls data in from incoming and outgoing emails, without requiring users to do anything special. It sends users reminders of how and when they should follow up.
It wouldn’t be surprising to see RelateIQ get smarter. It’s got some serious data science talent on hand. DJ Patil, a co-creator of the now-hot term “data scientist,” joined the company after being data scientist in residence at Greylock Partners and previously head of data products at LinkedIn.

The mobile Internet’s consumer dividend

The mobile Internet's consumer dividend by McKinsey. Aivars Lode avantce


New research suggests that user benefits have nearly doubled thanks to the growth of the wireless web.

February 2014 | by Jacques Bughin and James Manyika

When consumers tweet, exchange photos, or search for information on the web, they've come to expect that it will be free. In economic terms, this panoply of services by web providers amounts to a vast consumer surplus. Three years ago, we took the measure of these consumer benefits in the United States and Europe. Using survey data and statistical analysis, we estimated how much consumers would be willing to pay for each of a range of services and then aggregated the benefits, which we found totaled €130 billion.
A 2013 update suggests that the consumer surplus has nearly doubled, to €250 billion (exhibit). Three-fourths of the incremental surplus results from the explosion in consumer use of the wireless web through smartphones and tablets—propelled by the migration of web services, communications channels, social media, and entertainment to these wireless devices. Broadband usage also has grown in the countries analyzed, rising to 65 percent of all households, from a little more than 50 percent.


The web’s consumer surplus has nearly doubled in the past three years, primarily because of the explosion in consumer use of wireless access.
While web services are free to consumers, many companies providing them generate income from their extensive platforms and user networks, through advertising or access charges for valuable information about consumers and their preferences. In our analysis, we identify those two activities as a cost to users and set a price we think they would pay to avoid disruption of their web experiences or to limit the risks associated with sharing personal information.
Since 2010, these costs have risen to €80 billion, from €30 billion, reflecting growing consumer sensitivity to web clutter and privacy issues. While that’s a sizable increase, it’s less than the rise in the web’s total surplus for consumers, suggesting that the net effect on them remains strongly positive. Interestingly, in a sign of maturing usage, the net surplus for the wired web has remained close to flat since 2010, as a large increase in privacy and clutter risk balances the increased surplus. Mobile usage drives almost the entire increase in the overall net consumer surplus.
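The net-surplus arithmetic behind these paragraphs can be checked in a few lines (a toy illustration using only the rounded euro figures quoted in the article):

```python
# Rounded figures quoted in the article, in billions of euros.
gross_surplus = {2010: 130, 2013: 250}   # what consumers would pay for free web services
perceived_cost = {2010: 30, 2013: 80}    # the "price" of ad clutter and data-sharing risk

# Net surplus = gross surplus minus perceived costs, per year.
net = {year: gross_surplus[year] - perceived_cost[year] for year in (2010, 2013)}

# Costs rose by 50, but gross surplus rose by 120, so the net effect stays positive.
print(net[2010], net[2013], net[2013] - net[2010])  # 100 170 70
```

The €50 billion rise in perceived costs is well under the €120 billion rise in gross surplus, which is why the authors conclude the net effect on consumers remains strongly positive.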
As an extension of our core surplus analysis, we estimated the value of the trust users have in the web brands they use to interact with others, seek information, and consume entertainment. This trust generates a €50 billion surplus across both wired and wireless use. Leading web-service providers such as Google (including YouTube), Facebook, Microsoft, Yahoo!, and Twitter capture nearly half of the trust surplus. This concentration in a few marquee brands, many of which actively generate revenues from their web services, suggests that these revenues can coexist with acceptable levels of privacy and web-experience quality. Indeed, they must do so, given the fragility of that trust and the ease of undermining it when companies mismanage user expectations. Companies with strategies that meet or exceed those expectations while increasing the range and reliability of their web offerings—particularly mobile services—should be well positioned to enjoy a virtuous cycle in which the creation of a consumer surplus expands the scope of opportunities to generate revenue and value.

NO WONDER Big Blue dropped it: IBM server biz EXPLODED in Q4

Reminds me of when mainframe sales stopped and client server started. The analogy here is server sales have dropped and cloud has started. Aivars Lode avantce

Ta for the $2.3bn, Lenovo... just take IT OFF US
By Paul Kunert, 26th February 2014
There's little wonder that IBM execs were so quick to snap up Lenovo's $2.3bn offer for the sickly volume server biz. Factory revenues and shipments apparently crashed during Big Blue's last full quarter behind the wheel.
Ending a thoroughly unpleasant 2013 for Big Blue server peeps, Gartner calendar Q4 numbers show revenues fell by 26.4 per cent to $3.61bn and unit sales dropped 16.3 per cent to a little over 231k.
This capped another quarter of misery for IBM, which has failed to grow its server biz in any three-month period since Q3 2011. It is only surprising that execs at the firm failed to strike a deal with Lenovo the first time round.
Clearly Lenovo will again have its work cut out if it wants to convert Big Blue's x86 division into a fast growing, profitable sales engine, at a time when cloud and white box builders in China are a growing threat.
Not that many server makers covered themselves in glory in a quarter where the total market declined 6.6 per cent in turnover terms to $13.6bn, despite a 3.2 per cent rise in shipments to 2.58m boxes.
"We've seen ongoing growth in web-scale IT deployments, while the enterprise remained relatively constrained," said Jeffrey Hewitt, research veep at Gartner, echoing comments made alongside prelim Q4 data.
The mainframe and RISC/Itanium Unix platform market "kept overall revenue growth in check", he added.
Server market boss HP and Cisco were the only named vendors to post growth, up 6 and 34.5 per cent respectively, to reach $3.89bn and $646m. The "others" section – which includes numerous fast growing Chinese brands – was also up, 6.2 per cent to $2.9bn.
If HP CEO Meg Whitman is worried about Lenovo's deal with IBM, she didn't show it on a fiscal Q1 results call with analysts last week.
"What I have learned about this business is instability and questions about the future make it very difficult because people want to bet on a roadmap, and they worry that as a change occurs, is the roadmap the same, [is] investment the same".
"I think we have a near term opportunity here to gain share in our enterprise services or in our server business, so we are all over it," she added. "Long term obviously Lenovo is going to be a powerful competitor, and we aim to be set up by the time the deal is done to compete really aggressively".
Privately owned Dell took its eye off the ball in Q4 as revenues declined 0.5 per cent to $2.07bn and shipments dropped 5.4 per cent to just over 504k.
And Oracle, which has taken Sun's hardware biz out of the game since acquiring it, again saw factory revenues drop 4.7 per cent to $574.7m. It no longer registers in the top five shipments firms.
The EMEA market recorded its tenth consecutive quarter of revenue declines, down 6.4 per cent to $3.6bn. Unit server sales declined 2.5 per cent to 613k. In this region, HP held a near 40 per cent share of box sales, declining at the market average, and 34.5 per cent of revenues.
For the year, worldwide revenues declined 4.5 per cent to $50.1bn as shipments grew 2.1 per cent to 9.8m units.
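For readers tallying vendor positions, the Q4 revenue shares implied by the figures above work out roughly as follows (a quick sketch; the per-vendor numbers are rounded, so they sum to slightly more than the $13.6bn total):

```python
# Q4 2013 factory revenue by vendor, $bn, as quoted in the Gartner figures above.
revenue = {
    "HP": 3.89,
    "IBM": 3.61,
    "Others": 2.90,
    "Dell": 2.07,
    "Cisco": 0.646,
    "Oracle": 0.5747,
}
market_total = 13.6  # total Q4 turnover, $bn

# Implied share of turnover per vendor, largest first.
for vendor, rev in sorted(revenue.items(), key=lambda kv: -kv[1]):
    print(f"{vendor:>7}: {rev / market_total:.1%}")
```

This puts HP just under 29 per cent of turnover and IBM around 26 per cent even after its dire quarter, underlining how large the business Lenovo is buying into still is.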

Former Hospital CFO Charged with Health Care Fraud

You would not want to be a CFO at a hospital and not know if your records are correct otherwise the FBI could be after you! Aivars Lode avantce


U.S. Attorney's Office, February 06, 2014
  • Eastern District of Texas, (409) 839-2538
TYLER, TX—The former chief financial officer for Dr. Tariq Mahmood’s Texas hospitals has been charged with health care fraud violations in the Eastern District of Texas, announced U.S. Attorney John M. Bales today.
Joe White, 66, of Cameron, Texas, was indicted by a federal grand jury on January 22, 2014, and charged with making false statements to the Centers for Medicare and Medicaid Services (CMS) and aggravated identity theft. White appeared for an arraignment hearing today before U.S. Magistrate Judge John D. Love.
The American Recovery and Reinvestment Act of 2009 established incentive payments under the Medicare and Medicaid programs for eligible professionals and eligible hospitals that meaningfully use Certified Electronic Health Record Technology. The incentive programs were created to promote the adoption of health information technology and encourage the electronic exchange of health information in order to improve the quality and lower the cost of health care in the United States. Upon meeting certain conditions, an eligible hospital could qualify for incentive payments from CMS if the hospital attested that it had meaningfully used Certified Electronic Health Record Technology for the prior fiscal year.
According to the indictment, on November 20, 2012, White falsely attested to CMS that Shelby Regional Medical Center (Shelby Regional) met the meaningful use requirements for the 2012 fiscal year. However, Shelby Regional relied on paper records throughout the fiscal year and only minimally used electronic health records. To give the false appearance that the hospital was actually using Certified Electronic Health Record Technology, White directed its software vendor and hospital employees to manually input data from paper records into the electronic health record (EHR) software, often months after the patient was discharged and after the end of the fiscal year.
The indictment further alleges that White falsely attested to the hospital’s meaningful use by using another person’s name and information without that individual’s consent or authorization. As a result of the false attestation, CMS paid Shelby Regional $785,655. In total, hospitals operated by Dr. Mahmood, including Shelby Regional, were paid $16,794,462.66 by the Medicaid and Medicare EHR incentive programs for fiscal years 2011 and 2012.
“As more and more federal dollars are made available to providers to adopt Electronic Health Record systems, our office is expecting to see more cases like this one,” said Special Agent in Charge Mike Fields of the U.S. Department of Health and Human Services Office of Inspector General’s (OIG) Dallas Regional Office. “The Office of Inspector General is committed to protecting the millions of taxpayer dollars used to pay providers to adopt Electronic Health Record systems.”
If convicted, White faces up to five years in federal prison for making a false statement and up to two years in federal prison for aggravated identity theft.
This case is being investigated by the U.S. Department of Health and Human Services-Office of the Inspector General (HHS-OIG), the Texas Office of the Attorney General-Medicaid Fraud Control Unit (OAG-MFCU), and the Federal Bureau of Investigation. Assistant U.S. Attorney Nathaniel C. Kummerfeld and Special Assistant U.S. Attorney Kenneth C. McGurk are prosecuting this case.
A grand jury indictment is not evidence of guilt, and all defendants are presumed innocent until proven guilty beyond a reasonable doubt in a court of law.

Indirect proposition: Inside IBM UK software supremo's profit plan

We wish IBM the best with their channels strategy. Aivars Lode avantce

By Gavin Clarke, 8th January 2014 10:03

Partners and tackling that PC myth...
Interview IBM’s Stephen Smith shrugs off our suggestion there’s more pressure on him now than ever before. And no wonder: he is a man with a profit-generating plan.
Smith is the UK and Ireland head of IBM’s Software Group – a unit that’s home to some of the biggest names in software, never mind the Big Blue brand itself.

As a whole, the group houses DB2, WebSphere, Informix, Rational, Tivoli and Lotus. By next year the plan is that these products will generate half of IBM’s profits.
That’s under the mandate of a 2010 roadmap that says "about" 50 per cent of profits will come from software by 2015 – up from 45 per cent now. If it hits the 50 per cent number, software will widen its lead over hardware and the renowned services unit.
The plan also calls for an EPS of $20, meaning Smith’s group must also inflate profits for shareholders.
The goal is more remarkable given software accounts for just a quarter of IBM’s net sales – 24 per cent. Services are still the big earner, accounting for 56 per cent.
“Listen, there’s pressure on us every day,” Smith told The Register during a recent interview. “That’s nothing new.”
Smith, who took post 18 months ago following various senior executive roles at IBM in the UK and Europe, needs to be clever: he's required not just to sell more, but sell more without incurring additional costs. Running the UK and Ireland operation means Smith will be contributing to the overall success of IBM software and systems group, run by executive vice president and group executive Steve Mills in the US.
So what’s the plan for extracting a disproportionate amount of profit from his unit?
Partners. Smith plans to sign up more in 2014, whom he hopes will customise and distribute more of software group's products. The idea is IBM can reach more customers working through third parties than it could going direct. Smith calls this switch a cultural change. Also, IBM is hiring and training more people with the “right” skills to make "complex" sales.
Software has become a more profitable venture in the tech business than either hardware or services. Hardware needs to be shipped and stored, prices have fallen, and the laws of physics mean you can only sell one unit per customer. Services grow only as you hire more bodies. But the same code can be sold again and again. Also, code doesn't take up shelf space or demand a salary.
And yet Smith faces two major hurdles.
Smith: ROI is more aggressive, but it's something we relish
One is new entrants: startups like Dropbox and big names like Amazon are turning their attention not just to IBM's traditional enterprise base but also to the small and medium-sized businesses that are important for IBM's growth.
Cloud startups like Dropbox pose a problem because their sign-up and billing are viral. This is what has given them a foot in the door at SMBs and in the departments of big companies, with the result that business-collaboration platforms come to be based on Dropbox before CIOs know it.
Amazon has started to prove a serious contender in enterprise IT infrastructure. So serious that IBM spent 2013 fighting a US government decision to award a $600m, 10-year CIA cloud hosting contract to AWS.
IBM complained to the US Government Accountability Office (GAO) that Amazon's bid was $54m cheaper, and the GAO found in favour of IBM. Amazon took the case to the Federal Court of Claims, which found for Amazon, saying AWS was simply a better cloud.
In the midst of this, IBM is also battling a legacy image problem – the perception that, unlike new entrants, IBM is only suited to “big” customers and that it's relatively slow.
“We are seeing more and more new entrants,” Smith told The Reg. “We are having to become more agile and flexible to deal with a plethora of new competitors – competitors we are not used to.
“I still think there’s a huge challenge for IBM in being able to communicate the breadth and depth of its portfolio,” Smith said. “There’s lots of traditionally held views out there that are lost in time... I’ve met customers who still think we sell PCs and that’s a long way from where we are.”
For the record, IBM sold its PC business to Chinese manufacturer Lenovo for $1.24bn in 2004, as part of a strategy to move out of low-priced commodity businesses. IBM’s decision today looks farsighted as Dell has gone private and Hewlett-Packard struggles to reverse losses at the hands of tablets.
Smith calls the idea that IBM is only for big businesses an “absolute myth”, saying IBM’s software acquisitions mean there’s plenty for SMBs.
Underlying all this is the computing paradigm shift that is cloud. This is a multi-faceted problem, and it's not just the fact IBM faces a new line up of competitors. Cloud poses a particular problem for those making and selling software, companies like IBM and units like IBM's software group.
IBM makes some chunky ol' enterprise software and, like others in this field – Oracle and SAP – its business model has been site licenses and maintenance. IBM's software group has made its money by selling and supporting staples like WebSphere and DB2.
But just as growth is down at Oracle and SAP, so it’s falling at IBM.
Don't buy the software, float the software
Customers are thinking twice about the need to purchase a license for a piece of software. Instead, they realise they can rent time on or outsource the job to a service provider's servers instead, or they can virtualise the software on their own high-density servers.
A byproduct of cloud is that customers’ expectations are becoming more pronounced when it comes to actually buying new software. They expect ROI in six to 12 months, not two to three years as they did in the past, Smith said.
In IBM's recent third quarter results, Software Group worldwide made $5.8bn while its sales grew – albeit modestly, by one per cent.
But, still, software did do better than hardware and services: hardware fell a whopping 17 per cent and services were down four per cent.
The biggest sellers in the software group were Rational developer tools and something IBM is calling Social Workforce Solutions, which includes Lotus – up 12 and 14 per cent respectively. Information management (DB2, Informix and BI tools) and Tivoli systems management each grew two per cent.
But what of the WebSphere middleware portfolio – home of the app server? This was the toast of enterprise sales during the 2000s. Now it's the biggest loser, with no growth at all during the third quarter.
Compare that to early 2004, the heyday of the battle of the Java application servers and middleware. Back then, WebSphere revenue was growing by 24 per cent a quarter.
And remember that one per cent Q3 growth for Software Group that was reported late last year? Back in 2004, WebSphere’s income helped the software group hit quarterly growth of 11 per cent to $3.5bn.
“I’d be fooling you and everyone else if I didn’t say it wasn’t more of a challenge in this current environment - you have to work an awful lot harder on articulating a business proposition” - Stephen Smith
IBM is making some money from cloud, but not much from what we can see.
IBM doesn't break out revenue from its cloud business but in the third quarter said cloud revenue grew 70 per cent to "more" than $1bn. Of this, $460m was delivered "as a service" - meaning from IBM's own Smart Cloud - with the rest of the money, we assume, coming from sales of IBM software, hardware and services to help others set up and run their own clouds.
“I’d be fooling you and everyone else if I didn’t say it wasn’t more of a challenge in this current environment - you have to work an awful lot harder on articulating a business proposition,” Smith told us just before the Q3 results were announced.
"ROI is more aggressive, but it's something we relish. You have to work that much harder; it's so competitive out there now. It's driven by economic factors and the rate and pace of change is so fast," Smith said.
The answer for IBM’s challenge is to reach a broader market by selling more software through partners.
Smith told The Reg he’s shifted resources away from face-to-face and direct sales to third parties to achieve scale. His goal is to sign up more partners in 2014.
“It’s where I see the growth coming from and what the market is demanding as we look to branch into non and less traditional IBM areas,” he said. “Traditionally, software has been a very high touch face to face approach that we have relied on... we are now looking to go to market with our partners.”
"We don't have the scale to deal with the amount of demand there is. A key part of the strategy from a 2015 perspective in the UK is to extend out through the ecosystem and market the challenge; the ISVs and the resellers are a key part of what we are trying to drive."
He said sales have been re-tooled, with the software group in the UK and Ireland hiring and training the “right” kind of salespeople.
IBM has been laying off staff - 6,000 have gone globally since 2009 and 300 from the UK and Ireland last year as part of cost cutting and restructuring.
Smith wouldn’t comment on plans in the UK and Ireland for further layoffs, but did claim his sales team was growing.
He said his team has been hiring, while IBM is looking at more training, so sales can make “a more complex value proposition” rather than “traditional product based sales".
He added: “What you are seeing in IBM it’s fair to say is a rebalancing in terms of the skills to respond to the market opportunities. We are making huge investments in people.”
Smith and his team in the UK and Ireland now have 12 months left to see whether the strategy of partners and a smarter workforce pays off. ®

Retiring greybeards force firms to retrain Java, .NET bods as mainframe sysadmins

So the mainframe is still not dead. Aivars Lode avantce

Desperate CIOs respond to old buggers, er, buggering off
By Gavin Clarke, 20th February 2014

New IT grads and Java and .NET jockeys are being retrained to run mainframes by big companies desperate to replace a generation of IT staff giving up work.
That's according to Compuware, which has released a study saying CIOs are growing concerned about the looming skills shortage in their mainframe rooms.
They are concerned because they believe that the mainframe will remain critical to businesses’ operations for at least another decade.
The issue is especially critical in retail banking, where the majority of the big names have mainframes at the centre of their operations.
It’s the mainframe that often holds customer accounts – it’s literally where the money sits.
Worse, these systems are now on the front line of a new wave of IT development, as banks compete for new customers with services such as mobile banking.
Banks are being pitched on new services, too. Startups, ISVs and SIs want banks to integrate their apps into these customer account systems to do things like suck in eBay or Amazon purchases, so the customer's account becomes a central hub that lets them view all their transactions.
Others are going further: Standard Chartered bank has begun an internal pilot of BitCoin using a service from nine-month old start-up Switchless.
Switchless claims its online services lets users buy, sell and spend BitCoin, but neither the company nor Standard Chartered would discuss the pilot. Yet, despite all this, Compuware reckons that those same CIOs concerned about losing their mainframe people also lack a proper plan to pick up the shortfall.
The data is from Compuware’s survey of 350 CIOs from large companies around the world in different sectors and updates its last report in 2011. Compuware reckons 81 per cent of CIOs believe that the mainframe will remain a key business asset for another 10 years.
Sixty-six per cent fear that the impending retirement of the mainframe workforce after 30 years in the job will hurt their business.
By “hurt”, chiefs are worried about their inability to support legacy mainframe applications. Of most concern is that the lack of experienced bodies will put key applications at risk, leading to reduced productivity and new IT projects running late.
Despite all of this, 40 per cent have no formal plans in place for dealing with the crisis.
Neil Richards, director of Compuware’s EMEA mainframe business, told The Register he’s noticed customers are starting to draft in techies from other parts of the business.
“In customer meetings we’ve noticed a new generation of mainframe workers - a lot of businesses are bringing in new people and trying to train them up,” he said.
These are IT hires fresh off the job market and .NET and Java devs already inside the companies.
But training up these staff won’t automatically pay off, as there’s a lead time on getting up to speed on the mainframe and understanding how it actually works with the business. That’s a particular problem in banking, as companies roll out new services like mobile banking that must access customer accounts on mainframes.
New services are creating two problems: the applications are becoming more complicated, and performance has become critical.
“'Poor performance' is now being viewed as a defect,” Richards reckoned.
He cited a mobile banking app at one financial institution that, he said, spins up 10 different mainframe transactions just for the user to log in.
The growing complexity of the apps, combined with the loss of those who know in detail how the mainframe works, is causing trouble when things go wrong, as nobody can quickly pinpoint the fault. Richards reckoned he’s seen an upsurge in the number of “war rooms” inside banks when things have gone wrong.
He described meetings of up to 50 people from IT business areas within the bank, such as database, network, application and server, who have been drafted in to find the source of the problem and to fix it quickly.
Of course, banks in particular have thought nothing of hiving off qualified and experienced IT staff over the last 10 years and shipping their jobs to India.
NatWest, RBS and Ulster Bank suffered a catastrophic mainframe outage in summer 2012, when customers couldn’t get their money or make payments. The problem was traced to a mis-applied software patch from a team in India, and came after the bank had cut hundreds of IT staff to help cut costs.
But the retirement wave hitting mainframe staff, combined with the demand for new services like mobile, are different, according to Richards. Banks are waking up to the need to do more than simply cut costs.
“The thing driving this is the introduction of mobile banking. It’s making things even more complex,” Richards said.
What does Richards suggest CIOs should do to address the shortage in a more coordinated manner? First, profile your IT people by age, skills and the applications and systems they are responsible for. Next, capture and record their knowledge so it can be passed on to new people - such as those new grads and those Java and .Net code slingers.
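Richards’ first steps, profiling staff by age and skills and mapping them to the systems they support, can be sketched as a simple at-risk report. The staff records, field names and retirement threshold below are hypothetical illustrations, not anything from Compuware’s study:

```python
# Sketch: flag applications whose only experienced support staff are near
# retirement. The staff records and the age threshold are hypothetical.
RETIREMENT_HORIZON = 55  # flag staff at or above this age

staff = [
    {"name": "Alice", "age": 62, "skills": {"COBOL", "CICS"}, "systems": {"accounts"}},
    {"name": "Bob",   "age": 34, "skills": {"Java"},          "systems": {"mobile"}},
    {"name": "Carol", "age": 58, "skills": {"COBOL", "DB2"},  "systems": {"accounts", "payments"}},
]

def at_risk_systems(staff, horizon=RETIREMENT_HORIZON):
    """Return systems where everyone supporting them is near retirement."""
    support = {}
    for person in staff:
        for system in person["systems"]:
            support.setdefault(system, []).append(person)
    return sorted(
        system for system, people in support.items()
        if all(p["age"] >= horizon for p in people)
    )

print(at_risk_systems(staff))  # ['accounts', 'payments']: no younger cover
```

A report like this makes the knowledge-capture priority concrete: the systems it flags are the ones whose documentation and handover should come first.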
“If I was the CIO in any bank, I’d want to quantify the problem and have a plan in place to fix it,” Richards said. ®

Summary of Ten Types of Innovation

Where do you fall on the innovation scale? Ten Types of Innovation. Aivars Lode avantce

Summary of Ten Types of Innovation

In today’s competitive business world, companies must innovate, but most don’t know how. They employ ineffective, silly techniques, including brainstorming sessions featuring NERF balls, toys and “fun foods” to make the session enjoyable and, thus, productive. Unfortunately, brainstorming doesn’t work no matter how many balls you throw at it. Authors Larry Keeley, Ryan Pikkel, Brian Quinn and Helen Walters – all of Doblin, the innovation consultancy – detail what works instead. Their book teaches you the most effective ways to spark 10 kinds of innovation, based on planning, analysis and a full menu of strategic tactics.

The enterprise IT infrastructure agenda for 2014

Companies' challenges today in deploying technology have grown. Aivars Lode avantce

The enterprise IT infrastructure agenda for 2014

IT infrastructure managers must simultaneously capture the next rounds of efficiencies, accelerate the transition to next-generation infrastructure, reduce risks, and improve organizational execution.

January 2014 | by Björn Münstermann, Brent Smolinski, and Kara Sprague

This year has been tough for many organizations that manage IT infrastructure—the hardware, software, and operational support required to provide application hosting, network, and end-user services. Highly uncertain business conditions have resulted in tighter budgets. Many infrastructure managers have rushed to put tactical cost reductions in place—canceling projects, rationalizing contractors, extracting vendor concessions, and deferring investments to upgrade hardware and software.
We have conducted more than 50 discussions with heads of infrastructure at Fortune Global 500 companies over the past six months to get a sense of the issues they are wrestling with. Clearly, infrastructure leaders must meet 2013 budgets while ensuring they can address critical challenges in 2014 and beyond. They can do so by pulling 11 levers.

Capture the next round of efficiencies

There is no indication that 2014 will be dramatically easier from a budgetary standpoint than 2013 has been at many companies. Even as infrastructure organizations lock in 2013 savings, they need to take several actions to establish a pipeline of cost-improvement initiatives that will create space in their budgets for 2014 and 2015.
1. Put in place a commercial-style interaction model with business partners
Traditionally, business partners and application-development managers have argued that they do not understand and cannot influence large infrastructure expenditures, which complicates demand management. In response, large infrastructure functions are establishing commercial models for interacting with their business partners. These models involve several efforts:
  • continuing to implement standard service offerings that can be consumed on a price-times-quantity basis
  • creating bottom-up unit costs for each service based on a detailed bill of materials
  • investing in integrated tools to automate the data collection, aggregation, analysis, and reporting required for cost transparency
  • putting in place the roles required to interact with business partners in a more commercial way—including product managers who can define standard offerings and solutions architects who can help developers combine the right mixture of standard offerings to meet a business need
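The first two efforts, bottom-up unit costs built from a bill of materials and billed on a price-times-quantity basis, amount to simple arithmetic. A minimal sketch, with entirely hypothetical component costs:

```python
# Sketch: a bottom-up unit cost for a standard service offering, built from a
# bill of materials, then billed to a business unit on a price-times-quantity
# basis. All component costs and quantities are hypothetical.

bom_per_vm = {             # monthly cost per component for one standard VM
    "server_share": 55.0,  # amortized slice of the physical host
    "storage_100gb": 12.0,
    "network": 8.0,
    "ops_labor": 25.0,
}

def unit_cost(bom):
    """Sum the bill of materials into a single unit price."""
    return sum(bom.values())

def monthly_charge(bom, quantity):
    """Price-times-quantity chargeback for a consuming business unit."""
    return unit_cost(bom) * quantity

print(unit_cost(bom_per_vm))           # 100.0 per VM per month
print(monthly_charge(bom_per_vm, 40))  # 4000.0 for a 40-VM business unit
```

The point of the model is transparency: a business partner can see exactly which components drive the price and which quantities they can influence.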
2. Use project teardowns to optimize the cost of new demand
Solutioning—the process of converting a set of business requirements into specifications that can be implemented—is one of the biggest drivers of infrastructure costs. Fairly granular decisions about which operating-system version, server model, and storage tier to use can affect the cost to host an application by a factor of ten. Just as consumer-electronics companies disassemble or “tear down” their own products and those of competitors, infrastructure organizations must apply similar thinking to new projects and existing systems. This involves a structured process to lay out the full set of options in hosting an application; mapping the dependencies among decisions (for example, among different layers in the stack); assessing the cost, performance, and risk implications of each decision; and engaging business partners on important trade-offs. This process can be applied first to new projects and then extended to large existing applications over time.
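The teardown process described above, laying out the options at each layer of the stack and costing each combination, can be sketched as an enumeration. The option catalog and prices here are hypothetical, chosen only to show how a large cost spread emerges from granular decisions:

```python
# Sketch: enumerate the solutioning options for hosting one application and
# compare the cost of every combination. The catalog and prices are
# hypothetical.
from itertools import product

options = {
    "os":      {"licensed_unix": 400, "linux": 120},
    "server":  {"high_end": 900, "commodity": 150},
    "storage": {"tier1": 300, "tier3": 60},
}

def cost(choice):
    """Monthly hosting cost for one combination of decisions."""
    return sum(options[layer][pick] for layer, pick in choice.items())

combos = [
    dict(zip(options, picks))
    for picks in product(*(options[layer] for layer in options))
]
cheapest = min(combos, key=cost)
priciest = max(combos, key=cost)
print(cheapest, cost(cheapest))  # all-commodity choices: 330 a month
print(priciest, cost(priciest))  # top-of-range choices: 1600 a month
```

Even with three binary-ish decisions, the dearest combination costs several times the cheapest; a real catalog with more layers and dependencies is where the factor-of-ten spread comes from.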
3. Build industrial-style procurement capabilities
Procurement of hardware, software, and services required to operate an enterprise environment is becoming more challenging for senior infrastructure managers. Even as more procurement spending is devoted to software, many infrastructure organizations continue to use techniques developed for hardware procurement. These techniques are not entirely effective given software’s product fragmentation and relatively high switching costs. Legacy contracts, sometimes with unrealistic volume or revenue commitments, create financial and strategic constraints as infrastructure managers try to reshape their organizations and budgets to meet revised business expectations for efficiency and flexibility. To add even more complexity to managing vendor relationships, large vendors are starting to make incursions across technology domains (for instance, network-equipment vendors are providing servers, and server vendors are providing data storage). Despite the added complexity, though, these developments increase competition and provide additional opportunities.
In response, infrastructure organizations should adopt purchasing techniques developed by automobile and consumer-electronics manufacturers, which source billions of dollars of components each year. Techniques that infrastructure organizations should employ include the following:
  • creating tight integration between product-design and procurement decisions, building cross-functional teams, and rotating managers between procurement and product-management roles
  • investing in procurement as a core discipline, separating strategic category-management roles from transactional purchasing-execution roles, and ensuring procurement staff have required business and product knowledge
  • developing an information advantage by comparing vendors, thoroughly analyzing vendor costs, and investing in accurate inventory management
  • building credibility when negotiating with vendors by making sure there is a single lead for all negotiations and using signaling mechanisms such as press coverage to indicate willingness to switch vendors
  • employing differentiated strategies to achieve the lowest possible cost by using varied procurement tactics (depending on what is appropriate given the market structure) such as sole source, auctions, and long-term contracts, as well as conducting internal discussions about the lowest price vendors might accept and how far the company is prepared to push each vendor
  • ensuring that the company’s and the vendors’ incentives are aligned by using open-book pricing, making year-on-year cost reductions, and mutually sharing the gains

Accelerate the transition to next-generation infrastructure

Even after years of consolidation and standardization, which have led to huge improvements in efficiency and reliability, most infrastructure leaders work in environments that they believe are too inflexible, provide too few capabilities to business partners, and require too much manual effort to support. Addressing these problems to create more scalable and flexible next-generation infrastructure will require sustained actions in multiple dimensions.
4. Determine how to influence application-development road maps to enable more scalable and efficient infrastructure
Most large institutions have programs under way to develop and roll out private-cloud environments in order to reduce hosting costs and dramatically improve the speed of delivery. Adopting techniques pioneered by “hyperscale” infrastructure functions serving e-commerce and social-media organizations can significantly enhance the business case for next-generation hosting environments.
These techniques include extensive self-service or automation, software-defined networking, commodity components, and aggressive use of open-source technologies. They have enabled some companies to reduce hosting costs by 50 to 75 percent.
However, infrastructure functions must take into account several important considerations as they evaluate how radically they can evolve their hosting environment:
  • What will it take for application developers to learn how to use self-service capabilities effectively?
  • Given existing architectures, should legacy applications be migrated to the new environment, or should it primarily be used for new applications?
  • What are the performance implications of commodity components, given the critical workloads?
  • What technical skills are required to build and support an environment that leverages hyperscale technologies?
5. Get the operating model in place to scale private-cloud environments
Many large infrastructure functions are experiencing “cloud stall.” They have built an intriguing set of technology capabilities but are using it to host only a small fraction of their workloads. It may be that they cannot make the business case work due to migration costs, or that they have doubts about the new environment’s ability to support critical workloads, or that they cannot reconcile the cloud environment with existing sourcing arrangements. Over the next year, infrastructure organizations must shift from treating the private cloud as a technology innovation to treating it as an opportunity to evolve their operating model. This involves a number of elements:
  • tightly integrating private-cloud offerings into their service catalog and establishing business rules to make deployment of these offerings the default option for many types of workloads
  • concentrating responsibility for the private-cloud offering to a single owner that oversees its economics and service-level performance
  • redesigning operational processes to eliminate manual steps required for traditional environments
  • leveraging automation to facilitate DevOps and give developers more control over their applications (within guidelines)
  • recasting sourcing arrangements to enable cloud operating models; many infrastructure organizations are trying to migrate from traditional sourcing arrangements to virtual private-cloud models, and others are seeking models in which they control the developer interface and a provisioning or orchestration layer but may source the underlying servers in an integrated way
6. Advance end-user offerings to facilitate business productivity
After all the attention paid to application hosting over the past several years, many infrastructure leaders have started to conclude that they need to increase the attention and focus they devote to innovating end-user capabilities. At many companies, the most critical employees (in functions such as sales, marketing, research, and design) depend heavily on end-user technology tools such as e-mail and calendar rather than on business applications such as customer relationship management to enhance their productivity.
However, there is a fair degree of uncertainty about the ultimate direction of end-user capabilities. Infrastructure leaders are looking for answers to the following questions:
  • How do we strike the right balance between security and mobility, that is, the use of smartphones, tablets, and other cloud-enabled devices that extend the reach of the company’s wired information infrastructure but also make the information more vulnerable to breaches?
  • Where and how should we deploy virtual-desktop infrastructure?
  • Is there a practical business case for having unified communications and rich collaboration capabilities?
  • Does the productivity impact of desktop videoconferencing justify increased bandwidth costs?
Answering these questions will require deep engagement not only with business-unit IT functions but also with business managers and frontline personnel, who can help leaders develop a granular understanding of frustrations with existing tools and gain insight into how more sophisticated ones might integrate with day-to-day business operations. In doing this, infrastructure managers should pay particular attention to integration across tools. There is typically as much of an opportunity in tightening the linkages across existing capabilities as there is in adding entirely new functionality.

Reduce risks

As more business value migrates online and business processes become more and more digitized, IT infrastructure inevitably becomes a bigger source of business risk. Customer-facing systems could slow to a crawl because of insufficient computing capacity, data-center outages can disrupt business, and critical intellectual property could be extracted from inadequately secured networks. It is easy to overreact and select strategies that reduce risk but carry high costs with regard to capital expenditures or reduced business flexibility.
To serve their business partners effectively, senior infrastructure managers will have to create easily understood options that allow business partners to make practical trade-offs between cost and risk.
7. Ensure facility footprints provide required resiliency at acceptable cost
Hurricane Sandy, which came little more than a year after the Japanese tsunami, was a sobering moment for many infrastructure organizations on the east coast of the United States. A few institutions suffered severe outages; many more had close calls.
The experience accelerated the years-long process of companies moving servers and other assets out of closets and other subfunctional facilities into consolidated, strategic data centers.
It also reinvigorated a long-running debate about the pros and cons of real-time failover versus geographic diversity. Is it better to build data centers in close-together pairs so applications can run synchronously across the two and so avoid downtime if one facility is impaired—but still run the risk that an extreme event such as a tsunami or an earthquake might impair both facilities? Or is it better to accept a small period of downtime so that, in the event of a disaster, applications can be brought back up in a facility hundreds of miles away? Might it be better to expend the capital so that applications run synchronously across a data-center pair, with recovery to a third, remote facility if required?
Naturally, different organizations will have different answers to these questions, but there are a few musts in addressing the issue:
  • starting with business applications and processes and being willing to create segments to avoid “leveling up” to the most expensive answer for all applications
  • integrating modular and predesigned architectures into data-center-build plans to increase flexibility and realize lower costs for an appropriate level of resiliency
  • looking at resiliency across the entire stack—more robust facilities may not be the lowest-cost or most effective mechanism for increasing the resiliency of a set of applications
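One way to frame the trade-off among the three facility strategies above is to compare each one's annualized capital cost against its probability-weighted downtime cost. A toy model, in which every probability and cost figure is hypothetical and would in practice come from the business-impact analysis:

```python
# Sketch: compare the three facility strategies by annualized capital cost
# plus expected downtime cost. Every probability and cost figure below is
# hypothetical.

strategies = {
    # name: (annual capital cost, outage probability per year, hours down)
    "synchronous_pair":       (2_000_000, 0.001, 24.0),  # regional-event risk
    "remote_recovery":        (1_200_000, 0.010, 4.0),   # accept brief downtime
    "pair_plus_remote_third": (3_000_000, 0.001, 0.5),
}
DOWNTIME_COST_PER_HOUR = 500_000

def expected_total_cost(capital, p_outage, hours_down):
    """Capital cost plus probability-weighted downtime cost."""
    return capital + p_outage * hours_down * DOWNTIME_COST_PER_HOUR

for name, params in sorted(strategies.items(),
                           key=lambda kv: expected_total_cost(*kv[1])):
    print(name, expected_total_cost(*params))
# remote_recovery comes out cheapest under these particular assumptions
```

Running the same model per application segment, rather than once for the whole portfolio, is what avoids "leveling up" every workload to the most expensive answer.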
8. Instrument the environment to support next-generation cybersecurity
Every week seems to bring news of another cyberattack, motivated by financial gain, political activism, or—in some cases—national advantage. In many companies, the policy aspects of cybersecurity are being moved out of infrastructure organizations in order to enhance their visibility and proximity to other functions like enterprise-risk management.
But infrastructure has an utterly critical role to play as next-generation cybersecurity strategies are put in place. Increasingly, cybersecurity will depend on sophisticated intelligence and analytics about attacker strategies. This requires massive amounts of data about what is transiting the enterprise network, where it is coming from, and who has been accessing critical systems and data.
However, infrastructure functions have to partner with security functions to make effective trade-offs so they can find ways to extract massive amounts of granular data without compromising performance or creating too much additional complexity.

Improve organizational execution

Organizational capacity is one of the perennial frustrations in enterprise infrastructure—there is never enough managerial, operational, or technical talent to provide day-to-day service delivery while pursuing necessary improvements. Creating required organizational bandwidth will depend on infrastructure managers pulling organizational, performance-management, and talent-sourcing levers.
9. Make the transition to a plan-build-run organizational model
Traditionally, large infrastructure functions have been organized according to a combination of regions and “technology towers.” However, both of these models seem to be hitting their limits for large enterprises. Regional constructs constrain infrastructure functions’ ability to get the most from their investments in new technologies and make it harder to support global business processes and applications. Moreover, private-cloud environments, converged infrastructure products, Internet Protocol telephony, unified communications, and virtual-desktop infrastructure all make traditional distinctions among end users, data centers, and networks less and less relevant.
In response, leading infrastructure organizations are putting in place functional organizations with distinct “plan,” “build,” and “run” capabilities.
  • “Plan” includes service or relationship management, which is responsible for collecting business requirements, performing demand management, and serving as the overall interface with business partners. It also includes product management, which is responsible for developing and optimizing a set of reusable service offerings to be consumed by business partners.
  • “Build” includes product engineering and deployment. Product engineering designs and configures the technology to make the service offerings defined by product management ready for use in a production environment. Deployment takes requests from service management, develops implementable solutions using standard service offerings, and provisions them into the day-to-day environment.
  • “Run” performs all the operations and support to keep the technology environment running and meeting service-level expectations.
10. Proactively create the next generation of infrastructure business leaders
There is a long-standing model of career progression within most infrastructure organizations. Junior engineers acquire technical expertise in a given area and become more and more specialized in storage, databases, or networks. Frontline operations managers demonstrate their ability to keep things running and manage larger and larger operational teams over time.
This model creates technology specialists and effective operators. It does not necessarily create business leaders who can drive innovation or technology integrators who can solve problems that span the server, storage, network, database, and middleware domains.
To expand the pool of effective infrastructure managers—and so expand the organization’s ability to do big things—senior infrastructure leaders will have to broaden their set of talent-management levers by doing several things:
  • increasing their use of rotational staffing—moving high performers across technology domains as their careers progress
  • expanding the hiring aperture by interviewing and hiring managers from both application-development and commercial-technology vendors
  • creating nontechnical training to help managers build skills in general technology problem solving and develop knowledge of the businesses they support
11. Drive performance management to the front line
There are huge variations in productivity and quality (typically by a factor of four to ten) between the most and least effective help-desk agents, system administrators, database administrators, desktop technicians, and other frontline infrastructure staff.
While most infrastructure functions have thick books of metrics—such as help-desk first-call resolution, outages by severity, number of scheduled job failures, and mainframe utilization—almost all of these metrics measure platforms or parts of the organization, not the performance of frontline personnel. As a result, weaker performers do not get the coaching they need, strong performers do not get the recognition they deserve, and organizational productivity and quality suffer.
To continue making advances in efficiency and quality and to meet business expectations, infrastructure organizations will have to not only implement frontline metrics, such as tickets closed per day, but also use them in performance huddles and one-on-one coaching to improve individual performance.
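The tickets-closed-per-day metric mentioned above is straightforward to compute per agent; a minimal sketch with hypothetical ticket counts:

```python
# Sketch: tickets closed per day for each help-desk agent, plus the spread
# between the strongest and weakest performer. Ticket counts are hypothetical.

closed_tickets = {  # agent -> tickets closed over a 20-working-day month
    "agent_a": 240,
    "agent_b": 180,
    "agent_c": 60,
}
WORKING_DAYS = 20

per_day = {agent: n / WORKING_DAYS for agent, n in closed_tickets.items()}
spread = max(per_day.values()) / min(per_day.values())

for agent, rate in sorted(per_day.items(), key=lambda kv: -kv[1]):
    print(f"{agent}: {rate:.1f} tickets/day")
print(f"best-to-worst ratio: {spread:.1f}x")  # 4.0x, within the range cited
```

The raw number is only the starting point for the performance huddles and one-on-one coaching; ticket difficulty and quality of resolution have to be weighed alongside it.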
In many cases, implementing these changes may require reversing work-from-home policies and bringing frontline staff back into operational centers where they can engage with managers and collaborate with peers.