Thursday, January 2, 2014

The Three Most Innovative Companies of 2013


One can naturally debate any system that seeks to crown the world’s most innovative companies. Boston Consulting Group’s list, which relies heavily on surveys asking senior executives to name the companies they perceive to be innovative, risks succumbing to the halo effect, where generally successful companies are assumed to be good at everything. Forbes’s mathematical approach, which calculates an “Innovation Premium” baked into a stock price, suffers from market capriciousness. Editorially driven approaches at MIT Technology Review and Fast Company can trip over hype (recall that Fast Company in 2009 named “Team Obama” its most innovative company).
Nonetheless, if you want a fun ice-breaker at your next team meeting, ask people to guess which three companies made the top 50 on all four lists (only 29 appear on two or more, and only seven on three or more). You can even give a hint – two of the three are technology companies from America’s West Coast, and one is from Asia.
If your groups are anything like the ones I’ve been with over the past few months, they will assume that the American companies are some combination of Google, Apple, and Amazon. Apple is perhaps the surprising odd man out of that troika, ranking 79th in Forbes’s list (owing primarily to a somewhat bumpy stock price, which to Forbes indicates a lack of investor confidence in its ability to innovate in the future).
But if you’d guessed Google and Amazon, you’d be right. Both trace their success to bringing new business models to their respective markets. While Google is commonly viewed as a search company, what made Google Google was figuring out how to parlay its search technology into a highly disruptive advertising model by which companies bid to tie their advertising to specific search terms. In recent years the company’s free, flexible Android operating system – a stark contrast to the closed, proprietary systems that historically dominated the industry – has helped new competitors like Samsung from South Korea and ZTE from China offer devices that are both low priced and highly functional.
Amazon continues to be the world’s best example of a serial business model innovator. Its core e-retailing model, with its hyper-efficient supply chain, turned the retail world on its head. It has subsequently launched three successive disruptive business models. Its Prime subscription model now provides close to $1 billion of revenues. With its Kindle e-reading platform, the company happily sells low-cost devices and makes money on content. And through its Amazon Web Services business, it has built a multibillion-dollar business by turning its internal technology prowess into a powerful cloud-computing service.
So, which is the third company on all four lists? Some guess Korea’s Samsung (on three); Japan’s Toyota (on two); China’s Huawei (interestingly, on none of the lists); or Alibaba (also, shockingly, on not a single list). The home country of the latter pair is right, but the right company is Tencent.
Tencent remains relatively unknown outside Asia, but that won’t last long if its torrid growth continues. Its core offerings — its QQ instant-messaging service and WeChat mobile messaging service — aren’t particularly interesting. But it follows a business model that is distinct from most of its competitors. Instead of seeking to build wide audiences and parlaying them into advertising revenue, the company has built a multibillion-dollar business out of micro-transactions, such as charging consumers to upgrade the look of the avatar that appears on their chat service. Hundreds of millions of small transactions add up, powering the company’s explosive growth. A chart in a recent Economist article says it all. Global Internet giants like Baidu, Google, and Facebook each draw at least 80% of their revenues from advertising. Tencent has flipped the model, earning about 80% of its $7 billion annual revenues from value-added services.
Innosight research shows that business model innovation is the ticket to explosive growth. In Seizing the White Space, my colleague Mark Johnson noted that more than half of the relatively recent companies that made it onto the Fortune 500 before their 25th birthday—including Amazon, Starbucks, and AutoNation—were business model innovators. It’s easy to get captivated by shiny technology or compelling marketing, but if you really want to identify tomorrow’s giants, pay the most attention to innovators that have figured out how to create, capture, or deliver value in unique ways. History shows they are the best bets for long-term success.


Scott Anthony is the managing partner of the innovation and growth consulting firm Innosight. His most recent books are The Little Black Book of Innovation and the new HBR Single, Building a Growth Factory. Follow him on Twitter at @ScottDAnthony.

Sunday, December 29, 2013

Big Data and the Role of Intuition - Harvard Business Review

http://blogs.hbr.org/2013/12/big-data-and-the-role-of-intuition/?utm_source=Socialflow&utm_medium=Tweet&utm_campaign=Socialflow


Big Data and the Role of Intuition



Many people have asked me over the years about whether intuition has a role in the analytics and data-driven organization. I have always reassured them that there are plenty of places where intuition is still relevant. For example, a hypothesis is an intuition about what’s going on in the data you have about the world. The difference with analytics, of course, is that you don’t stop with the intuition — you test the hypothesis to learn whether your intuition is correct.
Another place where intuition is found in analytical companies is in the choice of the business area where analytical initiatives are undertaken. Few companies undertake a rigorous analytical study of what areas need analytics the most! The choice of a target domain is typically based on the gut feelings of executives. For example, at Caesars Entertainment — an early and continuing user of analytics in its business — the initial focus was on analytics for customer loyalty and service. CEO Gary Loveman noted that he knew that Caesars (then Harrah’s) had low levels of customer loyalty across its nationwide network of casinos. He had also done work while at Harvard Business School on the “service profit chain” — a theory that companies that improve customer service can improve financial results. While the theory had been applied and tested in several industries, it hadn’t been applied to gaming firms. But Loveman’s intuition about the value of loyalty and service was enough to propel years of analytics projects in those areas.
Of course, as with hypotheses, it’s important to confirm that your intuitions about where to apply analytics are actually valid. Loveman insists on an ROI from each analytics project at Caesars. Intuition plays an important role at the early stages of analytics strategy, however. In short, intuition’s role may be more limited in a highly analytical company, but it’s hardly extinct.
But how about with big data? Surely intuition isn’t particularly useful when there are massive amounts of data available for analysis. The companies in the online business that were early adopters of big data — Google, Facebook, LinkedIn, and so forth — had so much clickstream data available that no one needed hunches any more, correct?
Well, no, as it turns out. Major big data projects to create new products and services are often driven by intuition as well. Google’s self-driving car, for example, is described by its leaders as a big data project. Sebastian Thrun, a Google Fellow and Stanford professor, leads the project. He had an intuition that self-driving cars were possible well before all the necessary data, maps, and infrastructure were available. Motivated in part by the death of a friend in a traffic accident, he said in an interview that he formed a team at Stanford to address the problem without knowing what he was doing.
At LinkedIn, one of the company’s most successful data products, the People You May Know (PYMK) feature, was developed by Jonathan Goldman (now at Intuit) based on an intuition that people would be interested in what their former classmates and colleagues are up to. As he put it in an interview with me, he was “playing with ideas about how to help people build their networks.” That certainly sounds like an intuitive process.
Pete Skomoroch, who became Principal Data Scientist at LinkedIn a few years after PYMK was developed, believes that creativity and intuition are critical to the successful development of data products. He told me in an interview this week that companies with the courage to get behind the intuition of data scientists — without a lot of evidence yet that their ideas will be successful — are the ones that will develop successful data products. As with traditional analytics, Skomoroch notes that you have to eventually test your creativity with data and analysis. But he says that it may take several years before you know if an idea will really pay off.
So whether you’re talking about big data or conventional analytics, intuition has an important role to play. One might even say that developing the right mix of intuition and data-driven analysis is the ultimate key to success with this movement. Neither an all-intuition nor an all-analytics approach will get you to the promised land.

Independent Report: Pivotal Data Dispatch Reached Payback in 3 Months with 377% Annual ROI at NYSE Euronext


This week, Nucleus Research compiled an independent report profiling how NYSE Euronext solved the challenge of big data in their organization, with big returns, using Pivotal technology that we have now released to the market as Pivotal Data Dispatch. By all accounts, NYSE Euronext was caught between the proverbial rock and a hard place with their data requirements. With their stock trades representing one third of the world’s equities volume, and federal regulations requiring them to keep 7 years of history, by 2006 their data volume was staggering. The cost of maintaining this data, and the penalties the business was paying in data latency, were unacceptable.
However, instead of following the industry norm and continuing to invest in monolithic data stores with fixed data ceilings, NYSE Euronext read the future and decided to embrace modern big data strategies early, resulting in a scalable and affordable data solution. As a result, they are now recognized as an early leader in the big data market, achieving payback for their efforts in just 3 months and enjoying a staggering 377% annual ROI.

Background

To give an idea of the scope of the problem: by 2007, when NYSE Euronext started to look at viable solutions to their growing challenge, even the lower-cost alternative of Massively Parallel Processing (MPP) still carried a market price of about $22,000 a terabyte. At the time, they were generating an average of ¾ of a terabyte a day. Knowing that data volumes were only going to grow, and that 7 years had to be kept on hand, storing the data alone was a crippling prospect.
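A rough back-of-envelope calculation with the figures above shows just how crippling. The sketch below is illustrative only: it assumes a flat ¾ TB per day (no growth) and the quoted $22,000-per-terabyte MPP price, so it is a conservative lower bound rather than the report's own math.

```python
# Back-of-envelope on NYSE Euronext's 2007 storage problem, using the
# figures quoted above. Growth is ignored (flat 0.75 TB/day), so this is
# a conservative lower bound, not the Nucleus Research report's own math.

DAILY_TB = 0.75          # average data generated per day
RETENTION_YEARS = 7      # regulatory retention requirement
COST_PER_TB = 22_000     # approximate 2007 MPP market price per terabyte, USD

retained_tb = DAILY_TB * 365 * RETENTION_YEARS
storage_cost = retained_tb * COST_PER_TB

print(f"Data retained after {RETENTION_YEARS} years: {retained_tb:,.0f} TB")
print(f"Storage cost at ${COST_PER_TB:,}/TB: ${storage_cost:,.0f}")
# -> roughly 1,900 TB and over $40 million, before any growth in daily volume
```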
Actually using the data was also problematic. Typical queries had to be performed by experienced DBAs so as not to upset the infrastructure; the average query took 7 hours and frequently required additional filters and follow-up queries. NYSE Euronext executives frequently waited up to 3 weeks to receive the results of their queries. With data neither ubiquitous nor available in even near real-time, the NYSE knew it was missing opportunities, and big ones as it turns out.

The Solution

Their first attempt to solve their data challenges started in 2007. NYSE Euronext set out to develop a solution built on our Greenplum technology (inherited from EMC in our spin-out) and IBM Netezza. The idea was to build a Data Lake, where active and historical data could be self-provisioned on-demand by the data analysts. Separate from real-time operations, data analysts were free to use the data at will.
Data volumes continued to grow, and by 2010 NYSE Euronext was collecting 2 TB a day. While MPP processing was becoming cheaper, the growth rate was eating up any further savings. Their estimated costs were approaching $4.5 million a year just to store data. However, since the data could be federated across commodity hardware, this solution was still estimated at 1/40th the cost of traditional data storage in an analytics environment, and it provided broader access to analysts, something that delivered enormous value to daily operations.
With that in mind, NYSE Euronext took a leap that many organizations to date have not: they went big on big data. With help from vendors like Pivotal, they built a system, now publicly available as Pivotal Data Dispatch, that should be treated as a model for the enterprise.

Key Benefits

For full disclosure on the economics, please read the full report by Nucleus Research. However, a few notable highlights from the report speak volumes about this aggressive approach to big data, and about why big data leaders like NYSE Euronext are showing the market that investing in strategies to make data ubiquitous and available in real time really pays off:
  • Power user productivity. With IT eliminated from the active process of harvesting data, business users are not only in control, they are empowered to use data daily to fully understand their markets. With the data available, they tend to use it more and improve business decisions.
  • Increased productivity. With the back and forth between IT and the business eliminated from every data request, both the business and IT can focus on their own areas of expertise. Data requests are fulfilled more quickly, with the person who knows what they are actually after, and what the data means, in the driver's seat. IT also manages to serve the business more effectively while providing less direct help. This is a win-win for everyone.
  • Reduced IT labor costs. The Pivotal Data Dispatch tool services about 2000 data requests each month. Historically, each request took a DBA about 1 hour, so approximately 2000 hours, or over 83 man-days, of DBA work can be refocused into other areas of their massive data infrastructure (a quick check of that arithmetic follows this list).
  • Reduced decision latency. With the data request cycle compressed from 3 weeks to hours, the nearly 250 data analysts at NYSE Euronext are by default working with fresher data. This reduces decision latency, allowing them to use near real-time data to make important inferences and prove empirically what their markets need.
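Since the labor-savings bullet compresses a simple calculation, here is the arithmetic spelled out. Note that the report's "man-days" figure implies 24-hour days; an 8-hour workday convention would put the number closer to 250 workdays per month. The sketch assumes only the figures quoted above.

```python
# Quick check of the DBA labor figure quoted above. The report's "man-days"
# number (about 83) corresponds to 24-hour days; eight-hour workdays would
# put it closer to 250 per month.

REQUESTS_PER_MONTH = 2_000   # data requests served by Pivotal Data Dispatch
HOURS_PER_REQUEST = 1        # historical DBA effort per request

dba_hours = REQUESTS_PER_MONTH * HOURS_PER_REQUEST
print(f"DBA hours freed per month: {dba_hours:,}")
print(f"= {dba_hours / 24:.0f} round-the-clock days, or {dba_hours / 8:.0f} eight-hour workdays")
```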

More on NYSE Euronext and Pivotal Data Dispatch

See more at: http://blog.gopivotal.com/case-studies-2/independent-report-pivotal-data-dispatch-reached-payback-in-3-months-with-377-annual-roi-at-nyse-euronext

Monday, September 9, 2013

Data is Worthless if You Don't Communicate It


There is a pressing need for more businesspeople who can think quantitatively and make decisions based on data and analysis, and those who can will become increasingly valuable. According to a McKinsey Global Institute report on big data, we'll need over 1.5 million more data-savvy managers to take advantage of all the data we generate.
But to borrow a phrase from Professor Xiao-Li Meng — formerly the Chair of the Statistics Department at Harvard and now Dean of the Graduate School of Arts and Sciences — you don't need to become a winemaker to become a wine connoisseur. Managers do not need to become quant jocks. But to fill the alarming need highlighted in the McKinsey report, most do need to become better consumers of data, with a better appreciation of quantitative analysis and — just as important — an ability to communicate what the numbers mean.
Too many managers are, with the help of their analyst colleagues, simply compiling vast databases of information that never see the light of day, or that only get disseminated in auto-generated business intelligence reports. As a manager, it's not your job to crunch the numbers; but — as Jinho Kim and I discuss in more detail in Keeping Up with the Quants — it is your job to communicate them. Never make the mistake of assuming that the results will "speak for themselves."
Consider the cautionary tale of Gregor Mendel. Although he discovered the concept of genetic inheritance, his ideas were not adopted during his lifetime because he only published his findings in an obscure Moravian scientific journal, a few reprints of which he mailed to leading scientists. It's said that Darwin, to whom Mendel sent a reprint of his findings, never even cut the pages to read the geneticist's work. Although he carried out his groundbreaking experiments between 1856 and 1863 — eight years of painstaking research — their significance was not recognized until the turn of the 20th century, long after his death. The lesson: if you're going to spend the better part of a decade on a research project, also put some time and effort into disseminating your results.
One person who has done this very well is Dr. John Gottman, the well-known marriage scientist at the University of Washington. Gottman, working with a statistical colleague, developed a "marriage equation" predicting how likely a marriage is to last over the long term. The equation is based on a couple's ratio of positive to negative interactions during a fifteen-minute conversation on a "difficult" topic such as money or in-laws. Pairs who showed affection, humor, or happiness while talking about contentious topics were given a maximum number of points, while those who displayed belligerence or contempt received the minimum. Observing several hundred couples, Gottman and his team were able to score couples' interactions and identify the patterns that predict divorce or a happy marriage.
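The scoring idea is simple enough to sketch. The toy code below only illustrates the positive-to-negative ratio described above; the categories, sample conversation, and threshold are my own assumptions, not Gottman's actual equation or parameters.

```python
# Toy illustration of the positive-to-negative ratio idea -- NOT Gottman's
# actual model. Categories and the threshold are illustrative assumptions.

POSITIVE = {"affection", "humor", "happiness", "interest"}
NEGATIVE = {"belligerence", "contempt", "criticism", "defensiveness"}

def positivity_ratio(coded_interactions):
    """Ratio of positive to negative interactions in a coded conversation."""
    pos = sum(1 for i in coded_interactions if i in POSITIVE)
    neg = sum(1 for i in coded_interactions if i in NEGATIVE)
    return pos / neg if neg else float("inf")

conversation = ["humor", "contempt", "affection", "affection",
                "happiness", "belligerence", "interest", "humor"]
ratio = positivity_ratio(conversation)

THRESHOLD = 5.0  # illustrative cutoff; Gottman's findings are often summarized as roughly 5:1
print(f"positive:negative ratio = {ratio:.1f}")  # 3.0 for this toy sample
print("above" if ratio >= THRESHOLD else "below", "the illustrative threshold")
```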
This was great work in itself, but Gottman didn't stop there. He and his wife Julie founded a non-profit research institute and a for-profit organization to apply the results through books, DVDs, workshops, and therapist training. They've influenced exponentially more marriages through these outlets than they could possibly ever have done in their own clinic — or if they'd just issued a press release with their findings.
Similarly, at Intuit, George Roumeliotis heads a data science group that analyzes and creates product features based on the vast amount of online data that Intuit collects. For his projects, he recommends a simple framework for communicating about each analysis:
  1. My understanding of the business problem
  2. How I will measure the business impact
  3. What data is available
  4. The initial solution hypothesis
  5. The solution
  6. The business impact of the solution
Note what's not here: details on statistical methods used, regression coefficients, or logarithmic transformations. Most audiences neither understand nor appreciate those details; they care about results and implications. It may be useful to make such information available in an appendix to a report or presentation, but don't let it get in the way of telling a good story with your data — starting with what your audience really needs to know.
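Roumeliotis's framework is also easy to turn into a lightweight, reusable template for writing up an analysis. The sketch below is one possible rendering; the field names and the filled-in example are mine, not Intuit's actual format.

```python
# One possible way to turn the six-point framework above into a reusable
# template. Field names and example values are illustrative, not Intuit's.

from dataclasses import dataclass, fields

@dataclass
class AnalysisBrief:
    business_problem: str      # 1. my understanding of the business problem
    impact_measure: str        # 2. how I will measure the business impact
    available_data: str        # 3. what data is available
    initial_hypothesis: str    # 4. the initial solution hypothesis
    solution: str              # 5. the solution
    business_impact: str       # 6. the business impact of the solution

    def report(self) -> str:
        return "\n".join(f"{f.name}: {getattr(self, f.name)}" for f in fields(self))

example = AnalysisBrief(
    business_problem="New users abandon product setup before finishing",
    impact_measure="Setup completion rate and 30-day retention",
    available_data="Clickstream events and support-ticket logs",
    initial_hypothesis="A long first-run form drives abandonment",
    solution="Split the form into three short steps",
    business_impact="Completion rate improved; retention impact still being measured",
)
print(example.report())
```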


More blog posts by Tom Davenport

Are You Data Driven? Take a Hard Look in the Mirror.



The term "data driven" is penetrating the lexicon ever more deeply these days. Data Driven was the title of my latest book, and recent academic work shows that companies that regard themselves as "data driven" are measurably more profitable than those that aren't. So becoming data-driven is clearly a worthwhile endeavor. Yet for all the attention, I've yet to see any clear criteria by which leaders can benchmark themselves and their organizations to figure out what they need to do better.
In my view, the essence of "data-driven" is making better decisions up and down the organization chart. Over the years I've had the good fortune to work with plenty of individual decision-makers and groups, some terrific and some simply awful. From that work, I've distilled twelve "traits of the data-driven." (It bears mention that the data-driven also avoid some self-destructive traits; I'll take these up in another post.)
Traits of the Data-Driven
The data-driven:
  • Make decisions at the lowest possible level.
  • Bring as much diverse data to any situation as they possibly can.
  • Use data to develop a deeper understanding of their worlds.
  • Develop an appreciation for variation.
  • Deal reasonably well with uncertainty.
  • Integrate their understanding of data and its implications with their intuitions.
  • Recognize the importance of high-quality data and invest to improve it.
  • Are good experimenters and researchers.
  • Recognize that decision criteria can vary with circumstances.
  • Recognize that making a decision is only step one.
  • Work hard to learn new skills and bring new data and new data technologies (big data, predictive analytics, metadata management, etc.) into their organizations.
  • Learn from their mistakes.
All of these traits are important. And most are self-evident. Only a few require further explanation. First, data-driven companies work to drive decision-making to the lowest possible level. One executive I spoke to described how he thought about it this way: "My goal is to make six decisions a year. Of course that means I have to pick the six most important things to decide on and that I make sure those who report to me have the data, and the confidence, they need to make the others." Pushing decision-making down frees up senior time for the most important decisions. And, just as importantly, lower-level people spend more time and take greater care when a decision falls to them. It builds the right kinds of organizational capability and, quite frankly, appears to create a work environment that is more fun.
Second, the data-driven have an innate sense that variation dominates. Even the simplest process, human response, or most-controlled situation varies. While they may not use control charts, they know that they have to understand that variation if they are going to understand what is going on. One middle manager expressed it to me this way, "When I took my first management job, I agonized over results every week. Some weeks we were up slightly, others down. I tried to take credit for small upturns and agonized over downturns. My boss kept telling me to stop — I was almost certainly making matters worse. It took a long time for me to learn that things bounce around. But finally I did."
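That manager's lesson about results that "bounce around" is exactly what a basic control chart formalizes. Here is a minimal sketch of the idea, with synthetic numbers and the simplest 3-sigma convention; real control charts come in many flavors, and none of this is from the original post.

```python
# Minimal control-chart-style check: establish limits from a baseline of
# historical weekly results, then flag only new values outside mean +/- 3
# standard deviations. Data is synthetic; 3-sigma is the simplest convention.

import statistics

baseline = [102, 98, 105, 97, 101, 99, 103, 96, 104, 100, 98, 101]
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
upper, lower = mean + 3 * sd, mean - 3 * sd

# Small wiggles are routine variation; the last value is a genuine shift.
for value in [99, 104, 97, 131]:
    flag = "investigate" if not (lower <= value <= upper) else "routine variation"
    print(f"result {value}: {flag}")
```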
Third, the data-driven place high demands on their data and data sources. They know that their decisions are no better than the data on which they are based, so they invest in quality data and cultivate data sources they can trust. As a result, when a time-sensitive issue comes up they are prepared. High-quality data makes it easier to understand variation and reduces uncertainty. Success is measured in execution, and high-quality data makes it easier for others to follow the decision-maker's logic and align to the decision.

Further, as one executes, one acquires more data. So the data-driven are constantly re-evaluating, refining their decisions along the way. They are quicker than others to pull the plug when the evidence suggests that a decision is wrong. To be clear, it doesn't appear that the data-driven "turn on a dime"; they know that is not sustainable. Rather, they learn as they go.
Now take that hard look in the mirror. Look at the list above and give yourself a point for every trait you follow regularly and half a point for those you follow most — but not all — of the time. Be hard on yourself. If you can only cite an instance or two, don't give yourself any credit.
Unless you're one of the rare few who truly score seven or more, you need to improve. While each person and organization is different, I'd recommend starting by pushing decision-making down the organization. I've already noted the benefits. It may be tough and counterintuitive, especially for managers who want to feel in control, but it's worth the effort.
Second, invest in quality data. Frankly, you simply cannot be data-driven (or do anything consistently well, for that matter) without a high level of trust in your data and data sources. You're reduced to your intuition alone, the antithesis of the goal here. Quality data is a necessity.
Now, take one more step. You've taken a hard look at yourself. Engage your management team in doing exactly the same thing for your organization.

More blog posts by Thomas C. Redman

To Go from Big Data to Big Insight, Start with a Visual




Although data visualization has produced some of the most captivating artistic displays in recent memory, some of which have found their way into exhibits at New York's Museum of Modern Art and countless art installations around the world, business leaders are asking: is data visualization actionable?
I think so. In my role as the Scholar-in-Residence at The New York Times R&D Lab, I am collaborating with one of the world's most advanced digital R&D teams to figure out how we can draw actionable insights from big data.
How big? Massive: We are documenting every tweet, retweet, and click on every shortened URL from Twitter and Facebook that points back to New York Times content, and then combining that with the browsing logs of what those users do when they land at the Times. This project is a relative of the widely noted Cascade project. Think of it as Cascade 2.0.
We're doing this to understand and predict when an online cascade or conversation will result in a tidal wave of content consumption on the Times, and also when it won't. More importantly, we are interested in how the word-of-mouth conversation drives readership, subscriptions, and ad revenue; how the Times can improve its own participation in the conversation to drive engagement; how we can identify truly influential readers who themselves drive engagement; and how the Times can then engage these influential users in a way that complements the users' own needs and interests. Do it, and we can turn that statistical analysis, as you'll see below, into elegant, artistic real-time data streams.
Handling the streams, archiving the sessions and storing and manipulating the information are in themselves herculean tasks. But the even bigger challenge is transforming beautiful, big data into actionable, meaningful, decision-relevant knowledge. We've found that visualization is one of the most important guideposts in this search for knowledge, essential to understanding where we should look and what we should look for in our statistical analysis.
For example, here are three visualizations that have helped us gain knowledge. They show cascades of tweets and retweets about three different Times articles as lines and dots over time, combined with the click-through volume on each article synced in time and displayed as a black graph under each cascade. Each panel tells a different story about engagement with the content.
[Figure: aral-viz1.png]
For the first article, there is a sizable Twitter conversation and several large spikes in traffic. But the click-through volume seems independent of the Twitter conversation: The largest spike in traffic, highlighted in blue, occurs when there is very little Twitter activity. In this case, a prominent link on a blog or a news story that referred to the story, rather than the Twitter conversation itself, is probably driving the traffic.
[Figure: aral-viz2.png]
On the second article, the Twitter conversation is intense. There are many tweets and retweets of the article, yet the article itself gets very little traffic. People are talking about the article on Twitter, but not reading it. This sometimes happens when the main message of an article sparks a debate or conversation in which the content of the article itself isn't that important: for example, when a timely piece of news contains little analysis or editorial content, or when the debate gets away from the article and develops its own independent content.
[Figure: aral-viz3.png]
In the third and final article, an intense Twitter conversation moves in lockstep with engagement. As people tweet and retweet the article, their followers are clicking through and engaging with the content itself. This tight relationship between the online conversation and the website traffic is most pronounced when the three "influencers" tagged in the figure inspire the two largest spikes in traffic over the engagement lifecycle of the article.
With just these three data visualizations, we've gained insight into important nuances of so-called virality. The relationship between online word-of-mouth conversations and engagement isn't as simple as something just "going viral." Different patterns emerge with different types of content.
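For readers curious how a panel like this is put together, here is a minimal sketch of the same structure: tweet and retweet events plotted over time, with click-through volume as a filled curve underneath. The data is synthetic and the tooling (numpy and matplotlib) is my assumption; the real project joins Twitter and Facebook referral events to the Times's browsing logs.

```python
# Minimal sketch of the panel structure described above. All data here is
# synthetic; numpy/matplotlib are assumed tools, not the R&D Lab's stack.

import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(0, 48)

# Tweet/retweet timestamps (hours since publication) and a click-through curve.
tweet_times = np.sort(rng.uniform(0, 48, size=120))
clicks = 200 * np.exp(-((hours - 20) ** 2) / 40) + rng.poisson(10, size=hours.size)

fig, (ax_tweets, ax_clicks) = plt.subplots(2, 1, sharex=True, figsize=(8, 4))

ax_tweets.eventplot(tweet_times, colors="tab:blue", lineoffsets=0.5, linelengths=0.8)
ax_tweets.set_ylabel("tweets")
ax_tweets.set_yticks([])

ax_clicks.fill_between(hours, clicks, color="black")
ax_clicks.set_ylabel("click-throughs")
ax_clicks.set_xlabel("hours since publication")

plt.tight_layout()
plt.show()
```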
Still, the visuals cannot tell the whole story. We see some clear correlations here, but complex conditional dependencies and temporal and network autocorrelation make it necessary to build more sophisticated causal statistical models that will generate true, reliable insights about word-of-mouth influence.
What these visuals do help with is knowing where to look and what questions to ask of the data. That is, we can't build the more complex models until we know the most suitable places for building them. These visuals give us some of that insight.
Cascade 2.0 will be built on sophisticated analytics, and it will require data visualization. Asking important questions and avoiding unnecessary ones is essential to moving forward effectively and efficiently with big data. Without visualization, we are much less efficient in getting to the questions whose answers teach us something. That's why visualizing data must be one of the most important tools for data scientists. It is our torch in a thick, dark forest.

http://blogs.hbr.org/cs/2013/08/visualizing_how_online_word-of.html

NSA Code Cracking Puts Google, Yahoo Security Under Fire





Disclosures that the U.S. National Security Agency can crack codes protecting the online traffic of the world’s largest Internet companies will inflict more damage than earlier reports of complicity in government spying, according to technology and intelligence specialists.

The agency has fulfilled a decades-long quest to break the encryption of e-mail, online purchases, electronic medical records and other Web activities, the New York Times, the U.K.’s Guardian and ProPublica reported yesterday. The NSA also has been given access to -- or found ways to enter -- databases of major U.S. Internet companies operating the most popular e-mail and social-media platforms, the news organizations reported.
The reports, based on documents from former intelligence contractor Edward Snowden, emerged amid an expanding debate over whether NSA surveillance activities undermine civil liberties. The revelations raise fresh questions about the security of data held by companies including Google Inc. (GOOG), Facebook Inc. (FB) and Microsoft Corp. (MSFT) just as more commerce shifts online.
“This is a fundamental attack on how the Internet works,” Joseph Lorenzo Hall, senior staff technologist at the Washington-based policy group Center for Democracy & Technology, said in an interview. “Secure communications technologies are the backbone of e-commerce” including the transfer of medical records and financial exchanges.
“People in business will either not engage in those activities, or find other ways,” Hall said.

Snowden Revelations

The reports in the Guardian, the Times and the non-profit ProPublica news website said that NSA spends more than $250 million a year on a program working with technology companies to “covertly influence” product designs. The reports didn’t name the companies cooperating with the NSA and didn’t describe the extent to which the agency was using its code-breaking capability on the Internet.
The classified documents are the latest that Snowden has exposed revealing previously secret NSA programs. The 30-year-old former employee of government contractor Booz Allen Hamilton Holding Corp. (BAH) faces espionage charges in the U.S. and is in Russia under temporary asylum.
President Barack Obama’s administration has been coping with increasing public backlash over U.S. spying activities since top-secret documents leaked by Snowden began emerging in June. Foreign governments, including Brazil and Germany, have objected to U.S. surveillance and spying operations.

Obama Remarks

Brazilian authorities canceled a trip to Washington this week to prepare for President Dilma Rousseff’s state visit in October, in protest of allegations that the U.S. spied on communications between officials in Latin America’s largest economy.
Obama told reporters at a news conference today in St. Petersburg, Russia, that “what we do is similar to what countries around the world do with their intelligence services.” He said that he had met with Rousseff and Mexican President Enrique Pena Nieto during the G-20 summit to “discuss the allegations made in the press about NSA.”
The U.S. president also said that the nation should review the spy programs to determine if they should continue. “The nature of technology and the legitimate concerns around privacy and civil liberties means that it’s important for us, on the front end, to say, all right, are we actually going to get useful information here,” he said. “And if not, or how useful is it, if it’s not that important, should we be more constrained in how we use certain technical capabilities.”

Lost Business

U.S. companies that are “household names” gave the NSA access to all communications, said Cedric Leighton, a former Air Force intelligence officer and a former NSA training director. Companies gave easy access to NSA because their managers believed it was necessary and they trusted that the government agency wouldn’t do anything wrong, Leighton said.
“But this takes the cake,” he said. “This has done a lot of damage to our ability to collect intelligence.”
Even before the latest reports, U.S. technology companies offering network infrastructure services such as cloud computing and popular social-networking applications were facing the prospect of losing business overseas.
Industry groups sounded the alarm at the revelations. “This is a tragic case of myopia on the part of the NSA, and the surveillance infrastructure throughout the government,” said Ed Black, president of the Computer & Communications Industry Association, a Washington trade group, in a statement today. “By secretly embedding weaknesses into encryption systems in order to create a ‘back door’ for surveillance access, the NSA creates a road map for similar cyber-incursions by others with less noble intentions.”

‘Hugely Disappointing’

Companies offering cloud services -- in which businesses pay a third party to provide databases, storage and computing power -- may lose as much as $35 billion by 2016 as foreign companies avoid U.S. solutions because of the fear the NSA may have access to the data, according to a study released last month by the Information Technology & Innovation Foundation.
“This is a hugely disappointing revelation,” Daniel Castro, author of the Washington-based group’s study, said in an e-mail. “This most recent news will certainly contribute to the perception that U.S. Internet companies cannot be trusted.”
Michael Birmingham, a spokesman for the Office of the Director of National Intelligence, which oversees U.S. intelligence agencies, declined to comment on the reports.
“Anything that yesterday’s disclosures add to the ongoing public debate is outweighed by the road map they give to our adversaries about the specific techniques we are using to try to intercept their communications in our attempts to keep America and our allies safe,” according to a statement posted to the national intelligence office’s website today.

More Legal Protection

Obama and officials from intelligence agencies have defended the NSA’s surveillance programs as essential to thwarting possible terrorist attacks. U.S. officials have told lawmakers that the programs are legal and subject to oversight by a federal court and members of Congress.
Sixty-six percent of U.S. Internet users polled believe current laws aren’t good enough to protect people’s privacy online, according to a survey released yesterday by the Pew Research Center. That compared with 24 percent who believe current laws provide reasonable protections, Pew said. The July 11-14 telephone survey of 792 Internet users has a margin of error of plus or minus 3.8 percentage points.

Obama Measures

Amid increasing public unease over the surveillance programs, Obama said Aug. 9 he would ask Congress to change the section of the Patriot Act allowing collection of telephone records, to increase oversight and transparency.
The president also said he’ll propose a legal advocate to serve as an adversary when spy agencies make requests in the secret sessions of the Foreign Intelligence Surveillance Court, which vets requests for electronic eavesdropping. Last week, he met for the first time with a panel he requested to review U.S. surveillance initiatives.
Leslie Miller, spokeswoman for Google, said in an e-mail that the company doesn’t “provide any government, including the U.S. government,” access to its systems.
“As for recent reports that the U.S. government has found ways to circumvent our security systems, we have no evidence of any such thing ever occurring,” Miller said. “We provide user data to governments only in accordance with the law.”

‘Legally Obligated’

Microsoft provides the U.S. government information when “legally obligated to comply with demands,” according to a July 15 blog post by Brad Smith, general counsel for the Redmond, Washington-based company. “To be clear, we do not provide any government with the ability to break the encryption, nor do we provide the government with the encryption keys.”
Smith’s comments apply to yesterday’s reports, said Dominic Carr, a spokesman for Microsoft.
“We are unaware of and do not participate in such an effort, and if it exists, it offers substantial potential for abuse,” Suzanne Philion, spokeswoman for Sunnyvale, California-based Yahoo! Inc. (YHOO), said in an e-mail today. “Yahoo zealously defends our users’ privacy and responds to government requests for data only after considering every applicable objection and in accordance with the law.”
Facebook’s spokeswoman Sarah Feinberg didn’t respond to requests for comment.
Google, Microsoft, Apple Inc. (AAPL) and 19 other technology companies sent a letter in July to Obama and congressional leaders urging that the companies be allowed to report statistics concerning requests for user data received from intelligence agencies.
To contact the reporter on this story: Allan Holmes in Washington at aholmes25@bloomberg.net
To contact the editor responsible for this story: Bernard Kohn at bkohn2@bloomberg.net