Cryptojacking Found in Critical Infrastructure Systems Raises Alarms

The rise of cryptojacking—which co-opts your PC or mobile device to illicitly mine cryptocurrency when you visit an infected site—has fueled mining’s increasing appeal. But as attackers have expanded their tools to slyly commandeer the devices, processing power, and electricity that fuel their mining operations, they’ve moved beyond the browser in potentially dangerous ways.

On Thursday, the critical infrastructure security firm Radiflow announced that it had discovered cryptocurrency mining malware in the operational technology network (which does monitoring and control) of a water utility in Europe—the first known instance of mining malware being used against an industrial control system.

Radiflow is still assessing the full extent of the damage, but says the attack had a “significant impact” on the utility’s systems. The researchers note that the malware was built to run quietly in the background, using as much processing power as it could to mine the cryptocurrency Monero without overwhelming the system and creating obvious problems. The miner was also designed to detect and even disable security scanners and other defense tools that might flag it. Such an attack increases processor and network bandwidth usage, which can cause industrial control applications to hang, pause, and even crash—potentially degrading an operator’s ability to manage a plant.

“I’m aware of the danger of [malware miners] being on industrial control systems though I’ve never seen one in the wild,” says Marco Cardacci, a consultant for the firm RedTeam Security, which specializes in industrial control. “The major concern is that industrial control systems require high processor availability, and any impact to that can cause serious safety concerns.”

Low-Key Mining

Radiflow CEO Ilan Barda says the company had no idea it might discover a malicious miner when it installed intrusion detection products on the utility’s network, particularly on its inner network, which wouldn’t usually be exposed to the internet. “In this case their internal network had some restricted access to the internet for remote monitoring, and all of a sudden we started to see some of the servers communicating with multiple external IP addresses,” Barda says. “I don’t think this was a targeted attack, the attackers were just trying to look for unused processing power that they could use for their benefit.”

Industrial plants may prove an enticing environment for malicious miners. Many don’t use a lot of processing power for baseline operations, but do draw a lot of electricity, making it relatively easy for mining malware to mask both its CPU and power consumption. And the inner networks of industrial control systems are known for running dated, unpatched software, since deploying new operating systems and updates can inadvertently destabilize crucial legacy platforms. These networks generally don’t access the public internet, though, and firewalls, tight access controls, and air gaps often provide additional security.

Security specialists focused on industrial control, like the researchers at Radiflow, warn that the defenses of many systems still fall short, though.

“I for one have seen a lot of poorly configured networks that have claimed to be air gapped but weren’t,” RedTeam Security’s Cardacci says. “I am by no means saying that air gaps don’t exist, but misconfigurations occur often enough. I could definitely see the malware penetrating crucial controllers.”

With so much fallow processing power, hackers looking to mine—often with automated scanning tools—will happily exploit flaws in an industrial control system’s defenses if it means access to the CPUs. Technicians with an inside track may also yield to temptation; reports surfaced on Friday that a group of Russian scientists were recently arrested for allegedly using the supercomputer at a secret Russian research and nuclear warhead facility for Bitcoin mining.

“The cryptocurrency craze is just everywhere,” says Jérôme Segura, lead malware intelligence analyst at the network defense firm Malwarebytes. “It’s really changed the dynamic for a lot of different things. A large amount of the malware we’ve been tracking has recently turned to mining, either as one module or by changing focus completely. Rather than stealing credentials or working as ransomware, it’s doing mining.”

Getting Serious

Though in-browser cryptojacking was a novel development toward the end of 2017, malicious mining malware itself isn’t new. And more and more attacks are cropping up all the time. This weekend, for example, attackers compromised the popular web plugin Browsealoud, allowing them to steal mining power from users on thousands of mainstream websites, including those of the United States federal court system and the United Kingdom’s National Health Service.

Traditional mining attacks look like the Browsealoud incident, targeting individual devices like PCs or smartphones. But as the value of cryptocurrency has ballooned, the sophistication of attacks has grown in kind.

Radiflow’s Barda says that the mining malware infecting the water treatment plant, for instance, was designed to spread internally, moving laterally from the internet-connected remote monitoring server to others that weren’t meant to be exposed. “It just needs to find one weak spot even on a temporary basis and it will find the way to expand,” Barda says.

Observers say it’s too soon to know for sure how widespread cryptojacking will become, especially given the volatility of cryptocurrency values. But they see malicious mining cropping up in critical infrastructure as a troubling sign. While cryptojacking malware isn’t designed to pose an existential threat—in the same way a parasite doesn’t want to kill its host—it still wears on and degrades processors over time. Recklessly aggressive mining malware has even been known to cause physical damage to infected devices like smartphones.

It also seems at least possible that an attacker with goals more sinister than quick financial gain could use mining malware to cause physical destruction to critical infrastructure controllers—a class of rare but burgeoning attacks.

“We’ve seen this technique with ransomware like NotPetya where it’s been used as a decoy for a more dangerous attack,” Segura says. “Mining malware could be used in the same way to look financially motivated, but in fact the goal was to trigger something like the physical damage we saw with Stuxnet. If you run miners at 100 percent you can cause damage.”

Such a calamitous attack remains hypothetical, and might not be practical. But experts urge industrial control plants to consistently audit and improve their security, and ensure that they’ve truly siloed internal networks, so there are no misconfigurations or flaws that attackers can exploit to gain access.

“Many of these systems are not hardened and are not patched with the latest updates. And they must run 24/7, so recovery from crypto-mining, ransomware, and other malware threats is much more problematic in industrial control system networks,” says Jonathan Pollet, the founder of Red Tiger Security, which consults on cybersecurity issues for heavy industrial clients like power plants and natural gas utilities. “I hope this helps create a sense of urgency.”

Cryptojack Attacks

U.S., UK government websites infected with crypto-mining malware: report

(Reuters) – Thousands of websites, including ones run by U.S. and UK government agencies, were infected for several hours on Sunday with code that causes web browsers to secretly mine digital currencies, technology news site The Register reported.

More than 4,200 sites were infected with a malicious version of a widely used tool known as Browsealoud from British software maker Texthelp, which reads out webpages for people with vision problems, according to The Register.

The news comes amid a surge in cyber attacks using software that forces infected computers to mine cryptocurrencies on behalf of hackers. The prevalence of these schemes has increased in recent months as the volume of trading in bitcoin and other cryptocurrencies has surged.

The tainted version of Browsealoud caused software that mines the digital currency Monero to run on computers that visited infected sites, generating money for the hackers behind the attack, The Register said.

Representatives of the U.S. and British law enforcement agencies and Texthelp could not immediately be reached for comment.

Texthelp told The Register that it had shut down the operation by disabling Browsealoud while its engineering team investigated.

Reporting by Jim Finkle in Toronto; Additional reporting by Mark Hosenball in Washington; Editing by Daniel Wallis

Bitcoin and Bug Bounties on the Hill, Apple and Cisco’s Cyber Deal, iPhone Leak

Good morning, Cyber Saturday readers.

On Tuesday, the U.S. Senate convened two hearings on a couple of this newsletter’s favorite topics: cryptocurrencies and bug bounty programs. The day’s testimonies were chock full of fresh insights—and were a welcome diversion, for this author, from the government’s unending budgetary troubles.

The first hearing before the Senate Banking Committee saw Jay Clayton, chair of the Securities and Exchange Commission, and Christopher Giancarlo, chair of the Commodity Futures Trading Commission, dish about virtual money. Amid cratering prices, repeated thefts, and recent banking credit bans, Bitcoin investors had braced themselves for the worst. The regulators, however, struck several positive notes during the session, praising Bitcoin for spurring innovations in digital ledger technology. Giancarlo, for one, promised “a thoughtful and balanced response, and not a dismissive one” to the digital gold rush.

One point to keep an eye on: Clayton warned entrepreneurs against “initial coin offerings,” recent fundraising phenomena that founders have used to raise billions of dollars through the sale of digital tokens. “To the extent that digital assets like ICOs [initial coin offerings] are securities—and I believe every ICO I’ve seen is a security—we have jurisdiction and our federal securities laws apply,” he said. Expect Clayton’s agency to continue to pursue action against projects it deems in violation of securities laws.

The second hearing before the Senate Subcommittee on Consumer Protection invited cybersecurity professionals to the Hill to discuss the historically uneasy relationship between companies and hackers. Some highlights: John Flynn, Uber’s chief information security officer, told the panel that his company “made a misstep” by failing to promptly report a 2016 data breach that recently came to light. Mårten Mickos, CEO of HackerOne, a bug bounty startup, urged legislators to revise laws used to prosecute hackers and to standardize data breach notification requirements at the federal level. And Katie Moussouris, founder of Luta Security, a bug bounty consultancy, pressed companies to adopt clear policies around vulnerability reporting. (HackerOne posted a nice recap of the day’s happenings, which you can read on its blog here.)

Both hearings were highly encouraging. Let’s hope that when the lawmakers reexamine their books, they’ll keep the good sense of these experts in mind.

Have a great weekend.

Robert Hackett

@rhhackett

[email protected]

Welcome to the Cyber Saturday edition of Data Sheet, Fortune’s daily tech newsletter. Fortune reporter Robert Hackett here. You may reach Robert Hackett via Twitter, Cryptocat, Jabber (see OTR fingerprint on my about.me), PGP encrypted email (see public key on my Keybase.io), Wickr, Signal, or however you (securely) prefer. Feedback welcome.

THREATS

Digital defense discount deals. Insurer Allianz will offer discounts on cybersecurity insurance coverage to customers that use Apple devices, like Macs and iPhones, Cisco security products designed to protect against ransomware attacks, and risk evaluations from Aon, the professional services firm. Apple CEO Tim Cook and Cisco CEO Chuck Robbins revealed in June that they were collaborating with insurers on these new policies.

Suspicious spy saga sours. U.S. intelligence agents, lured by the possibility of recovering hacking tools stolen from the NSA, paid a Russian intermediary an installment of $100,000 for the alleged cyber weapons last year. Further negotiations fell through after the Russian source delivered only materials already made public by the “shadow brokers,” a mysterious group that first started leaking the NSA attack code in 2016, and as the source continued to push unverifiable, allegedly compromising materials related to President Donald Trump.

Intern infiltrates iPhone internals. Apple forced the code-sharing website Github to take down a post containing leaked source code for the iPhone’s boot process this week, as Motherboard first reported. Apparently, the code escaped Apple headquarters when a lowly intern absconded with the files and shared them with friends in the “jailbreaking” hacker community.

Banks ban Bitcoin buys. Credit card issuers are forbidding cryptocurrency purchases on credit in an effort to reduce financial and legal risks. Firms that have recently blacklisted Bitcoin sellers include Bank of America, J.P. Morgan Chase, Citigroup, Capital One, Discover, and Lloyds.

Hey, you using that nuclear supercomputer? Mind if I borrow it for something?


ACCESS GRANTED

“If we lived in a dystopian world without trust, Bitcoin might dominate existing payment methods. But in this world, where people do tend to trust financial institutions to handle payments and central banks to maintain the value of money, it seems unlikely that bitcoin could ever be as convenient as existing payment means.”

Antoine Martin, an economist at the Federal Reserve Bank of New York, penned an op-ed that takes a whack at Bitcoin. He said the cryptocurrency could be useful—just not in this universe. But then, that’s what a Fed banker would say…

Meanwhile, Tyler Winklevoss told CNBC that people who fail to see Bitcoin’s potential suffer a “failure of imagination.”

ONE MORE THING

Inside the “smart” home panopticon. If you’re interested in living in a “smart” home—an abode outfitted with hi-tech, Internet-connected gadgetry—you should first understand the extent to which everyday household items will spy on you. This Gizmodo investigation details, in an entertaining firsthand account, the many ways that connected TVs, security cameras, coffee makers, mattress covers, and more mundane objects invade people’s privacy. Add to that the micro-aggravations of dealing with buggy domestic devices and you’ll be left wondering how this stuff ever came to be called “smart.”

Uber board got assurances on diligence ahead of self-driving deal: investor

SAN FRANCISCO (Reuters) – A key Uber investor testified on Thursday that the company’s board received assurances that due diligence had turned up no problems with a self-driving car startup which Uber acquired, differing from testimony by Uber’s former chief executive.

Benchmark venture capitalist Bill Gurley, who has since left Uber’s board, said that before the company acquired a startup founded by a former Waymo engineer in 2016, board members were told that due diligence on the company “had turned up nothing.”

Alphabet Inc’s (GOOGL.O) Waymo sued Uber Technologies Inc [UBER.UL] a year ago, accusing it of theft of self-driving car trade secrets.

Waymo said that one of the company’s former engineers, Anthony Levandowski, downloaded more than 14,000 confidential files containing designs for autonomous vehicles before Uber acquired his startup, Otto.

The trial could influence one of the most important and potentially lucrative races in Silicon Valley – to create fleets of self-driving cars.

Gurley’s recollection differs from that of former Uber CEO Travis Kalanick, who testified on Wednesday that he never read a due diligence report prepared by an outside firm that determined Levandowski did possess Alphabet data.

Kalanick denied telling the board that diligence on Levandowski had come back “clean.”

As part of the Otto acquisition, Uber indemnified Levandowski and his team against any future lawsuits filed by Waymo over trade-secret theft.

In a brief appearance on the witness stand on Thursday, Gurley said he could not say for sure who from Uber management assured the board, but recalled that Kalanick led the majority of the presentation. He called the indemnification agreement “atypical.”

“We as a group made the decision to move forward because the diligence was OK,” Gurley said in court on Thursday. He also said “as far as I know” no trade secrets came from Waymo to Uber.

The trial is expected to continue through next week. The jury will have to decide whether the documents downloaded by Levandowski were indeed trade secrets and not common knowledge, and whether Uber improperly acquired them, used them and benefited from them.

Reporting by Heather Somerville; Writing by Dan Levine; Editing by Bill Rigby

Big Tech should pay more taxes: German coalition

BERLIN (Reuters) – The two political parties seeking to form Germany’s next government want big companies to pay more tax, according to a coalition agreement whose text singled out U.S. tech giants by name.

“We support fair taxation of large companies, in particular Internet concerns like Google, Apple, Facebook and Amazon,” according to a brief passage in their 177-page coalition pact published on Wednesday.

Chancellor Angela Merkel’s conservatives and the Social Democratic Party (SPD) are seeking to revive the ‘grand coalition’ alliance that has governed Germany for the past four years.

SPD leader Martin Schulz, poised to become foreign minister if party members back the coalition deal, has urged the European Union to ensure that Big Tech pays more tax. He also wants to create the post of EU finance minister.

Separately, French Finance Minister Bruno Le Maire told Reuters the EU must lead the way by adopting legislation early next year to ensure that big global tech companies pay billions of euros in taxes in Europe.

Google (GOOGL.O), Apple (AAPL.O), Facebook (FB.O), and Amazon (AMZN.O) are often taxed in Europe on profits booked by subsidiaries in low-tax countries like Ireland or Luxembourg even though their revenues are earned across the bloc.

The European Commission declined to comment on the German coalition agreement, but did say that it was examining “all possible policy options” and would come forward this spring with new rules for digital taxation.

“As for every business, digital giants should pay their fair share of tax in the countries where their profits are earned,” the Commission said in written answers to Reuters questions.

Reporting by Andrea Shalal, Ingrid Melander and Foo Yun Chee; Writing by Douglas Busvine; Editing by Richard Balmforth

Watch Out, Sony and Microsoft: Google Is Developing a Video Game Streaming Service

Google, which has largely sat on the sidelines of the video game industry, seems ready to get in the fight.

The company is working on a new service codenamed Yeti, which would let people play games streamed to them online, potentially eliminating the need for a dedicated console like the PlayStation 4 or a high-end gaming computer.

News of the service first broke via The Information. Gaming industry insiders, who were not authorized to speak on the record, tell Fortune that Google is targeting a holiday 2019 release for Yeti, though the company is currently behind schedule and that date could shift.

Google recently hired Phil Harrison, a long-time gaming industry veteran. Sources indicate he is closely involved with the project. Harrison spent 15 years as the head of Sony’s network of game studios and three years as a senior member of Microsoft’s Xbox team. Since leaving those companies, he has served as an adviser and board member to various gaming companies.

Google declined to discuss the initiative, citing a company policy of not commenting on rumors or speculation.

Some details about Yeti are still fuzzy. It could be a dedicated streaming box or could operate through the company’s Chromecast device. How it will overcome issues of in-game lag is one of the biggest hurdles. But Fortune has learned that several major publishers are working with Google on the project.

Yeti would compete with Sony’s PlayStation Now streaming service, which carries a $19.95 monthly fee (or $100 annual fee). That service, built off of one of the pioneers in game streaming, has not found an especially large audience, in part because of the high price and older catalog of games. Microsoft has previously discussed launching a game streaming service, but has not made any announcements about a new streaming product.

Google has flirted with the game industry before. It almost acquired Twitch in 2014 for $1 billion, but the deal fell apart in the final stages. (Amazon would later acquire that game streaming service.) Since then, Google’s YouTube division has dramatically increased its presence in the video game world, live streaming from E3, the video game industry trade show, and enabling live game streaming.

There’s certainly a big financial incentive for Google in video games. The industry saw revenues of $36 billion in the U.S. alone in 2017. Globally, it generates over $100 billion each year.

Apple says it sees 'strong demand' for replacement iPhone batteries: letter

WASHINGTON (Reuters) – Apple Inc (AAPL.O) is seeing “strong demand” for replacement iPhone batteries and disclosed it is considering offering rebates for consumers who paid full price for new batteries, the company said in a February 2 letter to U.S. lawmakers made public Tuesday.

Apple confirmed in December that software to deal with aging batteries in iPhone 6, iPhone 6s and iPhone SE models could slow down performance. The company apologized and lowered the price of battery replacements for affected models from $79 to $29.

The company said it is considering issuing rebates to consumers who paid full price for replacement batteries.

Reporting by David Shepardson, Editing by Franklin Paul

Amazon faces test of ambition in English Premier League soccer auction

LONDON (Reuters) – Amazon’s sports broadcasting ambitions will be tested this week in a multi-billion pound auction of English Premier League soccer rights, potentially pitting it against Sky and BT.

England’s auction for the rights to screen matches including Manchester United and City, Liverpool and Chelsea is one of the biggest money spinners in world sport, with the last three-year domestic package raising 5.14 billion pounds ($7.25 billion).

Broadcasters who have stumped up for the best packages to win viewers and fend off rivals could now face another threat from one of the big U.S. tech groups entering the fray.

Amazon, the world’s largest retailer, has moved aggressively into TV to bolster its Amazon Prime membership service, which offers free delivery and content for a flat monthly fee, and the new auction appears to have been structured specifically to attract a digital player for at least a small set of games.

That is likely to force Rupert Murdoch’s Sky and Britain’s biggest telecoms group BT to increase their offers, but financial and strategic pressures mean analysts do not think they will match previous 70 percent jumps.

“(There is a) very real threat that Amazon will look to take at least some of the UK and later on, international rights,” Guy Bisson from the media analyst firm Ampere said ahead of the auction which begins on Friday.

Amazon, which in 2015 tapped up the Top Gear presenters Jeremy Clarkson, Richard Hammond and James May to produce a new series called The Grand Tour, has also won the rights to some National Football League games and ATP tennis.

Sky refused to discuss its commercial strategy, while Amazon declined to comment.

WHOLE LOT OF BIDDING

The English auction for the three seasons beginning 2019/20 will make 200 live games available out of the 380 played each season, divided into seven lots, with five packages consisting of 32 games and two packages of 20 games.


One package will include the rights to show a whole round of matches at the same time, an option that could be more attractive to a digital provider than a traditional broadcaster.

In the previous auction Sky, which built its business on the back of the Premier League, picked up 126 games to BT’s 42 and analysts expect they will want to achieve at least a similar outcome this time around.

Both are to some extent limited, however.


Sky, present in 13 million homes, had to cut costs, hike prices and drop other sports to afford the last round of rights.

It also has an uncertain future as it is not clear who will own Sky when the 2019 season begins, with Murdoch’s 21st Century Fox trying to buy the 61 percent it does not own. Sky could then be sold to Disney if a separate sale of Murdoch’s TV and film assets receives the green light.

For BT, investors would be unlikely to welcome a blow-out bid with its shares at five-year lows. It faces other calls on its cash, including investments in ultrafast fiber, pension deficit top-ups and dividend payments.

But the two companies have recently agreed a wholesale deal allowing their customers to watch each other’s channels, easing what had previously been a purely competitive relationship.

“We continue to see the Premier League content as being an important part of BT Sport but it’s only one part of the channel,” BT Chief Executive Gavin Patterson said last week.

“We know what it’s worth to us, and we model that and we bid up to but no further than the value of it, and we always have a plan B if we do not get the content we want.”

($1 = 0.7093 pounds)

Reporting by Kate Holton; editing by Alexander Smith

Jury told by Waymo lawyer Uber 'cheating' on autonomous car secrets

SAN FRANCISCO (Reuters) – A jury in a trade-secrets lawsuit heard its first earful Monday as opening statements began in a bitter legal battle between Waymo and Uber Technologies Inc [UBER.UL] that has captivated Silicon Valley and could help determine who emerges at the forefront of the fast-growing field of autonomous cars.

On the first day of an expected two weeks of testimony before a 10-person jury in San Francisco federal court, a lawyer for Waymo, Charles Verhoeven, said the case “is about two competitors where one competitor decided they needed to win at all costs. Losing was not an option.

“They would do anything they needed to do to win, no matter what. No matter if it meant breaking some rules … in this case whether it meant taking trade secrets from a competitor,” Verhoeven told the jury.

A lawyer for ride-hailing firm Uber will make an opening statement later on Monday.

“We’re bringing this case because Uber is cheating. They took our technology … to win this race at all costs,” Verhoeven added.

Waymo, Alphabet Inc’s (GOOGL.O) self-driving car unit, sued Uber nearly a year ago, sparking a showdown between the two technology companies over allegations by Waymo that one of its former engineers took trade secrets just before quitting and going to work at Uber. The case hinges on whether Uber used apparent trade secrets, a total of eight according to court filings, to advance its autonomous vehicle program. Waymo said engineer Anthony Levandowski downloaded more than 14,000 confidential files in December 2015 containing designs for autonomous vehicles before he went on to lead Uber’s self-driving car unit in 2016.

The jury will have to decide whether these were indeed trade secrets and not common knowledge, and whether Uber improperly acquired them, used them and benefited from them.


Uber has said that while Levandowski downloaded the files, the data never made their way into its own self-driving car designs.

Levandowski, regarded as a visionary in autonomous technology, is not a defendant in the case but is on Waymo’s witness list. Levandowski was fired from Uber in May 2017 because the company said he refused to cooperate with Uber in the Waymo lawsuit and did not hand over information requested of him in the case.

Waymo and Uber are part of a crowded and hotly competitive field of automakers and technology companies aiming to build fleets of self-driving cars that could transform urban transportation systems.

Waymo has estimated damages in the case at about $1.9 billion. Uber rejects the financial damages claim. Still, the lawsuit has hobbled Uber’s self-driving car program, with Uber attorney Bill Carmody telling the court last week that the case “is the biggest in the history of Uber.”

Waymo intends to call Waymo Chief Executive Officer John Krafcik as its first witness, court documents show. Uber has co-founder and former CEO Travis Kalanick at the top of its witness list.

There are a total of 99 potential witnesses between the two companies, according to court documents, including Google co-founders Larry Page and Sergey Brin, Benchmark venture capitalist and Uber investor Bill Gurley and Alphabet executive David Drummond.

The U.S. Department of Justice is conducting a separate criminal investigation into what transpired, according to court filings.

Additional Reporting by Heather Somerville in San Francisco; editing by Grant McCool

The Era of Quantum Computing Is Here. Outlook: Cloudy

After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.


Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

There is now talk of impending “quantum supremacy”: the moment when a quantum computer can carry out a task beyond the means of today’s best classical supercomputers. That might sound absurd when you compare the bare numbers: 50 qubits versus the billions of classical bits in your laptop. But the whole point of quantum computing is that a quantum bit counts for much, much more than a classical bit. Fifty qubits has long been considered the approximate number at which quantum computing becomes capable of calculations that would take an unfeasibly long time classically. Midway through 2017, researchers at Google announced that they hoped to have demonstrated quantum supremacy by the end of the year. (When pressed for an update, a spokesperson recently said that “we hope to announce results as soon as we can, but we’re going through all the detailed work to ensure we have a solid result before we announce.”)

It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.

Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.

Connie Zhou for IBM

Shut Up and Compute

Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits—1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.
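To make that "well-defined probability" concrete, here is a minimal classical simulation in Python. A qubit's state is represented by two amplitudes, and the chance of measuring 0 or 1 is the squared magnitude of each; the numbers here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single qubit's state: two complex amplitudes, one for |0> and one for |1>.
# Measurement probabilities are the squared magnitudes of the amplitudes.
state = np.array([np.sqrt(0.7), np.sqrt(0.3)])  # P(0) = 0.7, P(1) = 0.3

probs = np.abs(state) ** 2          # sums to 1
samples = rng.choice([0, 1], size=10_000, p=probs)

print(probs)            # approximately [0.7, 0.3]
print(samples.mean())   # fraction of 1 outcomes, close to 0.3
```

Repeated measurements of identically prepared qubits reproduce the probabilities, but any single measurement yields only a definite 0 or 1.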

To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states—a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.
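The doubling is easy to see in the bookkeeping: describing the joint state of n entangled qubits classically takes 2^n amplitudes, so each added qubit doubles the count.

```python
# A classical register of n bits holds n values. Describing the joint state
# of n qubits takes 2**n complex amplitudes, doubling with every added qubit.
for n in (5, 50):
    amplitudes = 2 ** n
    print(f"{n} qubits -> {amplitudes:,} amplitudes")
# 5 qubits -> 32 amplitudes
# 50 qubits -> 1,125,899,906,842,624 amplitudes
```

At 50 qubits, simply storing the state pushes past what classical supercomputers can comfortably hold, which is why that number keeps coming up as the supremacy threshold.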

Note that I have not said, as is often said, that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions—some of the time—but none captures the essence of quantum computing.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.

Connie Zhou for IBM

It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?

Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits—and hence the potential to interact with the environment—increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.

Quantum Errors

There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with—you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.
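The classical trick is the repetition code: keep several copies and take a majority vote, so a single flipped copy is outvoted. A minimal sketch:

```python
from collections import Counter

def encode(bit, copies=3):
    # Classical repetition code: store several identical copies of the bit.
    return [bit] * copies

def majority_vote(copies):
    # A single flipped copy is the odd one out; the majority recovers the bit.
    return Counter(copies).most_common(1)[0][0]

codeword = encode(1)             # [1, 1, 1]
codeword[0] ^= 1                 # random noise flips one copy -> [0, 1, 1]
print(majority_vote(codeword))   # 1
```

This works precisely because classical bits can be copied and read at will, which, as the next section explains, qubits cannot.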

Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead—all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.

Photo by John T. Consoli/University of Maryland

A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?

One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.
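The logic of the ancilla trick can be illustrated with the simplest quantum error-correcting code, the three-qubit bit-flip code, simulated here classically: two parity checks, which is what the ancilla qubits would report, locate a single flipped bit without reading any data bit directly. (This is only a classical analogue of the quantum procedure.)

```python
def syndrome(bits):
    # The two parities an ancilla pair would measure: (q0 xor q1, q1 xor q2).
    # Neither check reveals the value of any individual data bit.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    # Each nonzero syndrome pattern points to exactly one flipped bit.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

noisy = [0, 1, 0]        # logical 0 encoded as [0, 0, 0], middle bit flipped
print(correct(noisy))    # [0, 0, 0]
```

In the genuine quantum version the parity measurements must be done without collapsing the encoded superposition, which is what makes the scheme so much harder to implement than this sketch suggests.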

How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit—a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.

An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.
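The extrapolation idea can be sketched numerically: run the computation at several deliberately amplified noise levels, fit the trend, and read off the fitted value at zero noise. The data points below are made up for illustration; the IBM schemes are considerably more sophisticated.

```python
import numpy as np

# Hypothetical expectation values measured at deliberately amplified noise
# (scale factors 1x, 2x, 3x of the device's base noise). Values are made up.
noise_scale = np.array([1.0, 2.0, 3.0])
measured    = np.array([0.81, 0.64, 0.49])

# Fit a polynomial to the trend and extrapolate to the zero-noise limit.
coeffs = np.polyfit(noise_scale, measured, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(round(float(zero_noise_estimate), 3))   # 1.0 for these made-up points
```

The device never runs noise-free; the clean answer is inferred mathematically from how the noisy answers degrade.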

Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.

Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of [error correction], but I think it will be quite a while before we see it used for a real computation,” he said.

Living With Errors

For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.

This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage [over a classical computation] even if it is not fully fault-tolerant,” said Gambetta.


One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties—such as stability and chemical reactivity—of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.

In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.

Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
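The variational idea behind the VQE can be shown in miniature: prepare a trial state depending on a parameter, measure its energy under the Hamiltonian, and let a classical optimizer tune the parameter toward the minimum. The sketch below simulates this classically for a toy single-qubit Hamiltonian (the 2×2 matrix is invented for illustration), using a brute-force parameter scan in place of a real optimizer.

```python
import numpy as np

# Toy single-qubit Hamiltonian (any 2x2 Hermitian matrix will do).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def trial_state(theta):
    # One-parameter ansatz: a rotation applied to |0>.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    # The quantity a quantum device would estimate: <psi(theta)|H|psi(theta)>.
    psi = trial_state(theta)
    return psi @ H @ psi

# Crude stand-in for the classical optimizer: scan the parameter grid.
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(thetas, key=energy)

exact_ground = np.linalg.eigvalsh(H)[0]
print(round(float(energy(best)), 4), round(float(exact_ground), 4))
```

On real hardware the energy evaluation is the noisy quantum part and the parameter update is classical; the hybrid loop is what lets the VQE tolerate imperfect qubits.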

But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.

“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher adds. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”

What’s Your Volume?

Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just—or even mainly—how many qubits you have, but how good they are, and how efficient your algorithms are.

Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched; if the gates are too slow, it really doesn’t matter how many qubits you have at your disposal. The number of gate operations a calculation needs is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether they can be used to perform useful calculations.
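The trade-off is simple arithmetic: the coherence window divided by the gate time caps the depth. The figures below are illustrative round numbers, not measurements from any particular device.

```python
# Back-of-envelope depth budget: how many sequential gate operations fit
# inside the coherence window. Both numbers are illustrative assumptions.
coherence_time_us = 5.0   # assumed decoherence time, microseconds
gate_time_ns = 20.0       # assumed time per gate operation, nanoseconds

max_depth = (coherence_time_us * 1000) / gate_time_ns
print(int(max_depth))     # 250 gate operations, at best
```

Faster gates or longer-lived qubits both raise the budget, which is why raw qubit counts alone say so little.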

What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.

Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.

This is one reason why the much vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?

So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.

Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert—it would prove that quantum computers really can extend what is technologically possible.

That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”
