WASHINGTON (Reuters) – Apple Inc (AAPL.O) is seeing “strong demand” for replacement iPhone batteries and disclosed it is considering offering rebates for consumers who paid full price for new batteries, the company said in a February 2 letter to U.S. lawmakers made public Tuesday.
Apple confirmed in December that software to deal with aging batteries in iPhone 6, iPhone 6s and iPhone SE models could slow down performance. The company apologized and lowered the price of battery replacements for affected models from $79 to $29.
Reporting by David Shepardson, Editing by Franklin Paul
LONDON (Reuters) – Amazon’s sports broadcasting ambitions will be tested this week in a multi-billion pound auction of English Premier League soccer rights, potentially pitting it against Sky and BT.
England’s auction for the rights to screen matches including Manchester United and City, Liverpool and Chelsea is one of the biggest money spinners in world sport, with the last three-year domestic package raising 5.14 billion pounds ($7.25 billion).
Broadcasters who have stumped up for the best packages to win viewers and fend off rivals could now face another threat from one of the big U.S. tech groups entering the fray.
Amazon, the world’s largest retailer, has moved aggressively into TV to bolster its Amazon Prime membership service, which offers free delivery and content for a flat monthly fee, and the new auction appears to have been structured specifically to attract a digital player for at least a small set of games.
That is likely to force Rupert Murdoch’s Sky and Britain’s biggest telecoms group BT to increase their offers, but financial and strategic pressures mean analysts do not think they will match previous 70 percent jumps.
“(There is a) very real threat that Amazon will look to take at least some of the UK and later on, international rights,” Guy Bisson from the media analyst firm Ampere said ahead of the auction which begins on Friday.
Amazon, which in 2015 tapped up the Top Gear presenters Jeremy Clarkson, Richard Hammond and James May to produce a new series called The Grand Tour, has also won the rights to some National Football League games and ATP tennis tournaments.
Sky refused to discuss its commercial strategy, while Amazon declined to comment.
WHOLE LOT OF BIDDING
The English auction for the three seasons beginning 2019/20 will make 200 live games available out of the 380 played each season, divided into seven lots, with five packages consisting of 32 games and two packages of 20 games.
One package will include the rights to show a whole round of matches at the same time, an option that could be more attractive to a digital provider than a traditional broadcaster.
In the previous auction Sky, which built its business on the back of the Premier League, picked up 126 games to BT’s 42 and analysts expect they will want to achieve at least a similar outcome this time around.
Both are to some extent limited, however.
Sky, present in 13 million homes, had to cut costs, hike prices and drop other sports to afford the last round of rights.
It also has an uncertain future as it is not clear who will own Sky when the 2019 season begins, with Murdoch’s 21st Century Fox trying to buy the 61 percent it does not own. Sky could then be sold to Disney if a separate sale of Murdoch’s TV and film assets receives the green light.
For BT, investors would be unlikely to welcome a blow-out bid with its shares at five-year lows. It faces other calls on its cash, including investments in ultrafast fiber, pension deficit top-ups and dividend payments.
But the companies have recently agreed a wholesale deal to allow their customers to watch the other service’s channels, easing their previously combative relationship.
“We continue to see the Premier League content as being an important part of BT Sport but it’s only one part of the channel,” BT Chief Executive Gavin Patterson said last week.
“We know what it’s worth to us, and we model that and we bid up to but no further than the value of it, and we always have a plan B if we do not get the content we want.”
($1 = 0.7093 pounds)
Reporting by Kate Holton; editing by Alexander Smith
SAN FRANCISCO (Reuters) – A jury in a trade-secrets lawsuit heard its first earful Monday as opening statements began in a bitter legal battle between Waymo and Uber Technologies Inc [UBER.UL] that has captivated Silicon Valley and could help determine who emerges at the forefront of the fast-growing field of autonomous cars. On the first day of an expected two weeks of testimony before a 10-person jury in San Francisco federal court, a lawyer for Waymo, Charles Verhoeven, said the case “is about two competitors where one competitor decided they needed to win at all costs. Losing was not an option.
“They would do anything they needed to do to win, no matter what. No matter if it meant breaking some rules … in this case whether it meant taking trade secrets from a competitor,” Verhoeven told the jury.
A lawyer for ride-hailing firm Uber will make an opening statement later on Monday.
“We’re bringing this case because Uber is cheating. They took our technology … to win this race at all costs,” Verhoeven added.
Waymo, Alphabet Inc’s (GOOGL.O) self-driving car unit, sued Uber nearly a year ago, sparking a showdown between the two technology companies over allegations by Waymo that one of its former engineers took trade secrets just before quitting and going to work at Uber. The case hinges on whether Uber used apparent trade secrets, a total of eight according to court filings, to advance its autonomous vehicle program. Waymo said engineer Anthony Levandowski downloaded more than 14,000 confidential files in December 2015 containing designs for autonomous vehicles before he went on to lead Uber’s self-driving car unit in 2016.
The jury will have to decide whether these were indeed trade secrets and not common knowledge, and whether Uber improperly acquired them, used them and benefited from them.
Uber has said while Levandowski downloaded the files, the data never made their way into its own self-driving car designs.
Levandowski, regarded as a visionary in autonomous technology, is not a defendant in the case but is on Waymo’s witness list. Uber fired Levandowski in May 2017, saying he had refused to cooperate in the Waymo lawsuit and did not hand over information requested of him in the case.
Waymo and Uber are part of a crowded and hotly competitive field of automakers and technology companies aiming to build fleets of self-driving cars that could transform urban transportation systems.
Waymo has estimated damages in the case at about $1.9 billion. Uber rejects the financial damages claim. Still, the lawsuit has hobbled Uber’s self-driving car program, with Uber attorney Bill Carmody telling the court last week that the case “is the biggest in the history of Uber.”
Waymo intends to call Waymo Chief Executive Officer John Krafcik as its first witness, court documents show. Uber has co-founder and former CEO Travis Kalanick at the top of its witness list.
There are a total of 99 potential witnesses between the two companies, according to court documents, including Google co-founders Larry Page and Sergey Brin, Benchmark venture capitalist and Uber investor Bill Gurley and Alphabet executive David Drummond.
The U.S. Department of Justice is conducting a separate criminal investigation into what transpired, according to court filings.
Additional Reporting by Heather Somerville in San Francisco; editing by Grant McCool
After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.
There is now talk of impending “quantum supremacy”: the moment when a quantum computer can carry out a task beyond the means of today’s best classical supercomputers. That might sound absurd when you compare the bare numbers: 50 qubits versus the billions of classical bits in your laptop. But the whole point of quantum computing is that a quantum bit counts for much, much more than a classical bit. Fifty qubits has long been considered the approximate number at which quantum computing becomes capable of calculations that would take an unfeasibly long time classically. Midway through 2017, researchers at Google announced that they hoped to have demonstrated quantum supremacy by the end of the year. (When pressed for an update, a spokesperson recently said that “we hope to announce results as soon as we can, but we’re going through all the detailed work to ensure we have a solid result before we announce.”)
It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.
Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.
Shut Up and Compute
Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits—1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.
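The measurement rule in that last sentence can be sketched numerically. The following is a minimal illustration, not a real quantum simulator: the amplitudes are hypothetical, and measurement is modeled as weighted random sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single-qubit state is a pair of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1.  A measurement yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # equal superposition

p0 = abs(alpha) ** 2
p1 = abs(beta) ** 2
assert np.isclose(p0 + p1, 1.0)

# Simulate 10,000 measurements of freshly prepared identical qubits.
outcomes = rng.choice([0, 1], size=10_000, p=[p0, p1])
print(f"P(0) ~ {np.mean(outcomes == 0):.3f}, P(1) ~ {np.mean(outcomes == 1):.3f}")
```

Note that each measurement destroys the superposition; the well-defined probabilities only show up across many repeated preparations, which is why the simulation samples fresh qubits rather than re-measuring one.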
To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states—a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.
Note that I’ve not said—as it often is said—that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions—some of the time—but none captures the essence of quantum computing.
It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?
Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”
Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits—and hence the potential to interact with the environment—increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.
There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with—you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.
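The classical backup-copy trick is easy to sketch; the flip probability below is invented for illustration:

```python
import random

random.seed(1)

def noisy_copy(bit, p_flip):
    """Store one bit, flipping it with probability p_flip."""
    return bit ^ (random.random() < p_flip)

def store_with_redundancy(bit, p_flip):
    """Keep three copies; a single flipped copy is outvoted."""
    copies = [noisy_copy(bit, p_flip) for _ in range(3)]
    return int(sum(copies) >= 2)          # majority vote

# With p_flip = 0.05 a single stored bit is wrong 5% of the time, but
# the majority vote fails only when 2 or more of 3 copies flip
# (about 0.7% analytically).
trials = 20_000
errors_raw = sum(noisy_copy(0, 0.05) for _ in range(trials))
errors_maj = sum(store_with_redundancy(0, 0.05) for _ in range(trials))
print(errors_raw / trials, errors_maj / trials)
```

The catch for qubits, as the next paragraphs explain, is that the no-cloning theorem and the destructiveness of measurement forbid exactly this kind of naive copy-and-compare.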
Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead—all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”
A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?
One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.
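The full quantum scheme is beyond a quick sketch, but the parities that ancilla qubits read out have a simple classical analog in the three-bit repetition code. This toy, purely classical version shows how two parity checks locate a single flipped bit without ever consulting the logical value directly:

```python
def syndrome(bits):
    """Parity checks for the 3-bit repetition code.

    s1 compares bits 0 and 1; s2 compares bits 1 and 2.  In the
    quantum bit-flip code the analogous parities are read out via
    ancilla qubits, so the data qubits are never measured directly.
    """
    s1 = bits[0] ^ bits[1]
    s2 = bits[1] ^ bits[2]
    return s1, s2

# Map each syndrome to the position of the (single) flipped bit.
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    pos = CORRECTION[syndrome(bits)]
    if pos is not None:
        bits[pos] ^= 1
    return bits

# Encode logical 0 as [0, 0, 0]; flip the middle bit and recover.
assert correct([0, 1, 0]) == [0, 0, 0]
assert correct([1, 1, 1]) == [1, 1, 1]   # no error detected
```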
How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit—a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.
An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.
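The extrapolation idea can be illustrated with a toy model. Everything below (the linear noise response, the specific numbers) is invented for the sketch and is not IBM's actual scheme:

```python
import numpy as np

# Toy model: assume the measured expectation value degrades linearly
# with a noise-strength knob lambda: E(lambda) = E_ideal + slope * lambda.
E_ideal, slope = -1.14, 0.8          # hypothetical values

def noisy_run(lam, rng):
    """One 'experiment' at noise level lam, with measurement scatter."""
    return E_ideal + slope * lam + rng.normal(0, 0.005)

rng = np.random.default_rng(42)

# Deliberately amplify the noise (in practice, e.g. by stretching gate
# times), measure at several levels, fit, and extrapolate to lambda = 0.
lams = np.array([1.0, 1.5, 2.0, 2.5])
vals = np.array([noisy_run(l, rng) for l in lams])
fit = np.polyfit(lams, vals, 1)      # linear fit: [slope, intercept]
E_extrapolated = np.polyval(fit, 0.0)
print(f"raw (lambda=1): {vals[0]:.3f}, extrapolated: {E_extrapolated:.3f}")
```

No run ever happens at zero noise; the estimate at lambda = 0 is pure extrapolation, which is both the appeal and the risk of the method.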
Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.
Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of [error correction], but I think it will be quite a while before we see it used for a real computation,” he said.
Living With Errors
For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.
This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage [over a classical computation] even if it is not fully fault-tolerant,” said Gambetta.
One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties—such as stability and chemical reactivity—of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.
In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.
Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.
“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher adds. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”
What’s Your Volume?
Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just—or even mainly—how many qubits you have, but how good they are, and how efficient your algorithms are.
Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched—if this time is too slow, it really doesn’t matter how many qubits you have at your disposal. The number of gate operations needed for a calculation is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether they can be used to perform useful calculations.
What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.
Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.
This is one reason why the much vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?
So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.
Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert—it would prove that quantum computers really can extend what is technologically possible.
That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.
“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”
Before we delve into the darkness of the world this week, let’s consider these two tweets from the past seven days that really tell any outside viewer exactly what they need to know about the platform that is Twitter (in addition to the stuff about Nazis and harassment, of course).
Oh, social media! So many fascinating characters… But that’s not what you came here for. This is what you came here for.
The State of the Uniom?
What Happened: President Trump got to give his report card to Congress last week, prompting all kinds of commentary across the world wide web.
What Really Happened: Last week, Trump gave his first State of the Union address, although for some, it was the State of the Uniom thanks to some misprinted tickets.
The typo was apparently not the White House’s fault. But whether it was Uniom or Union, there was a lot of anticipation in the air for Trump’s SOTU speech: After such an eventful first year in office, everyone wondered, what would he talk about?
Well, yes. Sure. But nonetheless, many outlets tried to predict what he’d say ahead of the night itself, which might explain why the media was able to concentrate on, shall we say, less weighty topics. Like fashion.
OK, so the Democrats were wearing black to stand with #MeToo, and that’s not exactly a non-weighty topic. But what about the Republicans? One Republican figure attracted notice for not wearing black.
But back to the speech itself. According to Trump, the State of the Union was strong—isn’t that always traditionally the case?—although many of the other parts of the speech were less traditional, as Twitter was quick to note.
Never mind what was said, some people noticed what wasn’t…
As it turned out, he wasn’t the only one whose applause was noticed, however.
The Takeaway: As should only be expected, Trump’s first post-speech comment about the State of the Union was all about how many people were watching:
Wait. The “highest number in history”? Turns out, that’s not even vaguely true. As was pointed out by none other than Fox News:
(Another) Shake-Up at the Justice Department
What Happened: For anyone keeping score, go ahead and add “deputy director of the FBI” to the list of surprise resignations during the Trump administration.
What Really Happened: The fight between the president and the Justice Department continues apace. Following reports that President Trump had launched a campaign to discredit FBI witnesses and asked the acting director of the FBI who he’d voted for, last week saw another departure from office for a Department of Justice official.
To say that FBI Deputy Director Andrew McCabe’s departure was big news would be a drastic understatement, and Twitter dug in with its traditional vigor.
But how bad could things have been, really?
OK, so that’s pretty bad.
That last point may not be right, according to the White House.
Still, at least one man was willing to stand up for McCabe. A very familiar man, as it turned out.
The Takeaway: There is, of course, one thing to remember when looking at this, especially as it revolves around legal matters.
Soap and Water Never Did Me Any Harm, Ask My Acne
What Happened: Do you take care of your skin? According to a new report, some people might be doing too much for it. Those folks were not ready to hear that.
What Really Happened: Political maneuvering wasn’t the only discussion on social media last week. Surprisingly, a story about skincare started quite a bit of typing, too.
Indeed, so many people were talking about it that the conversation provoked even more (and more and more) opinion pieces on whether or not skincare was something that people should be discussing, and why. Oh, and it helped others to come forward to share their skincare tips, too.
The Takeaway: For those who don’t spend much time on skincare, there is only one response to be made here.
So, What’s Your Child Texting About?
What Happened: These kids and their phones and their slang. Who can even keep up?
What Really Happened: Quite why the “Is Your Child Texting?” meme returned this week—it’s been around for months, potentially inspired by this story from USA Today last May—is a mystery, but we’re quite glad it did. And apparently, we’re not alone, as multiple sites noticed it this time around.
We’d explain what it is, but you’ll pick it up. Let’s end this week on, if not a high note, then at least a silly one.
The Takeaway: There was, of course, only one way this could end…
Your three-pound brain runs on just 20 watts of power—barely enough to light a dim bulb. Yet the machine behind our eyes has built civilizations from scratch, explored the stars, and pondered our existence. In contrast, IBM’s Watson, a supercomputer that runs on 20,000 watts, can outperform humans at calculation and Jeopardy! but is still no match for human intelligence.
James J. DiCarlo, MD/PhD, is a professor of neuroscience, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds and Machines, and the head of the department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology.
Neither Watson, nor any other artificially “intelligent” system, can navigate new situations, infer what others believe, use language to communicate, write poetry and music to express how it feels, and create math to build bridges, devices, and life-saving medicines. Why not? The society that solves the problem of intelligence will lead the future, and recent progress shows how we can seize that opportunity.
Imagine human intelligence as a skyscraper. Instead of girders and concrete, this structure is built with algorithms, or sequences of interacting rules that process information, layered upon and interacting with each other like the floors of that building.
The floors above the street represent the layers of intelligence that humans have some conscious access to, like logical reasoning. These layers inspired the pursuit of artificial intelligence in the 1950s. But the most important layers are the many floors that you don’t see, in the basement and foundation. These are the algorithms of everyday intelligence that are at work every time we recognize someone we know, tune in to a single voice at a crowded party, or learn the rules of physics by playing with toys as a baby. While these subconscious layers are so embedded in our biology that they often go unnoticed, without them the entire structure of intelligence collapses.
As an engineer-turned-neuroscientist, I study the brain’s algorithms for one of these foundational layers—visual perception, or how your brain interprets your surroundings using vision. My field has recently experienced a remarkable breakthrough.
For decades, engineers built many algorithms for machine vision, yet those algorithms each fell far short of human capabilities. In parallel, cognitive scientists and neuroscientists like myself accumulated myriad measurements describing how the brain processes visual information. They described the neuron (the fundamental building block of the brain), discovered that many neurons are arranged in a specific type of multi-layered, “deep” network, and measured how neurons inside that neural network respond to images of the surroundings. They characterized how humans quickly and accurately respond to those images, and they proposed mathematical models of how neural networks might learn from experience. Yet, these approaches alone failed to uncover the brain’s algorithms for intelligent visual perception.
The key breakthrough came when researchers used a combination of science and engineering. Specifically, some researchers began to build algorithms out of brain-like, multi-level, artificial neural networks so that they had neural responses like those that neuroscientists had measured in the brain. They also used mathematical models proposed by scientists to teach these deep neural networks to perform visual tasks that humans were found to be especially good at—like recognizing objects from many perspectives.
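In spirit, the multi-level artificial neural networks described here are stacks of simple layers, each multiplying its inputs by a set of weights and passing the result through a nonlinearity. As a minimal sketch (not the actual models used in this research, and with hand-set rather than learned weights), a forward pass can be written in plain Python:

```python
def dense(inputs, weights, biases):
    # One fully connected layer: output_j = sum_i inputs[i] * weights[i][j] + biases[j].
    # zip(*weights) iterates over the columns of the weight matrix, one per output unit.
    return [sum(x * w for x, w in zip(inputs, col)) + b
            for col, b in zip(zip(*weights), biases)]

def relu(values):
    # The nonlinearity between layers: negative activations are clipped to zero.
    return [max(0.0, v) for v in values]

def forward(x, layers):
    # Pass the input through each (weights, biases) layer in turn,
    # applying the nonlinearity everywhere except the final layer.
    for i, (W, b) in enumerate(layers):
        x = dense(x, W, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x
```

A real vision network has convolutional layers, millions of parameters learned from those millions of training images, and far more depth; but structurally it is this same layered composition, which is what makes the analogy to the brain's multi-layered neural circuits natural.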
This combined approach rocketed to prominence in 2012, when computer hardware had advanced enough for engineers to build these networks and teach them using millions of visual images. Remarkably, these brain-like, artificial neural networks suddenly rivaled human visual capabilities in several domains, and as a result, concepts like self-driving cars aren’t as far-fetched as they once seemed. Using algorithms inspired by the brain, engineers have improved the ability of self-driving cars to process their environments safely and efficiently. Similarly, Facebook uses these visual recognition algorithms to recognize and tag friends in photos even faster than you can.
This deep learning revolution launched a new era in A.I. It has completely reshaped technologies from the recognition of faces and objects and speech, to automated language translation, to autonomous driving, and many others. The technological capability of our species was revolutionized in just a few years—the blink of an eye on the timescale of human civilization.
But this is just the beginning. Deep learning algorithms resulted from new understanding of just one layer of human intelligence—visual perception. There is no limit to what can be achieved from a deeper understanding of other algorithmic layers of intelligence.
As we aspire to this goal, we should heed the lesson that progress did not result from engineers and scientists working in silos; it resulted from the convergence of engineering and science. Because many possible algorithms might explain a single layer of human intelligence, engineers are searching for the proverbial needle in a haystack. However, when engineers guide their algorithm-building and testing efforts with discoveries and measurements from brain and cognitive science, we get a Cambrian explosion in A.I.
This approach of working backwards from measurements of the functioning system to engineer models of how that system works is called reverse engineering. Discovering how the human brain works in the language of engineers will not only lead to transformative A.I. It will also illuminate new approaches to helping those who are blind, deaf, autistic, schizophrenic, or who have learning disabilities or age-related memory loss. Armed with an engineering description of the brain, scientists will see new ways to repair, educate, and augment our own minds.
The race is on to see if reverse engineering will continue to provide a faster and safer route to real A.I. than traditional, so-called forward engineering that ignores the brain. The winner of this race will lead the economy of the future, and the nation is positioned to seize this opportunity. But to do so, the US needs significant new financial commitments from government, philanthropy, and industry that are devoted to supporting novel teams of scientists and engineers. In addition, universities must create new industry-university partnership models. Schools will need to train brain and cognitive scientists in engineering and computation, train engineers in the brain and cognitive sciences, and uphold mechanisms of career advancement that reward such teamwork. To advance A.I., reverse engineering the brain is the way forward. The solution is right behind our eyes.
*WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints.*
Nearly two-thirds of internet users turn to Chrome for their browsing needs, but far fewer take full advantage of its available extensions, the add-ons that elevate it from good to great. If you’re one of those plain vanilla Chrome users—or if you’ve only dabbled in the extensions game—check out these sprinkles of joy that the WIRED staff swears by.
The following list of Chrome extension recommendations is by no means comprehensive; there are plenty to explore and discover in the Chrome Web Store. (If you go exploring, just make sure you stick with reputable developers.) But these are the ones we depend on every day to keep our internet experience as sane and enjoyable as possible. May they do the same for you.
Have you ever clicked on an interesting link, only to be greeted by a 404 Error? Wayback Machine’s Chrome extension can help. Created by the Internet Archive—a nonprofit that preserves billions of web pages—the extension shows you what a website looked like in the past, even if it has since been deleted. It can turn up the most recent version of a page it has saved, or go back to the first time the Internet Archive recorded it. The latter can be especially illuminating. For example, you can see what a user’s Twitter account looked like when they created it, or how a company’s website appeared when it first launched. One drawback: Wayback Machine doesn’t have a record of every webpage on the internet. But it can also help you prevent other pages from vanishing in the future: The extension lets you save the web page you’re currently visiting to the Internet Archive’s database. —Staff Writer Louise Matsakis
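Lookups like the extension's can also be done by hand against the Internet Archive's public Wayback availability endpoint, which returns the closest archived snapshot for a URL and an optional date. A minimal sketch (the endpoint is real; the helper names are mine, and the extension itself may work differently):

```python
import json
import urllib.parse
import urllib.request

API = "https://archive.org/wayback/available"

def availability_url(page, timestamp=None):
    # Build the query string for the availability API; `timestamp`
    # (YYYYMMDD) asks for the snapshot closest to that date.
    params = {"url": page}
    if timestamp:
        params["timestamp"] = timestamp
    return API + "?" + urllib.parse.urlencode(params)

def closest_snapshot(page, timestamp=None):
    # Fetch the JSON response and return the closest snapshot's
    # metadata (archive URL, timestamp), or None if nothing is archived.
    with urllib.request.urlopen(availability_url(page, timestamp)) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")
```

Calling `closest_snapshot("example.com", "20080101")` would return metadata for the archived copy nearest that date, or None if the page was never captured.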
You’ll find many tab management solutions on this list, but the best by far for my purposes is the Great Suspender, an extension that, as the name suggests, suspends any Chrome tabs you’ve left fallow for a given amount of time. As someone who keeps well over a dozen tabs open at any given time during the day—and often more—it has been an inestimable boon to my laptop and my sanity. And when it’s time to revisit a page, a simple click springs it back to life. It also lets you whitelist any tabs, like Gmail, that are too precious to suspend. —News Editor Brian Barrett
The best Chrome extensions effortlessly improve our lives in small but impactful ways. And animatedTabs does exactly that. Once installed, the extension will automatically load a random GIF in the center of every new Chrome tab you open. Sound annoying? Come on, people, this is a pure delight. The GIFs seem to largely source from Reddit’s /r/gifs/, so you mostly get previously undiscovered gems; there’s not much Crying Jordan or shark cat on a Roomba. But what beats new? And all because you opened a tab to finally pay your three-months-overdue speeding ticket! The only downside to animatedTabs? You never know when it’s going to generate something NSFW or just dumb. But the real internet cred comes from not caring. —Staff Writer Lily Newman
Bedeviled by browser-tab clutter? Try xTab. It restricts the number of pages you can have open in a given browser window. Just set your cap and go about your business. When you exceed your limit, the extension gets to culling, automatically axing your oldest, least-accessed, or least-recently used tab. It can also prevent you from opening excess tabs altogether. I use that last setting the most; I like to do triage myself. Plus, I’m working on killing my reflexive tabbing habit, and being interrupted in the act helps keep my fingers in check. If you’ve tried other tab managers in the past and found them wanting, this could be your ticket; where most encourage you to cmd-T with abandon, xTab retrains you to curate a more manageable tabscape in real-time. —Senior Writer Robbie Gonzalez
In July of 2016, the world changed for the worse. Up until that point, the backspace key on your desktop keyboard doubled as a back button in Chrome. It had been that way since the browser’s launch some eight years prior. By mid-2016, this action—a simple keystroke to go back one page in your browser history—had become hardwired in our lizard brains. But Google removed the backspace action that summer, because it caused a particularly Googley problem: People were losing work in web apps. When a user typed into a browser text field and hit the backspace key hoping to correct a typo, they’d sometimes inadvertently cause the browser to jump back one page, nuking whatever efforts they’d spent the last few minutes sweating over. Sure, that’s annoying. But imagine the outrage of millions of Chrome users when, upon the next browser update, the backspace key suddenly did nothing. Google had neutered one of the most useful mechanisms for navigating the web. Thankfully, the company recognized our plight and just weeks later released this extension, which restores the back-button functionality of the backspace key. Hallelujah. The preferred keystroke of Alt + left arrow is still the default in Chrome, and maybe you’re used to that now. But why force yourself to press two keys when you can install this extension and press only one? —Senior Editor Michael Calore
You know when you open Chrome and the browser is like, “Are you sure you want to reopen 400 tabs?” (Yes I do, and rude!) Maybe it’s a selection of news articles you’re planning to read later, or the aftermath of clicking through dozens of Wikipedia pages. Maybe you don’t even know what’s in all those tabs. Either way, keeping them all open puts a huge strain on your browser. Close them all—without losing them forever—with the handy OneTab extension. One click of the button neatly collates all your open tabs into one list of links that you can revisit later. It saves your computer incredible amounts of RAM, speeds up the browser immediately, and keeps all those links handy for when you’re totally, definitely, someday coming back to read them. —Senior Associate Editor Arielle Pardes
My name is Tom and I have a Twitter problem—but I’m getting help from a Chrome extension called HabitLab. Anytime I look at the bird-logoed slot machine of trolling, outrage, and thinkfluencing there’s now a bold banner at the top counting up how long I’ve been on the site that day. If I open a Twitter tab but regain my senses and close it again quickly, a popup informs me how many seconds I just saved compared to my usual time-wasting visit. The message comes with a different “Good job!” GIF each time; most recently it was Jimmy Kimmel. HabitLab was developed by Stanford’s Human Computer Interaction group to help those of us suffering internet distraction disorder (most of us?) take control of our online habits. When first installed, it prompts you to identify the sites you want to spend less time on. HabitLab will then keep track of your wasted seconds, minutes, and hours, and display them in neat charts. It also offers a menu of “nudges” to help keep those trend lines moving in the right direction. One of them is the timer that now haunts me on Twitter, a nudge named The Supervisor. Others include GateKeeper, which makes you wait a few seconds before a page you’re trying to give up loads, and the devilish 1Minute Assassin, which kills a tab after 60 seconds. —Senior Writer Tom Simonite
I am not a designer, and I’m sure that those who are have far better tools for pulling colors off of web pages than “Eye Dropper,” a mostly-but-not-always-functional extension that lets you eye-drop any color from around the web, and grab its RGB and Hex color codes. It’s particularly handy for quick fixes that don’t necessitate slowing down your computer by opening up Photoshop—like, say, updating the text on a WIRED section page to make it more readable. It isn’t the prettiest extension, and it’s all too easy to accidentally trigger the eyedropper if, like me, you’re prone to hitting alt-P instead of command-P when trying to print—but Eye Dropper gets the job done. —Digital Producer Miranda Katz
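The RGB and hex codes Eye Dropper reports are two encodings of the same 24-bit color, so converting between them takes only a couple of lines. A generic sketch, unrelated to the extension's own code:

```python
def rgb_to_hex(r, g, b):
    # Pack three 0-255 channel values into the familiar #RRGGBB form.
    for v in (r, g, b):
        if not 0 <= v <= 255:
            raise ValueError("channel out of range")
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(code):
    # Split a #RRGGBB string back into its (r, g, b) channel values.
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))
```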
If you’ve ever seen a Google ad follow you around the entire web and back, you know just how annoying and invasive online tracking can become. Ghostery is a fascinating way to see what services websites use to track and collect data about you. It creates a little icon with a number, showing you how many trackers every site uses. Wikipedia, for example, has 0. Most other sites have at least a few. You can see what they use to monitor their website traffic and serve ads, and block services that you don’t like. It’s not perfect; sometimes it will break sites you want to visit, and you’ll have to turn it off or pause it, although the latest release uses AI powers to help minimize the collateral damage. —Senior Writer Jeffrey Van Camp
ProPublica’s What Facebook Thinks You Like
Facebook thinks I like arachnids because my brother writes for a TV show called Scorpion. It thinks I like Christmas Eve because Pearlstein, and it thinks I like flywheels because my late friend Eric Scott was in a band by that name. I know all of this thanks to ProPublica’s cool Facebook Chrome Extension, which helps me see what Facebook thinks about me, and then lets me rate how spot-on—or not—the site’s analysis is, using the aptly named Creepy Meter. —JP
I fly a lot. In the past year, I’ve taken roughly a dozen round trips, each with their own fun, idiosyncratic layovers and delays. To pass the tarmac time, I could watch a bunch of downloaded episodes of The Crown or The Great British Baking Show. I could read a good ol’ fashioned book. Or I could connect to plane Wi-Fi and incessantly check Twitter. Instead, what I prefer to do before leaving for the airport is save a bunch of stories to Pocket. This nifty extension allows you to stow away things you want to read later, no internet connection necessary (though if you use the Pocket app on your phone, be sure to sync it over Wi-Fi or a network connection before going into airplane mode). Pocket also recommends stories, based on other users you follow or topics that interest you, and allows you to optimize your reading experience—I prefer a serif font with a black background and very large text to protect my fatigued eyes. But for someone who opens a million tabs with an intention to eventually read them all, it’s my preferred way to dog-ear a story. If you want to start saving, here’s a shameless plug to visit WIRED’s Backchannel page, chock full of excellent longform narratives that will transport you during your disconnected commute. —WIRED.com Editor Andrea Valdez
Getting a password manager extension means getting a password manager, so definitely do that. All the major managers—LastPass, Dashlane, 1Password, KeePass—offer Chrome extensions, and they’re crucial to making password managers easy to use. The browser extensions act as a quick control center to fill login forms, generate new passwords, and save new credentials into your manager. And though password managers can work without extensions, switching back and forth to a standalone desktop application can be clunky while you’re browsing online. These extensions do carry some potential security risks, but if they’re what get you on a password manager in the first place, they’re worth it. —LN
You probably use Google Calendar every day—many, many times. Instead of letting it permanently squat on valuable tab real estate on your desktop, try the Google Calendar Chrome extension instead. It puts a small Calendar icon in the upper right of your browser window, right where you’d expect. Tap it, and a box drops down, showing you all the meetings you have coming up. I like the design because it reminds me of the wonderful Google Cal widget on my Android home screen. It’s just a one-shot view of the meetings and events you have coming up in the next week or two. You can customize which calendars appear, which is also nice, because if you’re like me, you have a ton of them. For more display options—or to get crazy and log into two Google Calendars at the same time—try the Checker Plus for Google Calendar extension. It’s not official, but works well. —JVC
WIRED Editor-in-Chief Nicholas Thompson swears by Grammarly, an extension that checks your emails, tweets, Facebook posts, and other online missives for spelling and grammar mistakes. Features Editor Mark Robinson recommends Reader View, which he describes as a “one-button, rather lo-fi instant Instapaper,” stripping web articles down to the bare essentials. And while Senior Writer Andy Greenberg has not used it and likely never would, he did find an extension called Kardashian Krypt, which encrypts your messages in images of Kim Kardashian using a technique known as steganography.
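Kardashian Krypt's exact scheme isn't described here, but the classic steganography technique is least-significant-bit (LSB) embedding: each bit of the secret message replaces the lowest bit of a carrier byte (in practice, a pixel channel), altering the image imperceptibly. A generic, purely illustrative sketch over raw bytes:

```python
def embed(carrier, message):
    # Hide each bit of `message` (MSB-first per byte) in the
    # least-significant bit of successive carrier bytes.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the low bit, then set it
    return bytes(out)

def extract(carrier, length):
    # Read back `length` hidden bytes from the carrier's low bits.
    bits = [b & 1 for b in carrier[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )
```

On a real image you'd run this over the decoded pixel buffer and re-encode losslessly (PNG rather than JPEG), since lossy compression destroys exactly the low-order bits the message lives in.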
Regular readers of this column know that I spend a lot of time thinking about what makes people tick.
That’s because I learned long ago that the secret to appealing to customers, stakeholders, audience members and anyone you care about is to understand who they are and what they want.
And that’s why I carefully read the obituaries of two men who exemplified this philosophy: Ingvar Kamprad, the founder of IKEA, and Mort Walker, the creator of “Beetle Bailey,” a comic strip about a lazy Army private. (Both men died this week.)
Wait–what could these two possibly have in common?
Well, start with the fact that each man was extremely successful in his field. When Kamprad was 17, he launched the store that, over the next seven decades, became the world’s largest seller of furniture (with 400 stores and $42 billion in revenue). And Walker created the comic strip that would ultimately be syndicated in 1,800 newspapers around the world; he had the longest tenure of any cartoonist on an original creation.
Although they were in very different realms, here’s what united Kamprad and Walker: their deep connection to their customers.
For example, in a Forbes interview in 2000, Kamprad summed up his approach this way: “I see my task as serving the majority of people. The question is, how do you find out what they want, how best to serve them? My answer is to stay close to ordinary people, because at heart I am one of them.”
And, as Richard Goldstein wrote in Walker’s New York Times obituary, “‘Beetle Bailey’ used the Army as its setting, but its popularity derived from everyday life and the universal battles against authority figures and mindless bureaucracy.”
When the Defense Department congratulated Mr. Walker on his 80th birthday, he said: “Human frailty is what humor is all about. People like to see the foibles of mankind. And they relate to the little guy, the one on the bottom.”
For both Kamprad and Walker, their understanding of customers–readers or shoppers–wasn’t theoretical or informed only by data; it was based on personal experience.
Walker spent a stint in the Army, and he stayed in touch with servicemen throughout his life. And although Kamprad became very, very rich, he regularly flew economy and popped into his stores unannounced to replicate the customer experience.
These men knew that in order to break through today’s noise and nonsense, you have to not only know your customers; you have to love them.
As I’ve written, your love has to be real–not manufactured or manipulative–and unconditional. You have to clearly see your customers’ faults, but love them anyway. Your love has to be unwavering, despite inattention, inconstancy and even infidelity.
Only by truly loving your customers can you deliver in a way that’s truly about them, not about you. The leap to loving brings you in touch with what matters to people. Suddenly you’re able to communicate in ways that profoundly connect. You’re not on the other side of the chasm from your customers: You’re right there next to them, talking softly, saying what they’ve always wanted to hear. As a result, you can give customers what they actually want.
Nearly everyone wants to be luckier. Some people think success is about preparing for luck, while others think success is about what you do with luck when you find it. There may be different perspectives on luck, but everyone agrees you can’t go wrong with more of it, as long as it’s good.
YPO member Stuart Lacey is considered by many to be an extremely lucky guy, personally and professionally. He married the woman of his dreams, lives exactly where and how he wants, and has traveled to over 70 countries. He’s built 5 successful companies, including Trunomi, a customer consent data rights platform. Bank Innovation even named Lacey as an Innovator to Watch. Lacey has made respecting luck a regular part of his business activity. He even created a mathematical formula to analyze and replicate it.
Lacey’s Lucky Formula
% Luckiness = (situational awareness) × [perseverance (work ethic / heart)]^unlimited × (# of times attempted)^(failure is good) × (choice to act)^binary × (Respect and EQ)
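Read literally, the formula is more motivational than mathematical (exponents like "unlimited" and "failure is good" aren't numbers). But its multiplicative spirit, with the choice to act as an all-or-nothing gate, can be sketched as a toy function; every scale here is a hypothetical stand-in:

```python
def luckiness(awareness, perseverance, attempts, acted, respect_eq):
    # Toy reading of Lacey's formula: the factors multiply, so a zero
    # anywhere zeroes everything; "choice to act" is the binary gate.
    if not acted:
        return 0.0
    return awareness * perseverance * attempts * respect_eq
```

The point survives the simplification: because the terms multiply rather than add, neglecting any one of them, and above all never acting, collapses the result to zero.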
1. Experience Matters
Lacey likens experience to the process of securing a patent. “Anyone can file for a patent in a matter of hours for a few hundred bucks. But without a deep understanding of the technical, engineering, design, and geo-political aspects, and without appreciating the importance of opportunity cost and due diligence, the chance of receiving that patent is practically zero.” And Lacey understands that outside factors can influence the outcome, explaining, “Of course experience can be borrowed, for example, by using a world-class (and equally expensive!) patent attorney.” But in the long run, no amount of money can make up for a lack of experience.
2. Have Situational Awareness
Lacey asks a frightening question: “When you’re in a movie theatre, do you actually know where the exits are?” Lacey asserts, “Having situational awareness can multiply your chances of survival by a thousand.” The same is true in business. In a less frightening scenario, Lacey suggests it’s like skiing: “When you’re at the bottom of the mountain, have the foresight to recognize that the last one on the ski tram is the first one on the slope. You can totally change your experience just by thinking ahead.” What you do when you get there is up to you, but you can maximize your potential by understanding what’s going on around you.
3. There’s No Substitute for Heart
A hockey fan, Lacey likes to quote Luc Robitaille, who said, “You can find someone smart, but never underestimate heart.” Lacey says, “Passion and work ethic usually trump everything else, and luck does not favor those who don’t put in the hours.” It helps, of course, when your career is doing something you love. But when you put in the solid work, the rest will follow with more ease.
4. Embrace Failure
Lacey is a firm believer in the adage, “Fail quickly, fail cheaply, and fail often.” Lacey says, “The willingness to accept and learn from one’s mistakes is vital for luck.” Mistakes here can multiply. “You have to invest time with your head down, ready to constantly pivot and adjust. To embrace change and innovation IS to embrace failure.” People are told from childhood that failure is bad, and this is a crutch that any entrepreneur has to overcome.
5. Take Action
“How often do you look at something new and say, ‘I thought of that a while ago!’” Lacey asks. “There are so many stories of inventions that never occurred or were greatly delayed until someone else took the initiative to act.” It’s not always easily done. “It takes courage,” Lacey acknowledges, “and a willingness to fail and bounce back.” But the alternative is always worse. Lacey asserts, “Either you act and luck has a shot, or you don’t act and the chances of your influencing the outcome are nil.” Take the chance on yourself, and don’t be afraid – failure is an opportunity.
6. Attitude Matters
For Lacey, it’s important to remember that people are human. “If your flight is cancelled, it’s not the gate agent’s fault. I have always found that a kind, supportive, appreciative tone, with a strong measure of compassion, works absolute wonders.” No one likes being unappreciated or disrespected. “Focusing on the human element of interactions at all times is a multiplier of your chances for a lucky outcome.” Another important element is to maintain optimism. Lacey explains, “I’m realistic about the work required, but I’m also aware the future is one that we will create and craft. You need to have the ability to accept bumps in the road while keeping your eye on the prize.” Further, a willingness to accept compromise is key. Be humble, and remember to exercise emotional intelligence.
Each week Kevin explores exclusive stories inside YPO, the world’s premier peer-to-peer organization for chief executives, eligible at age 45 or younger.
HONG KONG (Reuters) – Chinese personal computer maker Lenovo Group reported a quarterly loss of $289 million on Thursday against a $98 million profit a year earlier, due mainly to a one-off charge of $400 million resulting from U.S. tax reform.
Revenue for the three-month period ending December was $12.94 billion, compared with $12.17 billion a year ago.
Lenovo said its core PC and smart devices business group posted an 8 percent rise in revenue to $9.25 billion, as revenue grew faster than shipments thanks to better average selling prices driven by innovative products and an improved product mix.
Its struggling mobile business – which the group had set a target to turn around by the end of the financial year in March – reported a narrower operating loss before taxation of $92 million, compared with a loss of $132 million in the preceding quarter.
($1 = 6.2842 Chinese yuan renminbi)
Reporting by Sijia Jiang and Donny Kwok; Editing by Stephen Coates