President Trump's State of the Union Speech Tops This Week's Internet News

Before we delve into the darkness of the world this week, let’s consider these two tweets from the past seven days that really tell any outside viewer exactly what they need to know about the platform that is Twitter (in addition to the stuff about Nazis and harassment, of course).

Oh, social media! So many fascinating characters… But that’s not what you came here for. This is what you came here for.

The State of the Uniom?

What Happened: President Trump got to give his report card to Congress last week, prompting all kinds of commentary across the world wide web.

What Really Happened: Last week, Trump gave his first State of the Union address, although for some, it was the State of the Uniom thanks to some misprinted tickets.

The typo was apparently not the White House’s fault. But whether it was Uniom or Union, there was a lot of anticipation in the air for Trump’s SOTU speech: After such an eventful first year in office, everyone wondered, what would he talk about?

Well, yes. Sure. But nonetheless, many outlets tried to predict what he’d say ahead of the night itself, which might explain why the media was able to concentrate on, shall we say, less weighty topics. Like fashion.

OK, so the Democrats were wearing black to stand with #MeToo, and that’s not exactly a non-weighty topic. But what about the Republicans? One Republican figure attracted notice for not wearing black.

But back to the speech itself. According to Trump, the State of the Union was strong—isn’t that always traditionally the case?—although many of the other parts of the speech were less traditional, as Twitter was quick to note.

Never mind what was said; some people noticed what wasn’t.

It was, overall, a very Donald Trump speech, if somewhat lighter on the insults and uses of the term “fake news.” There were, of course, any number of fact checks for what was said, but there was one thing that everyone accepted as true: The president needs to stop applauding himself so close to his microphone.

As it turned out, though, he wasn’t the only one whose applause was noticed.

The Takeaway: As should only be expected, Trump’s first post-speech comment about the State of the Union was all about how many people were watching:

Wait. The “highest number in history”? Turns out, that’s not even vaguely true. As was pointed out by none other than Fox News:

(Another) Shake-Up at the Justice Department

What Happened: For anyone keeping score, go ahead and add “deputy director of the FBI” to the list of surprise resignations during the Trump administration.

What Really Happened: The fight between the president and the Justice Department continues apace. Following reports that President Trump had launched a campaign to discredit FBI witnesses and asked the acting director of the FBI who he’d voted for, last week saw another departure from office for a Department of Justice official.

To say that FBI Deputy Director Andrew McCabe’s departure was big news would be a drastic understatement, and Twitter dug in with its traditional vigor.

But how bad could things have been, really?

OK, so that’s pretty bad.

That last point may not be right, according to the White House.

Still, at least one man was willing to stand up for McCabe. A very familiar man, as it turned out.

The Takeaway: There is, of course, one thing to remember when looking at this, especially as it revolves around legal matters.

Soap and Water Never Did Me Any Harm, Ask My Acne

What Happened: Do you take care of your skin? According to a new report, some people might be doing too much for it. Those folks were not ready to hear that.

What Really Happened: Political maneuvering wasn’t the only discussion on social media last week. Surprisingly, a story about skincare started quite a bit of typing, too.

Indeed, so many people were talking about it that the conversation provoked even more (and more and more) opinion pieces on whether or not skincare was something that people should be discussing, and why. Oh, and it helped others to come forward to share their skincare tips, too.

The Takeaway: For those who don’t spend much time on skincare, there is only one response to be made here.

So, What’s Your Child Texting About?

What Happened: These kids and their phones and their slang. Who can even keep up?

What Really Happened: Quite why the “Is Your Child Texting?” meme returned this week—it’s been around for months, potentially inspired by this story from USA Today last May—is a mystery, but we’re quite glad it did. And apparently, we’re not alone, as multiple sites noticed it this time around.

We’d explain what it is, but you’ll pick it up. Let’s end this week on, if not a high note, then at least a silly one.

The Takeaway: There was, of course, only one way this could end…

The Era of Quantum Computing Is Here. Outlook: Cloudy

After decades of heavy slog with no promise of success, quantum computing is suddenly buzzing with almost feverish excitement and activity. Nearly two years ago, IBM made a quantum computer available to the world: the 5-quantum-bit (qubit) resource they now call (a little awkwardly) the IBM Q experience. That seemed more like a toy for researchers than a way of getting any serious number crunching done. But 70,000 users worldwide have registered for it, and the qubit count in this resource has now quadrupled. In the past few months, IBM and Intel have announced that they have made quantum computers with 50 and 49 qubits, respectively, and Google is thought to have one waiting in the wings. “There is a lot of energy in the community, and the recent progress is immense,” said physicist Jens Eisert of the Free University of Berlin.


There is now talk of impending “quantum supremacy”: the moment when a quantum computer can carry out a task beyond the means of today’s best classical supercomputers. That might sound absurd when you compare the bare numbers: 50 qubits versus the billions of classical bits in your laptop. But the whole point of quantum computing is that a quantum bit counts for much, much more than a classical bit. Fifty qubits has long been considered the approximate number at which quantum computing becomes capable of calculations that would take an unfeasibly long time classically. Midway through 2017, researchers at Google announced that they hoped to have demonstrated quantum supremacy by the end of the year. (When pressed for an update, a spokesperson recently said that “we hope to announce results as soon as we can, but we’re going through all the detailed work to ensure we have a solid result before we announce.”)

It would be tempting to conclude from all this that the basic problems are solved in principle and the path to a future of ubiquitous quantum computing is now just a matter of engineering. But that would be a mistake. The fundamental physics of quantum computing is far from solved and can’t be readily disentangled from its implementation.

Even if we soon pass the quantum supremacy milestone, the next year or two might be the real crunch time for whether quantum computers will revolutionize computing. There’s still everything to play for and no guarantee of reaching the big goal.

IBM’s quantum computing center at the Thomas J. Watson Research Center in Yorktown Heights, New York, holds quantum computers in large cryogenic tanks (far right) that are cooled to a fraction of a degree above absolute zero.

Connie Zhou for IBM

Shut Up and Compute

Both the benefits and the challenges of quantum computing are inherent in the physics that permits it. The basic story has been told many times, though not always with the nuance that quantum mechanics demands. Classical computers encode and manipulate information as strings of binary digits—1 or 0. Quantum bits do the same, except that they may be placed in a so-called superposition of the states 1 and 0, which means that a measurement of the qubit’s state could elicit the answer 1 or 0 with some well-defined probability.
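To make those probabilities concrete, here is a minimal classical sketch (with made-up amplitudes; a real qubit is physical hardware, not a random number generator) of the statistics of repeatedly measuring one qubit:

```python
import random

# One qubit in superposition: amplitudes for |0> and |1>.
# The squared magnitudes give the measurement probabilities and must sum to 1.
alpha, beta = 0.6, 0.8        # P(0) = 0.36, P(1) = 0.64

def measure():
    """Collapse the superposition: 0 or 1 with the Born-rule probabilities."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

# Each identically prepared qubit yields a single definite answer;
# only repeated runs reveal the underlying probabilities.
samples = [measure() for _ in range(10_000)]
print(sum(samples) / len(samples))    # approximately 0.64
```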

To perform a computation with many such qubits, they must all be sustained in interdependent superpositions of states—a “quantum-coherent” state, in which the qubits are said to be entangled. That way, a tweak to one qubit may influence all the others. This means that somehow computational operations on qubits count for more than they do for classical bits. The computational resources increase in simple proportion to the number of bits for a classical device, but adding an extra qubit potentially doubles the resources of a quantum computer. This is why the difference between a 5-qubit and a 50-qubit machine is so significant.
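The arithmetic behind that doubling is easy to check: simply writing down the full state of n entangled qubits classically takes 2^n complex amplitudes. A quick sketch, assuming 16 bytes per double-precision complex amplitude:

```python
# Memory needed to store the full state vector of n qubits classically,
# assuming 16 bytes per complex amplitude (two 64-bit floats).
for n in (5, 50, 51):
    amplitudes = 2 ** n
    petabytes = amplitudes * 16 / 1e15
    print(f"{n} qubits: {amplitudes:.2e} amplitudes, {petabytes:,.2f} PB")

# 5 qubits fit anywhere; 50 qubits need roughly 18 PB of memory, beyond
# any classical machine; one more qubit doubles that again.
```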

Note that I’ve not said—as it often is said—that a quantum computer has an advantage because the availability of superpositions hugely increases the number of states it can encode, relative to classical bits. Nor have I said that entanglement permits many calculations to be carried out in parallel. (Indeed, a strong degree of qubit entanglement isn’t essential.) There’s an element of truth in those descriptions—some of the time—but none captures the essence of quantum computing.

Inside one of IBM’s cryostats wired for a 50-qubit quantum system.

Connie Zhou for IBM

It’s hard to say qualitatively why quantum computing is so powerful precisely because it is hard to specify what quantum mechanics means at all. The equations of quantum theory certainly show that it will work: that, at least for some classes of computation such as factorization or database searches, there is tremendous speedup of the calculation. But how exactly?

Perhaps the safest way to describe quantum computing is to say that quantum mechanics somehow creates a “resource” for computation that is unavailable to classical devices. As quantum theorist Daniel Gottesman of the Perimeter Institute in Waterloo, Canada, put it, “If you have enough quantum mechanics available, in some sense, then you have speedup, and if not, you don’t.”

Some things are clear, though. To carry out a quantum computation, you need to keep all your qubits coherent. And this is very hard. Interactions of a system of quantum-coherent entities with their surrounding environment create channels through which the coherence rapidly “leaks out” in a process called decoherence. Researchers seeking to build quantum computers must stave off decoherence, which they can currently do only for a fraction of a second. That challenge gets ever greater as the number of qubits—and hence the potential to interact with the environment—increases. This is largely why, even though quantum computing was first proposed by Richard Feynman in 1982 and the theory was worked out in the early 1990s, it has taken until now to make devices that can actually perform a meaningful computation.

Quantum Errors

There’s a second fundamental reason why quantum computing is so difficult. Like just about every other process in nature, it is noisy. Random fluctuations, from heat in the qubits, say, or from fundamentally quantum-mechanical processes, will occasionally flip or randomize the state of a qubit, potentially derailing a calculation. This is a hazard in classical computing too, but it’s not hard to deal with—you just keep two or more backup copies of each bit so that a randomly flipped bit stands out as the odd one out.
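A minimal sketch of that classical fix (with an assumed 5 percent flip probability) shows how cheap redundancy plus a majority vote recovers the stored bit:

```python
import random

def noisy_store(bit, p_flip=0.05):
    """Store one copy of a bit; noise occasionally flips it."""
    return bit ^ 1 if random.random() < p_flip else bit

def read_back(copies):
    """Majority vote: a single flipped copy is the odd one out."""
    return 1 if sum(copies) > len(copies) / 2 else 0

bit = 1
copies = [noisy_store(bit) for _ in range(3)]
print(copies, "->", read_back(copies))   # recovers 1 unless 2+ copies flipped
```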

Researchers working on quantum computers have created strategies for how to deal with the noise. But these strategies impose a huge debt of computational overhead—all your computing power goes to correcting errors and not to running your algorithms. “Current error rates significantly limit the lengths of computations that can be performed,” said Andrew Childs, the codirector of the Joint Center for Quantum Information and Computer Science at the University of Maryland. “We’ll have to do a lot better if we want to do something interesting.”

Andrew Childs, a quantum theorist at the University of Maryland, cautions that error rates are a fundamental concern for quantum computers.

Photo by John T. Consoli/University of Maryland

A lot of research on the fundamentals of quantum computing has been devoted to error correction. Part of the difficulty stems from another of the key properties of quantum systems: Superpositions can only be sustained as long as you don’t measure the qubit’s value. If you make a measurement, the superposition collapses to a definite value: 1 or 0. So how can you find out if a qubit has an error if you don’t know what state it is in?

One ingenious scheme involves looking indirectly, by coupling the qubit to another “ancilla” qubit that doesn’t take part in the calculation but that can be probed without collapsing the state of the main qubit itself. It’s complicated to implement, though. Such solutions mean that, to construct a genuine “logical qubit” on which computation with error correction can be performed, you need many physical qubits.
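The flavor of the trick shows up already in a classical analogy to the three-bit repetition code (a sketch of the logic only; the quantum version measures these parities via ancilla qubits without collapsing the data): parity checks reveal which bit flipped while saying nothing about the encoded value itself.

```python
# Syndrome decoding for the 3-bit repetition code (classical analogy).
# The two parity checks identify WHICH bit flipped, yet both valid
# codewords, 000 and 111, give syndrome (0, 0), so the checks never
# reveal the stored value itself.

def syndrome(codeword):
    a, b, c = codeword
    return (a ^ b, b ^ c)

# Map each syndrome to the position of the flipped bit (None = no error).
CORRECTION = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(codeword):
    pos = CORRECTION[syndrome(codeword)]
    if pos is not None:
        codeword[pos] ^= 1
    return codeword

print(correct([0, 1, 0]))   # middle bit flipped -> restored to [0, 0, 0]
print(correct([1, 1, 0]))   # last bit flipped  -> restored to [1, 1, 1]
```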

How many? Quantum theorist Alán Aspuru-Guzik of Harvard University estimates that around 10,000 of today’s physical qubits would be needed to make a single logical qubit—a totally impractical number. If the qubits get much better, he said, this number could come down to a few thousand or even hundreds. Eisert is less pessimistic, saying that on the order of 800 physical qubits might already be enough, but even so he agrees that “the overhead is heavy,” and for the moment we need to find ways of coping with error-prone qubits.

An alternative to correcting errors is avoiding them or canceling out their influence: so-called error mitigation. Researchers at IBM, for example, are developing schemes for figuring out mathematically how much error is likely to have been incurred in a computation and then extrapolating the output of a computation to the “zero noise” limit.
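A toy version of that zero-noise extrapolation, with hypothetical measured values rather than real device data, looks like this:

```python
import numpy as np

# Run the same circuit with the noise deliberately amplified 1x, 2x, 3x,
# then fit a line and read off the value implied at zero noise.
noise_scale = np.array([1.0, 2.0, 3.0])     # multiples of the native noise
measured    = np.array([0.82, 0.69, 0.55])  # hypothetical expectation values

slope, intercept = np.polyfit(noise_scale, measured, 1)
print(f"zero-noise estimate: {intercept:.3f}")   # ~0.957 here
```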

Some researchers think that the problem of error correction will prove intractable and will prevent quantum computers from achieving the grand goals predicted for them. “The task of creating quantum error-correcting codes is harder than the task of demonstrating quantum supremacy,” said mathematician Gil Kalai of the Hebrew University of Jerusalem in Israel. And he adds that “devices without error correction are computationally very primitive, and primitive-based supremacy is not possible.” In other words, you’ll never do better than classical computers while you’ve still got errors.

Others believe the problem will be cracked eventually. According to Jay Gambetta, a quantum information scientist at IBM’s Thomas J. Watson Research Center, “Our recent experiments at IBM have demonstrated the basic elements of quantum error correction on small devices, paving the way towards larger-scale devices where qubits can reliably store quantum information for a long period of time in the presence of noise.” Even so, he admits that “a universal fault-tolerant quantum computer, which has to use logical qubits, is still a long way off.” Such developments make Childs cautiously optimistic. “I’m sure we’ll see improved experimental demonstrations of [error correction], but I think it will be quite a while before we see it used for a real computation,” he said.

Living With Errors

For the time being, quantum computers are going to be error-prone, and the question is how to live with that. At IBM, researchers are talking about “approximate quantum computing” as the way the field will look in the near term: finding ways of accommodating the noise.

This calls for algorithms that tolerate errors, getting the correct result despite them. It’s a bit like working out the outcome of an election regardless of a few wrongly counted ballot papers. “A sufficiently large and high-fidelity quantum computation should have some advantage [over a classical computation] even if it is not fully fault-tolerant,” said Gambetta.

Lucy Reading-Ikkanda/Quanta Magazine

One of the most immediate error-tolerant applications seems likely to be of more value to scientists than to the world at large: to simulate stuff at the atomic level. (This, in fact, was the motivation that led Feynman to propose quantum computing in the first place.) The equations of quantum mechanics prescribe a way to calculate the properties—such as stability and chemical reactivity—of a molecule such as a drug. But they can’t be solved classically without making lots of simplifications.

In contrast, the quantum behavior of electrons and atoms, said Childs, “is relatively close to the native behavior of a quantum computer.” So one could then construct an exact computer model of such a molecule. “Many in the community, including me, believe that quantum chemistry and materials science will be one of the first useful applications of such devices,” said Aspuru-Guzik, who has been at the forefront of efforts to push quantum computing in this direction.

Quantum simulations are proving their worth even on the very small quantum computers available so far. A team of researchers including Aspuru-Guzik has developed an algorithm that they call the variational quantum eigensolver (VQE), which can efficiently find the lowest-energy states of molecules even with noisy qubits. So far it can only handle very small molecules with few electrons, which classical computers can already simulate accurately. But the capabilities are getting better, as Gambetta and coworkers showed last September when they used a 6-qubit device at IBM to calculate the electronic structures of molecules, including lithium hydride and beryllium hydride. The work was “a significant leap forward for the quantum regime,” according to physical chemist Markus Reiher of the Swiss Federal Institute of Technology in Zurich, Switzerland. “The use of the VQE for the simulation of small molecules is a great example of the possibility of near-term heuristic algorithms,” said Gambetta.
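The variational loop at the heart of such algorithms can be sketched classically. In this toy version (an invented one-qubit Hamiltonian, not the published chemistry experiments), a classical optimizer tunes a trial state to minimize its energy, which is the same division of labor VQE uses, except that on hardware the energy comes from measurements on the quantum device:

```python
import numpy as np
from scipy.optimize import minimize

# A tiny "Hamiltonian" standing in for a molecule's energy operator.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Energy <psi(theta)|H|psi(theta)> of a one-parameter trial state."""
    t = theta[0]                      # scipy passes a length-1 array
    psi = np.array([np.cos(t / 2), np.sin(t / 2)])
    return psi @ H @ psi

result = minimize(energy, x0=[0.1])
print(result.fun)   # close to the exact ground energy, -sqrt(1.25) ~ -1.118
```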

But even for this application, Aspuru-Guzik confesses that logical qubits with error correction will probably be needed before quantum computers truly begin to surpass classical devices. “I would be really excited when error-corrected quantum computing begins to become a reality,” he said.

“If we had more than 200 logical qubits, we could do things in quantum chemistry beyond standard approaches,” Reiher adds. “And if we had about 5,000 such qubits, then the quantum computer would be transformative in this field.”

What’s Your Volume?

Despite the challenges of reaching those goals, the fast growth of quantum computers from 5 to 50 qubits in barely more than a year has raised hopes. But we shouldn’t get too fixated on these numbers, because they tell only part of the story. What matters is not just—or even mainly—how many qubits you have, but how good they are, and how efficient your algorithms are.

Any quantum computation has to be completed before decoherence kicks in and scrambles the qubits. Typically, the groups of qubits assembled so far have decoherence times of a few microseconds. The number of logic operations you can carry out during that fleeting moment depends on how quickly the quantum gates can be switched—if this time is too slow, it really doesn’t matter how many qubits you have at your disposal. The number of gate operations needed for a calculation is called its depth: Low-depth (shallow) algorithms are more feasible than high-depth ones, but the question is whether they can be used to perform useful calculations.
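The budget arithmetic is straightforward. With illustrative numbers of the magnitudes mentioned above (assumed, not taken from any specific device):

```python
coherence_time = 5e-6    # a few microseconds before decoherence wins
gate_time      = 50e-9   # tens of nanoseconds per gate operation

# Every sequential gate must fit inside the coherence window,
# which caps the feasible circuit depth.
max_depth = int(coherence_time / gate_time)
print(max_depth)         # ~100 operations, however many qubits you have
```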

What’s more, not all qubits are equally noisy. In theory it should be possible to make very low-noise qubits from so-called topological electronic states of certain materials, in which the “shape” of the electron states used for encoding binary information confers a kind of protection against random noise. Researchers at Microsoft, most prominently, are seeking such topological states in exotic quantum materials, but there’s no guarantee that they’ll be found or will be controllable.

Researchers at IBM have suggested that the power of a quantum computation on a given device be expressed as a number called the “quantum volume,” which bundles up all the relevant factors: number and connectivity of qubits, depth of algorithm, and other measures of the gate quality, such as noisiness. It’s really this quantum volume that characterizes the power of a quantum computation, and Gambetta said that the best way forward right now is to develop quantum-computational hardware that increases the available quantum volume.
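One early formulation of the metric (a simplified sketch of IBM's 2017 proposal; treat the exact form as an approximation) caps the useful size of a computation by both the number of qubits and the depth the gate error rate allows:

```python
# Simplified early quantum-volume heuristic: the achievable depth scales
# like 1/(n * eps) for n qubits with effective two-qubit error rate eps,
# and the volume is set by the smaller of width and depth.
def quantum_volume(n_qubits, error_rate):
    achievable_depth = 1.0 / (n_qubits * error_rate)
    return min(n_qubits, achievable_depth) ** 2

print(quantum_volume(50, 0.01))    # min(50, 2)^2  = 4: many qubits, noisy gates
print(quantum_volume(10, 0.001))   # min(10, 100)^2 = 100: fewer, better qubits win
```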

This is one reason why the much vaunted notion of quantum supremacy is more slippery than it seems. The image of a 50-qubit (or so) quantum computer outperforming a state-of-the-art supercomputer sounds alluring, but it leaves a lot of questions hanging. Outperforming for which problem? How do you know the quantum computer has got the right answer if you can’t check it with a tried-and-tested classical device? And how can you be sure that the classical machine wouldn’t do better if you could find the right algorithm?

So quantum supremacy is a concept to handle with care. Some researchers prefer now to talk about “quantum advantage,” which refers to the speedup that quantum devices offer without making definitive claims about what is best. An aversion to the word “supremacy” has also arisen because of the racial and political implications.

Whatever you choose to call it, a demonstration that quantum computers can do things beyond current classical means would be psychologically significant for the field. “Demonstrating an unambiguous quantum advantage will be an important milestone,” said Eisert—it would prove that quantum computers really can extend what is technologically possible.

That might still be more of a symbolic gesture than a transformation in useful computing resources. But such things may matter, because if quantum computing is going to succeed, it won’t be simply by the likes of IBM and Google suddenly offering their classy new machines for sale. Rather, it’ll happen through an interactive and perhaps messy collaboration between developers and users, and the skill set will evolve in the latter only if they have sufficient faith that the effort is worth it. This is why both IBM and Google are keen to make their devices available as soon as they’re ready. As well as a 16-qubit IBM Q experience offered to anyone who registers online, IBM now has a 20-qubit version for corporate clients, including JP Morgan Chase, Daimler, Honda, Samsung and the University of Oxford. Not only will that help clients discover what’s in it for them; it should create a quantum-literate community of programmers who will devise resources and solve problems beyond what any individual company could muster.

“For quantum computing to take traction and blossom, we must enable the world to use and to learn it,” said Gambetta. “This period is for the world of scientists and industry to focus on getting quantum-ready.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

The Chrome Extensions the WIRED Staff Can't Live Without

Nearly two-thirds of internet users turn to Chrome for their browsing needs, but far fewer take full advantage of its available extensions, the add-ons that elevate it from good to great. If you’re one of those plain vanilla Chrome users—or if you’ve only dabbled in the extensions game—check out these sprinkles of joy that the WIRED staff swears by.

The following list of Chrome extension recommendations is by no means comprehensive; there are plenty to explore and discover in the Chrome Web Store. (If you go exploring, just make sure you stick with reputable developers.) But these are the ones we depend on every day to keep our internet experience as sane and enjoyable as possible. May they do the same for you.

Wayback Machine

Have you ever clicked on an interesting link, only to be greeted by a 404 error? Wayback Machine’s Chrome extension can help. Created by the Internet Archive—a nonprofit that preserves billions of web pages—the extension shows you what a website looked like in the past, even if it has since been deleted. It can turn up the most recent version of a page it has saved, or go back to the first time the Internet Archive recorded it. The latter can be especially illuminating. For example, you can see what a user’s Twitter account looked like when they created it, or how a company’s website appeared when it first launched. One drawback: Wayback Machine doesn’t have a record of every webpage on the internet. But it can also help you prevent other pages from vanishing in the future: The extension lets you save the web page you’re currently visiting to the Internet Archive’s database. —Staff Writer Louise Matsakis
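The Internet Archive also exposes the same lookup as a public web API; here is a short Python sketch (whether the extension itself calls this exact endpoint is an assumption):

```python
import requests

# Ask the Wayback Machine for the closest saved snapshot of a URL.
resp = requests.get("https://archive.org/wayback/available",
                    params={"url": "wired.com"})
closest = resp.json().get("archived_snapshots", {}).get("closest")
if closest:
    print(closest["url"], closest["timestamp"])   # snapshot URL, capture time
else:
    print("No archived copy found.")
```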

The Great Suspender

You’ll find many tab management solutions on this list, but the best by far for my purposes is the Great Suspender, an extension which, as the name suggests, suspends any Chrome tabs that you’ve left fallow for a given amount of time. As someone who keeps well over a dozen tabs open at any given time during the day—and often more—it’s been an inestimable boon to my laptop and my sanity. And when it’s time to revisit a page, a simple click springs it back to life. It also lets you whitelist any tabs, like Gmail, that are too precious to suspend. —News Editor Brian Barrett

PixelBlock

Have I read your email? That’s for me to know and you not to find out. This Chrome extension spots and blocks attempts to track when messages are opened and to send that data back to the sender. I know who’s tracking me by the small red eye icon that appears next to messages in Gmail. Sure, I’m not surprised that services like Mailchimp track when messages are opened, but I’m sketched out when professional contacts do the same. —Deputy Managing Editor Joanna Pearlstein
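For the curious, here is a generic sketch of what PixelBlock is defending against (an illustration of tracking pixels in general, not any vendor's actual system; the tracker domain is invented): the sender embeds a unique, invisible one-pixel image, and the open is logged when a mail client fetches it.

```python
import uuid

TRACKING_HOST = "https://tracker.example.com"   # hypothetical tracking server

def tracked_email_body(text):
    """Embed a unique 1x1 tracking pixel; fetching it reveals the open."""
    pixel_id = uuid.uuid4().hex                 # unique per message/recipient
    pixel = (f'<img src="{TRACKING_HOST}/open/{pixel_id}.gif" '
             'width="1" height="1" alt="">')
    return f"<html><body><p>{text}</p>{pixel}</body></html>"

print(tracked_email_body("Just checking in!"))
```

An extension can then block the open report simply by refusing to load such remote one-pixel images.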

animatedTabs

The best Chrome extensions effortlessly improve our lives in small but impactful ways. And animatedTabs does exactly that. Once installed, the extension automatically loads a random GIF in the center of every new Chrome tab you open. Sound annoying? Come on, people, this is pure delight. The GIFs seem to be largely sourced from Reddit’s /r/gifs/, so you mostly get previously undiscovered gems; there’s not much crying Jordan or shark cat on a Roomba. But what beats new? And all because you opened a tab to finally pay your three-months-overdue speeding ticket! The only downside to animatedTabs? You never know when it’s going to generate something NSFW or just dumb. But the real internet cred comes from not caring.
—Staff Writer Lily Newman

xTab

Bedeviled by browser-tab clutter? Try xTab. It restricts the number of pages you can have open in a given browser window. Just set your cap and go about your business. When you exceed your limit, the extension gets to culling, automatically axing your oldest, least-accessed, or least-recently used tab. It can also prevent you from opening excess tabs altogether. I use that last setting the most; I like to do triage myself. Plus, I’m working on killing my reflexive tabbing habit, and being interrupted in the act helps keep my fingers in check. If you’ve tried other tab managers in the past and found them wanting, this could be your ticket; where most encourage you to cmd-T with abandon, xTab retrains you to curate a more manageable tabscape in real-time. —Senior Writer Robbie Gonzalez

Go Back With Backspace

In July of 2016, the world changed for the worse. Up until that point, the backspace key on your desktop keyboard doubled as a back button in Chrome. It had been that way since the browser’s launch some eight years prior. By mid-2016, this action—a simple keystroke to go back one page in your browser history—had become hardwired in our lizard brains. But Google removed the backspace action that summer, because it caused a particularly Googley problem: People were losing work in web apps. When a user typed into a browser text field and hit the backspace key hoping to correct a typo, they’d sometimes inadvertently cause the browser to jump back one page, nuking whatever efforts they’d spent the last few minutes sweating over. Sure, that’s annoying. But imagine the outrage of millions of Chrome users when, upon the next browser update, the backspace key suddenly did nothing. Google had neutered one of the most useful mechanisms for navigating the web. Thankfully, the company recognized our plight and just weeks later released this extension, which restores the back-button functionality of the backspace key. Hallelujah. The preferred keystroke of Alt + left arrow is still the default in Chrome, and maybe you’re used to that now. But why force yourself to press two keys when you can install this extension and press only one? —Senior Editor Michael Calore

OneTab

You know when you open Chrome and the browser is like, “Are you sure you want to reopen 400 tabs?” (Yes I do, and rude!) Maybe it’s a selection of news articles you’re planning to read later, or the aftermath of clicking through dozens of Wikipedia pages. Maybe you don’t even know what’s in all those tabs. Either way, keeping them all open puts a huge strain on your browser. Close them all—without losing them forever—with the handy OneTab extension. One click of the button neatly collates all your open tabs into one list of links that you can revisit later. It saves your computer incredible amounts of RAM, speeds up the browser immediately, and keeps all those links handy for when you’re totally, definitely, someday coming back to read them. —Senior Associate Editor Arielle Pardes

HabitLab

My name is Tom and I have a Twitter problem—but I’m getting help from a Chrome extension called HabitLab. Anytime I look at the bird-logoed slot machine of trolling, outrage, and thinkfluencing there’s now a bold banner at the top counting up how long I’ve been on the site that day. If I open a Twitter tab but regain my senses and close it again quickly, a popup informs me how many seconds I just saved compared to my usual time-wasting visit. The message comes with a different “Good job!” GIF each time; most recently it was Jimmy Kimmel. HabitLab was developed by Stanford’s Human Computer Interaction group to help those of us suffering internet distraction disorder (most of us?) take control of our online habits. When first installed, it prompts you to identify the sites you want to spend less time on. HabitLab will then keep track of your wasted seconds, minutes, and hours, and display them in neat charts. It also offers a menu of “nudges” to help keep those trend lines moving in the right direction. One of them is the timer that now haunts me on Twitter, a nudge named The Supervisor. Others include GateKeeper, which makes you wait a few seconds before a page you’re trying to give up loads, and the devilish 1Minute Assassin, which kills a tab after 60 seconds. —Senior Writer Tom Simonite

Eye Dropper

I am not a designer, and I’m sure that those who are have far better tools for pulling colors off of web pages than “Eye Dropper,” a mostly-but-not-always-functional extension that lets you eye-drop any color from around the web, and grab its RGB and Hex color codes. It’s particularly handy for quick fixes that don’t necessitate slowing down your computer by opening up Photoshop—like, say, updating the text on a WIRED section page to make it more readable. It isn’t the prettiest extension, and it’s all too easy to accidentally trigger the eyedropper if, like me, you’re prone to hitting alt-P instead of command-P when trying to print—but Eye Dropper gets the job done. —Digital Producer Miranda Katz

Ghostery

If you’ve ever seen a Google ad follow you around the entire web and back, you know just how annoying and invasive online tracking can become. Ghostery is a fascinating way to see what services websites use to track and collect data about you. It creates a little icon with a number, showing you how many trackers every site uses. Wikipedia, for example, has 0. Most other sites have at least a few. You can see what they use to monitor their website traffic and serve ads, and block services that you don’t like. It’s not perfect; sometimes it will break sites you want to visit, and you’ll have to turn it off or pause it, although the latest release uses AI powers to help minimize the collateral damage. —Senior Writer Jeffrey Van Camp

ProPublica’s What Facebook Thinks You Like

Facebook thinks I like arachnids because my brother writes for a TV show called Scorpion. It thinks I like Christmas Eve because Pearlstein, and it thinks I like flywheels because my late friend Eric Scott was in a band by that name. I know all of this thanks to ProPublica’s cool Facebook Chrome Extension, which helps me see what Facebook thinks about me, and then lets me rate how spot-on—or not—the site’s analysis is, using the aptly named Creepy Meter. —JP

Pocket

I fly a lot. In the past year, I’ve taken roughly a dozen round trips, each with their own fun, idiosyncratic layovers and delays. To pass the tarmac time, I could watch a bunch of downloaded episodes of The Crown or The Great British Baking Show. I could read a good ol’ fashioned book. Or I could connect to plane Wi-Fi and incessantly check Twitter. Instead, what I prefer to do before leaving for the airport is save a bunch of stories to Pocket. This nifty extension allows you to stow away things you want to read later, no internet connection necessary (though if you use the Pocket app on your phone, be sure to sync it over Wi-Fi or a network connection before going into airplane mode). Pocket also recommends stories, based on other users you follow or topics that interest you, and allows you to optimize your reading experience—I prefer a serif font with a black background and very large text to protect my fatigued eyes. But for someone who opens a million tabs with an intention to eventually read them all, it’s my preferred way to dog-ear a story. If you want to start saving, here’s a shameless plug to visit WIRED’s Backchannel page, chock full of excellent longform narratives that will transport you during your disconnected commute. —WIRED.com Editor Andrea Valdez

1Password

Getting a password manager extension means getting a password manager, so definitely do that. All the major managers—LastPass, Dashlane, 1Password, KeePass—offer Chrome extensions, and they’re crucial to making password managers easy to use. The browser extensions act as a quick control center to fill login forms, generate new passwords, and save new credentials into your manager. And though password managers can work without extensions, switching back and forth to a standalone desktop application can be clunky while you’re browsing online. These extensions do carry some potential security risks, but if they’re what get you on a password manager in the first place, they’re worth it. —LN

Google Calendar

You probably use Google Calendar every day—many, many times. Instead of letting it permanently squat on valuable tab real estate on your desktop, try the Google Calendar Chrome extension instead. It puts a small Calendar icon in the upper right of your browser window, right where you’d expect. Tap it, and a box drops down, showing you all the meetings you have coming up. I like the design because it reminds me of the wonderful Google Cal widget on my Android home screen. It’s just a one-shot view of the meetings and events you have coming up in the next week or two. You can customize which calendars appear, which is also nice, because if you’re like me, you have a ton of them. For more display options—or to get crazy and log into two Google Calendars at the same time—try the Checker Plus for Google Calendar extension. It’s not official, but works well. —JVC

And More

WIRED Editor-in-Chief Nicholas Thompson swears by Grammarly, an extension that checks your emails, tweets, Facebook posts, and other online missives for spelling and grammar mistakes. Features Editor Mark Robinson recommends Reader View, which he describes as a “one-button, rather lo-fi instant Instapaper,” stripping web articles down to the bare essentials. And while Senior Writer Andy Greenberg has not and likely would never use it, he did find an extension called Kardashian Krypt, which encrypts your messages in images of Kim Kardashian using a technique known as steganography.


To Advance Artificial Intelligence, Reverse-Engineer the Brain

Your three-pound brain runs on just 20 watts of power—barely enough to light a dim bulb. Yet the machine behind our eyes has built civilizations from scratch, explored the stars, and pondered our existence. In contrast, IBM’s Watson, a supercomputer that runs on 20,000 watts, can outperform humans at calculation and Jeopardy! but is still no match for human intelligence.


James J. DiCarlo, MD/PhD, is a professor of neuroscience, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds and Machines, and the head of the department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology.

Neither Watson, nor any other artificially “intelligent” system, can navigate new situations, infer what others believe, use language to communicate, write poetry and music to express how it feels, and create math to build bridges, devices, and life-saving medicines. Why not? The society that solves the problem of intelligence will lead the future, and recent progress shows how we can seize that opportunity.

Imagine human intelligence as a skyscraper. Instead of girders and concrete, this structure is built with algorithms, or sequences of interacting rules that process information, layered upon and interacting with each other like the floors of that building.

The floors above the street represent the layers of intelligence that humans have some conscious access to, like logical reasoning. These layers inspired the pursuit of artificial intelligence in the 1950s. But the most important layers are the many floors that you don’t see, in the basement and foundation. These are the algorithms of everyday intelligence that are at work every time we recognize someone we know, tune in to a single voice at a crowded party, or learn the rules of physics by playing with toys as a baby. While these subconscious layers are so embedded in our biology that they often go unnoticed, without them the entire structure of intelligence collapses.

As an engineer-turned-neuroscientist, I study the brain’s algorithms for one of these foundational layers—visual perception, or how your brain interprets your surroundings using vision. My field has recently experienced a remarkable breakthrough.

For decades, engineers built many algorithms for machine vision, yet those algorithms each fell far short of human capabilities. In parallel, cognitive scientists and neuroscientists like myself accumulated myriad measurements describing how the brain processes visual information. They described the neuron (the fundamental building block of the brain), discovered that many neurons are arranged in a specific type of multi-layered, “deep” network, and measured how neurons inside that neural network respond to images of the surroundings. They characterized how humans quickly and accurately respond to those images, and they proposed mathematical models of how neural networks might learn from experience. Yet, these approaches alone failed to uncover the brain’s algorithms for intelligent visual perception.

The key breakthrough came when researchers used a combination of science and engineering. Specifically, some researchers began to build algorithms out of brain-like, multi-level, artificial neural networks so that they had neural responses like those that neuroscientists had measured in the brain. They also used mathematical models proposed by scientists to teach these deep neural networks to perform visual tasks that humans were found to be especially good at—like recognizing objects from many perspectives.
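A minimal sketch of such a brain-like, multi-level network (an illustrative architecture in PyTorch, not any particular lab's model):

```python
import torch
from torch import nn

# Stacked layers loosely mirror successive stages of the visual pathway:
# early layers pick out edges, later layers assemble them into objects.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # low-level features
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # mid-level shapes
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                               # 10 object categories
)

image = torch.randn(1, 3, 32, 32)   # one fake 32x32 color image
print(model(image).shape)           # torch.Size([1, 10]), scores per category
```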

This combined approach rocketed to prominence in 2012, when computer hardware had advanced enough for engineers to build these networks and teach them using millions of visual images. Remarkably, these brain-like, artificial neural networks suddenly rivaled human visual capabilities in several domains, and as a result, concepts like self-driving cars aren’t as far-fetched as they once seemed. Using algorithms inspired by the brain, engineers have improved the ability of self-driving cars to process their environments safely and efficiently. Similarly, Facebook uses these visual recognition algorithms to recognize and tag friends in photos even faster than you can.

This deep learning revolution launched a new era in A.I. It has completely reshaped technologies from the recognition of faces and objects and speech, to automated language translation, to autonomous driving, and many others. The technological capability of our species was revolutionized in just a few years—the blink of an eye on the timescale of human civilization.

But this is just the beginning. Deep learning algorithms resulted from new understanding of just one layer of human intelligence—visual perception. There is no limit to what can be achieved from a deeper understanding of other algorithmic layers of intelligence.

As we aspire to this goal, we should heed the lesson that progress did not result from engineers and scientists working in silos; it resulted from the convergence of engineering and science. Because many possible algorithms might explain a single layer of human intelligence, engineers are searching for the proverbial needle in a haystack. However, when engineers guide their algorithm-building and testing efforts with discoveries and measurements from brain and cognitive science, we get a Cambrian explosion in A.I.

This approach of working backwards from measurements of the functioning system to engineer models of how that system works is called reverse engineering. Discovering how the human brain works in the language of engineers will not only lead to transformative A.I. It will also illuminate new approaches to helping those who are blind, deaf, autistic, schizophrenic, or who have learning disabilities or age-related memory loss. Armed with an engineering description of the brain, scientists will see new ways to repair, educate, and augment our own minds.

The race is on to see if reverse engineering will continue to provide a faster and safer route to real A.I. than traditional, so-called forward engineering that ignores the brain. The winner of this race will lead the economy of the future, and the nation is positioned to seize this opportunity. But to do so, the US needs significant new financial commitments from government, philanthropy, and industry that are devoted to supporting novel teams of scientists and engineers. In addition, universities must create new industry-university partnership models. Schools will need to train brain and cognitive scientists in engineering and computation, train engineers in the brain and cognitive sciences, and uphold mechanisms of career advancement that reward such teamwork. To advance A.I., reverse engineering the brain is the way forward. The solution is right behind our eyes.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints.


6 Proven Ways to Generate Good Luck Every Day

Nearly everyone wants to be luckier. Some people think success is about preparing for luck, while others think success is about what you do with luck when you find it. There may be different perspectives on luck, but everyone agrees you can’t go wrong with more of it, as long as it’s good.

YPO member Stuart Lacey is considered by many to be an extremely lucky guy, personally and professionally. He married the woman of his dreams, lives exactly where and how he wants, and has traveled to over 70 countries. He’s built 5 successful companies, including Trunomi, a customer consent data rights platform. Bank Innovation even named Lacey as an Innovator to Watch. Lacey has made respecting luck a regular part of his business activity. He even created a mathematical formula to analyze and replicate it.

Lacey’s Lucky Formula

% Luckiness = [(experience) + (situational awareness) x [perseverance (work ethic / heart)]^unlimited x (# of times attempted)^(failure is good) x (choice to act)^binary x (Respect and EQ)] / (tolerance for adversity)

Here are Lacey’s tips on making your own luck daily:

1. Use Your Education and Experience

Lacey likens experience to the process of securing a patent. “Anyone can file for a patent in a matter of hours for a few hundred bucks. But without a deep understanding of the technical, engineering, design, and geo-political aspects, and without appreciating the importance of opportunity cost and due diligence, the chance of receiving that patent is practically zero.” And Lacey understands that outside factors can influence the outcome, explaining, “Of course experience can be borrowed, for example, by using a world-class (and equally expensive!) patent attorney.” But in the long run, no amount of money can make up for a lack of experience.

2. Have Situational Awareness

Lacey asks a frightening question: “When you’re in a movie theatre, do you actually know where the exits are?” Lacey asserts, “Having situational awareness can multiply by a thousand your chances of survival.” The same is true in business. In a less frightening scenario, Lacey suggests it’s like skiing: “When you’re at the bottom of the mountain, have the foresight to recognize that the last one on the ski tram is the first one on the slope. You can totally change your experience just by thinking ahead.” What you do when you get there is up to you, but you can maximize your potential by understanding what’s going on around you.

3. There’s No Substitute for Heart

A hockey fan, Lacey likes to quote Luc Robitaille, who said, “You can find someone smart, but never underestimate heart.” Lacey says, “Passion and work ethic usually trump everything else, and luck does not favor those who don’t put in the hours.” It helps, of course, when your career is doing something you love. But when you put in the solid work, the rest will follow with more ease.

4. Embrace Failure

Lacey is a firm believer in the adage, “Fail quickly, fail cheaply, and fail often.” Lacey says, “The willingness to accept and learn from one’s mistakes is vital for luck.” Mistakes here can multiply. “You have to invest time with your head down, ready to constantly pivot and adjust. Embracing change and innovation IS to embrace failure.” People are told from childhood that failure is bad, and this is a crutch that any entrepreneur has to overcome.

5. Take Action

“How often do you look at something new and say, ‘I thought of that a while ago!'” Lacey asks. “There are so many stories of inventions that never occurred or were greatly delayed until someone else took the initiative to act.” It’s not always easily done. “It takes courage,” Lacey acknowledges, “and a willingness to fail and bounce back.” But the alternative is always worse. Lacey asserts, “Either you act and luck has a shot, or you don’t act and the chances of your influencing the outcome are nil.” Take the chance on yourself, and don’t be afraid – failure is an opportunity.

6. Attitude Matters

For Lacey, it’s important to remember that people are human. “If your flight is cancelled, it’s not the gate agent’s fault. I have always found that a kind, supportive, appreciative tone, with a strong measure of compassion, works absolute wonders.” No one likes being unappreciated or disrespected. “Focusing on the human element of interactions at all times is a multiplier of your chances for a lucky outcome.” Another important element is to maintain optimism. Lacey explains, “I’m realistic about the work required, but I’m also aware the future is one that we will create and craft. You need to have the ability to accept bumps in the road while keeping your eye on the prize.” Further, a willingness to accept compromise is key. Be humble, and remember to exercise emotional intelligence.

Each week Kevin explores exclusive stories inside YPO, the world’s premier peer-to-peer organization for chief executives, eligible at age 45 or younger.

What Do Customers Want? Insights from IKEA's Founder and Beetle Bailey's Creator

Regular readers of this column know that I spend a lot of time thinking about what makes people tick.

That’s because I learned long ago that the secret to appealing to customers, stakeholders, audience members and anyone you care about is to understand who they are and what they want.

And that’s why I carefully read the obituaries of two men who exemplified this philosophy: Ingvar Kamprad, the founder of IKEA, and Mort Walker, the creator of “Beetle Bailey,” a comic strip about a lazy Army private. (Both men died this week.)

Wait–what could these two possibly have in common?

Well, start with the fact that each man was extremely successful in his field. When Kamprad was 17, he launched the store that, over the next seven decades, became the world’s largest seller of furniture (with 400 stores and $42 billion in revenue). And Walker created the comic strip that would ultimately be syndicated in 1,800 newspapers around the world; he had the longest tenure of any cartoonist on an original creation.

Although they were in very different realms, here’s what united Kamprad and Walker: their deep connection to their customers.

For example, in a Forbes interview in 2000, Kamprad summed up his approach this way: “I see my task as serving the majority of people. The question is, how do you find out what they want, how best to serve them? My answer is to stay close to ordinary people, because at heart I am one of them.”

And, as Richard Goldstein wrote in Walker’s New York Times obituary, “‘Beetle Bailey’ used the Army as its setting, but its popularity derived from everyday life and the universal battles against authority figures and mindless bureaucracy.”

When the Defense Department congratulated Mr. Walker on his 80th birthday, he said: “Human frailty is what humor is all about. People like to see the foibles of mankind. And they relate to the little guy, the one on the bottom.”

For both Kamprad and Walker, their understanding of customers–readers or shoppers–wasn’t theoretical or informed only by data; it was based on personal experience.

Walker spent a stint in the Army, and he stayed in touch with servicemen throughout his life. And although Kamprad became very, very rich, he regularly flew economy and popped into his stores unannounced to replicate the customer experience.

These men knew that in order to break through today’s noise and nonsense, you have to not only know your customers; you have to love them.

As I’ve written, your love has to be real–not manufactured or manipulative–and unconditional. You have to clearly see your customers’ faults, but love them anyway. Your love has to be unwavering, despite inattention, inconstancy and even infidelity.

Only by truly loving your customers can you deliver in a way that’s truly about them, not about you. The leap to loving brings you in touch with what matters to people. Suddenly you’re able to communicate in ways that profoundly connect. You’re not on the other side of the chasm from your customers: You’re right there next to them, talking softly, saying what they’ve always wanted to hear. As a result, you can give customers what they actually want.

China's Lenovo posts third-quarter loss due to U.S. tax reform

HONG KONG (Reuters) – Chinese personal computer maker Lenovo Group reported a quarterly loss of $289 million on Thursday against a $98 million profit a year earlier, due mainly to a one-off charge of $400 million resulting from U.S. tax reform.

Revenue for the three-month period ending December was $12.94 billion, compared with $12.17 billion a year ago.

Lenovo said its core PC and smart devices business group posted an 8 percent rise in revenue to $9.25 billion, as revenue growth outpaced shipment growth thanks to better average selling prices driven by innovative products and a better product mix.

Its struggling mobile business – which the group had set a target to turn around by the end of the financial year in March – reported a narrower operating loss before taxation of $92 million, compared with a loss of $132 million in the preceding quarter.

($1 = 6.2842 Chinese yuan renminbi)

Reporting by Sijia Jiang and Donny Kwok; Editing by Stephen Coates

Don't Quit Your PR Program Unless You've Considered These 3 Things

Whether you are looking to gain awareness, improve SEO, or increase sales, having great exposure can help you get there. But PR is not a band-aid for an overarching business problem–nor is it a get-rich-quick technique.

A great PR strategy can take many years to build. Over the years, I’ve seen many companies start their efforts, only to stop before they’ve given the program enough time to develop. I’ve heard dozens of marketers and founders explain that they quit their PR efforts after their pitch didn’t get picked up by enough outlets in the first few weeks. Gaining great coverage takes time, pitch optimization, and persistence.

Oftentimes, if a brand had taken a step back after a rejected story to tweak its angle and try again, the second story it pitched could have been wildly successful. Here’s why you shouldn’t throw in the towel on your PR outreach just yet:

1. Relationships take time to build.

Imagine you are at a party. You immediately start talking about yourself, your business, and your news. Very quickly, many people will not want to talk with you.

The same holds true when you’re building relationships with the media. It takes time to get to know a reporter and what they are writing about, and then to craft relevant pitches that are helpful to them. When you build trust and rapport with reporters, they’ll be more likely to open your emails, which is the first step to gaining great coverage.

You can build a better relationship with reporters by becoming well versed with their past writings and looking for opportunities to tell them stories of interest. Take a look through their Twitter accounts and personal websites to learn more about what they’re covering and the news that is important to them.

When you reach out to a reporter for the first time, show them that you are knowledgeable about their area of coverage and that your story fits their angle. When we reach out to reporters we make sure to spend time reading their past work to ensure our pitch is the right fit for their area of expertise. It can be easy to burn a press bridge simply by not personalizing an email enough–take your time, do your research, and get to know reporters for the long term. Slow and steady wins the race.

2. SEO is a long-term game.

When you receive a press mention, you’ll likely see a spike in traffic on the day it’s published–but don’t discount the future traffic. If you are a mattress company and you get listed as “The Best Mattresses Ever Made,” you’ll benefit from both the spike and also later from people who are searching for mattresses and come across the article. Traffic from press articles should be monitored for months to come, even after publication.

An authoritative link will not only drive traffic, but will also help your website in the search engine rankings. This boost will not happen instantly. With time and relevant inbound links, you’ll see not just your referral traffic grow, but also your organic search traffic from Google.

3. Press takes commitment–and a bit of luck.

It takes a while to learn about the best way to pitch your product. Each time you pitch, you’ll learn more about what copy and message resonates with reporters.

If you’re not seeing any success, it does not mean you don’t have an interesting story. It might mean you are pitching to the wrong reporters, your email subject line needs work, or you simply didn’t follow up.

By tracking your emails with a tool like SideKick or Yesware, you’ll be better able to see who is opening your emails, what they’re clicking on, and how many times they went back to the email. You can use this data to refine your pitch the next time. With the media always changing, it also takes a bit of luck to pitch at the right time to the right reporter with the right story.

Pitching takes a strong backbone, and you’ll get a lot of rejections. If you haven’t had success yet, keep trying. And if you’ve been pitching for months with no results, it might be time to call in a PR pro to help you optimize your pitch and press kit.

If you’re looking to reap the benefits of the press, start early, optimize often, and plan your strategy for the long haul. This time next year, you’ll be glad you stuck with it.

Before Investing in Artificial Intelligence, You Should Know These 4 Things

IPsoft is, in many ways, an unusual entrant into the crowded but burgeoning artificial intelligence industry. First of all, it is not a startup but a 20-year-old company, and its leader isn’t some millennial savant but a fashionable former NYU professor named Chetan Dube. It bills its cognitive agent, Amelia, as the “world’s most human AI.”

It got its start building and selling autonomic IT solutions, and its years of experience providing business solutions give it a leg up on many of its competitors. It can offer not only technological solutions, but also the insights it has gained helping businesses streamline their operations with automation.

Ever since IBM’s Watson defeated human champions on the game show Jeopardy!, excitement about AI has run high, but inflated expectations have often given way to disappointment. So I recently met with a number of top executives at IPsoft to get a better understanding of how leaders can successfully implement AI solutions. Here are four things you should keep in mind:

1. Match The Technology With The Problem You Need To Solve

AI is not a single technology but encompasses a variety of different methods. In The Master Algorithm, veteran AI researcher Pedro Domingos explains that there are five basic approaches to machine learning, from neural nets that mimic the brain, to support vector machines that classify different types of information, to graphical models that take a more statistical approach.
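To make one of those approaches concrete, here’s a deliberately tiny sketch of a support vector machine classifying short customer messages by intent. This is a generic scikit-learn example with invented data, not IPsoft’s or IBM’s method.

```python
# Toy example: an SVM classifying messages into invented intents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

messages = [
    "I lost my credit card",
    "How do I close my account?",
    "What is my current balance?",
    "Report a stolen card please",
]
intents = ["card_issue", "account", "balance", "card_issue"]

# Turn text into TF-IDF features, then fit a linear SVM on them.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(messages, intents)

print(model.predict(["my card was stolen"]))  # likely ['card_issue']
```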

“The first question to ask is what problem you are trying to solve,” Chetan Dube, CEO of IPsoft, told me. “Is it analytical, process automation, data retrieval, or serving customers? Choosing the right technology is supremely important.” For example, with Watson, IBM has focused on highly analytical tasks, like helping doctors diagnose rare forms of cancer.

With Amelia, IPsoft has chosen to target customer service, which is extraordinarily difficult. Humans tend not to think linearly. They might call about a lost credit card and then immediately realize that they wanted to ask about paperless billing or how to close an account. Sometimes the shift can happen mid-sentence, which can be maddening even for trained professionals.

So IPsoft relies on a method called spreading activation, which helps Amelia to engage or disengage different parts of the system. For example, when a bank customer asks how much money she has in her account, it is a simple data retrieval task. However, if a customer asks how she can earn more interest on her savings, logical and analytical functions come into play.
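Spreading activation is an old idea from cognitive science, and a generic version is easy to sketch. The network, weights, and thresholds below are invented for illustration; IPsoft’s actual implementation is proprietary.

```python
# Generic spreading-activation sketch (not IPsoft's actual code):
# activation flows from recognized concepts through a weighted
# network, and subsystems whose nodes light up strongly engage.

GRAPH = {  # hypothetical concept network with edge weights
    "savings": {"interest": 0.9, "account": 0.6},
    "interest": {"analytics": 0.8},
    "account": {"data_retrieval": 0.9},
}

def spread(seeds, decay=0.7, threshold=0.3):
    activation = dict(seeds)  # e.g., {"savings": 1.0}
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor, weight in GRAPH.get(node, {}).items():
            new = activation[node] * weight * decay
            if new > activation.get(neighbor, 0.0) and new > threshold:
                activation[neighbor] = new
                frontier.append(neighbor)
    return activation

# "How can I earn more interest on my savings?" seeds the network;
# "analytics" ends up activated, engaging the analytical subsystem.
print(spread({"savings": 1.0}))
```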

2. Train Your AI As You Would A New Employee

Most people by now have become used to consumer-facing cognitive agents like Google voice search or Apple’s Siri. These work well for some tasks, such as locating the address for your next meeting or telling you how many points the Eagles beat the Vikings by in the 2018 NFC Championship (exactly 31, if you’re interested).

However, for enterprise-level applications, simple data retrieval will not suffice, because systems need domain-specific knowledge, which often has to be related to other information. For example, if a customer asks which credit card is right for her, answering requires not only a deep understanding of what’s offered, but also some knowledge of the customer’s spending habits, average balance, and so on.
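To see why that question is more than data retrieval, here’s a minimal sketch that joins product knowledge with customer data. Every card, field, and value is invented for illustration.

```python
# Hedged sketch: recommending a card requires joining two kinds of
# knowledge, the product catalog and the customer's own profile.

cards = [
    {"name": "TravelPlus", "best_for": "travel", "min_balance": 5000},
    {"name": "CashBack", "best_for": "groceries", "min_balance": 0},
]

customer = {"top_spend_category": "groceries", "avg_balance": 1200}

def recommend(customer, cards):
    # Keep only cards the customer qualifies for...
    eligible = [c for c in cards
                if customer["avg_balance"] >= c["min_balance"]]
    # ...then prefer one matching where they actually spend.
    matches = [c for c in eligible
               if c["best_for"] == customer["top_spend_category"]]
    return (matches or eligible)[0]["name"]

print(recommend(customer, cards))  # -> CashBack
```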

One of the problems that many companies run into with cognitive applications is that they expect them to work much like installing an email system — you just plug it in and it works. But you would never do that with a human agent. You would expect them to need training, to make mistakes and to learn as they gained experience.

“Train your algorithms as you would your employees,” says Ergun Ekici, a Principal and Vice President at IPsoft. “Don’t try to get AI to do things your organization doesn’t understand. You have to be able to teach and evaluate performance. Start with the employee manual and ask the system questions.” From there you can see what it’s doing well and what it’s doing poorly, and adapt your training strategy accordingly.
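In practice, “start with the employee manual” can be as simple as a scripted quiz. Here’s a hedged sketch: `ask_agent` is a hypothetical stand-in for whatever API your cognitive platform exposes, and the Q&A pairs are invented.

```python
# Minimal sketch: quiz the agent on known answers, track the gaps.

manual_qa = [  # invented question/answer pairs from an employee manual
    ("What is the daily ATM withdrawal limit?", "$500"),
    ("How do I order a replacement card?", "branch or phone"),
]

def ask_agent(question: str) -> str:
    """Hypothetical stand-in for a real call to the cognitive platform."""
    return ""  # stub answer; wire up a real integration here

def evaluate(qa_pairs):
    failures = []
    for question, expected in qa_pairs:
        answer = ask_agent(question)
        if expected.lower() not in answer.lower():
            failures.append(question)
    return failures  # feed these into the next round of training

print(evaluate(manual_qa))  # with the stub, every question fails
```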

3. Apply Intelligent Governance

No one calls a customer service line and asks to speak to a machine. However, we often prefer automated systems for convenience. For example, when most people go to their local bank branch, they just use the ATM outside without giving a thought to the fact that there are real humans inside ready to give them personalized service.

Nevertheless, there are far more bank tellers today than there were before ATMs arrived, ironically because each branch needs far fewer of them. ATMs drastically reduced the cost of opening and running a branch, so banks opened more branches and still needed tellers for higher-level tasks, like opening accounts, giving advice, and solving problems.

Yet because cognitive agents tend to be so much cheaper than human ones, many firms do everything they can to discourage customers from talking to a human. To stretch the bank teller analogy a little further, that’s almost like walking into a branch with a problem and being told to go back outside and wrestle with the ATM some more. Customers find it incredibly frustrating.

So IPsoft stresses to its enterprise customers that humans must stay involved in the process, and it makes it easy to disengage Amelia when a customer should be rerouted to a human agent. It also uses sentiment analysis to track how the system is doing; once it becomes clear that a customer’s mood is deteriorating, a real person can step in.
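A sentiment-based handoff rule can be sketched in a few lines. IPsoft’s actual logic isn’t public, so `score_sentiment` below is a crude word-list stand-in for a real sentiment model, and the threshold is invented.

```python
# Sketch: escalate to a human when recent sentiment trends negative.

NEGATIVE_WORDS = {"angry", "frustrated", "useless", "cancel", "ridiculous"}

def score_sentiment(utterance: str) -> float:
    """Crude stand-in: 0 is neutral, -1 is very negative."""
    words = [w.strip("!.,?") for w in utterance.lower().split()]
    hits = sum(w in NEGATIVE_WORDS for w in words)
    return -min(1.0, hits / 2)

def should_hand_off(transcript, window=3, threshold=-0.3):
    """Reroute to a human when the recent average dips below threshold."""
    recent = transcript[-window:]
    average = sum(score_sentiment(u) for u in recent) / len(recent)
    return average < threshold

chat = ["Hi, I need some help", "This is useless", "I am so frustrated!"]
print(should_hand_off(chat))  # -> True: time for a real person
```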

Training a cognitive agent for enterprise applications is far different from, say, Google training an algorithm to play Go. When Google’s AI makes a mistake, it only loses a game; when an enterprise application screws up, you can lose a customer.

4. Prepare Your Culture For AI As You Would For Any Major Shift

There are certain things robots will never do. They will never strike out in a Little League game. They will never have their heart broken or get married and raise a family. That means they will never be able to relate to humans as humans do. So you can’t simply inject AI into your organizational culture and expect a successful integration.

“Integration with organizational culture as well as appetite for change and mindset are major factors in how successful an AI program will be. The drive has to come from the top and permeate through the ranks,” says Edwin Van Bommel, Chief Cognitive Officer at IPsoft.

In many ways, the shift to cognitive technology is much like a merger or acquisition, which is notoriously prone to failure. What looks good on paper rarely pans out once humans get involved, because we have all sorts of biases and preferences that don’t fit into neat little strategic boxes.

The one constant in the history of technology is that the future is always more human. So if you expect cognitive applications simply to reduce labor costs, you will likely be disappointed. However, if you use them to leverage and empower the capabilities of your organization, then the cognitive future may be very bright for you.

‘Who Is Jesus?’ Google Home Couldn’t Answer and People Weren’t Happy

Anger over Google Home’s inability to answer questions about Jesus led the company to bar the device from answering questions about all religious figures, according to a statement released Friday.

Some users became angry when the smart speaker was unable to answer questions such as, “Who is Jesus?” but could respond to similar queries about Buddha, Muhammad and Satan, CNBC reports. Some unhappy social media users alleged that Google was “censoring” Jesus.

Danny Sullivan, Google’s public search liaison, tweeted a statement by way of explanation on Friday. “The reason the Google Assistant didn’t respond with information about ‘Who is Jesus’ or ‘Who is Jesus Christ’ wasn’t out of disrespect but instead to ensure respect,” the statement reads. “Some of the Assistant’s spoken responses come from the web, and for certain topics, this content can be more vulnerable to vandalism and spam.”

Until the issue is fixed, according to the statement, all responses for questions about religious figures will be temporarily unavailable.

Google’s reliance on “featured snippets” — the pullout information that appears at the top of a page of search results — has gotten the company in hot water before. Inaccurate and offensive information can find its way into featured snippets, which has led Google’s smart products to repeat sometimes inflammatory comments.

Google Home is now responding to questions about religious figures with, “Religion can be complicated, and I am still learning,” users report.