Apple iPhone X Might Be Cannibalizing iPhone 8

Apple’s iPhone 8 pre-orders are a little slower than previous models, and the handset might have an unlikely foe to thank for it.

In a note to investors this week, KGI Securities analyst Ming-Chi Kuo said that iPhone 8 and iPhone 8 Plus pre-orders are sluggish because of strong demand for Apple’s upcoming iPhone X. In the note, which was earlier reported by Apple-tracking site 9to5Mac, Kuo said that shipment dates after initial pre-orders are placed usually stand at three to six weeks out. By contrast, consumers who order an iPhone 8 or iPhone 8 Plus today might, depending on the model, receive it on Friday’s launch day or wait only a week for the handset to arrive.

Apple (AAPL) has been offering pre-orders on new iPhones for years. And in most cases, the handsets it starts selling in September see their initial supply run out soon after the company turns on pre-order sales. By mid-morning of pre-order day, it’s not uncommon for new purchasers to have to wait weeks, if not a couple of months, for their smartphones to arrive.

But the iPhone 8 and iPhone 8 Plus were different. The smartphones were announced alongside the iPhone X, a major upgrade featuring a screen that covers nearly the entire front of the device and a revamped design of glass and stainless steel. Apple has called the iPhone X the “future” of smartphone technology, which might have made some would-be iPhone 8 customers feel like they were buying outdated hardware.

For its part, Apple (AAPL) tried to allay some of those fears by bringing a similar glass finish to the iPhone 8 line. The iPhone 8 and iPhone 8 Plus also run the same A11 Bionic chip found in the iPhone X, and all three of the company’s new smartphones support wireless charging.

Still, iPhone 8 and iPhone 8 Plus models are readily available, marking a stark departure from Apple’s recent iPhone pre-orders.

According to Kuo, it appears a large number of Apple customers are simply waiting for Apple to open pre-orders on the iPhone X on October 27. And although the iPhone X carries a hefty starting price of $999, early adopters don’t seem concerned.

While Apple hasn’t commented on pre-orders, it’s unlikely the company would bemoan customers waiting to buy the iPhone X.

On Monday, researcher Susquehanna International Group estimated that Apple pays $581 for the components inside its iPhone X, giving the company a profit of $418 per unit before factoring in assembly costs. Last year’s iPhone 7, which sold for $649, translated at the time to an estimated $401 profit per unit by the same measure. Apple, in other words, should make a surprisingly high margin on the sale of each iPhone X.
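Susquehanna’s figures make for simple arithmetic. A minimal sketch of the per-unit math, using only the numbers cited above and excluding assembly costs:

```python
# Back-of-the-envelope check of the per-unit margin implied by Susquehanna's
# estimates. Figures come from the note cited above; assembly costs excluded.
iphone_x_price = 999        # starting retail price, USD
iphone_x_components = 581   # estimated component cost, USD

margin = iphone_x_price - iphone_x_components
print(f"Estimated iPhone X margin per unit: ${margin}")  # -> $418
```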

Tencent, Guangzhou Auto agree to collaborate on connected cars

HONG KONG (Reuters) – Chinese internet giant Tencent Holdings and Guangzhou Automobile Group Company Ltd said on Monday they had agreed to collaborate on connected cars.

The two companies will also explore investment in areas such as auto-related e-commerce, new energy cars and auto insurance, Guangzhou Automobile said in a filing.

Guangzhou Automobile Group said it would aim to tap Tencent’s expertise in mobile payments, social networking, big data and artificial intelligence.

Reporting by Sijia Jiang; Editing by Edwina Gibbs

AI Research Is in Desperate Need of an Ethical Watchdog

About a week ago, Stanford University researchers [posted online](https://osf.io/zn79k/) a study on the latest dystopian AI: they’d made a machine learning algorithm that essentially works as gaydar. After being trained on tens of thousands of photographs from a dating site, the algorithm could, for example, guess whether a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public to its dangers.
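For context, studies of this kind typically follow a standard machine learning pattern: a linear classifier fitted to pre-computed image embeddings. The sketch below is purely illustrative; it runs on randomly generated stand-in data and makes no claim about the Stanford researchers’ actual pipeline.

```python
# Illustrative only: the generic "linear classifier over image embeddings"
# pattern common in studies like this. The data here is random noise, not
# real photographs, and this is NOT the researchers' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 128))    # stand-in for per-photo embeddings
y = rng.integers(0, 2, size=10_000)   # stand-in binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")  # ~0.50 on noise
```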

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More and more social scientists are using AI with the aim of solving society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data and Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant, meaning researchers have to make up ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

For example, if you merely use a database without interacting with real humans for a study, it’s not clear that you have to consult a review board at all. Review boards aren’t allowed to evaluate a study based on its potential social consequences. “The vast, vast, vast majority of what we call ‘big data’ research does not fall under the purview of federal regulations,” says Metcalf.

So researchers have to take ethics into their own hands. Take a recent example: last month, researchers affiliated with Stony Brook University and several major internet companies released a free app built on a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. They trained the algorithm using millions of names from Twitter and from e-mail contact lists provided by an undisclosed company, and they didn’t have to go through a university review board to make the app.

The app, called NamePrism, allows you to analyze millions of names at a time to look for society-level trends. Stony Brook computer scientist Steven Skiena, who used to work for the undisclosed company, says you could use it to track the hiring tendencies in swaths of industry. “The purpose of this tool is to identify and prevent discrimination,” says Skiena.
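The report doesn’t detail NamePrism’s internals, but name classifiers of this kind are commonly built on character n-grams, which pick up the letter patterns typical of different languages. The toy sketch below, with invented names and labels, shows the general idea; it is not NamePrism’s actual model.

```python
# Hypothetical sketch of a character n-gram name classifier, a common way to
# build tools like NamePrism. Toy data; real systems train on millions of names.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented example data: (name, nationality-of-origin label)
names = ["Giulia Rossi", "Marco Bianchi", "Hiroshi Tanaka",
         "Yuki Sato", "Sean Murphy", "Aoife Kelly"]
labels = ["Italian", "Italian", "Japanese", "Japanese", "Irish", "Irish"]

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # letter n-grams
    MultinomialNB(),
)
model.fit(names, labels)
print(model.predict(["Luca Romano"]))  # likely -> ['Italian']
```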

Skiena’s team wants academics and non-commercial researchers to use NamePrism. (They don’t get commercial funding to support the app’s server, although their team includes researchers affiliated with Amazon, Yahoo, Verizon, and NEC.) Psychologist Sean Young, who heads University of California’s Institute for Prediction Technology and is unaffiliated with NamePrism, says he could see himself using the app in HIV prevention research to efficiently target and help high-risk groups, such as minority men who have sex with men.

But ultimately, NamePrism is just a tool, and it’s up to users how they wield it. “You can use a hammer to build a house or break a house,” says sociologist Matthew Salganik of Princeton University and the author of Bit by Bit: Social Research In The Digital Age. “You could use this tool to help potentially identify discrimination. But you could also use this tool to discriminate.”

Skiena’s group considered possible abuse before they released the app. But without having to go through a university IRB, they came up with their own safeguards. On the website, anonymous users can test no more than a thousand names per hour, and Skiena says they would restrict users further if necessary. Researchers who want to use the app for large-scale studies have to ask for permission from Skiena. He describes the approval process as “fairly ad hoc.” He has refused access to businesses and accepted applications from academics affiliated with established institutions who have proposed “what seem to be reasonable topics of study.” He also points out that names are public data.
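The piece doesn’t say how that thousand-names-per-hour cap is enforced. One common way to implement such a safeguard is a fixed-window counter, sketched below as a hypothetical; this is not NamePrism’s actual code.

```python
# Hypothetical fixed-window rate limiter illustrating the kind of safeguard
# described above (1,000 lookups per anonymous user per hour).
import time
from collections import defaultdict
from typing import Optional

LIMIT = 1_000    # lookups allowed per window
WINDOW = 3_600   # window length in seconds (one hour)

_counts = defaultdict(lambda: [0, 0.0])  # user_id -> [count, window_start]

def allow_lookup(user_id: str, now: Optional[float] = None) -> bool:
    """Return True if this user may perform another lookup right now."""
    now = time.time() if now is None else now
    count, start = _counts[user_id]
    if now - start >= WINDOW:        # window expired: start a fresh one
        _counts[user_id] = [1, now]
        return True
    if count < LIMIT:                # still under the cap
        _counts[user_id][0] += 1
        return True
    return False                     # over the cap: reject

# Example: the 1,001st request inside one hour is rejected.
assert all(allow_lookup("anon", now=0.0) for _ in range(1_000))
assert not allow_lookup("anon", now=10.0)
```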

The group also went through an ethics review at the company that provided the training list of names, though Metcalf says that an evaluation at a private company is the “weakest level of review that they could do.” That’s because the law does not require companies to follow the same regulations as publicly funded research. “It’s not transparent at all to you or me how [the evaluation] was made, and whether it’s trustworthy,” Metcalf says.

But the problem isn’t NamePrism itself. “This tool by itself is not likely to cause a lot of harm,” says Metcalf. In fact, NamePrism could do a lot of good. Instead, the problem is the broken ethical system around it. AI researchers, sometimes with the noblest of intentions, don’t have clear standards for preventing potential harms. “It’s not very sexy,” says Metcalf. “There’s no Skynet or Terminator in that narrative.”

Metcalf, along with researchers from six other institutions, has recently formed a group called Pervade to try to mend the system. This summer, the group received a $3 million grant from the National Science Foundation, and over the next four years Pervade aims to put together a clearer ethical process for big data research that both universities and companies could use. “Our goal is to figure out, what regulations are actually helpful?” he says. Until then, we’ll be relying on the kindness, and foresight, of strangers.

Uber faces big jump in fees if London license is renewed

LONDON (Reuters) – Uber faces a big jump in the fee it pays to operate in London, to 2.9 million pounds, if the ride-hailing company is granted a new license by the city’s transport authority.

Transport for London said on Monday companies with more than 10,000 vehicles would pay 2.9 million pounds ($4 million) for a license under a new multi-tiered system coming into force this week.
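Only the top tier of the new schedule is given here. As a purely hypothetical illustration of how a multi-tiered fee lookup works, the sketch below invents the lower tiers as placeholders; only the top figure comes from TfL’s announcement.

```python
# Hypothetical multi-tiered license fee lookup. Only the top tier
# (>10,000 vehicles -> 2.9 million pounds) comes from the TfL announcement
# above; the lower tiers are invented placeholders for the example.
FEE_TIERS = [               # (minimum fleet size, fee in pounds), largest first
    (10_001, 2_900_000),    # from the article
    (1_001, 700_000),       # hypothetical
    (101, 30_000),          # hypothetical
    (0, 2_000),             # hypothetical
]

def license_fee(fleet_size: int) -> int:
    """Return the license fee for an operator with the given fleet size."""
    if fleet_size < 0:
        raise ValueError("fleet_size must be non-negative")
    for minimum, fee in FEE_TIERS:
        if fleet_size >= minimum:
            return fee
    raise AssertionError("unreachable: the (0, ...) tier catches everything")

print(license_fee(40_000))  # Uber's roughly 40,000 drivers -> 2900000
```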

In 2012, Uber paid less than 3,000 pounds for a five-year license to operate in London, which was extended in May by four months, partly because TfL needed to finalize its new fee system.

Uber, which allows users to book journeys on their smartphones, has roughly 40,000 drivers in London. A decision on Uber’s license is due by the end of the month.

TfL’s General Manager of Taxi & Private Hire Helen Chapman said: “There has been a huge growth in the industry in recent years and it is only fair that the license fee reflects the costs of regulation and enforcement.”

“The changes to fees will help us fund additional compliance officers who do a crucial job cracking down on illegal and dangerous activity,” she said.

Uber has previously said it backed the principle of large firms paying more. The company declined to comment on Monday on the license fees.

The number of private hire drivers in London has almost doubled to more than 116,000 from 65,000 in 2013/14, prompting TfL’s decision to introduce higher fees for the bigger operators.

Uber has faced protests from drivers of London’s traditional black cabs and criticism over working conditions.

Several British lawmakers wrote a letter last week calling for Uber’s license not to be renewed, accusing it of not being a “fit and proper operator” and criticizing its record on safety and working rights.

The GMB union handed in a petition with 100,000 signatures on Monday to TfL, calling on Uber to improve workers’ rights or “get out of London” ahead of the license decision.

An Uber spokesman said the company was taking steps to improve security for its drivers, and that drivers earn more than the minimum wage while enjoying the flexibility offered by the app.

Reporting by Costas Pitas. Editing by Jane Merriman
