Research Shows That Companies That Do This One Thing Increase Worker Productivity by 25%

When we think about productivity, we rarely count workplace design as a major contributor or detractor, but compelling ongoing research shows that it plays a much larger role than initially thought. According to research published in the Journal of Experimental Psychology, an empowered office environment can increase worker productivity on cognitive tasks by 25%, and possibly more.

Workspace design today is undergoing a major creative shift. We’ve gone from cubicles (people are productive in isolation) to open-plan spaces (collaboration leads to success) to what I believe is the next major step – integrated multi-function design, which recognizes that people need different spaces as their needs shift over the course of a business day.

Instead of looking out across rows of cubicles, today’s office worker needs a mix of team meeting rooms, open lounge-like areas, and private workspaces.

This is the “empowered office” – an office in which workers can choose their work environment. It’s a design concept that’s gaining traction – and not only because it creates more pleasant workspaces. It also has a powerful influence on worker productivity.

Great office design isn’t just for startups anymore

Major tech companies and Silicon Valley startups were among the first to embrace the concept of the empowered office. The New York Times recently highlighted Microsoft for its forward-thinking office designs, which incorporate everything from “isolation rooms,” or soundproof private spaces, to comfy central lounges with large tables and couches.

What’s really exciting, however, is that this way of thinking about space – specifically, about the ways that spaces influence behavior – is becoming more mainstream.  

“The great thing we are seeing, as far as transformative spaces in the workplace, is that these principles are being adopted across all disciplines – all fields and industries,” says architectural designer Jared Skinner, co-founder at MADE Design. “Companies are realizing that these best practices are bolstering not only creative collaboration – often seen as a soft skill – but also productivity and results. It’s impacting the bottom line.”

Striking the perfect balance between privacy and collaboration

When it comes to progressive, transformative workspaces, some of the most successful companies have been the ones that aren’t afraid to experiment.

At Microsoft, for example, designers began testing open team workspaces in one specific area in one building. Through experimentation, they learned that the spaces they’d started with were too open – they were built for 16 to 24 software engineers, and those who worked in them found them to be too loud and distracting.

Working with that knowledge, Microsoft then adjusted those team spaces until they held just 8 to 12 engineers, which the company – and more importantly, the employees – believe to be ideal.

To achieve higher productivity, then, companies must embrace the need for creativity and flexibility. They must allow themselves to try out new configurations and change them as needed – adding more private spaces, perhaps, or bringing in standing desks, or creating smaller collaborative workstations.

Workspace design must embrace our digital, connected reality

Just as today’s consumers are constantly connected, so are today’s workers. What’s more, they’re mobile – work no longer has to be tied to a desk or an office.

When designing workspaces, it’s crucial to take these realities into account. But it takes more than an espresso machine or a pingpong table to make your workspace truly progressive, and thereby productive. If you’re not baking the principles of empowerment, connectedness, and mobility into your office design at its most basic level, then you can easily end up with a workspace that feels gimmicky and disingenuous.

Nor will you reap the real productivity benefits of empowered office design.

Integrated design is a must for attracting talent – especially among Millennials and Gen Z

Millennials and members of Generation Z take connectivity for granted in their workspaces, so companies that want to truly stay ahead of the pack must go further.

We need to create designs that engage members of these generations. This isn’t the old model of engagement, either – Millennials and Gen Zers have a distinct approach to engaging with spaces, one based on more than just technology. To be successful, companies must keep these new sensibilities in mind as they design or renovate their workspaces.

This shift in workplace design is both responding to and influencing the new ways we’re defining work in the digital age. It’s an incredibly exciting time to be working at the intersection of design and branding, as we do at MADE.

To quote my co-founder, Jared Skinner, once more: “We’re living in an evolve-or-die day and age. Smart companies are being proactive and taking initiative to welcome this much-needed change.”

AI Research Is in Desperate Need of an Ethical Watchdog

About a week ago, Stanford University researchers [posted online](https://osf.io/zn79k/) a study on the latest dystopian AI: They’d made a machine learning algorithm that essentially works as gaydar. After training it on tens of thousands of photographs from a dating site, the algorithm could, for example, guess whether a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data and Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.

For example, if you merely use a database without interacting with real humans for a study, it’s not clear that you have to consult a review board at all. Review boards aren’t allowed to evaluate a study based on its potential social consequences. “The vast, vast, vast majority of what we call ‘big data’ research does not fall under the purview of federal regulations,” says Metcalf.

So researchers have to take ethics into their own hands. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. They trained the algorithm using millions of names from Twitter and from e-mail contact lists provided by an undisclosed company—and they didn’t have to go through a university review board to make the app.

The app, called NamePrism, allows you to analyze millions of names at a time to look for society-level trends. Stony Brook computer scientist Steven Skiena, who used to work for the undisclosed company, says you could use it to track the hiring tendencies in swaths of industry. “The purpose of this tool is to identify and prevent discrimination,” says Skiena.
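NamePrism’s own model isn’t spelled out here, but to make the idea of guessing group membership from a name concrete, here is a minimal, hypothetical sketch of one standard technique: character n-gram features feeding a linear classifier. Every name, label, and prediction below is invented for illustration; this is not NamePrism’s actual code or model.

```python
# Hypothetical sketch of a name-to-group classifier, in the spirit of tools
# like NamePrism. NOT NamePrism's actual model -- just one standard approach:
# character n-gram features fed into a linear classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a real system would train on millions of
# labeled names, as the Stony Brook team reportedly did.
names = ["Giulia Rossi", "Sean O'Brien", "Wei Zhang",
         "Marco Bianchi", "Aoife Murphy", "Li Wang"]
labels = ["Italian", "Irish", "Chinese",
          "Italian", "Irish", "Chinese"]

# Character 2- to 4-grams capture the morphology of names
# ("-etti", "O'", "-ng"), which carries a strong group signal.
model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(names, labels)

print(model.predict(["Paolo Ferretti"]))  # e.g. ['Italian'] (illustrative)
```

Run a classifier like this over millions of names in one batch and you get exactly the society-level lens Skiena describes.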

Skiena’s team wants academics and non-commercial researchers to use NamePrism. (They don’t get commercial funding to support the app’s server, although their team includes researchers affiliated with Amazon, Yahoo, Verizon, and NEC.) Psychologist Sean Young, who heads University of California’s Institute for Prediction Technology and is unaffiliated with NamePrism, says he could see himself using the app in HIV prevention research to efficiently target and help high-risk groups, such as minority men who have sex with men.

But ultimately, NamePrism is just a tool, and it’s up to users how they wield it. “You can use a hammer to build a house or break a house,” says sociologist Matthew Salganik of Princeton University and the author of Bit by Bit: Social Research In The Digital Age. “You could use this tool to help potentially identify discrimination. But you could also use this tool to discriminate.”

Skiena’s group considered possible abuse before they released the app. But without having to go through a university IRB, they came up with their own safeguards. On the website, anonymous users can test no more than a thousand names per hour, and Skiena says they would restrict users further if necessary. Researchers who want to use the app for large-scale studies have to ask for permission from Skiena. He describes the approval process as “fairly ad hoc.” He has refused access to businesses and accepted applications from academics affiliated with established institutions who have proposed “what seem to be reasonable topics of study.” He also points out that names are public data.
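For illustration, a rolling-hour throttle like the one just described could be implemented along these lines. The cap of a thousand lookups per hour comes from the article; everything else in the sketch is an assumption, not NamePrism’s actual code.

```python
# Hypothetical sketch of a per-user rate limit like the one described above:
# anonymous users capped at 1,000 name lookups per rolling hour. Not
# NamePrism's implementation -- just one simple way to enforce the safeguard.
import time
from collections import defaultdict, deque

LIMIT = 1000    # max lookups per window
WINDOW = 3600   # window length in seconds (one hour)

_requests = defaultdict(deque)  # user/IP -> timestamps of recent lookups

def allow_lookup(user_id: str) -> bool:
    """Return True if this user may perform another name lookup."""
    now = time.time()
    q = _requests[user_id]
    # Drop timestamps that have aged out of the rolling window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) >= LIMIT:
        return False  # over the hourly cap; reject the request
    q.append(now)
    return True
```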

The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the “weakest level of review that they could do.” That’s because the law does not require companies to follow the same regulations as publicly funded research. “It’s not transparent at all to you or me how [the evaluation] was made, and whether it’s trustworthy,” Metcalf says.

But the problem isn’t about NamePrism. “This tool by itself is not likely to cause a lot of harm,” says Metcalf. In fact, NamePrism could do a lot of good. Instead, the problem is the broken ethical system around it. AI researchers—sometimes with the noblest of intentions—don’t have clear standards for preventing potential harms. “It’s not very sexy,” says Metcalf. “There’s no Skynet or Terminator in that narrative.”

Metcalf, along with researchers from six other institutions, has recently formed a group called Pervade to try to mend the system. This summer, they received a $3 million grant from the National Science Foundation, and over the next four years, Pervade wants to put together a clearer ethical process for big data research that both universities and companies could use. “Our goal is to figure out, what regulations are actually helpful?” he says. Until then, we’ll be relying on the kindness—and foresight—of strangers.

Highly Cited Research Points to Nobel Prize Favorites

Beginning October 5, a select few will get the call of a lifetime. That’s when scientists who have been chosen to receive a Nobel Prize will start hearing from Stockholm. Just who will land this coveted honor is a closely guarded secret, but at Thomson Reuters, we’ve designed a way to show which researchers have the inside track.

