
What Are The Pitfalls Of Outsourcing Self-Awareness To AI?

Civilization’s first tools were hammerstones. Over time, our Stone Age forebears learned to fashion those simple stone cores into other implements that could be used to cut, chop, bang, and dig. As Earth’s earliest humans became increasingly resourceful, their primitive technologies progressed. They developed better implements to get food, build shelters, cook, carry and store things, and protect themselves. They discovered new skills and used ingenuity to make the things needed to live and thrive in adverse circumstances. They (we) invented to survive.

That was a million years ago.

In the millennia since, our tools have evolved from mechanisms of necessity into ones that assist us and enhance our lives. We’ve developed machines to build, lift and haul far beyond our physical abilities; compute faster and more accurately than we can; enable us to travel safely at tremendous speeds; leave Earth’s orbit to explore the universe; cure illness, eradicate diseases and extend life spans; communicate in real time with people thousands of miles away; access the wealth of human knowledge on command; and deliver many other extraordinary innovations and dazzling technological achievements that, in ways both positive and negative, shape the world we live in now.

But many of today’s technological innovations have a different purpose: to do things just so we don’t have to. Vehicles that travel without our controlling them. Robotic manufacturing. Genetically engineering foods instead of farming them. Passive threat monitoring. Algorithms pre-selecting information customized for mass consumption. Autonomous agents providing services in human affairs without human supervision or involvement.

Consider Humu, a start-up co-founded in 2017 by three Google veterans, including Laszlo Bock, who had led Google’s human-resources function, what Google calls People Operations. As described in a New York Times feature article last week, Humu “uses A.I. to ‘nudge’ workers toward happiness.” Bringing data analytics to human-resources functions isn’t new. Humu’s self-declared differentiator is leveraging artificial intelligence, using natural language processing and proprietary algorithms, to run its special-purpose “Nudge Engine,” which “deploys thousands of customized nudges—small, personal steps—throughout the organization to empower every employee, manager, team, and leader as a change agent.” All of this draws on Nudge Theory, a concept catapulted into the mainstream by Richard Thaler, the University of Chicago professor of behavioral science and economics who won the 2017 Nobel Prize in Economics for his work on the subject.

People are “complex, messy things,” Humu co-founder and CEO Laszlo Bock wrote in a company blog post last October. “If work is going to be better tomorrow,” Bock continued, “we have to change the way we do things today. And getting us to change? That’s one of the hardest organizational challenges out there. So to make work better, we have to make change easier.” Building a stronger, happier, and more productive workplace is the aspiration of many organizations, and Humu promises solutions in a slew of vexing people-oriented areas, including retention, engagement, talent acquisition, performance, culture, absenteeism and insider misbehavior, among others. The premise is so appealing that in its first year Humu raised $40 million and is already deployed at 15 enterprise customers, including one with 65,000 employees.

People dressed up as emoji faces. Fuyang, Anhui Province, China. May 2018. (Photo by VCG via Getty Images)

But looking past the hype, Humu is a case study in problems typical of many ventures pushing products and services which center on machine learning, artificial intelligence and other iterations of enhanced computational cognition to analyze and forecast human thought and behavior.

By design, AI and ML mimic aspects of human-like cognition and decision-making, using algorithms and training data to “learn” and develop answers and solutions independently rather than operating according to fixed programming. Simply put, AI is modeled on neural (brain) structures and on human cognition and decision-making.
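To make that distinction concrete, here is a minimal sketch in Python (purely illustrative, with entirely hypothetical data, and not a depiction of any vendor’s system). The first rule is fixed by a programmer and never changes; the second is induced from labeled training examples and changes whenever the data does:

    # Illustrative only: a hand-written rule versus a rule "learned" from data.
    from sklearn.tree import DecisionTreeClassifier

    def rule_based(hours_per_week: float) -> str:
        # Fixed programming: a human wrote this threshold; it never changes.
        return "at risk" if hours_per_week > 55 else "ok"

    # Hypothetical training examples: weekly hours labeled by an HR analyst.
    X = [[38], [42], [50], [60], [65], [70]]
    y = ["ok", "ok", "ok", "at risk", "at risk", "at risk"]

    model = DecisionTreeClassifier().fit(X, y)  # the rule is induced from data

    print(rule_based(58))            # hand-coded answer
    print(model.predict([[58]])[0])  # learned answer; shifts if the data does

The point isn’t the specific model; it’s that a learned rule’s behavior is a product of its training data, which is precisely where biases and blind spots enter.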

But there’s a sub-group of businesses, of which Humu is one, that I call Human-Oriented AI (HOAI for short). These don’t just apply AI as an advanced computational tool to analyze data or run the interactive technologies we use and, for the most part, enjoy and benefit from every day (think Siri, Alexa, Google, Amazon and the like). They monitor and assess people for the express purpose of delivering insight, feedback, reporting, and forecasts about ourselves that we ostensibly couldn’t get on our own.

While the enhanced analytics human-oriented systems offer have certain positives (for instance, in sports training and performance, and in pilot, first-responder and medical training), there are two major problems. One, endemic to most current-day iterations of AI, is a bias that over-values cognition, treating human mentation and decision-making as dominantly mechanistic and behavioristic, and subordinates or even dismisses whole arrays of non-cognitive constituents of mental processing and psychological existence. The paradox is that even as we talk about emotions (happiness, for example), we do so in terms that over-simplify the complex psychology of emotions and misconstrue how affects, not just irrationality, influence thinking and behaving. Unless more sophisticated facets of the psychodynamic drivers of human thought and behavior are encoded into the general architecture undergirding machine learning and artificial intelligence, these applications will remain flawed.

The second is the use of analytics drawing on biometrics, behavior-based observation, self-reporting and laboratory research data to gain insight into profoundly nuanced, inscrutable aspects of our internal worlds: how subjective, embodied experience and all the feelings and memories of a life lived shape who we are and how we think and behave. This reflects a cardinal misunderstanding of the exquisitely complex non- and para-cognitive dimensions of how we learn, think and act.

In addition, while Humu claims to have addressed concerns about employee privacy and the deprivation of free will (nudging, by definition, doesn’t dictate but allows choice-making), the potential for abuse and misuse by malicious actors remains significant. For one thing, Humu’s nudge messages are sent from the address of Wayne Crosby, Humu’s Director of Product and Engineering and one of the co-founders (presumably this could be customized so messages come from the mailbox of any designated administrator within a client organization). Regardless, these recommendations are in fact generated by Humu’s nudge engine, that is, by machine-learning algorithms. Even if not done covertly, this is, strictly speaking, a form of spoofing. Over time, exposure to it can degrade employees’ ability to discern phishing emails from legitimate ones, or inure them to the difference. Could members of an organization who are most receptive to Humu’s nudges be tagged as highly susceptible targets for manipulation? From this one perspective alone, Humu’s delivery model contradicts most information-security best practices and cybersecurity awareness-training recommendations.
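To see why security practitioners bristle at this delivery pattern, consider a minimal sketch in Python (all names, addresses and hosts here are hypothetical) of how easily software can put any person’s address in a message’s “From” header. Nothing in the mechanism verifies that the named person wrote, or even saw, the message, and this is the same mechanism phishing exploits:

    # Illustrative only: the "From" header is just a string the sender sets.
    from email.message import EmailMessage
    import smtplib

    msg = EmailMessage()
    msg["From"] = "executive@example-client.com"   # the address employees see
    msg["To"] = "employee@example-client.com"
    msg["Subject"] = "A small nudge for your week"
    msg.set_content("Try opening your next 1:1 with a question.")  # machine-written

    with smtplib.SMTP("smtp.example-client.com") as server:  # hypothetical relay
        server.send_message(msg)  # the actual sender need not match "From"

Awareness training teaches employees to distrust exactly this mismatch between a message’s apparent author and its actual origin.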

Even beyond the many ways in which nudging alarmingly resembles, and can unwittingly promote, social engineering, the core business model cannot escape technology’s ultimate handicap as a tool to tell us about people. All claims to the contrary notwithstanding, computational agents cannot intuit human intention, understand meaning, infer subtext, comprehend conflict, anguish, fury, shame, love and desire as motivators, detect percolating internal machinations, or predict either imminent malicious action or most other forms of anomalous behavior. People aren’t weather systems. High-octane pattern analysis only suggests so much. Everything else is unknown unknowns. Until they’re not.

What are the alternatives for enterprise leaders who want to better understand and address people and culture issues without completely outsourcing them to technology? That’s the million-dollar question. The short answer: reference the vast literature on leadership development and consult a knowledgeable and experienced mentor or advisor. Ask questions, listen, learn. There are no instant bromides or silver-bullet solutions.

But here’s one immediately actionable recommendation. Any business leader who’s considering a technological solution to a people-related issue has already identified at least three critical data points: (1) awareness of an issue, (2) its potential source, and (3) uncertainty about how to proceed. What’s next? Further investigation. Good solutions come from good diagnoses. Determining the correct response is predicated on understanding the causes of the problem. Technology might be an effective response. Or a placebo. Or a mistake. We’re tasking technology with understanding and resolving the complexities intrinsic to our humanness. Why? Because we’re too flummoxed to contend with them ourselves. We can do better.


Alexander Stein is Founder of Dolus Advisors, a consultancy advising senior leaders and boards on human risk and the complex psychodynamics of corporate culture, governance, and ethics.

Source: Forbes
