Commentary: Handling A.I. Responsibly


Before this year, I had always associated the terms “machine learning” and “artificial intelligence” with fantastical sci-fi universes and sentient robots. Even now, hearing the term “A.I.” reminds me of Ava from the 2014 movie “Ex Machina” and the concept of digital consciousness in Netflix’s “Black Mirror.”

But the term “machine learning” is less shrouded in mystery and jargon than I had thought, and current applications of it are far less fantastical and far more ubiquitous. Machine learning is simply taking big chunks of data, analyzing them to find patterns and correlations, and predicting new data based on those patterns. It is also a subset of artificial intelligence, which is the ability of machines to demonstrate human-like “intelligence.”

Today, A.I. powers household assistants like Alexa and Google Home, drug discovery software such as Atomwise, navigational tools like Google’s smart maps, and much more. A.I. is increasing in popularity, with worldwide spending on machine learning and A.I. estimated to grow from $19.1 billion this year to $52.2 billion in 2021, according to the International Data Corporation. But with increased interest in the industry comes a greater need for oversight and responsibility. The burgeoning of the A.I. industry clearly brings economic and technological benefits, but there are also drawbacks that we must be aware of as the field continues to advance.

The purpose of A.I. is to expedite our current processes for analyzing data. If we feed a chunk of code on a computer some big data, the computer can churn out what a Netflix customer should watch after they’ve finished their latest binge, whether insurance fraud has occurred, and even how well a student would perform in an academic environment. But the immense computational power and versatility of A.I. can also supplant jobs through automation. We’ve already seen mentions of Tesla’s self-driving cars on news networks. Uber’s brief stint with automated truck driving made it the first company to complete an autonomous truck delivery. And currently, start-ups such as Starsky Robotics and Embark are trying to manufacture trucks that can be remotely piloted.

While this is certainly a reflection of our advancements in technology, self-driving trucks do not bode well for human truck drivers. According to the American Trucking Associations, the trucking industry produces $738.9 billion in gross revenue, and 7.4 million people are currently employed in trucking-related jobs. Most, if not all, of these people might be left without jobs as advancements in self-driving continue. The trucking industry is just one example; jobs in retail, manufacturing, and transportation can all be superseded by A.I. in the next couple of decades. There is no clear way to address this problem, but I think we must take steps to regulate A.I. automation in the workforce to strike a balance between expediting certain processes and keeping enough workers employed.

Moreover, the exorbitant amount of data that these algorithms require has raised issues of possible breaches of privacy. For A.I. to be accurate in its predictions, it must be trained on huge amounts of data, and companies are willing to “betray” consumers, patients, or anyone else who might have the data they need. The Cambridge Analytica scandal is a highly publicized instance of a company violating the privacy of its consumers.

Alexas, Google Homes, and similar devices also raise concerns: if they’re able to recognize when they’re being addressed, doesn’t that mean they’re constantly listening? Amazon and Google state in their privacy policies that their devices only listen for certain keywords, but that does not mean these devices are immune to hacking. An experiment led by cybersecurity research lab MWR Labs demonstrated that an attacker can hijack an Amazon Echo to gain remote access to the device and stream live audio without altering the device in any way.


Additionally, like humans, these machines can make mistakes when making decisions, and those mistakes can alter the lives of innocent people. The company Northpointe released the program Correctional Offender Management Profiling for Alternative Sanctions, which assesses how likely a defendant is to re-offend. However, after an analysis of its usage on 10,000 criminal defendants in Broward County, Fla., statisticians realized that the program gave higher scores to black defendants, even those with no criminal record, than to white defendants who had committed major offenses in the past. There was no explicit racial profiling programmed into the software, but race had surfaced as an invisible factor. These rulings have inadvertently kept defendants in jail longer than they should have been, delaying their prospects of finding jobs and reuniting with their families.
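At its core, an audit like the one described comes down to comparing a model’s outputs across groups. Here is a toy sketch with entirely invented records, showing the kind of check a statistician might start with; the real analysis involved far more data and more careful statistics:

```python
from collections import defaultdict

# Invented (group, risk_score) records standing in for real audit data.
records = [("A", 7), ("A", 8), ("A", 6), ("B", 3), ("B", 4), ("B", 2)]

scores_by_group = defaultdict(list)
for group, score in records:
    scores_by_group[group].append(score)

# A large gap in average score between groups is a red flag worth
# investigating, even when group membership was never a model input.
averages = {g: sum(s) / len(s) for g, s in scores_by_group.items()}
print(averages)  # → {'A': 7.0, 'B': 3.0}
```

The unsettling lesson of the Northpointe case is that a gap like this can appear even when the sensitive attribute is never fed to the model, because other inputs act as proxies for it.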

As credit scoring agencies and banking companies also begin to incorporate A.I. that calculates credit scores or determines whether someone can be granted a loan, bias can heavily influence such decisions and calculations. Unfortunately, avoiding this bias is harder than we might think. A.I. is often a “black box” algorithm: we know what goes in and what comes out, but we are uninformed as to what happens in between. In a way, this “black box” phenomenon is similar to the human brain; we know about the basic components of the brain and how or when they work, but we’re not sure how the vast, complex network of neurons interacts to create function. Given that this technology is not immune to bias, we must add human input to the decisions and calculations that A.I. makes. We should use A.I. as a tool to support the analysis of data or a decision-making process, not as a machine that is left, literally, to its own devices.
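One simple way to keep a human in the loop is to let the model decide only the clear-cut cases and escalate everything borderline to a person. The thresholds and the loan scenario below are assumptions made up for illustration, not any real lender’s policy:

```python
# Hypothetical human-in-the-loop gate: the model's score alone never
# settles a borderline case; those are routed to a human reviewer.

REVIEW_LOW, REVIEW_HIGH = 0.35, 0.65  # assumed uncertainty band

def route_decision(model_score: float) -> str:
    """Return 'approve', 'deny', or 'human_review' for a loan score in [0, 1]."""
    if model_score >= REVIEW_HIGH:
        return "approve"
    if model_score <= REVIEW_LOW:
        return "deny"
    return "human_review"  # uncertain: a person makes the final call

print(route_decision(0.9))  # → approve (clear-cut, automated)
print(route_decision(0.5))  # → human_review (borderline, escalated)
```

The design choice is that automation handles volume while people retain judgment over exactly the cases where the model is least trustworthy.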

In our future endeavors with A.I., we must be more sensitive to the issues that are likely to arise and work proactively to address them. As a computer science student, being aware of these problems has made me more conscientious about my code. I strive to be meticulous in documenting and commenting my code in order to explain the decisions I have made when implementing certain calculations and algorithms. This not only allows a reader to digest the code more easily, but also allows someone to assess my coding and thought processes and identify possible areas of improvement or concern.
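As a hypothetical illustration of the kind of documentation I mean, here is a small invented function whose docstring and comments record why each choice was made, not just what the code does, so a reviewer can question the decisions themselves:

```python
def normalize_scores(scores):
    """Scale a batch of non-negative raw scores into the range [0, 1].

    Decision: divide by the maximum rather than the sum, so the best
    score always maps to exactly 1.0. This keeps results comparable
    across batches of different sizes.
    """
    if not scores:
        return []  # an empty batch has nothing to normalize
    top = max(scores)
    if top == 0:
        return [0.0 for _ in scores]  # all-zero batch: avoid dividing by zero
    return [s / top for s in scores]

print(normalize_scores([2, 4]))  # → [0.5, 1.0]
```

A reader who disagrees with dividing by the maximum now knows that it was a deliberate choice, and can flag it rather than having to reverse-engineer my intent.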

Andover provides students with a wide range of STEM-related opportunities. Whether it be courses, clubs, community engagement activities, or independent projects, there are many ways for students to explore their interests in computer science. With the growing interest, however, we must also ensure that students are learning how to create and handle their code responsibly.

Dec 7, 2018