
The Sophist

E-legal: Law in the Age of Automation

In the near future, drastic advances in domains such as artificial intelligence and gene editing will fundamentally change the economic, political, and cultural realities of society. Be it through the automation of jobs, the advanced capabilities of government, or increased immersion in a virtual space, a person who fell asleep today and woke up a decade later might find themselves in a world that does not resemble our own. One of the most profound ways that technology might impact our lives is through its interaction with the criminal justice system, a crossroads that has already sparked heated debates over ideas like privacy, freedom of speech, and the boundaries of sovereignty in a digital world. As technology continues to grow more powerful, these questions only become more pressing.

Even in the present day, it is not difficult to imagine a computer program participating in legal adjudication and law enforcement. Already, judges across the U.S. use predictive analytics in sentencing, and in many states, Child Protective Services uses algorithms to help decide whether to open investigations. In theory, a judge could issue a warrant for a local police department to install a program to search for child pornography, something that we all agree is morally abhorrent. If the program were to find evidence, it would notify the authorities. When should such methods be used, and when should such evidence be admissible in a criminal trial?

There are two ways the U.S. could decide whether a program like this should be put into use. The first is based on legal precedent, which would mean assessing the constitutionality of the search. The hypothetical program is such a far cry from the intrusive and imperfect human searches of the past that legal experts are split on whether it counts as unlawful, and whether a warrant would be required for its use. In the case of national security, Judge Richard Posner, at a conference on privacy and cybercrime, said, “If the NSA wants to vacuum all the trillions of bits of information that are crawling through the electronic worldwide networks…that’s fine.” Though not necessarily a matter of national security, the impartial big-data collection of the pornography program represents a similar type of search. The question arises: On what grounds did the founding fathers want to block searches and seizures? What would they have said if they knew how sophisticated our searching capabilities would become?

The second possible route would be to dispense with questions of constitutionality completely. It seems obvious that the Constitution does guarantee a right to privacy, even if the details of that right are in flux. That said, some believe that a right to privacy today is wishful thinking. Former Amazon chief scientist Andreas Weigend wrote in his column “Is Privacy Dead?” that “the time has come to recognize that privacy is now an illusion.” Others, like Posner, argue that “much of what passes for the name of privacy is really just trying to conceal the disreputable parts of your conduct” and that “privacy interests should really have very little weight when you’re talking about national security.” If one agrees that we should move away from, or fundamentally change our understanding of, the right to privacy, the question then becomes: legal precedent be damned, should we institute an omnipotent legal system?

Some people would answer in the affirmative. Most Americans agree that child pornography ought to be illegal. If society suddenly had a program that could catch everyone who has broken this law, what is the argument against implementing it? Is it logical to ban this program for privacy’s sake when the only thing that privacy is protecting is something we all agree shouldn’t be legal, and is, in fact, morally abhorrent? As technology continues to advance, confronting these questions and ethical dilemmas will become increasingly difficult and vital.

Rather than a program that detects a single type of illicit activity, imagine an omnipotent system that could detect whenever a law was broken, whether on the internet or in real life. If someone committed a crime, the authorities would instantly be notified. Regardless of the type of law broken, how difficult it currently is for the government to find the lawbreaker, or the extent to which the law was broken, this hypothetical system would gather all of the information needed to identify the criminal. This could take the form of a microchip put into the head of every citizen that allows the government to monitor what they perceive or even think. All the data gleaned from this system would be run through a program that could detect any instance of illegal action.

One might contend that a society of this type is reminiscent of a 1984-esque dystopian nightmare. That said, this system seems to be one logical consequence of artificial intelligence employed in the service of law enforcement. Theoretically, this system could put an end to wrongful convictions and police bias. Additionally, no one would ever get away with a crime. There would be no more serial killers claiming victim after victim for years on end. If one believes the purpose of the legal system is to deter harmful behavior, and to protect law-abiding citizens by taking those who are a danger to others off the street, then this system seems pretty close to ideal.

There would be no risk in breaking a law under this system, because it would be a certainty that every criminal is caught. Unless someone was willing to commit the crime regardless of the punishment, nobody would break any laws. The public response would inevitably be that the system is overbearing, unnecessary, and a bastardization of our right to privacy. A 2015 study showed that 54 percent of Americans believed the U.S. should not monitor the internet activities of ordinary Americans in order to more effectively fight terrorism. Though there are subtle differences—an impartial A.I. scanning through footage and data to find illegal materials is different from a potentially corrupt human performing the same task—it is not unreasonable to believe that there would be a negative response to this new system. The public would likely turn even more vehemently against this system if those same Americans were asked whether monitoring should be increased to catch not just murderers, but also jaywalkers, drivers without working headlights, or litterers.

The system would be unpopular, not solely because of the ways in which it would compromise individual privacy, but also due to the kinds of rules it could enforce. Theoretically, someone going 61 miles per hour in a 60 m.p.h. zone would instantly be fined. This seems excessive, but even if society decided to change this by saying no fine should be imposed unless the driver was going more than five m.p.h. over the speed limit, the question then becomes why the speed limit shouldn’t simply be changed to a strictly enforced 65 m.p.h. instead. The government prohibits those under the age of 21 from consuming alcohol, but many institutions and individuals are lax in their enforcement of this rule. Should a college, and many of the students within it, be penalized for all the drinking that occurs on campus? Should a bar that accidentally let in a minor with a fake ID have its liquor license immediately revoked? Should a parent who gives their kid a sip of wine at a family dinner face criminal charges?

The examples are endless. There are plenty of minor infractions that we agree should be illegal, yet we still cringe at the thought of them being perfectly enforced. Surely the societal effects of an unwavering commitment to punishing low-level lawbreakers would be more harmful than the benefits we would gain. One response to this might be the drawing of an arbitrary line for laws that should be enforced under the omnipotent system, and ones that should be enforced under our current, imperfect system.

Though a potential improvement, this altered set-up still raises questions, such as the classic problem of where to draw the line. Iterations of this problem appear in many political discussions: How much should the government limit free speech to fight against the harm speech can do? How should the government regulate the right to bear arms to fight against the harm guns can do? And the list goes on. This line is, more often than not, arbitrary and difficult to agree on. The tension between personal freedom and societal good lies at the center of many of today’s most intense political debates.

In addition, if we have the technology to perfectly enforce a law, and choose not to use it, would it suggest that we don’t truly care about what the law prohibits, and that the law itself should be changed or removed? To answer this, one must determine the purpose of the law, which is no easy task in the face of technological advancement. If the purpose is to deter people from committing crimes, then this omnipotent system seems to be the clear choice. If the purpose of law is to set the standard of minimally acceptable behavior in a society—by way of enumerating unacceptable behaviors—then surely the omnipotent system is the best way to uphold that standard.

If we consider those the core purposes of the law but we still refuse to perfectly enforce laws like speed limits and the drinking age, then perhaps such minor laws shouldn’t be laws at all. Perhaps, as we grow closer and closer to this hypothetical society, the litmus test for a law is whether we would be okay with it being perfectly enforced. Actions that don’t pass simply become unenforceable guidelines: recommendations that the government doesn’t care enough about to punish you for ignoring.

At the same time, this line of argumentation may suffer from the fallacy of legibility. We as humans seek to make order out of a chaotic world—to transform the illegible into the legible—which often results in us ruining systems that were functional simply because we could not comprehend them. The same principle applies to the states we collectively form. In his book Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, author James C. Scott describes an approach he calls “authoritarian high modernism”: a government looks at a complex reality, fails to understand the subtle properties that allow it to function, blames that failure on the irrationality of the reality rather than on the limits of its own comprehension, and proceeds to argue for and bring about a simplified, idealized version of that reality, conflating simplicity with rationality. The end result: the collapse of what was supposed to be utopia.

It seems irrational to condemn certain actions in our legal code, and then proceed to turn a blind eye when they happen. However, this is something we have done for thousands of years, it is something we do now, and it is something we will likely do far into the future. Despite what appears to be a logical inconsistency, our society has not yet burst at the seams. What is more likely is that humanity simply doesn’t understand the subtleties of the infinitely complex societies it creates.

A litmus test is attractive because it is simple, providing a (seemingly logically consistent) yes-or-no answer to a multi-layered question. This is exactly what makes it dangerous. Questions of legality, enforceability, and morality are ones we have grappled with for thousands of years, and they are not ones we should try to definitively answer the moment Siri becomes omnipotent. Humans want to make order out of chaos. However, at least for those under the age of 21 trying to cultivate rebel status, there is hope the robot overlords will let people drink illegally.