What is Generative AI and What Does it Mean for the Future of Public School Districts and the Legal Profession?
Introduction
As advancements in artificial intelligence (AI) continue to shape our society, the educational and professional sectors find themselves at a crossroads. State-of-the-art language models such as GPT-4 hold immense potential for transforming the ways we learn and work, but alongside the benefits come potential dangers that demand careful consideration. In this article, we delve into the landscape of generative AI and its implications for public school districts and the legal workplace. By weighing the risks and rewards, we aim to shed light on the complex legal and ethical challenges that lie ahead and to offer insights into navigating the uncharted territory of AI integration in these critical domains.
What is Generative AI/GPT?
Generative AI refers to a branch of artificial intelligence that focuses on creating new content based on patterns and examples it has learned from. Think of it as a computer program that can generate human-like text, images, or even music that is often difficult to distinguish from content created by actual humans. One of the most prominent examples of generative AI is the Generative Pre-trained Transformer (GPT) developed by OpenAI in San Francisco. GPT is a sophisticated language model that uses a deep neural network to understand and generate text that is coherent, contextually relevant, and often remarkably convincing. Trained on vast amounts of data, GPT can produce human-like responses, write articles, draft emails, and assist with a range of tasks that involve understanding and producing written content.
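For readers curious about what “generating text” looks like in practice, the short Python sketch below shows one common way such a model is queried through OpenAI’s API. The model name, prompt, and client setup here are illustrative assumptions rather than a recommendation of any particular product or workflow, and any output should be reviewed by a person before it is used.

    # A minimal sketch of querying a generative language model through OpenAI's
    # Python client (v1.x). The model name and prompt are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4",  # substitute whichever model your account can access
        messages=[
            {"role": "system", "content": "You are a helpful writing assistant."},
            {"role": "user", "content": "Draft a short, polite email rescheduling a parent-teacher conference."},
        ],
    )

    # The reply arrives as ordinary prose; a person should verify it before sending.
    print(response.choices[0].message.content)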
Emerging Developments
The newest iteration of the GPT platform, GPT-4, has many exciting new capabilities, including the ability to build a working website from a sketch drawn on a napkin, suggest meals based on a picture of the inside of your fridge, and generate images like new shoe designs. It can also reach greater levels of analytical sophistication than previous iterations, passing the bar exam in the 90th percentile and scoring a 1300 on the SAT. This new technology is already being put into action: companies like Khan Academy and Duolingo are using GPT-4 to help educate students worldwide. Whereas this level of computational power was once the province of research labs and supercomputing centers, everyday users can now access GPT-4 for $20 per month through OpenAI’s ChatGPT Plus subscription, or use the older, somewhat less capable GPT-3.5 model through ChatGPT for free. The implications for daily life are substantial, but for those of us in the legal and educational fields, they could be even greater.
AI in the Law
According to Reuters, law firms around the country are currently facing “more than a 10% year-over-year increase in direct expenses.” As a result, many firms are looking for ways to increase efficiency and cut costs, and some experts believe that embracing AI may be the answer. By leveraging artificial intelligence tools like GPT-4, law firms can significantly cut down on the hours devoted to rote, time-consuming tasks like record keeping and drafting, freeing up time for work that is both billable and of direct benefit to clients. According to one leading attorney in the field, “with AI, law firms can quickly determine outcomes, reducing costs and risks while defending their clients with more information.” The adversarial nature of legal practice means that if your firm isn’t leveraging this powerful new set of tools, it’s quite possible that opposing counsel is, placing you and your client at a disadvantage.
Although AI can be a powerful tool for legal professionals, there are also important risks to consider before taking the (potentially expensive) leap of integrating AI into your practice. While the reduction in manual labor could be seen as a positive change, it may also result in significant job displacement. According to Deloitte, about 100,000 legal-related jobs could be automated by 2036. However, according to the same study, technology and AI have so far led to “a loss of 31,000 positions in the legal sector, but there has been an overall rise of around 80,000 [legal] jobs, the majority of which are higher qualified and better compensated.”
One very recent case highlights another potential pitfall of integrating generative AI into your law practice. A man named Roberto Mata sued Avianca Airlines, claiming that he was injured by a metal serving cart on his flight. Avianca asked the federal judge hearing the case to dismiss it, but Mr. Mata’s lawyers objected, “submitting a 10-page brief that cited more than half a dozen relevant court decisions,” including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines, and more. The problem, however, was that no one could find the decisions or the quotations cited in the brief, because ChatGPT had fabricated them. “Judge Castel said in an order that he had been presented with ‘an unprecedented circumstance,’ a legal submission replete with ‘bogus judicial decisions, with bogus quotes and bogus internal citations.’ He ordered a hearing for June 8 to discuss potential sanctions.”
In a profession where minute details can mean the difference between victory and defeat, a tool that cannot reliably identify the finer points such work requires is of limited use on its own. For now, AI lacks the ability to “listen, empathize, advocate, or understand politics” and thus cannot be relied upon alone to handle certain tasks. This stumble should not discourage the adoption of generative AI as a valuable starting point for research; the key is to verify the information you receive. And because reading, analyzing, and summarizing are all fundamental legal skills at which models like GPT-4 are becoming increasingly proficient, one law partner believes that AI will “force everyone in the profession, from paralegals to $1,000-an-hour partners, to move up the skills ladder and stay ahead of the technology.”
AI in Education
Because this emerging technology carries so many pros and cons, it can be difficult to know what approach to take, especially for those responsible for the well-being and development of children and young people. This dilemma has been playing out across the country, and educational institutions have responded in markedly different ways.
For example, at Stanford University, ChatGPT has sparked debate over its role in academic integrity. While some argue that using such tools violates the honor code and constitutes plagiarism, others praise them as extremely useful for idea generation and brainstorming. In response to this controversy, Stanford’s Board on Judicial Affairs has cracked down on the use of AI. Yet a poll of nearly 4,500 Stanford students found that close to 20 percent had used generative AI on one or more of their fall-quarter assignments or exams. Some professors have taken drastic measures in response, going as far as requiring exams and essays to be written by hand. However, this has not quelled student use, with 56.2 percent stating in a poll that they used generative AI for brainstorming, outlining, and forming ideas despite the institutional ban. This suggests that even in higher education, outright bans are an ineffective way to prevent student use of generative AI, and the question remains: is banning this technology the right thing to do?
An alternative approach to this controversial issue is emerging in Boston’s public schools. While “New York City, Los Angeles Unified, Seattle, and Baltimore School Districts either banned or blocked access to generative AI,” fearing rampant plagiarism and the erosion of critical thinking, Boston has instead adopted a “responsible experimentation approach.” This policy outlines potential use cases for AI in the public school system that could serve as a blueprint for other districts across the country. For example, Boston public servants are now encouraged to use generative AI to get started on documents such as memos, letters, and job descriptions. These tools have also helped translate government-speak and legalese into plain English, making information more accessible to residents. “Generative AI can also help with translation into other languages so that a city’s non-English speaking populations can enjoy equal and easier access to information about policies and services affecting them.” These principles reflect a mindset of collaboration with AI rather than fearmongering and panic. By embracing emerging technology and supporting reasonable experimentation, Boston is opening the door to the potential good that AI can do in governance.
Conclusion
The advent of generative AI, exemplified by technologies like GPT-4, presents both immense potential and significant challenges for public school districts and the legal profession. The benefits of integrating AI into these domains are evident: it can increase efficiency, cut costs, and provide valuable insights, assisting with tasks ranging from drafting legal documents to enhancing educational experiences. However, careful consideration of the risks is essential. The potential for job displacement and for inaccuracies in generated content raises important ethical and legal concerns, and finding the right balance between leveraging AI’s capabilities and preserving human expertise and judgment is crucial. As the varied approaches taken by educational institutions and public school districts demonstrate, responsible experimentation and collaboration with AI can pave the way for a future in which the benefits of generative AI are harnessed while its pitfalls are mitigated. Navigating this complex landscape requires a commitment to maintaining the integrity and values of education and the legal profession in the face of rapidly evolving technology.