ARTIFICIAL SWEETENER
AI (Artificial Intelligence) isn’t something that screams “Mammoth Lakes” in the same way that “I’m working three jobs to pay for my studio” does. There is irony in the contrast between the high-tech world of AI and the serene, nature-driven environment of town, which often prioritizes sport, relaxation, and a break from technology. But even small towns have to ride the technological wave at some point. If you received an email written by AI, would you be able to tell? Could you distinguish a human writer from a computer program created to mimic you to its fullest capability? Are we sure we even know what AI is at this point?
Imagine AI as a super smart computer program. It’s not a robot with arms and legs, but more like the brain inside a computer. That “brain” can do things that far outshine human capabilities, almost like magic. But it’s not magic; it’s just really advanced math and programming.
You can have AI do almost anything. It can write you a poem about being a ski school instructor at the mountain:
On Mammoth’s slopes, where snow-capped peaks stand tall, I’m a ski school instructor, answering winter’s call.
In this pristine playground, where dreams take flight, I share the joy of skiing, in the purest light.
Technology in today’s age permeates even the most remote and traditionally low-tech areas. Even in a town where people come to escape the hustle and bustle, you can find AI influencing local life, whether it’s aiding students in their homework, or assisting a local business in communication. In many cases, the AI algorithms that impact daily life are hidden beneath the surface. They work in the background, learning and adapting to user behavior and preferences, making it difficult for individuals to pinpoint their influence.
Indeed, the potential that generative AI holds for enhancing productivity and streamlining workflows is immense. Right now, an eerie sense of rivalry envelops the tech landscape as Google and Microsoft sprint to infuse AI alchemy into the core of their offerings. As the two race each other to bring new AI capabilities to market for a price, they trail the launch of OpenAI’s ChatGPT, which went live last year. ChatGPT is a free chatbot built on a large language model; it can write professional emails, generate code, craft news articles, and produce a slew of other content in humanlike conversational dialogue.
ChatGPT is also raising questions about the limits of academic integrity, and presenting an easy option for those looking to take a shortcut in their schoolwork. For someone who graduated college in 2020, as I did, the idea of ChatGPT writing a thesis paper would have been hard to imagine. Graduating in 2023 is a completely different picture. It seems that almost every student knows what ChatGPT is and what it does, and many are willing to admit to using it for school and work. Biologists and healthcare workers alike use it every day in their jobs. Using AI for mundane tasks, such as writing a “professional”-sounding work email when you are fresh into the workforce, seems harmless. Using it as a starting point to explain and learn about a topic seems like a great resource more people should utilize. Olivia Salter, a classmate of mine, works at a law firm in Santa Barbara and says she uses ChatGPT to communicate with clients daily. She believes it’s a good way to say everything she needs to say in a concise and consistent manner.
However, newsrooms and writers across the nation are on guard and taking measures to protect their content from ChatGPT. The Guardian’s Ariel Bogle reports that CNN, The New York Times, and Reuters have all added code to their websites that blocks OpenAI’s web crawler. CNN’s Oliver Darcy reports that several additional news sources have taken the same measure, including Disney, Bloomberg, The Washington Post, The Atlantic, ESPN, and ABC News. Having access to the deep and resourceful archives of a news organization, its intellectual property, is invaluable for training an AI model such as ChatGPT. There is an increasing urgency to take a look at how intellectual property is being used on the internet, from news articles to novels.
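For readers curious what that blocking code actually looks like: OpenAI’s web crawler identifies itself as “GPTBot,” and a website can ask it to stay away through the standard robots.txt file that crawlers check before indexing a site. A minimal sketch of such a directive might look like this (the exact rules each news outlet uses will differ):

```
# robots.txt — placed at the root of a website
# Asks OpenAI's crawler (which identifies itself as GPTBot) not to fetch any page
User-agent: GPTBot
Disallow: /
```

Compliance is voluntary on the crawler’s part, which is one reason the debate over consent and compensation has not ended with a few lines of configuration.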
Jane Friedman reported to CNN that she found her name on books being sold online, only they weren’t her words but vague imitations of her style that seemed to be AI-generated. James Patterson and Margaret Atwood are two of the tens of thousands of authors who have signed an open letter calling on industry leaders like Microsoft and OpenAI to obtain consent before using authors’ work to train AI, and to properly compensate authors when they do.
Another peer of mine, Reilee Handlin, attended Butte-Glenn Community College District in Oroville, California, in the spring of 2023. She said that in her Political Science of the United States Government course, taught by Nathan Steffan, students were warned not to use ChatGPT. Steffan informed them that software was in place to detect AI-generated work, and that the consequences of plagiarism were not worth it.
Recent graduates of San Diego State University (whom I know well) report widespread use of ChatGPT for multiple purposes, even though using the technology at all, even just to generate notes or create an outline for a paper, is considered academically dishonest. “Now there’s a new program the teachers have integrated into campus. Along with checking for plagiarism, it checks to see if the writing is AI-generated or not. Although this isn’t a 100% proof way of detecting AI base writing, it sure helps teachers,” one recent Business major said. However, there are also sites like GPTZero, where students can run their work in advance to see whether it will be flagged as AI-generated, helping them remain undetected.
I asked recent graduates about the credibility of their degrees and the reputation of their institutions in the face of AI tools that can both assist and hinder learning. “There’s definitely a way to use AI like ChatGPT to help students, and there’s ways that it definitely hinders learning. I hope there’s a way to find that balance,” the same Business grad said. There has been a lot of talk that future professionals won’t be as well educated as they are now because of tech like ChatGPT. There is a line between having a website create a bullet-point list about a topic and having a website write an entire paper that a student simply throws their name onto and turns in for a grade.
Certain high schools and colleges use AI detection software to try to catch AI-generated work. However, the technology is not 100% accurate, and there is room for interpretation and judgment. Educators now have to worry not only about students copy-pasting a paragraph into an otherwise original essay, but about entire essays being written in seconds by AI. It also raises questions about the future of writing and writers. If the biggest names aren’t protected from the AI revolution, what protections do smaller outlets and individual writers have? What does the future of writing look like, of news reporting, of corporate processes, if everything is AI-integrated? How can writers protect the integrity of their work and their profession while also supporting technological advancement within society? There is truly no end to the questions …