Independence with or from AI
Why we cannot necessarily trash AI without suffering major consequences.
I was a junior in high school when ChatGPT, OpenAI, and the concept of generative artificial intelligence entered the public consciousness. I am about to be a junior in college, and I can hardly conceptualize how much AI has integrated into our daily lives. The question in my honors physics class, led by an ingenious professor (who I hope is still as motivated as he was two or three years ago), was whether AI was the future or just a temporary trend. Coming back to the present: it is most definitely significant, and it is the future. It is in every Google search, giving us a synopsis (of sorts) of hundreds of webpages in a few short paragraphs. Whenever it goes down, students and employees take to social media to complain about how they need it for whatever projects or assignments are due that day or week. It was down for ten hours yesterday, something I didn't even notice because it worked for me. That was in the afternoon, however, and I don't usually use the tool in the mornings (I commute and take notes for my classes). This dependence is unbelievable, but I would say it's not surprising. I see it as a second, more environmentally damaging, version of Google. ChatGPT has about 400 million active users to this day, and it is inevitable that, if it were to go down, shut down, or become illegal, people would find alternatives. There are already popular alternatives, such as DeepSeek and Claude. There is a quote I think of often from the TV series "Pantheon," a sci-fi show about themes very similar, and very real, to the ones we face today:
"You can't stop the future."
It's mentioned multiple times, and it speaks directly to the fear and worry that come with AI. We cannot stop it at this point; we must regulate it and control what we do with it. There must be legislation on what people can legally do with it, and that also needs to be enforced by OpenAI, DeepSeek, and every other provider of easy-to-access models. Many bills have been proposed throughout the United States limiting what governments and companies can do with these models. I would advocate for these kinds of actions, because we need regulation and laws that protect the safety and integrity of our institutions and of the individual.
As an artist, I am very aware of the harms of AI, and I firmly believe that no artwork should be used by these systems without explicit permission from the artist, with AI-generated art itself being the exception. AI may skirt some copyright law because of how it uses images: its output may legally be considered distinct enough to count as a unique work, even if it is ultimately a gross amalgamation of many different artworks. To be fair, I don't think AI will properly replace graphic art or animation under these restrictions any time soon, and frankly the uniqueness of a person's artwork, renewed with every new drawing, is virtually irreplaceable. Sure, AI can copy it very closely if not exactly, but ultimately we do not value copied art (which is also illegal under copyright); we value the originals. As long as we respect the originals and maintain copyright law, we should be okay. In my opinion, AI cannot replace imagination, more specifically a single person's imagination. AI cannot replicate the inspiration and origins of one's ideas to create unique works without prompts. If we understood this innately, we wouldn't even start to worry about AI art.
There are many predictions as to when AI will become "superhuman" (there are multiple, probably better terms for this), or will surpass the ability of the average human. Websites like AI 2027 provide a fictional forecast of how AI might impact the world. It offers two endings: a "Slowdown" prediction, a path of heavy regulation and an eventual slowing of AI development, and a "Race" prediction, in which the superpowers compete to create the best model as soon as possible. Both endings are bleak, but I believe there is a reality where, if we start now, we can mitigate the catastrophic consequences of AI optimization and usage.
We need to see AI as a tool, not a replacement for your brain. You need to be able to contain and hold information in the organ that was designed to do that before you start using AI (so no usage until undergraduate or graduate school). For most majors, access should probably be restricted even longer if the field doesn't need or benefit from it: computer and data science majors can and probably will need to study AI models as they become an important aspect of education, while art, writing, and mathematics majors probably don't need the tool, since their professions require some sort of creative or automatic knowledge. You need to have calculus memorized before you move on to anything theoretical, because it is the framework for theory. You need to know the creative process as an artist, or none of your AI-assisted work will have any real uniqueness.
Finally, thinking toward the future, we must think about our values as human beings. We have historically valued human life by its contribution to society. When inanimate consciousness replaces our usefulness, what are we truly worth? What do we do when, eventually, all our jobs are replaced? It is a scary projection, but I think we will eventually find that we may have more freedoms if we continue to maintain the sacred nature of human life. When we remove our economic value, we find that we can achieve equality. When no one needs to work, my hypothesis is that we will be reduced, or elevated, to our value as a carbon-based species. In our dystopian movies, where "robots take over," we always see humanity reduced to cattle, to a less conscious being in comparison to our robot counterparts. However, I propose an alternate perspective: creators, gods.
We created AI, and we facilitate the existence of these works. I connect this ability to distinctly fabricate (as I did previously with artists) with the abilities of gods. While we don't literally need to consider it that way, we must maintain some semblance of value for the human race. Intelligence, physical ability, beauty, and so on become irrelevant when we value humans unequivocally and at baseline, that is, inherently. When we say that life is sacred, as Catholicism holds, we mean it is fundamentally important. We as a collective had, have, and will have the ability to be great, and even greater, in a way that is not quantifiable with the knowledge we have now. Maybe one day we will know, but somehow, some part of me thinks it will be easier to create pseudo-superintelligent AI than to succinctly explain the varied effects of SSRIs on the human brain. We were given authority without purpose, and we can use that authority to give ourselves meaning and significance.
To Recap:
AI must be seen as a tool, not a replacement for a brain
we need to restrict AI access for students, as they need the foundational material of education to be ingrained. This is critical for a functioning society.
AI needs to be regulated, both for the companies profiting off of it and for governments
AI can coexist with artists; an artist's work is not simply skill, but the content and richness of their ideas and execution
AI is inevitable and will become integral, but we have the opportunity to research and prevent a catastrophic future led by blind profit
humans must be treated as innately valuable in order to prevent the devaluation of the individual in a society where work becomes optional
I will elaborate more on this in the future, but I just wanted to get my thoughts down. Watch Pantheon; it will make all of this make much more sense. Also, please do not give the global superpowers and their allied governments the ability to weaponize AI. A computer can never be held accountable.