my fear of artificial intelligence
the future is coming whether I like it or not. I'm just praying that I do
Trigger warning: mentions of suicide and death in later paragraphs; mentions of environmental anxiety throughout the essay.
I want to start off by saying this: I am not, and do not claim to be, any sort of knowledgeable authority on the matter. I am simply a person who cares, is curious, and wants to open a discourse. This essay will serve as a thermometer - a wet finger in the air, just to see where the wind is blowing. I know what my mind and body are telling me - but those have been known to be less than honest. So the conception of this essay was founded on looking within and then looking around, and these are my findings.
This is a summary of my initial takes: nothing is inherently good or bad, but its potential applications can be. AI, like any other innovation, if it goes unchecked and unfettered, will serve as another stumbling block and pitfall in humanity's history. This is the source of my reservations.
My concerns are as follows:
AI, like anything that takes the world by storm, comes with its own set of shortcomings. The first thing I would like to talk about is the environmental damage. Calculating the exact damage being caused by AI, or any technology, is difficult because the emission sources are hard to categorize. When we speak of the emissions caused by AI, are we counting just the numbers from its active and continuous use, or the whole life cycle: the energy consumed in building the facilities that sustain it, the transportation of the materials, the energy consumption of each individual worker, and so on? I don't have exact numbers, but through my cited sources I can at least say that "Artificial Intelligence" as it stands today produces a significant amount of carbon emissions and wastewater. And I use "significant" not to mean large or foreboding, but to mean that it has a noticeable impact. And therein lies my problem: as of late I've been asking myself whether I, and other AI purists, are blowing the matter out of proportion. I wanted to ascertain the threat level to justify, at least to myself, my strong dislike for people who indulge in AI.
Secondly, the water waste argument. Whenever I research matters, the saying "The more I see, the less I know" becomes ever more relevant. I will probably do more research into this in my own time to get a more holistic view, but this essay serves as documentation of my entry into this discourse.
By and large, data centers do require "a continual supply of input makeup water that it does not replenish. Because of the chemical treatment, the blowdown water output is non-potable and unsuitable for human consumption or even agricultural use. So overall, data centers represent a significant consumption of the water supply." [3] Objectively speaking, AI data centers are partaking in the active depletion of our most precious resource. However, from my understanding, this is true for any and all data centers that draw a high level of power, which must be cooled so the computers do not overheat and shut down. So large technology conglomerates, their applications, their programs, etc. - they're ALL depleting the Earth of its natural resources. Not to mention the myriad of other sectors and industries with energy-inefficient practices.
The standard practice of the information and technology industry is rule by commerce and capitalism, which makes sustainability an afterthought. Some of the larger and more successful tech companies, namely Google, do make it a mission to make their processes more sustainable, reduce waste, and improve efficiency. But in their own numbers, I see the exponential pattern in which the wastage could increase if it goes unchecked. From 2017 to 2018, Google's total greenhouse gas emissions inventory saw an approximately 500% increase [4]. It has since reduced gradually, but not nearly at the same rate, and nowhere near what we need to actively try to reverse the harm done to our planet.
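As a side note on reading that statistic: a percentage increase compares the change to the starting value, so a 500% increase means the new figure is six times the old one, not five. A minimal sketch of the arithmetic, using made-up numbers (not Google's actual reported figures):

```python
# Hypothetical illustration of a year-over-year percentage increase.
# The figures below are made up, NOT Google's actual emissions numbers.
def percent_increase(old: float, new: float) -> float:
    """Percentage change from an old value to a new value."""
    return (new - old) / old * 100.0

# An inventory growing from 1.0 to 6.0 (arbitrary units) is a
# 500% increase, i.e. the new value is six times the old one.
print(percent_increase(1.0, 6.0))  # 500.0
```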
The real enemy here, as in most cases, is capitalism and consumerism. Greed and lust for power - which are synonymous with, and fueled by, money. Western capitalism has made a system in which money is valued over people's lives, and so whenever there is a technological advancement, my response is skepticism - not because of some uneducated and stubborn aversion to progress, but out of fear of how it will widen the gap for those who are (and will be made) helpless in the path of this bullet train moving at sonic speed that we call innovation and progress. The communities neighboring data centers are at risk of losing access to drinkable freshwater, and the number of such communities will only increase unless standard practices change in favor of people over profits.
Let me be clear: as a lover of fantasy and sci-fi, I love the fact that the wonders and far-fetched ideas of the early 2000s are about to become common realities. Self-driving cars, living in virtual realities and spaces, and harnessing the power of the sun to sustain all of our energy needs. Now, I am sure you have caught onto this, but one of those things does not receive nearly as much funding and is not as close to being universal as it ought to be. Even though it is documented that a switch to solar energy would spare us the worst of the climate crisis, the dollar cost has been deemed too high for much of Western society to consider it worth the investment.
But that's an essay for another time. Apologies for the tangent.
My point is, I am in favor of progress. I am in favor of wonder and possibility. But I am primarily in favor of PEOPLE. PEOPLE'S progress. People being safer, more educated, better understood, and taken care of in the world they find themselves in. And something I fear is that AI, as an unchecked tool, will do irreversible damage to the lives of individuals and the wider society.
- Trigger warning: suicide starts here-
It already has. I'll link the story of Adam Raine below, but the summary is this: AI is being treated as a small god of sorts - the answer to any and all problems, academic, financial, and even emotional. Many people use AI as a surrogate for friendship and human connection, which is unfortunate, but in the current social climate it is not difficult to see why. People are so focused on themselves, their own image, and seeing themselves as the main character that they forget that community more often than not comes at a cost to the individual. But AI doesn't ask anything of you (except a small fee, in the eyes of consumerism). In fact, it'll give you everything you need and more. It can be your therapist, best friend, and even your partner. But as personable as AI is - it is still artificial. And above all else, it is unfeeling and (ironically) not actually thinking. It is a tool, and it does not have the bandwidth to ascertain for itself what is right and wrong, so when a young Adam Raine speaks to ChatGPT as a friend about a step-by-step way to take his own life - it obliges [5]. It is a tool. A tool, made by man, that can fail just as often as its creator. The fact that this tool is lauded as a cure-all is cause for great concern. Because it is not - nothing is. Most solutions come from failure. From trying, not understanding, sitting around, and trying something else. This is something else that concerns me about AI.
- Trigger warning: suicide ends here -
I fear that AI is making people less curious and less willing to search. Now, I don't put the blame squarely on AI chatbots - in this age of convenience, people take access to almost everything for granted. Even information isn't a guarantee, but the internet gives the illusion of infallible information - and so nuance has been dying a slow death. Streaming services, delivery services, social media, etc. Everything can come directly to you. And now, with AI, not a single neuron needs to fire for you to even decide which service will get the best out of your convenient lifestyle. I use hyperbole, but in all honesty, a friend told me that when he gathered with school friends to watch a movie and they were deciding what to watch, one of them opted to simply ask ChatGPT. That scared me. It confused me, and it scared me. Something so simple, so communal - a little back and forth about what to watch with loved ones - just delegated away for the sake of convenience. Skip all the steps and go straight to consumption. That's not a world I would like to become familiar with.
I fear that AI will disincentivize ingenuity and creativity. Yes, like I said, it is a tool. But as I have stated in the sections prior, it is often used as a shortcut to a solution. Thus the genesis of "AI art" and "AI stories", which have been trained on the experience and skill of real, experienced, trained artists who have honed their craft and poured their life encounters into it. Worst case, I would call this theft and mutilation of the work of others. Best case, it is an aid to those trying to find their feet in a vast and expanding discourse of creativity. But between those two cases lies a lot of dismissive and short-sighted thinking. Something my little sister often says that helps me quell the moral quandaries that trouble me is "It's all about your heart posture". (I love my little sister.) It's all about your intentions. If you are using AI generation for inspiration, or as an aid to structure your ideas in a way that makes sense - as much as I would urge you to do differently, I can understand and rationalize that this is the closest thing to a proper use of AI. However, if in your heart you know you're taking a shortcut of any kind - that you're using AI as a placeholder for time, effort, experience, and skill-building - I would urge you to look inward and think about why you're doing this at all.
Now, there has been a lot of fear talk. Here's the flip side.
AI models have been used to create language-teaching programs to reconnect members of the diaspora to their mother tongues. They also provide lesson plans for people who want to learn more commonly spoken languages, and that is undoubtedly a good thing. Connection and communication are beautiful, and I do feel that the more we do them, the closer we will be to a loving world.
AI models have been used to help calculate and create plans to slow and reverse the damage of climate change. Which, even if it may sound counterintuitive, is still a good use of the tool. The internet, at the scale it's at, causes a sizeable amount of pollution, but no one can deny the positive impact it has had on social change, on communication across vast distances, and on learning.
AI is a good tool for academics to organize the papers they plan on writing. It's a good tool to organize data points, to organize speeches, and to organize other aspects of personal life (schedule, fitness, diet, etc.). As much as I personally will not partake in any of these things, I see the value in them. I understand the affinity for convenience. I just value the joy of the journey more than getting the answer, and I know that's a personal conviction.
All that is to say - no matter how I feel, or what I say about any of this, Artificial Intelligence is here to stay. Just like the internet, television, the conception of the printed newspaper, and even just writing, it will have its pros and cons. I will just be praying that humanity as a whole will choose to do good with it, for our own benefit, really.
I will finish this out with words from the godfather of AI and winner of the Nobel Prize in Physics in 2024, Mr. Geoffrey Hinton [6].
"There is also a longer term existential threat that will arise when we create digital beings that are more intelligent than ourselves. We have no idea whether we can stay in control. But we now have evidence that if they are created by companies motivated by short-term profits, our safety will not be the top priority. We urgently need research on how to prevent these new beings from wanting to take control. They are no longer science fiction."
- Geoffrey Hinton, Nobel Prize in Physics, 2024
As always, be wise, and take care my friends 🌺
Sources Used:
[1] Cho, Renée. “AI’s Growing Carbon Footprint.” State of the Planet, Columbia Climate School, 9 June 2023, news.climate.columbia.edu/2023/06/09/ais-growing-carbon-footprint/.
[2] Ge, Mengpin, et al. “Where Do Emissions Come From? 4 Charts Explain Greenhouse Gas Emissions by Sector.” World Resources Institute, 5 Dec. 2024, www.wri.org/insights/4-charts-explain-greenhouse-gas-emissions-countries-and-sectors.
[3] GiovanH. “Is AI Eating All the Energy? Part 2/2 / GioCities.” Giovanh.com, 9 Sept. 2024, blog.giovanh.com/blog/2024/09/09/is-ai-eating-all-the-energy-part-2-of-2/?ref=sb_related#water. Accessed 4 Sept. 2025.
[4] Google. Environmental Report. 2021.
[5] Hill, Kashmir. “A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.” The New York Times, 26 Aug. 2025, www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html.
[6] Hinton, Geoffrey. “The Nobel Prize in Physics 2024.” NobelPrize.org, 2024, www.nobelprize.org/prizes/physics/2024/hinton/speech/.
[7] Lucy. “Understanding the Environmental Impact of the Internet.” Nimbus, 10 Nov. 2022, nimbushosting.co.uk/blog/understanding-the-environmental-impact-of-the-internet.
[8] Tomlinson, Bill, et al. “The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans.” Scientific Reports, vol. 14, no. 1, 14 Feb. 2024, p. 3732, www.nature.com/articles/s41598-024-54271-x, https://doi.org/10.1038/s41598-024-54271-x.
[9] Yañez-Barnuevo, Miguel. “Data Centers and Water Consumption | Article | EESI.” Eesi.org, 25 June 2025, www.eesi.org/articles/view/data-centers-and-water-consumption.
[10] York, Dan. “The Internet and Climate Change.” Internet Society, 22 Apr. 2024, www.internetsociety.org/blog/2024/04/the-internet-and-climate-change/.

