My thoughts on AI

By Stefan Nikolaj on January 29, 2025. Tags: thoughts.

I’ve been interested in AI ever since GPT-2 came out in 2019. My first use for it was to generate Shakespeare fanfiction for a creative writing homework assignment for my English class. I presented the homework and how I made it in front of the class, and as far as I remember, everyone agreed that the technology was pretty cool. However, I also remember GPT-2’s tendency to go “off the rails”: after a couple of paragraphs it would ultimately lose its mind and start talking about unrelated things. When you got it to focus, GPT-2 could write mediocre text that passed as a bad English homework assignment, and it was generally only funny when you played around with its settings and gave it weird prompts. I quickly lost interest but kept a close eye on the LLM space, since it genuinely seemed like a cool technological development.

Nowadays, the AI space, and LLMs in particular, has grown immensely. GPT-2 allegedly cost tens of thousands, maybe hundreds of thousands of dollars to train. The same company (OpenAI) now reportedly needs hundreds of millions. Billions are being shoveled into the endless money pit of rebadged $200 desktop GPUs that Nvidia is selling to AI companies for 10-100x the price. At the same time, companies wanting to “write blogs for SEO” and do “innovative digital marketing” have completely destroyed search results in every search engine. Starting from GPT-2 and -3, I quickly learned to distinguish between AI writing and human writing. I can confidently say that many top university professors, research papers, government organizations, prominent newspapers, and other traditionally respectable groups now rely extensively on AI. Unfortunately, as with most of the amazing technological innovation of the past half-century, AI has been co-opted by companies to replace as many employees as they can, while raising their prices and lowering their quality, even though their input costs keep getting exponentially lower and their tools keep getting better. The easy cynical interpretation is that AI development is just one more step in the rich getting exponentially richer while the poor get exponentially poorer. People will lose jobs, and instead of celebrating the fact that more people no longer need to do menial tasks because those tasks can be automated, we make them homeless, starve them, and try to find them a new menial task, because our current systems aren’t built for humans to have leisure time or not need to work.

Nevertheless, this trend is not new. In fact, it has likely been going on since the foundation of the first societies, starting with slavery, then the first industrial revolution, then subsequent developments in computing and automation, then outsourcing to the Global South, and finally replacing even the Global South. In my view, AI should be to a worker what a typewriter is to a writer or a tractor is to a farmer: a tool that lets them do their work much faster and more easily, while they remain the specialist leading the work. The AI, like the typewriter or the tractor, is there to do the more menial parts of the labor. The worker’s job is to sit in the captain’s chair and direct the tool. Yet companies these days act as if they can simply replace a worker with AI outright, and the stock market rewards them for “cost-cutting”. I foresee an even bigger slump in innovation and a drop in quality in the private sector, with even less innovation and real productivity than there already is. By the way, much research has been done on this around the world, and the public sector is generally responsible for more innovation than the private sector; the ideal seems to be a mix of both. Right now, the West is going in the opposite direction. Naturally, this will not end well for the West, but people don’t seem to mind.

However, I see most of AI’s benefits in precisely the areas that big companies don’t really care about and can’t easily exploit. Under “AI”, I group all technologies built on the same idea: multiplying matrices and “learning” from large data sets. This includes computer vision, big data, LLMs, and more. Before I list the areas where I see AI as a positive thing, I’d like to say that I don’t believe anyone should be forced to use AI. It should exist as a tool for doing work, and only that. There are people who love some specific part of their work and don’t want to automate it; they shouldn’t be forced to. Anyway, here are a few fields where I’ve seen these advances, mainly enabled by drastically improved and more affordable computer hardware, make a big difference:

  1. Translation – the task the transformer architecture behind today’s LLMs was originally designed for. As I talked about in the previous paragraph, there is immense potential for human-supervised LLMs to help translators translate texts quickly and accurately. Many are using them incorrectly, but translation will be a much better field once the toolchains improve. Unfortunately, due to how our system works, many translators will lose (and have already lost) their jobs, simply because of overcapacity.
    1. OCR (optical character recognition) tools have also gotten very good recently, making archival and digitization of books and documents much easier and faster without having to do mountains of meaningless labor.
  2. Teaching – teaching in general involves a ton of menial work. Most teachers spend the majority of their time on things like writing reports, planning classes, or grading homework and exams. AI systems could help teachers with all of these so they can actually focus on the thing they want to do – teach. An AI could, for example, keep a profile of each student for each class and point out the areas in which that student is consistently lacking. Teachers already do that, but as they get more overworked and are assigned larger classes, their feedback and teaching quality decrease. This is a perfect opportunity for a well-designed system to step in and fix that. Also, if anyone reading this has any power to do this, give teachers higher wages. They deserve them.
    1. In addition, students can use LLMs to rephrase texts, get more context about them, and adapt them to their individual learning styles. 
  3. Medicine – I don’t know much about this field first-hand, but I’ve talked with people from entirely different medical (research) backgrounds who see AI-driven research and data analysis as a very exciting and powerful tool for accelerating medical research and automating away some menial medical tasks. Medical staff also deserve higher wages.
  4. Social sciences – companies like Facebook have already been doing this to incredible (financial) success for over a decade now, but the ability to analyze massive amounts of text and extract nuance, emotions, sarcasm, and similar ideas is very powerful for the social sciences. I’ve also talked to some social scientists who see future (and present) data analysis tools for social science as something that helps them immensely. Anecdotally, I helped a friend with a small research project where I used GPT-3 to analyze almost a hundred study responses and extract the participants’ emotions from the text (a rough sketch of that workflow appears after this list). It turned days of work into a few minutes, and when I manually verified all of the responses afterwards, GPT-3 had characterized every one of them correctly.
  5. Engineering, infrastructure, industry – computer vision and big data are increasingly used in these fields to monitor processes and products: to figure out when a bridge might collapse, an engine might fail, or a factory machine is drifting out of spec. Put a smart camera under a bridge, have it watch the supports, monitor some sensors, and compare the readings with a simulated model of the bridge to check that everything is okay (a toy version of that comparison is sketched after this list). This is much better, easier, cheaper, and faster than having some guy drive to every bridge in the country to look at them, and it spares that guy such a tedious job.
  6. Programming – having played around with AI for programming, I can confidently say that it’s a game-changer. My first experience was actually with a Visual Studio beta test of what is now probably Copilot. I didn’t even know I had it; it just appeared one day as I was writing C#, and for menial coding tasks it was absolutely perfect. Nowadays, it’s only gotten better. Here, too, the idea that AI is just a tool holds very true. If I didn’t know what I do about software architecture, what good code looks like, and what good development practices look like, I’d be writing horrendous code. For me, AI is just a small assistant that handles menial tasks like class initialization or finding appropriate methods for what I want to do. I still do the majority of the work – but now I get to spend my time on only the fun and important parts. Also, I’ve only found Claude and DeepSeek to be any good at coding. I’ve tried o1, but it’s just not that good yet, and neither is GPT-4.
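
To make the social-science example concrete, here is a minimal sketch of that survey-emotion workflow. It is not the script I actually used back then: it assumes the current `openai` Python SDK with an OPENAI_API_KEY set in the environment, and the model name, label set, and prompt are all illustrative.

```python
# Minimal sketch of labeling survey responses with an LLM, then checking by hand.
# Assumes the current `openai` Python SDK and OPENAI_API_KEY in the environment;
# the model name and emotion labels are placeholders, not a prescription.
from openai import OpenAI

client = OpenAI()

EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "neutral"]

def classify_emotion(response_text: str) -> str:
    """Ask the model for a single emotion label for one survey response."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": "Label the dominant emotion of the text. "
                        f"Answer with exactly one of: {', '.join(EMOTIONS)}."},
            {"role": "user", "content": response_text},
        ],
        temperature=0,  # near-deterministic labels make manual verification easier
    )
    return completion.choices[0].message.content.strip().lower()

# The model does the tedious first pass; a human still signs off on every label.
responses = ["I was thrilled with how the workshop went.",
             "Honestly, the whole thing made me furious."]
for text in responses:
    print(f"{classify_emotion(text):>8}: {text}")
```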
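
And here is a toy version of the bridge-monitoring idea from the engineering item: compare sensor readings against what a structural model predicts for the current conditions, and flag anything that drifts too far. The numbers, sensor layout, and threshold are invented for the example; a real structural-health-monitoring system would be far more involved.

```python
# Toy illustration of "compare sensors against a simulated model".
# All values and the threshold are made up for the sake of the example.
import numpy as np

# Strain readings from gauges on the bridge supports (micro-strain).
measured = np.array([112.0, 98.5, 250.3, 101.2])

# What the structural model predicts for the current load and temperature.
predicted = np.array([110.0, 100.0, 105.0, 102.0])

# Flag any support whose reading deviates too far from the model.
THRESHOLD = 20.0  # acceptable deviation, same units as the readings
residuals = np.abs(measured - predicted)
for i, deviation in enumerate(residuals):
    status = "CHECK THIS SUPPORT" if deviation > THRESHOLD else "ok"
    print(f"support {i}: deviation {deviation:.1f} -> {status}")
```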

There are many more such examples, but I think that one thing is clear – almost none of these actually relate to or use the capability of AI to generate mountains of meaningless filler or propaganda content. With AI, we have the potential to create something incredible for humanity and continue our mission of liberating the world from meaningless labor. I only wish for more and more people to find out how they can do this for themselves, the people they love, and their community, so that we can build a better world to live in while allowing people to actually do what they want in life.

I could’ve used AI for this blog, for example, or to make a SaaS app that vaguely “solves” a “problem” invented by another SaaS app. I could’ve used AI to write my articles, write my exams, write my resume, write my CV – to give in to the slop machine and generate more and more trash. As if it weren’t enough to fill up the oceans, rivers, and valleys with trash, we had to build digital oceans, rivers, and valleys and fill them with digital trash. I almost contributed myself, back when I was deep under capitalist propaganda and tied my self-worth to academic, financial, and social success. However, I didn’t give in, and I’m glad I didn’t. I’m proud to say that this blog is all written by me, my GitHub is all designed by me, and my YouTube channel is all made by me. Looking back at all of them, I’m proud of everything I’ve written and thought of, even if the word choice isn’t always the best, even if the text may be too small, even if the design may not be perfect – because I’m not perfect. Perfect is boring.

The more the words,
    the less the meaning,
    and how does that profit anyone?

Ecclesiastes 6:11, NIV
