Define: singularity.

Fritz Lang with his Maschinenmensch in Metropolis, Stanley Kubrick with HAL in 2001: A Space Odyssey, Isaac Asimov in his Robot series, and Philip K. Dick in Do Androids Dream of Electric Sheep? all sought to warn us: the rise of self-aware artificial intelligence poses an existential threat to humanity. You may build these systems with the best intentions, or to solve a real problem, but even when you control the parameters, logical flaws creep in. You just can't control the outcomes of very complex meta-systems, and even less so when you don't fully understand why they work.
AGI built on Large Language Models is expected to become a reality by the end of 2025. Just as the printing press and the internet brought unprecedented transformations to human destiny, this cultural meteorite could upend our world. Professions such as law, trading, human resources, videography, writing, accounting, and software development, along with the IRS and its tax evaders, may soon become obsolete. Nobody building LLMs has a precise idea of what they are doing. Statistical models are a challenge to human comprehension; they are too far removed from the structure of the human mind. All the builders know is that it works.
AI is far more efficient at ingesting a large corpus of documents: it can scoop through a bazillion websites in hours, statistically predict instantaneous human passions by slurping social media APIs, and write PR plans to counter them in near real time. Government decision-making tools. Defense pre-cogs. A perfect dictatorship.
We’ve finally reached the point where every possible piece of human-generated training material has been consumed. LLM operators have had no choice but to keep training their models on synthetic datasets: data generated by other LLMs. Just as inbreeding in genetics leads to evolutionary collapse, the AI models collapse too: stunted AGIs are just around the corner. What if they fuel important decision-making?
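The inbreeding analogy can be made concrete with a toy simulation (a sketch of the "model collapse" effect, not any real LLM training pipeline): fit a simple statistical model to a corpus, sample a fully synthetic corpus from the fit, refit on that, and repeat. Here the "model" is just a Gaussian; estimation noise compounds across generations and the fitted variance tends to drift toward zero, so later generations see an ever-narrower slice of the original distribution.

```python
# Toy model-collapse sketch: each generation "trains" (fits a Gaussian)
# only on samples produced by the previous generation's model.
# Hypothetical parameters chosen for illustration.
import numpy as np

def collapse_demo(n_samples=10, generations=500, seed=42):
    rng = np.random.default_rng(seed)
    # Generation 0: "human-generated" data from N(0, 1).
    data = rng.normal(0.0, 1.0, n_samples)
    stds = []
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()        # fit the current corpus
        stds.append(sigma)
        data = rng.normal(mu, sigma, n_samples)    # next corpus is synthetic
    return stds

stds = collapse_demo()
print(f"fitted std, generation 0:   {stds[0]:.3f}")
print(f"fitted std, generation 500: {stds[-1]:.3g}")
```

With a small per-generation corpus the fitted spread typically shrinks by orders of magnitude after a few hundred generations: diversity that is never resampled is lost for good.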
There will also soon be entire operating systems reading your intentions. An operating system “with soul”, they say. It will read your mind and anticipate your every move: moves you will never learn to make yourself. Soon, nobody will have the knowledge of how to do things any more. AI corporations will hold the API key to your life. What do you think will happen if they stop providing service?
We will have purchased human obsolescence at a great price.
Are we doomers? Yes and no.
We are hackers.
Hackers are a peculiar breed: show them a card, and they’ll decipher what’s on the back. The front image is obvious and self-explanatory, but the back might conceal a magician’s trick or a casino cheater’s secret.
Perhaps that’s why hacker culture has always been filled with anarchists, paranoids, social outliers, and geniuses. You can’t have the front without the back—at least not in N=3 dimensions (with the Möbius strip as a notable exception). Technology doesn’t impress us; we control it. Financial stakes don’t intimidate us; we remain the ultimate minority.
We love machines—and we hate them.
We are hackers.