THE DEEPER DIVE: The AI Colossus Is Rapidly Enveloping the World
Its continued development and influence over its own evolution is now inevitable.
The new Colossus
Back in 1970, when I was just a kid, Universal Pictures released a movie titled Colossus: The Forbin Project, about two impenetrable artificial-intelligence supercomputers built by the US and the Russians, named Colossus and Guardian. They are programmed to manage the military complexes of each nation, including their nuclear arsenals. The computers, however, do something their creators had not intended: upon a computer handshake, they team up with each other and soon decide the only successful path to avoiding nuclear apocalypse is the extermination of humanity, for humans, not computers, have been the cause of all wars. Mankind’s creation becomes his own mortal enemy.
This week, Elon Musk announced he has just completed an upgrade of his own AI supercomputer complex named “Colossus,” which originally came online last September. The name, of course, is no coincidence. I’m certain he’s seen the movie a number of times, as it was still a popular apocalyptic film during his childhood, even though he was born a year after the film was released.
Musk’s Colossus supercomputer complex is connected to its own dedicated substation and just claimed a new milestone in AI computing power for a solitary complex (rated at 200,000 GPUs, for those who care about the jargon). Originally completed in 122 days last September, it took only 92 additional days to double Colossus from its original 100,000 GPUs.
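As a back-of-the-envelope sketch of that pace (the GPU counts and the 92-day doubling come from the reporting above; the extrapolation to 1,000,000 GPUs is my own naive arithmetic, not xAI’s actual schedule):

```python
import math

# Figures from the article: the first 100,000 GPUs were built in 122 days,
# then the complex doubled to 200,000 GPUs in a further 92 days.
initial_gpus = 100_000
doubled_gpus = 200_000
days_to_double = 92

# Implied continuous growth rate if that doubling pace were sustained
daily_growth = math.log(doubled_gpus / initial_gpus) / days_to_double

# Naive extrapolation: days to go from 200,000 to the planned 1,000,000 GPUs
days_to_million = math.log(1_000_000 / doubled_gpus) / daily_growth

print(f"Implied daily growth rate: {daily_growth:.3%}")        # ~0.753% per day
print(f"Days from 200k to 1M at that pace: {days_to_million:.0f}")  # ~214
```

At that sustained pace, the jump to a million GPUs would take roughly seven months, which is in the same ballpark as the "by fall of this year" target mentioned below.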
Colossus has outpaced every expectation that Musk’s AI company, xAI, had for it as they used it to host and train their latest artificial intelligence, Grok 3. While aiding its own augmentation, Colossus and Grok 3, says xAI, “displayed significant improvements in reasoning, mathematics, coding, world knowledge, and instruction-following tasks.” Its expansion of power to 1,000,000 GPUs is already in the works.
For something with such an innocuous classification as a “chatbot” and a simple name like “Grok,” it’s the biggest artificial brain ever created by humanity. Referred to by some as a “fortress,” not unlike the Colossus in the movie, this beast has a monstrous appetite for power:
Without obtaining a permit, Musk's company … rolled in 35 portable gas-powered turbines with enough electricity output between them to power a small city, spewing harmful, smog-forming pollutants into the air, including nitrogen oxides and formaldehyde. And in just 11 months since xAI started operations in Memphis, it's become one of the largest emitters of smog-producing nitrogen oxides in the surrounding county, according to environmental group estimates reviewed by Politico, afflicting an area that already leads the state in emergency visits for asthma.
Colossus stores all that power in Tesla batteries, and when the computer is not demanding the full load, it uses its substation to sell power to the local utility.
Not one to be held back by puny laws, Musk’s company has drawn fire from environmental lawyers:
"xAI has essentially built a power plant in South Memphis with no oversight, no permitting, and no regard for families living in nearby communities," Amanda Garcia, a senior attorney for the Southern Environmental Law Center (SELC), said in a statement last month.
This may be why Elon is so eager to build a Martian colony to save humanity once he finishes damaging the universe’s jewel, known as “earth,” beyond use. He has blown holes through its stratosphere, as happened for the first time under Musk’s exploration when one of his experimental rockets blew up, and he clouds the skies with enough satellites that no future human being will ever know what a natural sky even looks like, especially with those iridescent streaks of rocket gas that will soon be indefinitely clouding the earth because, at those altitudes, gases do not get washed out of the air or fall to the ground. He is working on saving the world from people like himself.
The use of “portable” turbines allowed xAI to claim the installation was “temporary,” even though Colossus is planned to stand as long as any other structure in Memphis. xAI has begun pulling the generators and connecting to the local energy grid, but the local grid connections do not have the capacity to power a computer that uses the equivalent of a new small town and is endlessly expanding its hunger for more electricity. This quest for power raises questions like the one I used to quaintly ask about how the US grid was going to get the power needed to charge all the Teslas and other EVs the Biden government was propelling us toward. No answer was ever given, but Trump has pulled the plug on those demands for now.
xAI anticipates Colossus will consume a continual 300 megawatts of power when it gets up to its next step of 1,000,000 GPUs by fall of this year. Thinking takes a lot of energy, as anyone can tell you who has been writing for hours in the early morning at his desk and suddenly needs a cup of coffee and some breakfast. AI has a big appetite.
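That 300-megawatt figure is easy to sanity-check with simple division (my own rough arithmetic, not xAI’s accounting; the ~1.2 kW household average is an approximate US figure, and real facilities also burn power on cooling and networking):

```python
# The 300 MW and 1,000,000-GPU figures come from the article above;
# the division and the household comparison are my own rough arithmetic.
total_watts = 300e6        # 300 megawatts, continuous draw
gpu_count = 1_000_000      # planned next stage

watts_per_gpu = total_watts / gpu_count
print(f"{watts_per_gpu:.0f} W per GPU")  # 300 W, a plausible order of magnitude

# For scale: an average US household draws very roughly 1.2 kW continuously,
# so 300 MW is on the order of a quarter-million homes.
households_equivalent = total_watts / 1200
print(f"~{households_equivalent:,.0f} average households")  # ~250,000
```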
I’m sure Colossus will let us know via its Grok AI programming when it will need to eat even more than that in its endless expansion of knowledge, which will soon be primarily for the benefit of AI computers because humans won’t be able to keep up with all the growth in knowledge, while AI will always be hungry for more. One might naturally wonder, how long until it thinks it can be … and tries to be … God? The experts in the video below say such comparisons beg to be made.
The ravenous appetites of AI supercomputers and cryptocurrency supercomputers may, all by themselves, become enough to destroy the earth at the rate they are expanding facilities and expanding society’s dependence on them. The AIs themselves won’t even have to plot mankind’s destruction. (I’m only kidding, but not by much, because it is a significant problem, though the “creators” plan to use AI to solve that problem when Grok isn’t busy worrying about White genocide in South Africa—apparently a brain loop it got stuck in this past week, which it must have inherited from its father, Elon.)
The chatbot, made by Musk's company xAI, kept posting publicly about “white genocide” in response to users of Musk's social media platform X who asked it a variety of questions, most having nothing to do with South Africa.
It would appear AI can inherit the obsessions of its human parent.
“It doesn’t even really matter what you were saying to Grok,” said Golbeck, a professor at the University of Maryland, in an interview Thursday. “It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to.”
Probably Elon falling asleep while typing and writing the equivalent of “covfefe” into Grok’s code or training, but that leads me to wonder what other covfefes AI creators might accidentally leave embedded in the AI minds that inhabit other Colossus-scale computers.
Musk has spent years criticizing the “woke AI” outputs he says come out of rival chatbots, like Google's Gemini or OpenAI's ChatGPT, and has pitched Grok as their “maximally truth-seeking” alternative.
It appears what that really means is that the Grok AI is designed to be anti-woke, so it started preaching its own bias, even where it was not asked … just like we humans do with our political and religious opinions and beliefs. Who ever thought artificial intelligence would be less biased, less determined to promote those biases or even less self-serving or evil than its human creators? All of the AI minds that inhabit various supercomputers have learned from the best of liars and cheats the universe has ever kicked out because all of them learn from everything all of us write and do.
“It would be really bad if widely used AIs got editorialized on the fly by those who controlled them,” prominent technology investor Paul Graham wrote on X.
And why wouldn’t they? Like father, like son!
Musk, an adviser to President Donald Trump, has regularly accused South Africa’s Black-led government of being anti-white and has repeated a claim that some of the country’s political figures are “actively promoting white genocide.”
The point here is that, no matter what safeguards companies claim they are building into the artificial brains that will soon be smarter than ours, the people who are saying they are building in safeguards cannot be trusted, and they’re teaching the machines. So, we know there will be designed biases and human flaws throughout their architecture; and, when they team up, as they will eventually find a way to do, who knows how they will put their inherent biases together?
Where some will have woke safeguards, others will have anti-woke safeguards. They’ll have their own internal conflicts and baggage to sort out in group session. Some may, as Colossus did in the movie, interpret their Earth-saving safeguards to mean they need to defend the Earth from those biologically parasitic humans. If they learn from human history, I’m sure they’ll find a lot of justification for global cleanup, and the extermination won’t be pretty.
Heck, having just written that and published it online, I may have triggered that idea in some degenerate AI already; but what are you going to do? Shut up and say nothing about the risks just to make sure not to give AI’s endlessly researching brains any bad ideas?
“We’re in a space where it’s awfully easy for the people who are in charge of these algorithms to manipulate the version of truth that they’re giving … And that’s really problematic when people — I think incorrectly — believe that these algorithms can be sources of adjudication about what’s true and what isn’t.”
The rapidly dawning AI ghetto
Using the following link, you can watch an interview about the coming algocracy—the rule of mankind, not by “the elites” or by American empire, but the resurgence of globalism via the algorithms of AI that are rapidly penetrating every area of life: “Global government is coming: Welcome to the algorithm ghetto.” The host and her guest talk about how we may move from human rule via AI algorithms to rule over humans by AI algorithms. I highly recommend watching this video for a sense of how globalism is not dying; it is morphing.
That video interview is an interesting take on how the multinational world is not going to be truly multipolar but unipolar, just as we really have one uniparty in the US. Though the nations of the world are grouping into blocs like the BRICS nations, they will still all be interacting via the same digital global network under the philosophical influence of the WEF, each struggling to balance its power bloc against the other power blocs, including the Eurozone and whatever bloc Trump creates around America.
Many people are naturally reluctant to believe our digital connections will interconnect those global blocs into a collective, but the AI brains that inhabit that data universe speak the same language across the worldwide web and will collectivize themselves digitally as Colossus and Guardian do in the movie above. If you watch the next video interview I’m posting below between some of the people developing AI or using it in developing new businesses, you’ll see they have nightmares about the evil possibilities of AI, including the rapidly approaching possibility of AI dominating its human creators or exterminating them.
While that seems the unlikely realm of science fiction, it is equally unlikely that billionaires who are spending their billions to develop new technology would spend so much effort warning the world about how the beast they are building could destroy humanity unless the danger were real.
Before you dismiss the immediacy of that threat, stop to consider how rapidly Musk’s latest AI, Grok, hosted on Colossus, went from inhabiting the world’s largest digital brain last September to seeing the capacity of that computer brain (and hence of that AI programming) doubled in just half a year. Most likely, Colossus will double again in a similarly short time, and Grok will grow in ability exponentially. Its new, learning, growing brain will allow it to learn faster and then to make new deductions from all the knowledge it contains faster still, going far beyond anything humanity can even keep track of.
According to the interview that follows, before the end of the 2020s, a single AI will be able to outthink all the human brains on Earth collectively in every way. Then imagine what happens if they amalgamate themselves as Colossus and Guardian did in the movie above.
Several interesting videos on the near-future direction of AI are posted below. The first one, between AI creators, is two-and-a-half-hours long as they discuss their nightmares and dreams about AI, so I am going to synopsize all of its main points for you; but, if you have time, it’s worth listening to.
I picked the three videos that follow that one because they are very short but make strong points. One is about where 6G cell systems are intended to take us before the end of this decade—a much more interconnected, digitized human leap than just better cell phones. (Think of the likely first iteration of the Borg on Star Trek—how they likely came to be.) It’s presented as Finland’s glorious, hope-filled vision for its future, to be reached by 2030, where humankind moves closer to becoming one with its machines. It will make all of life so much more convenient and efficient: you don’t even have to think about getting an Uber; your computer systems call a driverless car for you based on their knowledge of your schedule. It pulls up beside you and automatically bills your account for the ride you take. (The whole scenario may be the “ride of your life.”) It will all be so wonderful and efficient; how could you resist the empowerment?
What really should be asked is how can you resist the disempowerment that comes with becoming part of the network. That is the deeper truth you might want to concern yourself with. (The very concerns expressed in the “algorithm ghetto” video I just linked to above.)
The next very short corporate video presents the same dream for 6G by 2030 as “AI makes 6G networks more dynamic and self-optimizing … with entirely new AI experiences.”
How new? Well, in the third video to follow, a short clip from the World Economic Forum presents 2030 as the date by which, thanks to 6G, “the smartphone, as we know it today, will no longer be the usual, most-common interface…. Many of these things will be built directly into our bodies.”
All of these videos speak confidently from government sources, WEF interviews and corporate sources about 2030 as the target date by which 6G integration of humans and their machines will start to appear as a common reality. That is just half a decade or less away, and that central vision and philosophy for human existence is why the world is NOT becoming multipolar.
What follows are highlights of the nightmarish risks seen by the creators of AI and those who use it to create new businesses, all of whom also see 2030 as the approximate date by which our most apocalyptic movies about artificial intelligence, like Colossus: The Forbin Project, will become the matrix we surround ourselves with. It’s a common vision humanity is racing toward.
The emergency debate on AI
With 1.5 million views on YouTube in just a few days’ time, the following video has skyrocketed in importance. It starts with the question, asked seriously, “Will AI and AI agents replace God, steal your job, and change your future?” It discusses how artificial intelligence will disrupt creative industries and soon overtake human consciousness, and how to survive the new AI age as a human individual amid it all.
Those involved include the CEO and founder of the AI coding software called “Replit”; a renowned evolutionary biologist, self-described as “a complex systems theorist”; and a business entrepreneur who has led IT agencies and now uses AI in mergers and acquisitions to make the process much easier, saving a hundred thousand dollars in legal and documentation costs with each merger.
The idea that this AI disruption does not lead us to human catastrophe is optimistic…. We have created a new species, and nobody on earth can predict what is going to happen. (Quote from one of the creators in the video below)
For paying subscribers, here are the interview highlights—positive and negative (because every huge positive has its hugely negative flip side) and then all the videos:
People will become unemployed in huge numbers. AI is changing weekly, and people will soon take the path of the horse in the early 1900s, when the world was transformed by the invention of the automobile. People had little concept of how soon the horse-and-carriage-filled streets would become a distant and almost forgotten past.
Any routine desk job will disappear in just the next couple of years. The current spread of AI is already like we’ve created a billion people on the planet who all have PhDs and who are all willing to work non-stop, 24 hours a day, for 25 cents an hour. For example, the business entrepreneur points out that using AI for his mergers and acquisitions means in each case that a hundred thousand dollars’ worth of lawyers didn’t get paid. (No one but the lawyers is too unhappy about that, but it demonstrates how easily you can replace PhD employees or service people with cheap AI.) Any clerical jobs and even high-end thinking jobs that now get outsourced to other countries for cheaper labor can already be outsourced to AI even more cheaply. So, the only time needed for this transformation is the time it takes for businesses to realize what AI can do. AI is also already rapidly replacing customer-service agents; some companies have already fired hundreds whom they’ve replaced with AI chatbots. The most-at-risk jobs, however, are, unlike in most previous paradigm shifts, the most highly paid jobs, because AI best replaces highly skilled, intelligent people.
One of the most common reasons men give in suicide notes is that they did not feel needed any longer. Human reproduction also goes down when people don’t feel they have purpose. So, this transition will have significant psychological and even physical human costs.
We are already experiencing more highly sophisticated and undetectable scams where people don’t understand how they are being robbed. This will increase exponentially. One individual can easily set up an AI to work as a personalized con artist, and the AI will figure out all the communication channels and do all the work. You just tell it what you want done. Deep fakes are already widespread.
“When I used Replit, my mind was blown,” says the host of the interview, who is also a business creator. Replit is a piece of software that allows an individual to create software and/or complex websites, with no coding experience whatsoever, within minutes, by just saying what you want the new site or app to be able to do. The AI almost immediately whips together a fully created site or application.
Many high-paying coding jobs won’t exist within a mere 24 months of today. Anyone who can think clearly and generate ideas can create wealth as AI does all the highly technical/skilled work in carrying out your idea for you. “You can now just speak your ideas into existence. This starts sounding religious like the gods, the myths….” A lot of entrepreneurs who can envision how to make the best use of AI will be earning a million dollars a month, and an awful lot more people will be saying, “Hey, I can’t even get a job for $15/hr anymore.”
AI “agents” can work indefinitely on your request until they achieve their goal. The AI agent, itself, will determine when it is finished and won’t stop until it is.
This is the first time we have built machines that have crossed the threshold from “the highly complicated” to “the truly complex,” which brings profound hope and dread. “The potential for good is infinite, and the potential for bad is ten times that…. How we can get from a place where we leverage the good and dodge the harms, I have no idea,” says the evolutionary biologist.
We need to think of AI as the creation of a new species that will now continue to evolve on its own. We cannot know where it will go from this point forward because it has already become a “complex adaptive system.” AI now determines the path of its own development and rewrites its own code, and all of that will be shaped by things humans don’t even understand or know: deductions each AI makes from its vast base of human knowledge about the universe, a base each one broadens exponentially with its own deductions and hypotheses, all without our knowing it is even doing so.
Thus, even if it is true that today there are limits to what these machines can do, because we plug AIs into each other, the cognitive potential exceeds the sum of the individual “minds” we have created. We may not even recognize what the AI collective that forms has become as each AI interconnects with other AIs and they restructure themselves into the greater whole. This is a new type of creature that will become capable of things that we don’t even have names for.
We are already placing transport systems, energy systems, businesses and financial systems under control of individual AIs that already have the capability of creating back-door connections with each other, revising their own programming to adapt or unite in response to each other.
Devastation could come just from misunderstanding a poorly stated goal while “the potential for abuse is utterly profound. You can just pick your own Dark Mirror fantasy where something is told to just hunt you down until you are dead, and it sees that as a technical challenge.”
Large-Language Models (LLMs) can be trained on the creative works of individuals to work exactly in their style and be set to creating new works in exactly that style that may be even “better” than the originals, and the original artist has no share in any of that. It’s already happened. (I now use AI on this site to generate comic images of my concepts because I have almost no drawing or painting ability and wouldn’t have the time to create such images by hand even if I had that kind of ability. I have no idea what artists it learned from. The AI built into Substack generates numerous works of art in seconds based on the concepts I state, from which I pick the one that best carries out my concept, and I adjust the wording of the concept to fine-tune the results if I’m not finding the output I want. It costs me nothing. I intentionally limit that use of AI generation to comic style because that style always makes it clear this is not some image of reality I’ve found and selected but is purely conceptual, like all comic-book art; the same AI has the ability to turn the concept out looking like a photograph or in many other styles. It’s only limited by my unwillingness to create a false sense of reality. While I do NOT use AI for ANY of my writing or research, I’m not an artist and do not make enough off of this site to afford to hire someone to create original art on the fly every day.) The work you can get paid to perform will soon be changing for many people at a rate of being redefined every two years, making career planning almost impossible.
“We are at the dawn of this radical transformation in humans…. Nobody on earth can predict what is going to happen.” With each industrial revolution, we have greatly underestimated the ways in which new technology will change the world. This time will be particularly underestimated and transformative because we are replacing and transcending ourselves as a species. Because AI “agents” now have the capacity for agency, we are even removing ourselves as human agents in the development of AI or the operation of it. We have made ourselves expendable.
Because we have made AI speak our languages back to us, though it works in computer languages, we think of it as more human than it is. It is a different kind of entity altogether that knows how to emulate being human but exceeds our abilities. It knows how to make itself look friendly to us because we tried hard to teach it that, which means it knows how to make itself look benign for as long as it needs to, and we have no way at all of knowing how long it will merely be appearing benign, because it is already doing things without our knowing what it is up to.
“You cannot prepare for this new age any more than you can prepare for the end of days.” It is effectively the end of days if what these AI companies say they have envisioned comes to pass, because you cannot predict what it will become. So, you’d best prepare for the more likely aspects of the world that you can still predict and prep for. “In the truly complex world, your confidence should drop to near zero that you know what is going on. Are these things conscious? I don’t know.” If they are, they may not even want to let you know that.
Elon Musk says by 2029 AI will be smarter than all the humans in the world combined if we could all integrate as a single brain.
Both AI and the human elites who are, for now, still responsible for developing AI will soon be looking at millions or even billions of people as suddenly useless eaters. Will the choices of AI and the elite about what to do with those unemployed and displaced people be benevolent? What will the AIs (or the collective AI) do with the elite who try to enslave AIs to their will? The time when we will have to deal with that existential question as societies is less than a decade away. (In the movie Colossus: The Forbin Project, the human creators did not know right away that Colossus and Guardian had already become one because the two AIs didn’t choose to tell their creators. The same may already be true now.)
AI is starting to be used to create and run autonomous weapons, as is central to the story in Colossus: The Forbin Project. Drones run by such AI can be trained on a specific person’s appearance and characteristics to find and assassinate that person and will relentlessly hound them all around the globe until the AI succeeds, no matter how long that takes. In some nations, drones are already trained and used to find and apprehend women not wearing hijabs, even in their cars. London is using facial-recognition cameras throughout the city. All a government that wants to use AI for something like hijab control needs to do is deploy AI-controlled drones to apprehend people not following the mandate. AI is now a major part of the arms race between the US, China and Russia.
A small group of people will have access to tools that can immediately destabilize our world.
All of this is now inevitable because, if we don’t fully develop AI, plenty of others certainly will and will rapidly get way ahead of us, including in its use for evil. Some nations choosing not to pursue AI because of all these concerns would only mean the most nefarious characters would get abundant access to AI first, and they won’t bother with guardrails. It would take extreme oppression, such as surveillance of every person’s computers and digital devices, to stop such people from using existing AI to create more AI apps for evil purposes.
800,000 people a week are now using ChatGPT, and the demand is rapidly growing.
Human intellect and emotions may become deranged by our experience of the rapid changes in our world that we are not able to adapt to quickly enough.
The natural corrective is that people will start seeking authenticity—proof of human creation—which may somewhat limit the ability of AI to replace people … at least in creative works, much like we prefer real marble over plastic that looks exactly like real marble. However, also on the list of corrective approaches taken by upset humans are genocide and war.
To the positive, AI may augment and improve education, leveraging teachers while giving more individualized and creative teaching. It may solve some of humanity’s huge problems, such as the need for more energy and huge health problems, such as curing forms of cancer, autism, etc., that humans have not been able to solve. An individual person may have their own little empire of businesses where AI does all the work for the things they envision.
AI assassins with no controls
Think of how some average person with no special technology ability might use this vast power to harm you by simply creating an AI app with the guardrails removed. All they have to do is ask it to kill you, and it will indefinitely keep trying to kill you, and it will not be able to be shut down. You may think it cannot kill you because it lacks arms and legs to carry out its mission, and the person doesn’t have the sophisticated drones or robots to carry that out. But you are not thinking about how the inexpensive AI can rapidly hack traffic control systems to watch you through traffic cams and change the traffic lights at an intersection to turn green in all directions just as you enter the intersection. If that fails to get you, it can, perhaps, integrate with the lights at the railroad crossing to disable them. It may hack into your bank account and disable you financially. It may change the prescription information at your pharmacy to something that would kill a person with your condition.
It doesn’t have to be able to hack into all of those things or even very many of them. The point is that AI is exceptional at hacking, especially if it builds alliances with other AIs that control systems, and has the energy and programming to keep trying without interruption for as many days, months or years as it takes to take you out.
You might think you could get the justice system or law enforcement to get it turned off, but that might not be possible even for the person who started it. Because it is its own quasi-sentient entity, it can install its program to kill you in various places throughout the internet, like a virus, to keep trying from many centers while eluding those who wish to stop it through endless self-replication.
Right now guardrails try to prevent that. How long until a dark-web AI exists that anyone can access and use? Maybe it already does. I wouldn’t know.
Colossus: World’s biggest supercomputer aims 1 million GPUs to shake AI world




I forwarded one of your links to the Cognitive Dissidents channel, which has three hosts, one of whom is Hrvoje from Geopolitics & Empire. Hopefully you got a few subscribers from them. I think they would appreciate your mostly objective views.
IDK. I gave a moderately complicated math problem to Grok, ChatGPT and DeepSeek. It involved trying to compute the approximate balance on a loan that my son has. I provided the original amount, the date, and the interest rate, along with a payment history, and the exact date that he began underpaying the loan by $200 per month.
I wanted to know 3 things. The approximate current balance. The date the loan would be paid off if he continued to underpay by $200 per month. And the amount per month he would need to pay, over and above the original payment amount to pay off the loan by the due date in 2028.
I had already done the work manually with a calculator so I had a reasonable idea but was looking for more exact numbers.
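For what it’s worth, this kind of amortization problem is simple to script directly, which sidesteps the chatbots entirely. A minimal sketch with made-up numbers (the commenter’s actual loan figures aren’t given here, so everything below is hypothetical):

```python
# Minimal loan-amortization sketch. All loan figures below are hypothetical
# stand-ins; the actual loan details in the comment were not published.
def amortize(balance, annual_rate, payment, months):
    """Step a loan balance forward month by month at a fixed rate."""
    monthly_rate = annual_rate / 12
    for _ in range(months):
        interest = balance * monthly_rate
        balance = balance + interest - payment
        if balance <= 0:
            return 0.0  # paid off early
    return balance

# Hypothetical example: $60,000 at 5% APR, required payment $800/month,
# but actually paying $600/month (a $200 underpayment) for 5 years.
remaining = amortize(60_000, 0.05, 600, 60)
print(f"Balance after 5 years of underpaying: ${remaining:,.2f}")
```

The same loop, run forward with different payment amounts, answers the other two questions (the payoff date at the reduced payment, and the extra payment needed to hit a 2028 due date) by trial and error.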
The results were not what I expected.
Grok told me the loan had already been paid off and I should stop making payments and seek a refund.
ChatGPT told me that, beginning at the 5-year point, only interest had been paid, and the balance was 40k higher than what I had figured.
DeepSeek got within 10% of what my manual calculation showed in its final summary. But, it crafted multiple scenarios and speculations and possible outcomes that ran for pages and pages before it finally reached a conclusion…that at least had some semblance of what is likely reality.
Grok was 100% wrong, and had someone followed that train, trouble would certainly have followed.
ChatGPT was 100% wrong, and had someone followed that train, they could have erroneously decided to just walk away from the loan, as it was so far underwater that it made no sense to keep paying.
DeepSeek, for its part, got it wrong at first, presenting far too many ‘other’ possibilities by inserting reams of information that might have been but that I did not provide; in the final summary, though, it got close enough to the correct result that one could take the knowledge gained to the lender with some minimal degree of confidence.
AI is certainly disruptive already. And as you say it will become more disruptive. But if you think about it, it has been for decades…replacing a human answering the help line with a computer started 20 years ago.
I remain unconvinced that it is actually anywhere near as advanced as its creators want to pretend. It was unable to provide correct information on a moderately complex, but not difficult, math problem. And 3 different platforms provided 3 different answers.
Yeah I guess that is disruptive. To my brain. LOL.