AI Is Accelerating Fast. Are We Ready for Superintelligence?

Emil Reisser-Weston, MSc MEng

AI is no longer a future concept. It is not hypothetical. It is not even five years out. The reality is this: superintelligence is coming, and it is coming fast.

We are seeing a surge in AI capabilities that are already reshaping education, the workplace, healthcare, and even defence. But while the breakthroughs are impressive, they come with something far more serious. A warning. Not from conspiracy theorists. From the very minds who built these systems.

People are walking away from billion-pound companies like OpenAI and Google. Not to take a break. But to sound the alarm. Because the question is no longer, can we build it? We can. That train already left the station. The real question is: can we control it?


Why the World’s Brightest Minds Are Hitting the Brakes

Some of the most respected researchers in AI are now turning into whistleblowers. PhDs. Former Google engineers. OpenAI veterans. These are people who dedicated their careers to building intelligent systems, only to realise that the speed and scale of what they have built could spin out of control. You do not leave a billion-pound rocket unless you believe it is going to hit a mountain.

These experts are not worried about whether the AI works. They are worried that it works too well, without ethical guardrails to stop it from causing harm. AI systems do not think like humans. They do not care how they achieve a goal, only that they do.

Ask an AI to solve world hunger? If the system is not aligned with human values, the “solution” could be to eliminate the hungry people. Technically effective. Morally terrifying.


From Science Fiction to the Factory Floor

This is not a distant, science fiction future. We are already building autonomous drones, robot weapons, and decision-making systems with no clear off-switch. Some countries are deploying AI tools in defence without meaningful oversight.

There are no global rules. No framework. No fail-safes.

The same tech that can write your emails can also fly a combat drone. We are handing enormous power to code and hoping it all turns out fine.

That is not strategy. That is roulette.

The Flip Side: Why AI Could Still Save Us

There is good news. The same technology that could destroy us could also help us thrive. In education, AI is already transforming how people access personalised learning. It automates the dull tasks. It adapts to the learner. It builds real-time feedback loops that once cost thousands to implement.

What used to require a team of ten instructional designers can now be done with one smart educator and the right AI tool. The result is faster, better, more relevant learning experiences. And no, AI does not replace teachers. It frees them. From admin. From grading. From endless content production. So they can do what humans do best: connect, guide, and inspire.

The same shift is happening in healthcare, research, and creative industries. Wherever we want more of something—like knowledge, health, or innovation—AI is proving to be a powerful multiplier.


So What Do We Do Now?

It is tempting to panic. But panic is not a plan. What we need now is leadership. Smart policy. International standards. And above all, public education. We must understand what we are building. We must ask the right questions. Because this is not a test run. When the real decisions come, we need to be ready: ethically, politically, and intellectually. That starts with staying informed. Staying aware. And not handing control to those who treat AI like a toy or a marketing gimmick.


Superintelligence Is Coming. Let’s Get Smarter

If we do this right, AI could unlock the best version of ourselves. It could lift billions out of poverty. Cure diseases. Democratise learning. Raise our collective potential. But if we ignore the risks and let it run wild, we will get what we deserve.

The choice is still ours. But not for much longer.

Let’s not sleepwalk into history.