As promised, here is a summary and reflection on the ideas in The Coming Wave, the new book by Mustafa Suleyman (co-founder of DeepMind and co-founder of Inflection AI).
What the author does well in the book is show where and how AI is currently being used throughout the world. We all assume AI is being used in creative and innovative ways, and Mustafa uses tons of examples to demonstrate that. But where he truly distinguishes himself is that, because of his own background, he can share with insider knowledge, accuracy, and confidence not only where we have been and where we are, but where this can (and most likely will) go…hence “the coming wave.”
His main point throughout the book is that we have to find ways to contain AI. Not block it, but responsibly and proactively find ways so that we are controlling the technology and not the other way around, or worse…AI controlling itself. The majority of people do not have any idea what AI is, much less how it works. This makes containment that much harder – but also that much more important. We cannot just hand over the reins of our society to a computer robot and cross our fingers.
The book lays out the groundwork for how society, over the years, has made efforts to guide, control, regulate, and contain other technologies. So this idea of containment is not new, but AI is different because AI learns and gets better at a rate far beyond anything we have seen before. Some technologies take years, even decades, to develop and progress, so we can develop reasonable safeguards and sensibly incorporate the technology into society. With AI, we see improvements and breakthroughs literally every week.
Think about automobiles and airlines. Each has safeguards, regulations, and controls to (somewhat) contain them. Even so, containment is not perfect; we still have issues, right? These technologies are not perfect, and our safeguards and containment attempts are not perfect.
Mustafa makes a great point that AI is different from technologies such as automobiles. An automobile is a single industry, while AI is more like electricity, in that it powers most other industries; so as AI improves and changes, it affects all areas of society, not just a single industry. That makes it even more important that we figure out how to contain AI.
What is being done now to address issues around AI? Unfortunately, society as a whole is not moving fast enough when it comes to AI. Blocking it, ignoring it, and not using it does not limit its progression. Regardless of how each country is handling its approach to AI, rest assured that there are other countries, people, groups, and organizations – both good and bad – who are pushing AI to the limits.
But trying to contain a technology that is changing so rapidly is hard. As he points out, predicting at the forefront of discovery is hard, but that is where we are now with AI. It’s a leading-edge, bleeding-edge technology. What guardrails are already set up, and what do we need? Are those once-helpful guardrails still helpful, or are they already outdated? Are we noticing the right things? For example, neural networks were not taken seriously for a long time, so we paid them no attention. Suddenly, a breakthrough happens, and they are huge. Even CRISPR gene editing was not created to do what we actually use it for now; it was designed for something completely different. GPUs made by Nvidia were created for gaming realism, yet now they are central to AI. So even knowing what to try to contain is hard.
Is there any country which has fully embraced AI? China has, and it has declared its aspiration to be the clear leader in the world by 2030. And because of its style of government, with the control, finances, and intelligence, China can very likely make that happen.
Where is the United States in this? We barely have legislation; the people in leadership roles in government barely understand how a cell phone works, much less AI. I don’t know what’s worse: those same people making laws dealing with AI, or NOT making laws. The US government still has tape machines doing backups, while China has a quantum satellite. We are so occupied with middle school-level politics, juicy scandals, and the number of likes that we are not noticing the coming wave. Meanwhile, China is polishing its surfboard.
What can we do right now? For starters, we must have AI literacy for people of all ages. We need to listen to people who understand AI, and ask them to help us figure out what questions we should be asking. Murphy’s Law tells us anything that can happen will happen (got that from Matthew McConaughey in Interstellar), and that means both the good and the bad. This AI wave is coming.
What does runaway technology look like when it’s not regulated, with no safeguards or containment? In the 1940s, when we tested the nuclear bomb, we had no idea what to expect. One of the possibilities was that a small black hole would form and suck our world into it. Yet we still pushed the button!!?? And only after tens of thousands of people were killed by the two bombs that were dropped did the world say, “this is not ok.” Now there are worldwide efforts to contain nuclear weapons. Not perfect by any means, but at least we are trying. Countries and people are talking.
Because of the incredible potential power of AI, Mustafa notes that we as a society are facing the ultimate challenge for Homo sapiens and Homo technologicus. How do we approach this challenge?
None of us like government regulating our lives. There are many instances in history where that did not go well. But Mustafa says we do need government; we can do this differently than in the past, but we have to do something. There are some ‘good’ examples of government containment: we have stiff regulations worldwide in the automotive industry, yet millions of people die every year. This has become an acceptable consequence considering the benefit. This is the norm.
What’s missing there? Common sense. Regulation, guardrails, rules, laws, and protocols do not work if they are not based on common sense. We need “…norms, structures of ownership, unwritten codes of compliance and honesty, arbitration procedures, contract enforcement, oversight mechanisms.” That must be part of society, and more importantly, people must buy into it. It involves governments, public and private businesses, industry leaders, scientists, and people coming together.
How we define containment must also change. In the first part of the book, Mustafa defines containment as “…a foundation for controlling and gathering technology, spanning technical, cultural, and regulatory aspects.” But toward the end, he redefines it “…more as a set of guardrails, a way of keeping humanity in the driver’s seat when a technology risks causing more harm than good.”
Woah. Is AI that big of a deal? Yes.
Technology is not just a way to store our selfies, buy something online, book a taxi, and play video games. Technology “represents access to the world’s accumulated culture and wisdom. It’s not a niche; it is a hyper-object dominating human existence.”
It’s not gonna be easy. Look at another global issue: climate change. It’s right in our face, yet we have not really done much as a global society. Yes, we have attempted some guardrails and regulations, but it is also political, profitable, and hard to enforce. There are still people who do not accept that it is even a thing. At least we are trying, but containment is far off.
So, a first step for us to begin containment for AI is simply to recognize and understand it. Get people together to have real discussions. Great news, this is already happening! The major players in AI worldwide already gather to discuss, share, challenge, debate, and learn from each other. Just having these cross-cultural relationships means we can talk to each other, which is vital. We saw in the movie Arrival what happens when countries close off their communication with each other.
Yes, people disagree and have different norms; countries use different economic models and have different approaches to the unknown. Regardless of those differences, the wave is coming, whether we want it to or not. At this point, all we can really do is buy ourselves some time. Time to understand, time to attempt guardrails and set norms, time to develop defenses, time to build alliances, time to partner and harmonize to determine the right decisions.
Could vs. Should. We have to make thoughtful decisions.
In Jurassic Park, Malcolm says the scientists “were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
This book is thought-provoking and highly readable for the layperson, but also technical enough that those in the trenches can find value. It’s the beginning of a discussion that MUST HAPPEN.
I suggest that the coming wave of AI is already here.