Overcoming Existential Risks – HackerNoon.com
The World Needs More “Real” Superheroes…
Whether you see the glass as half full or half empty, I think we can both agree that sometimes it makes sense to just fill the damn cup. And that’s the focus of today’s article, part two in our series on Existential Risk. If you missed part one, check it out before continuing.
And now that you’re sufficiently depressed, let’s take that discomfort and channel it towards something productive, solutions.
A quick recap, the seven deadly apocalyptic horsemen:
- Well-intentioned genetic engineering
- Bioterrorism
- Climate Change
- Nuclear War
- Superintelligent AI
- Lethal Autonomous Weapons
- Space-Related Catastrophe
And now the all-important question: how do we tackle these seemingly insurmountable issues?
1. Well-intentioned genetic engineering
If a billionaire or a bold company trying to cure cancer could accidentally wipe out most — or all — of humanity, how do we prepare for the eventual altruism that could kill us all? This will depend on the proliferation and cost of the technology, but it seems safe to assume genetic engineering and biotech will become increasingly democratized: cheaper, easier, faster…
This has certain advantages: individuals everywhere have the ability to counteract wrongs and keep an eye on the status quo. And yet, like a mass murderer in a Texas town filled with .45s, the idiot with the gun always gets off a few shots before someone drops him with a well-placed double tap… Biotech bullets spread much faster.
A moratorium on genetic engineering seems equally unlikely; the promises are just too great. So, what do we do? Are these technologies safer hidden away in some lab, or open-sourced for all to see?
Then again, look at the public outcry against GMOs, primarily due to misinformation. Is involving the public in something as important as our collective defense against the unintended consequences of gene-editing really a good idea? Questionable, at best.
But most countries and many companies will want to capitalize on the most transformative industry of the coming century. So nuclear-style non-proliferation isn’t going to work — this club is too easy and too important to join.
An initial framework:
Step 1) An international think tank: Get the smartest biotech (and related) minds together and incentivize them to brainstorm unintended consequences and possible synthetic bio risks, creating a risk index of all forms of genetic and biological research, to be used later in randomized monitoring.
Step 2) An international standard MUST be implemented — with the threat of retaliation for any who violate its terms — under which certain controversial areas, such as chemical/biological/viral weapons and gene drives, are banned. And unlike the social media free-for-all, companies and countries must be held MASSIVELY responsible for the consequences of their actions.
Step 3) Creation of a biotech-specific investigative journalism organization dedicated to uncovering problems within the industry and bringing the darkness to light.
NOTE: This is anything but a complete plan, but as the scariest and most probable existential risk, it’s a tough nut to crack.
2. Bioterrorism
At least terrorism is more straightforward. But it also suffers from many of the same issues listed above. There are a few saving graces, chief among them reduced resources and predictability.
Even if the Saudi Royal Family — the largest funder of terrorism worldwide — dedicated their entire estimated $1.4T net worth to the pursuit of bioterrorism, that’s only 0.65% of the $215T wealth of the world. And there are a lot fewer folks to worry about and fewer ways for things to go wrong.
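The proportion above is easy to sanity-check with back-of-the-envelope arithmetic, using the figures cited in the paragraph:

```python
# Back-of-the-envelope check of the share cited above (figures in trillions USD).
bioterror_budget = 1.4   # estimated net worth hypothetically dedicated to bioterrorism
world_wealth = 215.0     # estimated total global wealth

share = bioterror_budget / world_wealth
print(f"{share:.2%}")  # → 0.65%
```

Small in relative terms, though of course absolute dollars are what buy lab equipment.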
Unfortunately, even metal detectors in schools and sports stadiums have proven inadequate at stopping attacks. Would bioweapon detectors be any more effective?
A few ideas:
Step 1) SSRIs have been implicated in as many as 86% of school shootings (per an analysis by Dr. Bill Walsh), and given that no testing is done prior to prescription, this seems a good place to start. It would also decrease gun crime and school shootings… an obvious win.
Step 2) The trickier part of the problem is true psychopaths. It only takes one monster to engineer a super virus, and vaccines can take months to develop and mass-produce. If costs come down, it’s fair to assume mass production gets easier, as genetic/pharma 3D printers could be widely distributed — likely in every pharmacy. One day, everyone might even own their own.
But that’s only half the battle. We need either to be able to develop vaccines — or cures — exponentially faster, or to have sufficient immunity — or biological resilience — to survive as-yet-uncreated or unimagined diseases and weapons. It’s almost impossible to predict all possible failure modes, so speedier drug/vaccine development is probably our only option — possibly combined with distributed gas-mask-esque protection.
For my money, I’d want several international think tanks competitively funded (or something akin to the XPRIZE) with the goal of reaching the one-to-two-day mark for diagnosis and vaccine development after disaster strikes. A sufficiently catchy superflu covers most of the world in about a week. The clock is ticking.
Step 3) In addition to the aforementioned think tanks — working in collaboration with the threat think tanks from #1 — robust incentives for private-market solutions need to be created. The market isn’t yet viable; no one’s willing to pay for prevention until it becomes too costly to avoid. This is something governmental venture arms should invest in heavily, seeding the companies that could save us all.
3. Climate Change
Governments have proven ineffective at reining in the effects of climate change, or our untenable CO2 emissions. This is the tragedy of the commons playing out its awful game theory: everyone acts in their own immediate self-interest because everyone else does, and the pain of inaction is small, at least for the time being.
There are two schools of thought here: technology will save us and technology is the enemy. Both are right, and wrong.
To build a sustainable future, we have to embrace technological solutions to reduce emissions — more efficient farming, electric vehicles, renewable energy — while cutting back on overall consumption.
If we’re already at carrying capacity — the total resource extraction our planet can sustain — allowing developing countries to match our level of wasteful opulence will doom all of us, even if their economies and infrastructures are greener and more renewable than ours.
The fact is, we make and consume too much shit. But, there’s an answer, and it doesn’t have to suck.
Step 1) Courtesy of Douglas Rushkoff: Invert dividend and capital gains tax rates. Today, long-term capital gains are taxed at roughly 15–20%, while ordinary dividends are taxed at the much higher income tax rate.
This incentivizes companies — and investors — to aim for growth at all costs, profits be damned.
Only one thing grows forever: cancer.
An economy predicated on growth consumes more and more as it destroys everything around it. By inverting the tax rates, companies — and thus investors — would be more focused on sustainable business practices. Apple wouldn’t need an iPhone that wears out every two years, or to slow down your OS to make you upgrade. Instead of a one-and-done economy, it would incentivize a longer-term, more minimalist existence where everyone wins, profits, and less shit ends up in the landfills of life.
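The incentive flip is easy to see with a toy calculation. The rates below are illustrative assumptions, not actual tax code:

```python
# Toy illustration of the inverted-tax idea. Rates are assumed for
# illustration only: 15% "low" rate vs 37% "high" rate.
def after_tax(gain: float, rate: float) -> float:
    """What an investor keeps from a pre-tax gain at a given tax rate."""
    return gain * (1 - rate)

gain = 100.0  # $100 returned either as share-price growth or as a dividend

# Today (roughly): capital gains taxed lighter than ordinary-income dividends.
today = {"growth": after_tax(gain, 0.15), "dividend": after_tax(gain, 0.37)}

# Inverted, per Rushkoff: dividends taxed lighter than capital gains.
inverted = {"growth": after_tax(gain, 0.37), "dividend": after_tax(gain, 0.15)}

print(today)     # growth pays more -> chase growth at all costs
print(inverted)  # dividends pay more -> reward steady, profitable businesses
```

Under today’s regime the investor nets more from growth than from dividends; invert the rates and the dividend-paying, profitable business becomes the better deal.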
Step 2) Courtesy of Jarl Jensen: Eliminate subsidies on fossil fuels, cars and high emission industries and create subsidies for development/deployment of renewables.
This one is pretty straightforward and universally applicable. Renewables are already on par, price-wise, with fossil fuels in most parts of the world. A little extra push makes the pieces topple faster and drives us towards a sustainable future.
If additional subsidies were created for other greener businesses — clean meat (animal agriculture is responsible for ~13–18% of emissions, depending on how you measure and whom you source, more even than global transportation), electric vehicles, net-zero construction, etc. — we’d be well on our way to mitigating the worst of climate change.
Wild card: a carbon tax has yet to be implemented at scale, but may in fact prove necessary.
Step 3) Local agriculture/co-ops: The farther your food travels, the greater the carbon footprint and the chance of spoilage/waste (which claims approximately 24% of all food).
The future of farming — with the possible exception of lab-grown meat, given the value of scale — is local. City governments would do well to incentivize local farming: creating jobs at home, reducing dependence on foreign forces and greatly cutting emissions.
Step 4) Local tax hikes on high-emission industries/products can ice this cake. Many initiatives are almost impossible on a global or even national basis, but municipalities and states can pass hard laws and bills that larger legislatures cannot. Some states are already leading the way, and more will follow as consumers demand action.
4. Nuclear War
For all its faults, mutually assured destruction (MAD) has been a relatively successful strategy, despite many near catastrophes. On the whole, the world has done a good job mitigating — or at least preventing — all-out war up to this point. But the spread of nuclear capability makes this harder daily; much like a country filled with guns, more isn’t merrier.
The progression towards renewable energy and away from nuclear power — accelerated by the accident at Fukushima — is a positive development, at least from the standpoint of nuclear-war risk, but greater efforts MUST be made at disarmament, especially among mutually hostile countries: US/Russia, Pakistan/India, etc.
But power is rarely given up voluntarily, so more stringent monitoring procedures need to be implemented to prevent accidental leakage of technology and weapons. For my part, I’d like to see overhauls of missile command systems.
Is a thirty-second delay — or however long the incredibly short snap-judgment window is — really enough contemplation before hitting the big red button?
Twitter would seem to suggest otherwise.
5. Superintelligent AI
The consensus among AI researchers is that there’s no consensus on if or when we’ll achieve superhuman AI. Assuming we will — a necessary assumption, as not doing so would be like riding the Titanic without lifeboats or a vest — it raises the question: how the heck can we contain and/or control the all-powerful mystery that is AI?
Thankfully, more and more AI researchers are working on this very problem today — many because they’re terrified of the implications. The best theory we’ve come up with is robust value alignment, i.e., building artificial intelligences in line with human values and morals.
This is problematic at best. What exactly are human values? Sharia Law? The Ten Commandments? Hammurabi’s Code? The US Constitution? If anything, assigning a fixed set of values for an AI to follow is an exercise in folly. IBM didn’t break any laws when its punch-card machines powered Hitler’s death camps…
Well, we could try to have AIs learn our values through observation, but data is biased… and we’re even worse.
Or, we could hope a superintelligence would just ignore us, but then again, we accidentally step on ants all the time, and kill the ones who crawl into our homes.
So, what can we ants do? The best approaches I’ve seen involve either limiting the capabilities of AI systems — which again seems impossible given the risk/reward ratio and the tragedy-of-the-commons problem discussed earlier, let alone the difficulty of predicting bugs and failures — or counter-defensive AI, i.e., fantasy sports for society. I sure can’t cover LeBron James or stop a Steph Curry three-pointer, but with other elite athletes (or AIs) playing on my behalf, at least there’s a fighting chance.
Step 1) International regulations on proper development/containment of AI systems. The last thing we need is another NSA-esque leak where some government or corporation accidentally releases the end-of-times source code to anyone with an internet connection. Software is easy to repurpose.
Step 2) Sizable defense budget allocations to a non-governmental think tank/initiative focused on threat detection/forecasting. The only way to avoid disaster in the uber-exponential world of AI advancement is to plan for failure LONG before it happens. Two seconds is an eternity in ones and zeroes.
6. Lethal Autonomous Weapons
It took 19 terrorists, roughly $500,000 and months of planning for Al-Qaeda to fly 767s into the World Trade Center. The world will never be the same. And yet today, any idiot can buy a drone and attempt to take out a jet or blow up a building, all from the comfort of their couch.
Drones are scary, but autonomous drone weaponry is downright terrifying as costs drop and access rises. It only takes one country to develop said tech before others feel forced to play catch up.
And how do you stop an autonomous army? Trick question, you don’t.
It’s nearly impossible to shoot down — or stop a suicide run by — a drone, and to date, anti-drone solutions have proven ineffective. Democratized mass murder is something humanity CANNOT afford; with 3D printers, global supply chains and open-source tech, there’s no way to put the genie back in the bottle.
We NEED international efforts, now, before it is too late — with swift retaliation against any country or company found to be in violation of the accords. I, and many others I’ve spoken with, worry anything less would ultimately prove insufficient.
7. Space-Related Catastrophe
In theory, this one is easy: put money towards monitoring, fundamental research and the design of asteroid-deflection technologies. It doesn’t really matter who does this, or why. We’re all on this little blue spaceship together.
Unfortunately, every country hopes another will do the job for them. In the US, we’ve slashed NASA’s budget, and people question the value of space and space exploration. That may mean international agreements are needed to ensure sufficient funding and advancement of the all-important planetary defense system for all of us. This would be an ideal cause for an ultra-billionaire to build a foundation around… hint, hint, Jeff and Elon.
Existential risks are a tough nut to crack. The list and suggestions above are by no means complete; a million challenges lie around the corner of every idea. But we don’t need perfect solutions to get started, just basic frameworks and smart, committed folks working towards the common good. In a lot of areas, we have that already. And in a lot of ways, that’s you.
Here’s where I ask for your help. I don’t have the answers, not even close. I just want to get the conversation started. How can we make these plans more robust and idiot-proof? Because that’s what it will take.
In the game of survival, it pays to rig the deck. Any great ideas? Would love to learn from other more knowledgeable folks, or, if you have any great suggestions for Disruptors podcast guests, mention them below. Any of these topics would make great episodes.
Clap 50 times and follow me on Twitter: @mattwardio