“On a gathering storm / comes a tall handsome man / in a dusty black coat with / a red right hand”. Those lyrics are from the Nick Cave & The Bad Seeds song “Red Right Hand”, and have been used in several TV shows to signify the unseen hand of God. The New Latin term deus ex machina, “god from the machine”, is a commonly used plot device in theatre, whereby a seemingly unsolvable problem is suddenly resolved by an unexpected intervention. Given all of this conditioning from popular culture to see troublesome plot holes fixed through divine intervention, it’s not hard to imagine the same when envisioning future technology.
As I detailed in my December blog post, Uncovering Hidden Efficiency, reaching “the singularity” anytime soon would require some of that fateful intervention. What A.I. might provide, though, is the potential for humanity to grow to a point where it could survive some of the cold hard threats in an unforgiving Universe.
Youneeq is a company that prides itself on its smarts, and like so many other companies operating in this space, we are often asked questions about the future of “machine learning”. In particular: what if an A.I. goes rogue? OpenAI was started to construct a framework for “discovering and enacting the path to safe artificial general intelligence”, which is a loftier goal than just keeping us safe from a sinister A.I.
We live in the nuclear age—an age where a small rogue nation can potentially unleash more destructive force than all of the legions of the Roman Empire at their peak. We have also lifted the veil off some of the greatest mysteries of the Universe. We now know that all it would take is one large gamma ray burst in our neck of the Milky Way to wipe out all terrestrial life. Fortunately, we exist in a relatively peaceful backwater neighborhood of the Galaxy, but even our quiet corner will likely become a lot more crowded in a few billion years, when the Milky Way collides with the Andromeda Galaxy.
Earth, like all things in the Universe, will eventually come to an end, and long before then human intervention could make the planet uninhabitable. Stephen Hawking is among many of the great scientists who believe that the long-term survival of our species could be contingent on us figuring out how to colonize new worlds. To do this we need to move beyond human intelligence. Interplanetary and interstellar guidance systems require precision and reliability beyond our intellectual limits, not to mention building a transport system capable of getting us to and from our destinations. I’d like to think all companies pushing the limits in the machine learning space should be thinking towards those loftier goals. At Youneeq, we like to think about where our technology can take people.
Earlier this month, Elon Musk sent his first Tesla into orbit with a “starman” on board. Elon’s efforts with OpenAI to steer us away from developing destructive A.I.s are admirable. While he might envision rogue A.I. as the most probable extinction-level threat we will first encounter, I like to think of such precautions as just plain common sense, like watching over a child that’s beginning to take its first steps. To quote Bowie: “There’s a starman waiting in the sky / he’s told us not to blow it / ’cause he knows it’s all worthwhile.”