As a society, we’re still trying to figure out what intelligent machines will mean for us. In this post, we’ll talk about the future of our relationship with AI and how it could impact us all.
Butlers and partners
In the future, will we have a robot butler to take out the trash? Will robots be our friends and companions? What about a robot president to protect us from the threat of a rogue AI? Or a robot as our romantic partner?
These questions have been raised by science fiction writers for decades. But now they’re becoming a reality.
Ray Kurzweil predicted that a personal computer would reach human intelligence by 2020. With language models from companies such as OpenAI, AI21, and Cohere, we are closer than ever to seeing that prediction become a reality. In fact, this very post was written using one such model.
No one knows for certain what the future holds. But it is bound to make us rethink some of our assumptions about what it means to be intelligent, or even human. The more intelligent and autonomous AIs become, the more complex these questions will become.
Take, for example, questions like, “Who is really held accountable when a bad AI decision leads to disaster?” or “What legal rights should be accorded to artificially intelligent robots?”
These are questions that affect us not just as employees or employers, but as members of society more broadly. They're not questions for a single company to worry about in isolation, but for all of us. None of us will be able to avoid becoming intimately familiar with these issues.
The answers to these questions will change dramatically as time passes. Consider that marrying a person of a different race was once unthinkable. Now the idea of forbidding such a marriage strikes many as inherently unjust.
Technology changes at a pace an order of magnitude faster than politics. Some changes will occur faster than laws and policies can keep up. That means we shouldn't rely on the law to hand us ready-made solutions and answers. We need to come up with our own answers together, as a society.
Let’s be clear: we’re not trying to promote a more conservative, fear-driven, or moralistic approach to AI. We’re saying these questions matter to us as a society.
At Edged, our motto is “making machines intelligent.” Right now that might sound overly lofty; after all, we’re helping things like harvesters tell the difference between farmland and grass, not engaging in advanced conversations about the meaning of life. But fast-forward 5 or 10 years, and intelligent machines will be just that: intelligent machines. It will be a given. This isn’t a debate. It’s an eventuality.
Many of you contribute to making intelligent machines a reality too, whether in NLP, vision, robotics, or any of the auxiliary technologies that feed the AI industry at large. If so, these are the kinds of issues you and we need to talk about, not only for the sake of our own businesses, but for the sake of our shared future as a species. And in doing so, we should deliberately broaden our perspectives as far as they will go.
Artificial intelligence is going to become a story we’re all part of. Let’s make it a story that helps us become better versions of ourselves, not one that drowns us in misanthropic dystopias.