The Future of AI
Artificial Intelligence has been a hot topic around the world for years now, and it seems it will only become more widely discussed, and more controversial. So what exactly does the future of AI hold for us?
First, let’s clarify what exactly AI is. Artificial Intelligence (AI), sometimes referred to as machine intelligence, is intelligence demonstrated by machines, in contrast to the “natural” intelligence displayed by humans and other animals. Some would argue that to achieve true AI, one must create a system that is self-conscious and learns over time. The next question, of course, is how one quantifies self-consciousness.
The study of AI encompasses many capabilities and uses, such as reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence is among the field’s long-term goals, and approaches to it include statistical methods and computational intelligence. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The field draws upon computer science, mathematics, psychology, linguistics, philosophy and more.
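To give a concrete flavor of two of the tools named above — artificial neural networks and mathematical optimization — here is a deliberately tiny, purely illustrative sketch: a single artificial neuron trained by gradient descent to learn the logical AND function. It is a teaching toy, not any production system.

```python
# A single artificial neuron (the building block of neural networks)
# trained with gradient descent (a basic mathematical-optimization
# method). Purely illustrative.
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# Training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # weights and bias, start at zero
lr = 0.5                     # learning rate

for _ in range(5000):        # gradient-descent loop
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        err = out - target   # gradient of the loss w.r.t. the pre-activation
        w1 -= lr * err * x1  # step against the gradient
        w2 -= lr * err * x2
        b  -= lr * err

predict = lambda x1, x2: round(sigmoid(w1 * x1 + w2 * x2 + b))
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

After a few thousand passes over the four examples, the neuron’s weights settle so that it outputs 1 only when both inputs are 1 — optimization gradually shaping behavior rather than a programmer hand-coding a rule.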
If asked what the latest breakthrough was in AI, most people will point to Sophia the Humanoid Robot, who (or which) was here in Tbilisi this week. It (she) was controversially granted Saudi Arabian citizenship last year, with some commentators wondering whether this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder. Let’s be honest here: we know it is not human, and so human rights do not apply to it. There is no need for technicalities in this matter; we already know it is not actually self-conscious, as its code makes evident.
Experts who have reviewed Sophia’s open-source code state that Sophia is best categorized as a “chatbot” with a face. Ben Goertzel, chief scientist of Hanson Robotics, the company that made Sophia, acknowledges that this characterization is “not ideal”, and has also stated: “If I show them a beautiful smiling robot face, then they get the feeling that 'AGI' (artificial general intelligence) may indeed be nearby and viable... None of this is what I would call AGI, but nor is it simple to get working.” Sophia does utilize AI methods, including face tracking, emotion recognition, and robotic movements generated by deep neural networks. Sophia’s dialogue is generated via a decision tree and integrated with these perceptual outputs, so that each response is tailored to the situation.
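The architecture described above — a scripted decision tree whose branches are combined with a perceptual signal such as a recognized emotion — can be sketched roughly as follows. This is an invented toy example in the spirit of that description, not Hanson Robotics’ actual code.

```python
# Toy "chatbot with a face": a dialogue decision tree whose leaf is
# selected by a perceptual input (the detected emotion). Entirely
# hypothetical; the real system is far more elaborate.

DIALOGUE_TREE = {
    "greeting": {
        "happy":   "You look cheerful today! What would you like to talk about?",
        "sad":     "You seem a little down. Can I help with anything?",
        "neutral": "Hello! What would you like to talk about?",
    },
    "farewell": {
        "happy":   "It was a pleasure. Goodbye!",
        "sad":     "Take care of yourself. Goodbye.",
        "neutral": "Goodbye!",
    },
}

def classify_intent(utterance: str) -> str:
    """Crude keyword matching, a stand-in for real language processing."""
    text = utterance.lower()
    if any(word in text for word in ("bye", "goodbye", "farewell")):
        return "farewell"
    return "greeting"

def respond(utterance: str, detected_emotion: str = "neutral") -> str:
    """Walk the tree: intent picks the branch, emotion picks the leaf."""
    branch = DIALOGUE_TREE[classify_intent(utterance)]
    return branch.get(detected_emotion, branch["neutral"])

print(respond("Hi there!", detected_emotion="sad"))
# → You seem a little down. Can I help with anything?
```

Even this caricature shows why experts call Sophia a chatbot with a face: the apparent responsiveness comes from pre-written branches, with perception merely choosing among them.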
So what does this mean for the future of AI? First, we must recognize the benefits of AI in domestic, medical or civil settings. It would be most useful helping people in need and solving complicated problems that humans cannot solve, or would take years to. At the same time, we must recognize how badly AI could turn out (think Terminator); fortunately, politicians and companies have started to take steps to avoid this.
Hundreds of organizations and thousands of well-known people in the field, including Elon Musk, Demis Hassabis of Google’s DeepMind, and Jeff Dean, the head of Google’s AI research, have promised never to support the development of AI designed to harm humans — specifically, lethal autonomous weapons.
The Future of Life Institute, an outreach group focused on tackling existential risks, organized this pledge. It was co-founded by a group of researchers, including Max Tegmark, a physics professor at the Massachusetts Institute of Technology, Victoria Krakovna, a research scientist at DeepMind, and Jaan Tallinn, co-founder of Skype.
“We will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons,” the pledge reads.
This addresses some of the danger AI might pose in the future, and should it grow into a more solid agreement between countries, it could go a long way toward removing the threat such weapons might pose.
“Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage,” the pledge reads.
The pledge doesn’t go into much detail about possible weapon systems, or about the level of autonomy required for a system to count as a lethal autonomous weapon. But it does briefly note other pressing issues, such as how the technology could be used oppressively for surveillance.
At this point, 172 organizations, including DeepMind, ClientEarth and University College London, have signed it, along with 2,492 individual signatories, including all the co-founders of DeepMind and notable researchers such as Stuart Russell of UC Berkeley and Yoshua Bengio of the University of Montreal.
There have been stumbles, but AI is rapidly growing smarter, and it is hard to imagine a future without it. The technology is already becoming central to the way we live, whether through the apps and assistants on our phones or through background processes we barely notice, such as facial recognition for police, traffic-light algorithms, and power-station monitoring and control.
AI, and the data that fuels it, is also changing the way we do business, often with significant human-rights implications. Our personal data is now a currency, and smart businesses are already transforming themselves into data dealers to profit from it. The media itself may be in for a makeover as well, starting with a company called AlphaNetworks.
AlphaNetworks combines components of online video services such as Netflix, video aggregators such as YouTube, interactive platforms such as Twitch, and premium cable models such as HBO, adding AI-powered recommendation and compensation mechanisms for dynamic pricing and accountability. All programming-consumption data is recorded on a transparent ledger, giving content owners equitable and verifiable compensation. The company has announced video infrastructure powered by AI and blockchain for what it calls the new era of media: a digital framework that provides creators, media companies and advertisers with applications for better video monetization, management and analytics.
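One simple way a transparent consumption ledger could translate into equitable compensation is pro-rata payout by watch time. The sketch below is invented for illustration — the data model, names and payout rule are assumptions, not AlphaNetworks’ actual design.

```python
# Illustrative sketch: splitting a revenue pool among content owners
# in proportion to watch time recorded on an append-only "ledger" of
# viewing events. Hypothetical example, not the company's real system.
from collections import defaultdict

# Each ledger entry records who watched whose content, and for how long.
ledger = [
    {"viewer": "alice", "content_owner": "studio_a", "seconds_watched": 1200},
    {"viewer": "bob",   "content_owner": "studio_b", "seconds_watched": 600},
    {"viewer": "carol", "content_owner": "studio_a", "seconds_watched": 600},
]

def payouts(entries, revenue_pool: float) -> dict:
    """Split a revenue pool among owners in proportion to watch time."""
    watch_time = defaultdict(int)
    for entry in entries:
        watch_time[entry["content_owner"]] += entry["seconds_watched"]
    total = sum(watch_time.values())
    return {owner: revenue_pool * t / total for owner, t in watch_time.items()}

print(payouts(ledger, revenue_pool=100.0))
# → {'studio_a': 75.0, 'studio_b': 25.0}
```

The accountability the company promises would come from every party being able to recompute the same figures from the same public ledger, rather than trusting an opaque intermediary’s accounting.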
AI is in our future, there is no doubt about it. It is expected to transform the global economy within the next decade, with some forecasts suggesting it could add as much as 40% to the world’s GDP by 2030.
AI has huge potential to help the world, so there is only one way to go about it: making sure, here and now, with any legal means necessary, that we prevent its abuse in the future.
By Shawn Wayne