A major new report on the state of artificial intelligence (AI) assessed where AI stands today, and the promise and perils in view. AI has begun to permeate every aspect of our lives, from language generation and molecular medicine to disinformation and algorithmic bias. The report argued that AI has reached a critical point, where researchers and governments must think and act carefully to contain the risks it presents and to make the most of its benefits.
The report came out of the AI100 project, which aims to study and anticipate the effects of AI rippling out through our lives over the next 100 years. The report highlighted the remarkable progress made in AI over the past five years. AI is leaving the laboratory and has entered our lives, having a real-world impact on people, institutions, and culture.
For example, in Natural Language Processing (NLP), computers can now analyse and even generate realistic human language. A second example is an AI program that provides a huge step forward in our ability to predict how proteins fold. This will likely lead to major advances in life sciences and medicine, accelerating efforts to understand the building blocks of life and enabling quicker and more sophisticated drug discovery.
The AI100 report argued that worries about super-intelligent machines and wide-scale job loss from automation are still premature, requiring AI that is far more capable than is available today. The main concern the report raises is not malevolent machines of superior intelligence to humans, but incompetent machines of inferior intelligence.
At this vital juncture, people need to think seriously and urgently about the downsides and risks the increasing application of AI is revealing. The ever-improving capabilities of AI are a double-edged sword. Harms may be intentional or unintended, like algorithms that reinforce racial and other biases.
AI research has traditionally been undertaken by computer and cognitive scientists. But the challenges being raised by AI today are not just technical. All areas of human inquiry, and especially the social sciences, need to be included in a broad conversation about the future of the field. Minimising negative impacts on society and enhancing the positives requires consideration from across academia and with societal input.
Governments also have a crucial role to play in shaping the development and application of AI. Indeed, governments around the world have begun to consider and address the opportunities and challenges posed by AI.
A greater investment of time and resources is needed to meet the challenges posed by the rapidly evolving technologies of AI and associated fields. In addition to regulation, governments also need to educate. In an AI-enabled world, our citizens, from the youngest to the oldest, need to be literate in these new digital technologies.
At the end of the day, the success of AI research will be measured by how it has empowered all people, helping tackle the many wicked problems facing the planet, from the climate emergency to increasing inequality within and between countries.
U.S. researchers wanted to find out how well AI can collaborate with people. As reported by OpenGov Asia, in a new study, MIT Lincoln Laboratory researchers examined how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it had never met before. In single-blind experiments, participants played two series of the game: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.
The results revealed that scores were no better with the AI teammate than with the rule-based agent. However, participants consistently hated playing with their AI teammate: they found it unpredictable, unreliable, and untrustworthy, and felt negative about it even when the team scored well.
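The study's central comparison can be illustrated with a toy analysis. This is only a sketch with made-up numbers, not data from the MIT Lincoln Laboratory study: it assumes hypothetical per-game records of (teammate condition, Hanabi score, trust rating) and compares the two conditions, showing the pattern the study reports, similar scores but lower subjective trust for the AI teammate.

```python
from statistics import mean

# Hypothetical per-game records: (teammate type, Hanabi score 0-25, trust rating 1-7).
# These values are illustrative only, not results from the study.
games = [
    ("ai", 17, 2), ("ai", 19, 3), ("ai", 16, 2),
    ("rule_based", 18, 5), ("rule_based", 17, 6), ("rule_based", 18, 5),
]

def summarise(records, teammate):
    """Return (mean score, mean trust rating) for one teammate condition."""
    scores = [score for kind, score, _ in records if kind == teammate]
    ratings = [rating for kind, _, rating in records if kind == teammate]
    return mean(scores), mean(ratings)

ai_score, ai_trust = summarise(games, "ai")
rb_score, rb_trust = summarise(games, "rule_based")

# Pattern reported in the study: objective scores are comparable across
# conditions, while subjective trust is markedly lower for the AI agent.
print(f"score: ai={ai_score:.1f} vs rule-based={rb_score:.1f}")
print(f"trust: ai={ai_trust:.1f} vs rule-based={rb_trust:.1f}")
```

The point of the sketch is that an objective metric (score) and a subjective one (trust) can diverge, which is exactly the gap the researchers observed.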