The newest conversational artificial intelligence (AI) model developed by a tech giant was recently unveiled. Language Model for Dialogue Applications (LaMDA) aims to replace the artificial, robotic feel of conversations with AI with more natural dialogue. LaMDA can engage in free-flowing conversation about a seemingly endless number of topics, an ability that could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.
LaMDA’s conversational skills have been years in the making. Its neural network architecture produces a model that can be trained to read many words, pay attention to how those words relate to one another, and then predict what words it thinks will come next. However, unlike most other language models, LaMDA was trained on dialogue. During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language, learning, for example, that a sensible response depends on the context of the conversation.
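The two steps described above, attending to how words in the context relate to one another and then predicting the next word, can be sketched in miniature. This is a toy illustration only, not LaMDA's architecture: the vocabulary, dimensions, and random weights are all assumptions, whereas a real model learns its weights from vast amounts of text.

```python
import numpy as np

# Toy sketch of attention followed by next-word prediction.
# All weights are random placeholders; a trained model learns them.
rng = np.random.default_rng(0)
vocab = ["hello", "how", "are", "you", "today"]   # hypothetical vocabulary
d = 8                                             # toy embedding size
embed = rng.normal(size=(len(vocab), d))          # one vector per word

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def next_word_distribution(tokens):
    """Attend over the context, then score every vocabulary word."""
    x = embed[[vocab.index(t) for t in tokens]]   # (seq, d) context vectors
    scores = x @ x.T / np.sqrt(d)                 # how each word relates to the others
    attn = softmax(scores, axis=-1)               # attention weights
    context = (attn @ x)[-1]                      # summary at the last position
    return softmax(context @ embed.T)             # probability of each next word

p = next_word_distribution(["hello", "how", "are"])
print(vocab[int(p.argmax())])                     # the model's guess for the next word
```

With random weights the guess is meaningless; training adjusts the embeddings (and, in a real transformer, many more parameters) so that likely continuations receive high probability.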
LaMDA builds on the company’s earlier research, which showed that transformer-based language models trained on dialogue could learn to talk about virtually anything. Since then, the researchers have also found that, once trained, LaMDA can be fine-tuned to significantly improve the sensibleness and specificity of its responses.
The researchers are developing several qualities in LaMDA, including sensibleness, specificity, and “interestingness”, which they assess by judging whether responses are insightful, unexpected, or witty. They also want LaMDA to stick to facts and are investigating ways to ensure that LaMDA’s responses are not just compelling but correct.
While developing LaMDA, the researchers also adhere to their AI principles, which seek to avoid internalising biases, mirroring hateful speech, or replicating misleading information. Because language tools can be misused, minimising those risks is the company’s highest priority, and the researchers have scrutinised LaMDA at every step of its development.
The development of LaMDA is in line with the company’s AI principles, under which they aim to develop AI that should:
- Be socially beneficial: The researchers will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where they operate.
- Avoid creating or reinforcing unfair bias: The researchers will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
- Be built and tested for safety: The researchers will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.
- Be accountable to people: The researchers will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Their AI technologies will be subject to appropriate human direction and control.
- Incorporate privacy design principles: The researchers will incorporate their privacy principles in the development and use of their AI technologies. They will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
- Uphold high standards of scientific excellence: The researchers aspire to high standards of scientific excellence as they work to progress AI development.
- Be made available for uses that accord with these principles: The researchers will work to limit potentially harmful or abusive applications.
As reported by OpenGov Asia, U.S. researchers have adopted AI to solve several problems, including countering the spread of disinformation. They created the Reconnaissance of Influence Operations (RIO) programme, a system that automatically detects disinformation narratives as well as the individuals spreading those narratives within social media networks.
The RIO system helps determine not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message. RIO classifies accounts by examining behavioural data, such as whether an account interacts with foreign media and what languages it uses. This approach allows RIO to detect hostile accounts that are active in diverse campaigns.
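The approach described above, scoring an account from behavioural features and then weighing its network-wide amplification, can be sketched as follows. This is not RIO's actual code or model; the feature names, weights, and threshold are illustrative assumptions meant only to show the shape of such a classifier.

```python
import math
from dataclasses import dataclass

@dataclass
class Account:
    """Behavioural features of the kind the article mentions (names assumed)."""
    interacts_with_foreign_media: bool
    languages_used: int        # distinct languages the account posts in
    retweets_per_day: float    # amplification behaviour
    followers_reached: int     # rough measure of network reach

def suspicion_score(a: Account) -> float:
    """Weighted sum of behavioural signals, squashed to [0, 1].
    Weights and bias are placeholders; a real system learns them from data."""
    z = (1.5 * a.interacts_with_foreign_media
         + 0.4 * max(a.languages_used - 1, 0)
         + 0.1 * a.retweets_per_day
         - 2.0)                              # bias term (assumed)
    return 1 / (1 + math.exp(-z))

def network_impact(a: Account, score: float) -> float:
    """Estimate how much the account amplifies the message network-wide."""
    return score * a.followers_reached

suspect = Account(True, 3, 40.0, 12_000)
baseline = Account(False, 1, 0.5, 200)
s = suspicion_score(suspect)
print(round(s, 3), round(network_impact(suspect, s), 1))
```

The design mirrors the article's two questions: `suspicion_score` answers "is this account spreading disinformation?", while `network_impact` answers "how much does it change and amplify the message across the network?".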