The Infocomm Media Development Authority (IMDA) of Singapore and the United States Federal Communications Commission (FCC) have signed a Memorandum of Understanding (MoU) to strengthen their cross-border efforts to combat scams, with the aim of reducing the risks associated with fraudulent activities.
The MoU, signed by Lew Chuen Hong, Chief Executive of IMDA, and Jessica Rosenworcel, Chairwoman of the FCC, solidifies a commitment to work together in regulatory enforcement activities related to scams and mutual information exchanges regarding regulatory frameworks, technical solutions, and policy matters concerning unsolicited and unlawful communications.
Chairwoman Rosenworcel emphasised that robocall scams are a global issue, transcending international boundaries and affecting consumers and businesses worldwide. The MoU signifies a shared commitment to tackling robocall scams and unmasking the wrongdoers behind them.
This partnership serves as a testament to the close and productive relationship between the FCC and IMDA. Together, they aim to prioritise the prevention of illegal robocalls and protect consumers. Both IMDA and the FCC understand the necessity of a coordinated approach on a global scale to combat the growing threat of scams.
The MoU builds on both agencies' ongoing efforts to collaborate with international regulators, particularly in addressing scams facilitated through various communication channels, such as calls and mobile messaging. By joining forces with like-minded partners, IMDA and the FCC aspire to bolster their anti-scam measures, ensuring that citizens and businesses are shielded from fraudulent activities.
In another significant development, IMDA, together with the US Partnership for Growth and Innovation (PGI) and the US National Institute of Standards and Technology (NIST), has completed a joint mapping exercise between IMDA's AI Verify and NIST's AI RMF, culminating in the publication of a crosswalk.
The AI RMF, or Artificial Intelligence Risk Management Framework, aims to provide organisations that use and deploy AI systems with a resource for managing and mitigating the risks associated with AI usage. The goal is to promote the responsible development and use of AI.
AI Verify, on the other hand, is an AI governance testing framework and software toolkit that aims to enhance transparency regarding AI systems, thus building trust. Both frameworks share common objectives in promoting trustworthy and responsible AI.
The development of this crosswalk marks a significant step toward harmonising international AI governance frameworks. This harmonisation not only streamlines industry efforts but also reduces the costs of meeting multiple regulatory requirements. It also reflects the shared goal of Singapore and the US to foster AI innovation while maximising the benefits of AI technology and mitigating its associated risks.
These initiatives undertaken by Singapore and the US serve as potent reminders of the fundamental importance of international collaboration. Both nations have recognised that many of the challenges they face, especially in the realms of cybersecurity and technology governance, transcend national borders and require a unified global response.
By entering into such partnerships and cooperation arrangements, Singapore and the US demonstrate their commitment to leveraging their collective resources and strengths. They recognise that addressing concerns such as fraud, cybersecurity risks, and responsible AI governance is a task too large for any single government to manage effectively in isolation.
Through such collaborations, the two nations aim to pool their expertise, share critical intelligence, and develop standardised frameworks and strategies. This collective approach not only amplifies the effectiveness of measures taken but also sends a powerful message to the international community.