Singapore is looking for a broader governance framework for advanced AI technology.

Singapore has released a draft governance framework on generative artificial intelligence (GenAI) that it says is necessary to address emerging issues, including incident reporting and content provenance. 

The proposed model builds on the country’s existing AI governance framework, which was first released in 2019 and last updated in 2020. 

Also: How generative AI will deliver significant benefits to the service industry

GenAI has significant potential to be transformative “above and beyond” what traditional AI can achieve, but it also comes with risks, said the AI Verify Foundation and Infocomm Media Development Authority (IMDA) in a joint statement. 

There is growing global consensus that consistent principles are necessary to create an environment in which GenAI can be used safely and confidently, the Singapore government agencies said. 

“The use and impact of AI is not limited to individual countries,” they said. “This proposed framework aims to facilitate international conversations among policymakers, industry, and the research community, to enable trusted development globally.”

The draft document encompasses proposals from a discussion paper IMDA had released last June, which identified six risks associated with GenAI, including hallucinations, copyright challenges, and embedded biases, and a framework on how these can be addressed. 

The proposed GenAI governance framework also draws insights from previous initiatives, including a catalog on how to assess the safety of GenAI models and testing conducted via an evaluation sandbox.

The draft GenAI governance model covers nine dimensions that Singapore believes are essential to a trusted AI ecosystem. These revolve around the principles that AI-powered decisions should be explainable, transparent, and fair. The framework also offers practical suggestions that AI model developers and policymakers can apply as initial steps, IMDA and AI Verify said. 

Also: We’re not ready for the impact of generative AI on elections

One of the nine components looks at content provenance: There needs to be transparency around where and how content is generated, so consumers can determine how to treat online content. Because it can be created so easily, AI-generated content such as deepfakes can exacerbate misinformation, the Singapore agencies said. 

Noting that other governments are exploring technical solutions such as digital watermarking and cryptographic provenance, they said these techniques embed labels and additional metadata in content to flag material that has been created with or modified by AI. 

Policies should be "carefully designed" to facilitate the practical use of these tools in the right context, according to the draft framework. For instance, it may not be feasible in the near future for all created or edited content to embed these technologies, provenance information can be stripped out, and threat actors can find other ways to circumvent the tools. 

The draft framework suggests working with publishers, including social media platforms and media outlets, to support the embedding and display of digital watermarks and other provenance details. These also should be properly and securely implemented to mitigate the risks of circumvention. 
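The core idea behind the provenance details described above is binding a claim about how content was produced to the content itself, so that tampering with either one is detectable. Below is a minimal illustrative sketch of that idea in Python. It uses a hypothetical shared HMAC key for simplicity; real provenance standards such as C2PA use certificate-based public-key signatures instead, and none of the names here come from the framework itself.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by a publisher. Real provenance
# schemes (e.g., C2PA) use public-key certificates, not a shared key.
PUBLISHER_KEY = b"example-secret-key"


def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a provenance record for a piece of content.

    The record states how the content was produced and binds that claim
    to the content bytes via a keyed hash, so modifying either the
    content or the label invalidates the signature.
    """
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_by": generator,  # e.g., "ai-model" or "human"
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(
        PUBLISHER_KEY, payload, hashlib.sha256
    ).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that a provenance record matches the content and the key."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed.get("content_sha256"):
        return False  # content was modified after labeling
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))
```

The sketch also illustrates the framework's caveat: verification only works where the record survives alongside the content, which is why removal or circumvention of provenance data remains a concern.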

Also: This is why AI-powered misinformation is the top global risk

Another key component focuses on security, as GenAI brings new risks, such as prompt injection attacks carried out through the model architecture, which can allow threat actors to exfiltrate sensitive data or model weights, according to the draft framework. 

It recommends refining the security-by-design concepts applied across the systems development lifecycle. These refinements will need to consider, for instance, how accepting natural language as input may create challenges in implementing appropriate security controls. 

The probabilistic nature of GenAI also may bring new challenges to traditional evaluation techniques, which are used for system refinement and risk mitigation in the development lifecycle. 

The framework calls for the development of new security safeguards, which may include input moderation tools to detect unsafe prompts as well as digital forensics tools for GenAI, used to investigate and analyze digital data to reconstruct a cybersecurity incident. 
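An input moderation tool of the kind the framework mentions can be sketched, in a deliberately simplified form, as a deny-list filter applied before a prompt ever reaches the model. The patterns below are hypothetical examples invented for illustration; production moderation tools typically rely on trained classifiers rather than fixed rules.

```python
import re
from typing import Optional, Tuple

# Hypothetical patterns that often signal prompt-injection attempts.
# A real moderation layer would use a trained classifier, not a
# hand-written deny-list like this one.
UNSAFE_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
    re.compile(r"print (your )?model weights", re.IGNORECASE),
]


def moderate_prompt(prompt: str) -> Tuple[bool, Optional[str]]:
    """Return (allowed, reason).

    Blocks prompts that match a known injection pattern before they
    are forwarded to the model; allowed prompts pass through with no
    reason attached.
    """
    for pattern in UNSAFE_PATTERNS:
        if pattern.search(prompt):
            return False, f"matched unsafe pattern: {pattern.pattern}"
    return True, None
```

Such a filter is only one layer; as the framework notes, natural-language input is hard to constrain, so moderation would sit alongside output checks, access controls, and the forensics tooling described above.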

Also: Singapore keeping its eye on data centers and data models as AI adoption grows

“A careful balance needs to be struck between protecting users and driving innovation,” the Singapore government agencies said of the draft governance framework. “There have been various international discussions pulling in the related and pertinent topics of accountability, copyright, and misinformation, among others. These issues are interconnected and need to be viewed in a practical and holistic manner. No single intervention will be a silver bullet.”

With AI governance still a nascent space, building international consensus is also key, they said, pointing to Singapore’s efforts to collaborate with governments such as the US to align their respective AI governance frameworks. 

Singapore is accepting feedback on its draft GenAI governance framework until March 15.