Trusted data and the future of information sharing

July 9, 2019
How policy innovation is promoting data sharing and AI.

Data in some form underpins almost every action or process in today's world. Even farming, the world's oldest industry, is on the verge of a digital revolution, with AI, drones, sensors, and blockchain technology promising to boost efficiency. The market value of an apple will increasingly reflect not only traditional farming inputs but also the value of modern data, such as weather patterns, soil acidity levels, and agri-supply-chain information. By 2022, more than 60% of global GDP will be digitized, according to IDC.

Governments seeking to foster growth in their digital economies need to be more active in encouraging safe data sharing between organizations. Tolerating the sharing of data and stepping in only where security breaches occur is no longer enough. Sharing data across different organizations enables the whole ecosystem to grow and can be a unique source of competitive advantage. But businesses need guidelines and support in how to do this effectively.

That is how Singapore's data-sharing worldview has evolved, said Janil Puthucheary, senior minister of state for communications and information and for transport, when he launched the city-state's new Trusted Data Sharing Framework in June 2019.

The Framework, a product of consultations between Singapore’s Infocomm Media Development Authority (IMDA), its Personal Data Protection Commission (PDPC), and industry players, is intended to create a common data-sharing language for relevant stakeholders. Specifically, it addresses four common categories of concerns with data sharing: how to formulate an overall data-sharing strategy, legal and regulatory considerations, technical and organizational considerations, and the actual operationalizing of data sharing.

For instance, companies often struggle to assess the value of their own data, a necessary first step before sharing can even be considered. The framework describes the three general valuation approaches in use: market-based, cost-based, and income-based. The legal and regulatory section details when businesses can, among other things, seek exemptions from Singapore's Personal Data Protection Act.

The technical and organizational chapter covers governance, infrastructure security, and risk management. Finally, the section on operationalizing data sharing includes guidelines on when it is appropriate to use shared data for a secondary purpose.

Perhaps most significant is the framework’s potential to accelerate the development of artificial intelligence (AI), which relies on trusted data as its bedrock. “We want to have AI tools and services; we imagine that these will transform our lives,” Puthucheary said. “It means that our personal data at some point needs to be collected, used, processed, and shared.”

The framework is the latest pillar in Singapore's strategy for promoting the AI industry. Earlier this year Singapore introduced its Model AI Governance Framework, which provides guidelines on developing AI solutions that are human-centric and whose decisions are explainable, transparent, and fair. This complements two initiatives launched in 2018: an advisory council on the ethical use of AI and data, and a research program on the governance of AI and data use, established in partnership with Singapore Management University.

Elements of Singapore's approach to AI governance, including its inclusiveness in gathering feedback broadly from market participants and its commitment to continuously updating these living frameworks and documents, are analogous to the EU's approach, said Ieva Martinkenaite, vice president of AI and IoT business development at Telenor Group and one of 52 experts appointed to the EU's High-Level Expert Group on Artificial Intelligence.

Both jurisdictions are navigating a course between promoting innovation in the sector and protecting the rights of citizens and organizations. After all, it is not just the misuse and overuse of AI that is unethical, said Martinkenaite, but also its underuse: there is a moral imperative to employ AI and automation to lessen the burden on workers and improve the lives of citizens.

While a fairly light-touch governance structure will boost innovation in AI, Martinkenaite said it is also important for legislatures to review existing laws with a risk-based, proportional approach: "You cannot apply the same rules of ethics and legal implications to all different applications of AI."

Martinkenaite and Puthucheary conclude that while a single global framework for the governance of data and AI is unlikely ever to emerge, different jurisdictions are aiming to make their frameworks interoperable. The breadth and depth of international collaboration in data and AI provide hope that digital economies will grow inclusively, transparently, and responsibly, in ways from which everyone can prosper.