Global Risks Report: Technology Sector Faces Three Key Risks
The World Economic Forum's Global Risks Report identifies three key risks facing the technology sector:
Risks
1. Misinformation and Disinformation
The growing use of AI-generated content and AI-driven platforms can spread misinformation, whether unintentionally or deliberately, blurring the line between fact and fiction.
2. Algorithmic Polarization
As political and social polarization deepens, recommendation algorithms may increasingly amplify biased information, further dividing societies.
3. Deepening Surveillance
The increasing digitalization of society can facilitate surveillance by governments, corporations, and malicious entities, a risk that grows as societies become more polarized.
Recommendations
To mitigate these risks, the report suggests three key recommendations:
1. Enhance Skills for AI Developers and Users
Organizations should use AI models that minimize polarization and mitigate unintended consequences in content creation and sharing. While technical solutions for depolarizing AI algorithms exist, applying them consistently remains a challenge. To address this, developers, data analysts, and decision-makers need regular, comprehensive training programs that cover both technical capabilities and ethical decision-making.
2. Promote Digital Literacy and Education
The report suggests that public awareness and education can drive long-term action to reduce these risks and prepare for them. Public awareness campaigns should educate citizens on the risks of digital spaces and equip them with tools and practices to protect themselves and reinforce trust in their online activities.
3. Improve Accountability and Transparency
The digital trust framework defined by the World Economic Forum outlines key governance themes for the responsible adoption of AI, including accountability and transparency. Accountability may involve creating oversight bodies such as AI councils, along with human supervision processes. Transparency requires organizations to inform consumers about AI-generated content and its use through appropriate labeling and disclosures.
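As a minimal sketch of what such labeling could look like in practice, the snippet below attaches a consumer-facing disclosure to a content record whenever AI was involved in producing it. The `ContentItem` record and `render` function are hypothetical illustrations; the report does not prescribe any particular implementation.

```python
from dataclasses import dataclass, field


@dataclass
class ContentItem:
    """A hypothetical content record carrying an AI-provenance disclosure."""
    body: str
    ai_generated: bool = False
    disclosure: str = field(init=False, default="")

    def __post_init__(self):
        # Attach a consumer-facing label whenever AI was involved.
        if self.ai_generated:
            self.disclosure = "This content was generated with AI assistance."


def render(item: ContentItem) -> str:
    """Show the disclosure before the content so readers see it up front."""
    if item.disclosure:
        return f"[{item.disclosure}]\n{item.body}"
    return item.body
```

The design choice here is that the disclosure is set automatically from the provenance flag rather than typed by hand, so a label cannot be silently omitted once content is marked as AI-generated.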