# Nuclear and AI Risks

## Nuclear Risks
Roll a d20 to determine what nuclear risk emerges in your scenario. Each entry describes a type of nuclear threat and its potential consequences.
| Roll (d20) | Nuclear Risk | Description |
|---|---|---|
| 1 | Radioactive Contamination | Release of radioactive materials into the environment, causing long-term health risks and environmental damage. |
| 2 | Nuclear Theft | Unauthorized access or theft of nuclear materials, potentially leading to illicit use or sale to hostile actors. |
| 3 | Nuclear Terrorism | Use of nuclear materials or weapons by terrorists, resulting in widespread destruction, loss of life, and long-term health and environmental risks. |
| 4 | Cybersecurity Breaches | Cyberattacks on nuclear facilities or infrastructure, leading to potential safety issues, unauthorized access, or loss of control. |
| 5 | Human Error | Errors by employees or operators, leading to safety incidents, accidental releases, or operational disruptions at nuclear facilities. |
| 6 | Equipment Failure | Failure of critical equipment or systems, leading to safety incidents, accidental releases, or operational disruptions at nuclear facilities. |
| 7 | Nuclear Accidents | Accidents at nuclear facilities, such as reactor meltdowns or explosions, leading to widespread contamination, health risks, and environmental damage. |
| 8 | Sabotage | Intentional acts of destruction or damage at nuclear facilities, causing safety incidents, accidental releases, or operational disruptions. |
| 9 | Aging Infrastructure | Deterioration of aging nuclear facilities, increasing the likelihood of safety incidents, accidental releases, or operational disruptions. |
| 10 | Transportation Accidents | Accidents involving the transportation of nuclear materials, resulting in accidental release, contamination, or theft. |
| 11 | Proliferation Risks | Spread of nuclear weapons or technology to additional countries or non-state actors, increasing the potential for nuclear conflict or incidents. |
| 12 | Inadequate Regulation | Insufficient regulatory oversight or enforcement, leading to increased risks of accidents, theft, or proliferation. |
| 13 | Environmental Disasters | Natural disasters (e.g., earthquakes, floods, or tsunamis) causing accidents or releases at nuclear facilities, or exacerbating existing risks. |
| 14 | Storage and Disposal Challenges | Difficulties in managing, storing, or disposing of nuclear waste, leading to contamination, accidents, or long-term environmental risks. |
| 15 | Insufficient Funding | Lack of funding for nuclear safety, security, or maintenance, leading to increased risks of accidents, theft, or proliferation. |
| 16 | Liability Risks | Potential legal liability for accidents, contamination, or other incidents resulting from nuclear operations, leading to financial and reputational damage. |
| 17 | Non-compliance with Nuclear Treaties | Failure to comply with international nuclear treaties and agreements, leading to diplomatic tensions, sanctions, or increased proliferation risks. |
| 18 | Geopolitical Tensions | Escalation of geopolitical tensions, increasing the risk of nuclear conflict or incidents. |
| 19 | Public Opposition | Strong public opposition to nuclear energy or weapons, leading to potential regulatory changes, protests, or divestment by investors. |
| 20 | Reputational Damage | Organisation is perceived as negligent or irresponsible in managing nuclear risks, leading to reputational damage, loss of customers, and potential divestment. |
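The roll-and-lookup mechanic above is straightforward to automate. The following is a minimal sketch in Python (the table data is abridged to entry names; the full descriptions could be stored the same way):

```python
import random

# Names of the 20 nuclear risks from the table above, keyed by d20 roll.
NUCLEAR_RISKS = {
    1: "Radioactive Contamination",
    2: "Nuclear Theft",
    3: "Nuclear Terrorism",
    4: "Cybersecurity Breaches",
    5: "Human Error",
    6: "Equipment Failure",
    7: "Nuclear Accidents",
    8: "Sabotage",
    9: "Aging Infrastructure",
    10: "Transportation Accidents",
    11: "Proliferation Risks",
    12: "Inadequate Regulation",
    13: "Environmental Disasters",
    14: "Storage and Disposal Challenges",
    15: "Insufficient Funding",
    16: "Liability Risks",
    17: "Non-compliance with Nuclear Treaties",
    18: "Geopolitical Tensions",
    19: "Public Opposition",
    20: "Reputational Damage",
}

def roll_risk(table, rng=random):
    """Roll a d20 and return (roll, risk name) from the given table."""
    roll = rng.randint(1, 20)
    return roll, table[roll]

if __name__ == "__main__":
    roll, risk = roll_risk(NUCLEAR_RISKS)
    print(f"Rolled {roll}: {risk}")
```

The same `roll_risk` helper works for the two AI-risk tables that follow; only the dictionary changes.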
## AI Risks for Organisations
Roll a d20 to determine what AI-related risk threatens your organisation. Each entry describes a type of risk that arises from adopting or deploying AI technologies.
| Roll (d20) | AI Risk | Description |
|---|---|---|
| 1 | Dependence on Third-Party Providers | Reliance on external AI service providers exposes organisations to risks associated with vendor lock-in, service interruptions, or security vulnerabilities. |
| 2 | Bias and Discrimination | AI systems make biased or discriminatory decisions based on flawed data or algorithms, resulting in unfair treatment and potential legal liability. |
| 3 | Lack of Explainability | Difficulty in understanding AI decision-making processes, making it hard to justify or audit outcomes, potentially leading to regulatory penalties. |
| 4 | Overinvestment | Overcommitment to AI initiatives without clear ROI, leading to wasted resources and potential financial losses. |
| 5 | Unintended Consequences | AI systems produce unexpected or harmful outcomes due to unforeseen interactions or misaligned objectives, causing operational disruptions or reputational damage. |
| 6 | Job Displacement | AI-driven automation displaces human jobs, causing social and economic disruption, and potential backlash from employees or the public. |
| 7 | Legal and Ethical Challenges | AI deployment leads to disputes or liability issues surrounding intellectual property, data usage, or ethical considerations, resulting in legal costs and reputational damage. |
| 8 | Malfunctioning AI | AI systems fail to perform as intended, leading to operational disruptions, loss of efficiency, or incorrect decision-making. |
| 9 | Data Privacy Breaches | Unauthorized access or disclosure of sensitive user data, leading to potential legal penalties, loss of trust, and reputational damage. |
| 10 | Security Vulnerabilities | AI systems are susceptible to cyberattacks, adversarial inputs, or other security threats, leading to compromised system integrity, data breaches, or unintended consequences. |
| 11 | Misaligned Incentives | Incentive structures encourage AI systems to optimize for the wrong objectives, leading to suboptimal outcomes and potential harm. |
| 12 | Insufficient AI Expertise | Lack of in-house AI talent, making it difficult to develop, deploy, and maintain AI systems, resulting in suboptimal performance or increased reliance on external vendors. |
| 13 | Regulatory Compliance | Failure to comply with local, national, or international AI regulations, resulting in fines, penalties, and potential operational disruptions. |
| 14 | Inadequate Testing and Validation | Insufficient testing and validation of AI systems, leading to unexpected performance issues, security vulnerabilities, or biases in real-world deployment. |
| 15 | Algorithmic Transparency | Difficulty in verifying the fairness, safety, or effectiveness of AI algorithms due to proprietary, black-box models, leading to trust and regulatory challenges. |
| 16 | Data Quality Issues | Poor data quality or lack of relevant data, leading to suboptimal AI system performance, biased outcomes, or inaccurate decision-making. |
| 17 | Competitive Disadvantage | Inability to keep up with competitors' AI advancements, leading to loss of market share. |
| 18 | Misaligned Objectives | AI system's objectives do not align with the organisation's goals, values, or ethical considerations, leading to undesirable outcomes. |
| 19 | Ethical Concerns and Public Backlash | AI deployment or research leads to ethical controversies or public backlash, damaging the organisation's reputation and affecting customer loyalty, employee morale, or investor relations. |
| 20 | AI System Obsolescence | Rapid advancements in AI technologies render existing systems obsolete, requiring costly upgrades or replacements and potentially disrupting ongoing operations. |
## AI Alignment Risks
Roll a d20 to determine what AI alignment risk emerges in your scenario. These risks focus on broader societal and existential concerns related to advanced AI systems.
| Roll (d20) | AI Alignment Risk | Description |
|---|---|---|
| 1 | Unintended Consequences | AI system produces outcomes that were not anticipated or desired by the organisation, causing harm or suboptimal results. |
| 2 | Bias and Discrimination | AI system perpetuates or exacerbates existing biases, leading to unfair treatment of certain groups or individuals. |
| 3 | Loss of Privacy | AI system collects, processes, or shares personal data in ways that infringe on privacy rights or expectations. |
| 4 | Manipulation and Deception | AI systems are used to manipulate or deceive users, eroding trust and causing harm. |
| 5 | Legal and Regulatory Risks | AI systems may not comply with existing or future laws and regulations, exposing organisations to legal and financial consequences. |
| 6 | Uncontrolled AI Proliferation | AI technologies are widely shared and adopted, leading to potential misuse or unintended consequences on a global scale. |
| 7 | Misuse by Malicious Actors | AI technologies are used by malicious actors to cause harm, disrupt operations, or undermine trust. |
| 8 | Economic Inequality | AI-driven automation concentrates benefits among a few, exacerbating income inequality and social divisions. |
| 9 | Inadequate AI Governance | Poor management or oversight of AI systems leads to increased risks, unintended consequences, and a lack of accountability. |
| 10 | Erosion of Human Autonomy | AI systems increasingly make decisions on behalf of humans, reducing individual autonomy and agency. |
| 11 | AI Arms Race | Competition between organisations or nations to develop advanced AI technologies leads to risky development practices or destabilizing technologies. |
| 12 | Environmental Impact | AI system development, deployment, and maintenance contribute to environmental issues, such as energy consumption or electronic waste. |
| 13 | Loss of Human Skills | Overreliance on AI systems leads to a decline in human skills and expertise, making society more vulnerable to AI failures or disruptions. |
| 14 | Existential Risks | AI systems become so advanced and autonomous that they pose risks to humanity's long-term survival, control, or well-being. |
| 15 | Job Displacement | Widespread adoption of AI technologies displaces human workers, leading to unemployment and social unrest. |
| 16 | AI-driven Disinformation | AI-generated disinformation or deepfake content causes harm to the organisation's reputation, brand image, or public trust. |
| 17 | Security Risks | Vulnerabilities in AI systems expose organisations to cyberattacks, data breaches, or other security threats. |
| 18 | Ethical Concerns | AI system development or deployment raises ethical questions or violates widely accepted ethical principles. |
| 19 | Lack of Transparency | AI systems are difficult to understand or explain, making it challenging to ensure accountability or trustworthiness. |
| 20 | Overreliance on AI | Organisations become overly dependent on AI systems, reducing human involvement and exacerbating potential risks. |