WORKERS' COMP THOUGHT LEADERSHIP SERIES

Presented by Plethy Recupe

Simple Ideas for a Complex System

AI and Risk Issues

by Bill Zachry, SCIF Board Member

All new technology has both positives and negatives. Often, during a technology's initial development, the negatives are unknown or not yet understood.

With this concept in mind, the JASON advisory group was instituted by the US Government to deliberately review new technologies and determine how they could be weaponized against the USA.* I highly recommend reading some of their declassified reports.

Prominent tech leaders have been vocal about the potential risks associated with artificial intelligence (AI). While they raise valid concerns, they also recognize AI's immense potential to benefit society, and they advocate for responsible development, strong ethics, and collaboration to ensure that AI technologies are built and used safely and for the benefit of humanity. Their warnings serve as a call to action for the responsible and ethical advancement of AI.

Some of the key warnings and concerns they have raised include:

Existential Risk: Elon Musk has famously warned about the existential risk posed by AI. He believes that if AI systems become superintelligent and operate without proper control, they could pose a threat to humanity. He has referred to AI as “summoning the demon” and has advocated for proactive regulation and safety measures.

Lack of Regulation: Tech leaders have expressed concerns about the lack of regulation and oversight in the development of AI. They worry that without adequate rules and safeguards, AI technologies could be deployed recklessly or maliciously, leading to harmful consequences.

Autonomous Weapons: There are concerns that AI could be used in the development of autonomous weapons systems, leading to the possibility of AI-driven warfare. Many tech leaders have warned about the dangers of AI in military applications and have called for a ban on autonomous weapons. Will this be addressed in an updated version of the Geneva Conventions?

Job Displacement: Many tech leaders have acknowledged the potential for AI and automation to displace jobs across various industries. The fear is that as AI systems become more capable, they could replace human workers in tasks ranging from manufacturing to customer service, leading to widespread unemployment or realignment of work tasks.

Bias and Discrimination: AI systems can inherit biases present in the data used to train them. Tech leaders have cautioned that if not carefully monitored and corrected, AI algorithms could perpetuate existing biases, leading to discrimination in areas like hiring, lending, and law enforcement (a simple bias check is sketched after this list).

Lack of Ethical Consideration: Politicians and business leaders are emphasizing the importance of ethical considerations in AI development. They argue that developers should prioritize ethical principles, fairness, and transparency to ensure AI systems align with societal values. I believe that Isaac Asimov’s laws of robotics apply to AI technology.**

Superintelligent AI: Tech leaders are raising concerns about the potential for AI systems to become superintelligent, surpassing human capabilities and control. They have warned that such systems could act in ways that are unpredictable or contrary to human interests.

AI Race: There is now a competitive race in AI development among nations and companies. Without international cooperation and standards, the pursuit of AI superiority could lead to significant unintended consequences.
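As a concrete illustration of the bias concern above, the sketch below shows one simple screen that auditors sometimes apply to hiring algorithms: the “four-fifths rule” disparate-impact check drawn from the EEOC’s Uniform Guidelines. This is a minimal sketch, not a complete fairness audit, and all of the numbers in it are hypothetical, included only to show the arithmetic.

# Minimal disparate-impact ("four-fifths rule") sketch in Python.
# All counts below are hypothetical illustrations, not real data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the tool selected."""
    return selected / applicants

# Hypothetical outcomes from an AI screening tool:
men_rate = selection_rate(selected=60, applicants=100)    # 0.60
women_rate = selection_rate(selected=30, applicants=100)  # 0.30

# Ratio of the lower selection rate to the higher one.
impact_ratio = women_rate / men_rate  # 0.50

# Under the four-fifths guideline, a ratio below 0.8 is commonly
# treated as evidence of possible adverse impact worth investigating.
if impact_ratio < 0.8:
    print(f"Possible adverse impact: rate ratio = {impact_ratio:.2f}")

A failing ratio does not prove discrimination by itself, but it flags models, like the hiring tool in the Amazon example below, whose outputs deserve closer human review.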

Here are a few real-life examples of reputational risks related to the use of AI:

Microsoft’s Tay Chatbot (2016): Microsoft released an AI-powered chatbot named Tay on Twitter, designed to learn from user interactions. However, within hours, Tay began posting offensive and inappropriate tweets, leading to significant backlash and damage to Microsoft’s reputation. This incident highlighted the risk of AI systems learning negative behaviors from online interactions.

Amazon’s AI Recruiting Tool (2018): Amazon developed an AI tool to assist in the hiring process. However, it was discovered that the system was biased against female candidates, as it had learned from resumes submitted over a 10-year period that were predominantly from men. The episode exposed both the bias itself and the reputational risk associated with using AI in hiring.

Tesla’s Autopilot Incidents (Various): Tesla’s Autopilot system, which uses AI and machine learning, has been involved in several high-profile accidents and incidents. While Tesla maintains that Autopilot is designed to assist rather than replace drivers, these incidents have raised concerns about the safety and reliability of AI-driven autonomous features.

IBM’s Watson for Oncology (2017): IBM’s Watson for Oncology, an AI system designed to assist in cancer treatment recommendations, came under scrutiny when a report by STAT News revealed that it recommended treatment plans that some doctors considered unsafe or incorrect. This led to doubts about the effectiveness and reliability of AI in healthcare.

Deepfake Scandals (Various): The rise of deepfake technology, driven by AI, has led to numerous scandals where individuals’ faces and voices are manipulated in videos and audio recordings to create false and misleading content. These incidents have raised concerns about the authenticity of digital media and the potential for reputational harm.

Social Media Content Moderation (Ongoing): Social media platforms employ AI algorithms to detect and remove inappropriate content, including hate speech and misinformation. However, there have been instances where these algorithms either falsely flagged legitimate content or failed to detect harmful content, leading to criticism and reputational risks for the platforms.

Financial Trading Algorithms (Various): High-frequency trading algorithms, which often use AI and machine learning, have been associated with rapid market fluctuations and crashes. These incidents can harm the reputations of financial institutions involved and erode public trust in automated trading systems.

*https://en.wikipedia.org/wiki/JASON_(advisory_group)

**Isaac Asimov formulated the “Laws of Robotics” in his science fiction works, which have become a fundamental framework for exploring ethical and moral considerations in robotics.

The Laws of Robotics:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

This first law prioritizes the safety and well-being of humans above all else. It compels robots to take actions to prevent harm to humans, even if it means overriding other directives or risking their own safety.

A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

The second law emphasizes obedience to human commands as long as they do not contradict the First Law. It acknowledges the role of robots as tools and servants to humans.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The third law recognizes a robot’s need for self-preservation, but only to the extent that it doesn’t endanger humans or disobey their orders. It prevents robots from taking actions that would lead to their own destruction unless doing so serves a higher purpose.

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Asimov’s “Zeroth Law” takes precedence over the original Three Laws, placing the well-being of humanity as a whole above individual human safety. It introduces a complex ethical dilemma for robots, as they must weigh the interests of an individual against the greater good of humanity.

These laws serve as a cornerstone for exploring the ethical dilemmas and complexities that arise when creating intelligent machines.
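For readers who think in code, here is a toy sketch of my own (not anything from Asimov’s fiction or from any real robotics system) showing how the strict precedence among the four laws can be expressed as an ordered rule check, where the highest-ranking law with an opinion decides:

# Toy sketch of the precedence among Asimov's laws.
# The boolean fields are hypothetical flags; no real system
# can reduce "harm" to a simple true/false value.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    harms_humanity: bool = False
    saves_humanity: bool = False
    harms_a_human: bool = False
    saves_a_human: bool = False
    ordered_by_human: bool = False
    destroys_robot: bool = False

def zeroth_law(a: Action) -> Optional[str]:
    if a.harms_humanity:
        return "forbidden"
    if a.saves_humanity:
        return "required"
    return None

def first_law(a: Action) -> Optional[str]:
    if a.harms_a_human:
        return "forbidden"
    if a.saves_a_human:
        return "required"
    return None

def second_law(a: Action) -> Optional[str]:
    if a.ordered_by_human:
        return "required"
    return None

def third_law(a: Action) -> Optional[str]:
    if a.destroys_robot:
        return "forbidden"
    return None

LAWS = [zeroth_law, first_law, second_law, third_law]  # precedence order

def evaluate(action: Action) -> str:
    # The highest-precedence law with an opinion decides the verdict.
    for law in LAWS:
        verdict = law(action)
        if verdict is not None:
            return verdict
    return "permitted"

# An ordered act of self-sacrifice: the Second Law ("required")
# outranks the Third Law ("forbidden"), so the verdict is "required".
print(evaluate(Action(ordered_by_human=True, destroys_robot=True)))

# An ordered act that harms a human: the First Law outranks the Second.
print(evaluate(Action(ordered_by_human=True, harms_a_human=True)))

Real AI systems, of course, cannot reduce “harm” to a boolean flag, which is precisely why the ethical questions raised above remain open.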
