AI Safety Summit Peers Down Conspiracy Theorist’s Rabbit Holes on Threat of AI
Depending on how much of a conspiracy theorist you are, or how far down the rabbit hole you are willing to go, an article from gizmodo.com covers many reasons why conspiracy theorists may be right when they say we should be worried about the power of AI. The story laid out 11 of them.
Sharing the Doom
The U.K. government on Nov. 1-2 hosted the world’s first AI safety summit at Bletchley Park, home of the codebreakers who cracked Germany’s Enigma code during World War II. The meeting convened international governments, leading AI firms, and research experts to discuss the “safe development and use of frontier AI technology.” At the event, experts:
“considered the risks of AI, especially at the frontier of development, and discussed how they can be mitigated through internationally coordinated action.”
High-Tech Guests Invited
A week before the conference, a paper was released to brief the participants.
The 45-page paper, titled “Capabilities and risks from frontier AI,” gives a relatively straightforward summary of what current generative AI models can and can’t do. Where the report starts to go off the deep end, however, is when it begins speculating about future, more powerful systems, which it dubs “frontier AI.”
“The paper warns of some of the most dystopian AI disasters, including the possibility humanity could lose control of ‘misaligned’ AI systems.”
Some AI risk experts entertain this possibility, but others have pushed back against glamorizing more speculative doomer scenarios, arguing that doing so could detract from more pressing near-term harms. Critics have similarly argued the summit seems too focused on existential problems and not enough on more realistic threats.
Gizmodo.com included a list of AI-empowered problems, many of which we are already dealing with on a daily basis. Below are four of the 11 titles of current or future issues with AI.
1. Evil-Doers Could Use AI to Create Deadly Biological or Chemical Weapons
2. AI Models Could Saturate the Internet With Unreliable Information (Sound familiar?)
3. Scammers Could Use Fake AI Kidnappings to Torment People
4. AI Could Generate Bespoke, Personalized Disinformation Campaigns (As Seeflection.com covered last week with the Jill Biden deepfake video that was released on social media.)
The other threats vary in the amount of danger they pose, but they include concerns such as AI taking over humankind. You know, like the movies, which is the rabbit hole many of us have already been down.
read more at gizmodo.com