AI goes off the rails
This episode of the Cyberlaw Podcast opens with a look at some genuinely weird AI behavior, first from the Bing AI chatbot – dark fantasies, professions of love, and lies on top of lies – and then from Google’s AI search bot. Chinny Sharma and Nick Weaver explain how we ended up with AI that is better at BS’ing than at accurately conveying facts. This leads me to propose a scheme to ensure that China’s autocracy never gets its AI capabilities off the ground.
One thing that AI is creepily good at is faking people’s voices. I try out ElevenLabs’ technology in the first advertisement ever to run on the Cyberlaw Podcast.
The upcoming fight over renewing section 702 of FISA has focused Congressional attention on FBI searches of 702 data, Jim Dempsey reports. That leads us to the latest compliance assessment of how agencies are handling 702 data. Chinny wonders whether the only way to save 702 will be to cut off the FBI’s access – at great cost to our unified approach to terrorism intelligence, I point out. I also complain that the compliance data is older than dirt. Jim and I come together around the need to provide more safeguards against political bias in the intelligence community.
Nick brings us up to date on cyber issues in Ukraine, as summarized in a good Google report. He puzzles over Starlink’s effort to keep providing service to Ukraine without assisting offensive military operations.
Chinny does a victory lap over reports that the national cyber strategy will recommend imposing liability on the companies that distribute tech products – a recommendation she made in a paper released last year. I wonder why Google thinks this is good for Google.
Nick introduces us to modern reputation management. It involves a lot of fake news and bogus legal complaints. The Digital Millennium Copyright Act (DMCA), along with European Union (EU) and California privacy law, supplies the censor’s favorite tools. What is remarkable to my mind is that a business taking so much legal risk charges its
Article from Reason.com