How worried should we be about “existential” AI risk?
The “godfather of AI” has left Google, offering warnings about the existential risk the technology poses to humanity. Mark MacCarthy calls those risks a fantasy, and a debate breaks out among Mark, Nate Jones, and me. There’s more agreement on the White House summit on AI risks, which seems to have followed Mark’s “let’s worry about tomorrow tomorrow” prescription. I think existential risks are a real concern, but I am deeply skeptical about other efforts to regulate AI, especially for bias, as readers of Cybertoonz know. I revert to my past view that regulatory efforts to eliminate bias are an ill-disguised effort to impose quotas, which provokes lively pushback from both Jim Dempsey and Mark.
Other prospective AI regulators, from the FTC’s Lina Khan to the Italian data protection agency, come in for commentary. I’m struck by the caution both have shown, perhaps a sign that they recognize the difficulty of applying old regulatory frameworks to this new technology. It’s not, I suspect, because Lina Khan’s FTC has lost its enthusiasm for pushing the law further than it can reasonably be pushed. This week’s examples of litigation overreach at the FTC include a dismissed complaint in a location data case against Kochava and a wildly disproportionate “remedy” for what look like Facebook foot faults in complying with an earlier FTC order.
Jim brings us up to date on a slew of new state privacy laws in Montana, Indiana, and Tennessee. Jim sees them as business-friendly alternatives to the EU’s General Data Protection Regulation (GDPR) and California’s privacy law.
Mark reviews Pornhub’s reaction to the Utah law on kids’ access to porn. He thinks age verification requirements are due for another look by the courts.
Jim explains the state appellate court decision ruling that the war exclusion in Merck’s insurance policies did not bar coverage for the losses Merck suffered in the NotPetya attack.