• AI that can create videos of world leaders—or anyone—saying things they never said;
• Laser phishing, which uses AI to scan an individual’s social media presence and then sends “false but believable” messages from that person to his or her contacts, possibly obtaining money or personal information; and
• AI that analyzes data sets containing millions of Facebook profiles to create marketing strategies to “predict and potentially control human behavior.”
The last technique was reportedly used in the 2016 American presidential election. The Facebook profiles in question were supposedly obtained through illicit means, giving Cambridge Analytica, the entity creating the marketing strategies, a wealth of personal data to feed to its AI for analysis.
The problems created by AI doing this work are immediately apparent, particularly to those involved with the technology. “The dangers of not having regulation around the sort of data you can get from Facebook and elsewhere is clear. With this, a computer can actually do psychology, it can predict and potentially control human behavior . . . It’s how you brainwash someone. It’s incredibly dangerous,” notes Jonathan Rust, the director of the Psychometric Centre at the University of Cambridge, which did much of the research Cambridge Analytica relied on. He goes on to warn, “It’s no exaggeration to say that minds can be changed . . . People don’t know it’s happening to them. Their attitudes are being changed behind their backs.”
Massachusetts Attorney General Maura Healey has announced that her office will investigate how Facebook and Cambridge Analytica obtained and used the personal data. But did Facebook and Cambridge Analytica actually violate Massachusetts law, or any state law, in an enforceable way? If not, what does that say about the state of AI regulation and legal protection for individuals in this country?
To read the full article, please click here.