A nice Quanta discussion with Arvind Narayanan
snip
...
You also gave a talk on detecting AI snake oil, which was also well received. How does that relate to fairness in machine learning?
So the motivation for this was that there’s clearly a lot of genuine technical innovation happening in AI, like the text-to-image program DALL·E 2 or the chess program AlphaZero. It’s really amazing that this progress has been so rapid. A lot of that innovation deserves to be celebrated.
The problem comes when we use this very loose and broad umbrella term “AI” both for things like that and for more fraught applications, such as statistical methods for criminal risk prediction. The underlying technology in those two contexts is very different, and the potential benefits and harms are very different too. There is almost no connection between them, so using the same term for both is thoroughly confusing.
People are misled into thinking that all the progress they’re seeing with image generation will translate into progress on social tasks like predicting criminal risk or predicting which kids will drop out of school. But that’s not the case at all. First of all, we can only do slightly better than random chance at predicting who might be arrested for a crime, and that accuracy is achieved with really simple classifiers. It’s not improving over time, and it’s not improving as we collect more data. All of these observations stand in contrast to the use of deep learning for image generation, for instance.
...