a minipost
With all the talk about large language models and applications like ChatGPT, I recalled a post from about two and a half years ago: eliza the next generation. I'd say much the same now; some of the predictions have come about. Of course it's only been a short while and I've only played with prototypes, so there's that :-)
It was the height of the pandemic and I was spending quite a bit of time talking with friends who were working on interesting things. Among other things, I started to become aware of progress in large language models. While many who work in AI (a poor label, but you get the idea without my having to enumerate a variety of fields and subfields, as well as what's unrelated) believed them a diversion, there was progress in LLMs and some interesting applications were emerging. We chatted about where it could go and what techno-social issues might arise. (Disclaimer: I'm not a techno-solutionist.) About the time I posted, Emily Bender at the University of Washington wrote a terrific paper on issues with natural language understanding and whether we can believe what comes out of LLMs. Recently an article on her work appeared in The Intelligencer. It's a necessary read, if only to understand her octopus analogy.
This morning Gregg Vesonder and I exchanged some text messages. He's a serious AI guy with decades of experience and someone who thinks deeply about socio-technical issues. One of his comments:
We’ve seen this scene before, blinded by the hype. My concern this time is that explicit or inadvertent bias in the data can mask some nasty behavior that can subtly or not so subtly mask really tragic manipulations.
Aka speech acts.
I hadn't thought about speech acts much before. The definition and examples given in the link are excellent tools for thinking about these things.
I'm not against LLMs and there are some useful applications beyond writing horoscopes in white bro-speak, but the dimensionality of the space for abuse strikes me as large. As Suw Charman-Anderson notes, we're already seeing it in publishing.
'AI' is a very large and diffuse collection of fields and techniques. Some areas are useful and potentially revolutionary. Others are potentially dangerous. We really need to think about these things rather than just trying them out and seeing what breaks. The things that break may be far too valuable.