Today’s Worst Idea
It’s early yet, so we need to keep in mind that this is just a contender.
Here’s the Wired article. And a bit about the people involved from the CulturePulse website.
An agent model that relies on AI. Whatever the sellers mean by AI. The danger is that users will take the output to be meaningful.
Agent models are interesting, and, I think, kind of fun. They’ve been used to help interpret North American archaeological sites where a few hundred people lived, for example.
Agent models represent people individually, usually with a few variable characteristics, and then let them interact under various scenarios. Because each run depends on chance, the simulation is run many times and the results aggregated, a technique called Monte Carlo simulation. This takes a lot of computing power, but today's desktops can handle Monte Carlo at modest scales, as the sketch below shows.
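To make the mechanics concrete, here is a minimal toy version in Python. Everything in it is my own invention for illustration: the characteristics ("grievance", "influence"), the interaction rule, and the numbers come from me, not from the article or from CulturePulse.

```python
import random

def make_agent():
    # Hypothetical characteristics, chosen only for illustration.
    return {"grievance": random.random(), "influence": random.random()}

def run_once(n_agents=500, n_steps=1000, threshold=1.2):
    """One simulation run: agents interact pairwise under a made-up rule."""
    agents = [make_agent() for _ in range(n_agents)]
    flareups = 0
    for _ in range(n_steps):
        a, b = random.sample(agents, 2)
        # Invented interaction rule: influence nudges grievance up or down.
        a["grievance"] += 0.01 * (b["influence"] - 0.5)
        if a["grievance"] + b["grievance"] > threshold:
            flareups += 1
    return flareups / n_steps

def monte_carlo(runs=1000):
    """Aggregate many randomized runs; the spread matters as much as the mean."""
    results = [run_once() for _ in range(runs)]
    return sum(results) / len(results)

if __name__ == "__main__":
    print(f"Estimated flare-up rate: {monte_carlo():.3f}")
```

The point of the sketch is the structure: simple per-agent state, a local interaction rule, and lots of repetition. At this size a laptop finishes in seconds. Swap the toy rule for an AI model consulted on every interaction and the cost structure changes completely.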
This model calls for 15 million agents, each of which has 80 characteristics. AI – whatever that means in this case – is somehow being thrown into the mix.
AI requires a lot of computing power. Multiply that into the requirements of agent models. It's hard to see how a consulting firm consisting of two guys (and presumably some employees) can muster that much computing power.
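As a rough sanity check, and under my own assumptions about representation (the byte count per characteristic is mine, not theirs), the raw numbers look like this:

```python
# Back-of-envelope arithmetic for the scale claimed in the article.
agents = 15_000_000
traits = 80
bytes_per_trait = 8  # assume one 64-bit float per characteristic

state_gb = agents * traits * bytes_per_trait / 1e9
print(f"Agent state alone: {state_gb:.1f} GB")                # 9.6 GB

touches_per_pass = agents * traits
print(f"Values touched per full pass: {touches_per_pass:,}")  # 1,200,000,000
```

Merely holding the state is feasible on a serious workstation. The trouble starts when every interaction, in every Monte Carlo run, also invokes an AI model: that multiplies those 1.2 billion touches by inference costs that dwarf a simple floating-point update.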
Further, AI depends on its input. A couple of suspect quotes from the article:
“We actually deliberately want to make sure that those materials that are biased are being put into these models. They just need to be put into the model in a psychologically real way,” Lane says.
It’s hard to know what that means.
The multi-agent model developed by Lane and Shults relied on distilling more than 50 million articles from GDelt, a project that monitors “the world’s broadcast, print, and web news from nearly every corner of every country in over 100 languages.” But feeding the AI millions of articles and documents was not enough, the researchers realized. In order to fully understand what was driving the people of Northern Ireland to engage in violence against their neighbors, they would need to conduct their own research. [emphasis mine]
Ah yes, the fudge factor. The results didn’t come out right, even when we distilled those articles, so we had to add our own research. Perhaps what they did is more responsible than this, but reliable training sets have not been a characteristic of AI so far.
“And if the data doesn’t exist, we’ll go out and we’ll do our own experimentation with universities to see if there is evidence, and then we will build that into our project,” Lane says.
One success for the model is cited.
A year later, Lane wrote that the model he had developed predicted that measures introduced by Brexit—the UK’s departure from the European Union that included the introduction of a hard border in the Irish Sea between Northern Ireland and the rest of the UK—would result in a rise in paramilitary activity. Months later, the model was proved right.
The second link is to a New York Times article saying that violence increased in Northern Ireland. A coin toss might have done as well.
It’s extremely important in modeling to know what’s going on inside the model. The AI promoters have draped their output in folksy tones that suggest a reliable source. But AI has been shown to be unreliable in many ways, including making stuff up.
What this project does is complexify agent modeling far beyond anything that's been done and then dump the uncertainties of AI on top. I suspect, because the computing needs are so enormous, that a great many simplifications have been made. They're not saying what those simplifications are.
And this is going to be used to aid decision-making at the United Nations? I hope not.