
AI and Me


[This post is an expansion of a tumblr reply I posted yesterday. Thanks to Murc for suggesting I expand and republish it here. This may also, finally, light a fire under me to finish the AI-focused Political History of the Future essay I’ve been contemplating for ages.]

The companies that make AI—which is, to establish our terms right at the outset, large language models that generate text or images in response to natural language queries—have a problem. Their product is dubiously legal, prohibitively expensive (which is to say, it has the kind of power and water requirements that are currently being treated as externalities and passed along to the general populace, but which in a civilized society would lead to these companies’ CEOs being dragged out into the street by an angry mob), and it objectively does not work. All of these problems are essentially intractable. Representatives of AI companies have themselves admitted that if they paid fair royalties to all the artists whose work they’ve scraped and stolen in order to get their models working, they’d be financially unviable. The energy requirements for running even a simple AI-powered Google query are so prohibitive that Sam Altman has now started to pretend that he can build nuclear fusion reactors in order to solve the problem he and others like him have created. And the dreaded “hallucination” problem in AI-generated text and images is an inherent attribute of the technology. Even in cases where there are legitimate, useful applications for AI—apparently, if you provide a model with a specific set of sources, it can produce accurate summaries, which has its uses in various industries—there remains the question of whether this is a cost-effective tool once its users actually have to start paying for it (and whether it is even remotely ethically justifiable given the technology’s environmental cost).

The solution the AI companies have come up with to this problem is essentially fake it until you make it. Insist, loudly and repeatedly, that AI is “inevitable”, that anyone who resists it is standing in the path of technological progress, no different from anyone who futilely resisted the automation of their labor in the past. That non-technology industries are falling for this spin is perhaps unsurprising—they are motivated, obviously, by the dream of dumping those pesky human employees and freelancers and replacing them with cheap and uncomplaining machines (though, again, I must stress that if AI were priced realistically—and if water and energy for server farms were sanely priced—there is no AI tool that would be cheaper than a human doing the same job). What’s more interesting is that other Silicon Valley companies are doing the same, even though, again, the result is almost always to make their products worse. Google has essentially broken its key product, and Microsoft is threatening to spy on all its users and steal their data, all because a bunch of CEOs have been incepted into the idea that this technology is the future and they cannot afford to be left behind. (This desperation must be understood, of course, in the context of a Silicon Valley that hasn’t come up with a new killer app that genuinely revolutionizes users’ lives since maybe as far back as the smartphone, and where advances in screens, cameras, disk sizes, and computing power have plateaued to the point that no one feels the need to upgrade their devices every year.)

One expression of this industry-wide FOMO is the way that the term AI has started being used to describe any algorithmic tool with even the vaguest connection to image or text (and sometimes, not even that). Anything from photo correction tools like red-eye elimination, to graphics programs that can recognize specific image elements and manipulate them, to the CGI tools used, for example, in George Miller’s Furiosa, is now being slapped with the AI label, presumably in order to make it look cutting edge. This is a particularly insidious form of normalization, because it seems to be rising organically from companies that are not in the AI business. And because it contributes to the slippage around the term “AI” itself, which, on the one hand, borrows significance from its original use in science fiction to describe machines with consciousness and human-equivalent intelligence, like HAL, Data, or Skynet, which the actual technology does not, in any way, resemble, and, on the other hand, is a complete misnomer—there is no “intelligence” involved in these processes, and they are only “artificial” if you ignore the legions of third-world data taggers who made their stolen datasets usable. But if we can call any algorithmic or machine learning tool “AI”, then we can avoid a conversation about how the specific technology that got crowned with that title around two years ago has very severe drawbacks and limitations.

I got a first-hand look at this phenomenon just recently, at an all-hands, state-of-the-company talk for my employer. The VP of R&D announced that we would be looking into incorporating AI into our product. This raised a lot of eyebrows among us engineers, since our products are high-speed network routers that we sell to telecoms and other internet service providers. Not only do they have no use for text or image generation, but it’s also unlikely that any randomly deployed device will have access to a service like ChatGPT (just because a device passes network traffic doesn’t mean it has access to the open internet; your home Wi-Fi router doesn’t necessarily have the ability, or the correct configuration, to surf the web). Not to mention that our customers, who are increasingly security-conscious, would probably have an issue with us passing their customer data to a third-party server on the open net. And since these are all firmware-based devices, we don’t have the spare memory or computing capabilities to run our own LLM engine on each device.

When the VP elaborated, it turned out he was talking about some kind of fancy algorithm to evaluate network load and adjust capacity dynamically. Currently, we offer fairly sophisticated tools that allow customers to designate specific bandwidth channels for each of their own customers, and to prioritize different types of traffic (for example, if you’re a telecom, voice traffic is always prioritized over data traffic, because a dropped packet in voice registers in a way that a dropped data packet doesn’t). But this all has to be configured ahead of time, and the idea with this new tool is to analyze the traffic in real time and respond to it without operator input. They’ve apparently brought in some researchers from academia to develop the algorithm and see whether it can be applied to our product—the open question is whether it can be translated into code that runs efficiently on our devices while handling the extremely high bandwidth loads they are built to process. But even at this early stage, it sounds really neat. I wouldn’t be surprised if there’s a machine learning component to the technology, but it’s not what “AI”, as the term is currently being used, actually means. It’s honestly a bit sad to me that if we do get this kind of genuinely exciting, innovative tool off the ground, we’ll have to slap the name of an environmentally wasteful plagiarism engine on it to get customers interested.
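
To make the distinction concrete, here is a deliberately toy sketch, in C, of the difference between “configured ahead of time” and “adjusted on the fly”. Every name and number in it is invented for illustration; it is not our product code, and it has nothing to do with the researchers’ actual algorithm. Each channel keeps its operator-configured reservation, and a periodic rebalancing step hands unused headroom to over-subscribed channels in priority order.

```c
#include <stdio.h>

#define NCHANNELS 3

struct channel {
    const char *name;      /* traffic class or customer channel       */
    int         priority;  /* lower number = served first (voice = 0) */
    double      reserved;  /* operator-configured guarantee, Mbit/s   */
    double      observed;  /* measured demand this interval, Mbit/s   */
    double      allocated; /* resulting dynamic allocation, Mbit/s    */
};

/* Toy illustration only: give every channel the smaller of its demand and
 * its reservation, then hand the remaining capacity to over-subscribed
 * channels in priority order. Not our shipping code, and not the
 * researchers' algorithm. */
static void rebalance(struct channel ch[], int n, double link_capacity)
{
    double spare = link_capacity;

    for (int i = 0; i < n; i++) {
        double base = ch[i].observed < ch[i].reserved ? ch[i].observed
                                                      : ch[i].reserved;
        ch[i].allocated = base;
        spare -= base;
    }
    for (int p = 0; p < n && spare > 0.0; p++) {
        for (int i = 0; i < n; i++) {
            if (ch[i].priority != p)
                continue;
            double want = ch[i].observed - ch[i].allocated;
            if (want <= 0.0)
                continue;
            double give = want < spare ? want : spare;
            ch[i].allocated += give;
            spare -= give;
        }
    }
}

int main(void)
{
    /* Invented channels: voice under-uses its slice, video and data want
     * more than they booked, and the link is 600 Mbit/s in total. */
    struct channel ch[NCHANNELS] = {
        { "voice", 0, 100.0,  40.0, 0.0 },
        { "video", 1, 200.0, 350.0, 0.0 },
        { "data",  2, 300.0, 500.0, 0.0 },
    };

    rebalance(ch, NCHANNELS, 600.0);

    for (int i = 0; i < NCHANNELS; i++)
        printf("%-5s reserved %6.1f  demand %6.1f  allocated %6.1f\n",
               ch[i].name, ch[i].reserved, ch[i].observed, ch[i].allocated);
    return 0;
}
```

The hard part, of course, is doing something along these lines per port, at line rate, on firmware with very little memory to spare, which is exactly the part that remains to be seen.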

(Another suggested application raised in this talk was incorporating LLMs into coding. Apparently there’s a team working on tools for that, though they haven’t presented them to us yet. Personally, I’m dubious. Most of my work involves plugging into preexisting code, some of it decades old. It requires understanding systems, and the coding part is actually the smallest aspect of it. And while developing features from scratch does include an element of repetition that a text generator might be able to help with, I already have tools that do some of that work, generating, for example, the code for new user commands from a definition document. But those tools are called compilers—the broadest definition of “compiler” is any program that translates code from one computer language into another—and I find it hard to believe that an AI-based tool would offer a significant saving in time compared to them.)
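
For the curious, the kind of generator I mean is roughly this trivial sketch in C. The command names, the cli_ctx_t type, and the CLI_OK constant are all invented for the example; our real definition files and toolchain look nothing like it. The point is only that the output is deterministic boilerplate, stamped out from a definition rather than predicted by a language model.

```c
/* Toy sketch of a definition-driven code generator: read command
 * definitions (hard-coded here; parsed from a definition file in real
 * life) and print the boilerplate handler stubs for them. All names
 * (cli_ctx_t, CLI_OK, the commands themselves) are invented. */
#include <stdio.h>

struct cmd_def {
    const char *name;  /* CLI command name            */
    const char *args;  /* argument signature, as text */
    const char *help;  /* one-line help string        */
};

static const struct cmd_def defs[] = {
    { "show_counters", "<port-id>", "Display traffic counters for a port" },
    { "clear_alarms",  "",          "Clear all active alarms" },
};

static void emit_stub(const struct cmd_def *d)
{
    printf("/* auto-generated, do not edit */\n");
    printf("int cmd_%s(cli_ctx_t *ctx, const char *args)\n", d->name);
    printf("{\n");
    printf("    /* TODO: implement \"%s %s\": %s */\n",
           d->name, d->args, d->help);
    printf("    return CLI_OK;\n");
    printf("}\n\n");
}

int main(void)
{
    for (size_t i = 0; i < sizeof defs / sizeof defs[0]; i++)
        emit_stub(&defs[i]);
    return 0;
}
```

Whether an LLM can meaningfully improve on something that already runs in a fraction of a second and never invents a function signature is the part I remain dubious about.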

What this means is that it is up to all of us to fight back against the induced semantic slippage that the AI industry is trying to get us to accept. When someone calls a tool “AI”, you should ask them what they mean by that. When they insist that a tool that has existed for a decade or more is the same thing as ChatGPT or Midjourney, you should push back. When they tell you that a new product has AI in it, you should expect them to be able to explain exactly what that means, and if they can’t, or give a nonsensical answer, you should point and laugh at them. AI, as the term is currently being used in the tech industry, means a specific thing, and that thing has very few viable use cases and a great many drawbacks. We shouldn’t allow the desperation of its creators, or the gullibility of other business leaders and politicians, or the vagaries of language, to obscure these facts.

In August 2024, Briardene Books will publish my first collection of reviews, Track Changes: Selected Reviews. The collection is available for pre-order, in paperback and ebook, at the Briardene shop, and will be launched at the 2024 Worldcon in Glasgow, Scotland.