Primary differentiators in AI predictions

Though I am no great fan of AI or its massively over-hyped potential, I also do not think it's useless. As Molly White put it:

When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can't do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial.

I wholeheartedly agree with those claims, and don't want to get into the specifics of them too much. Instead, I wanted to think out loud/write about why there's such a wide range of expectations and opinions on the current and future states of AI.

To get the easy one out of the way: Many of the most effusive AI hype people are in it for the money. They're raising venture capital by saying AI, they're trying to get brought in as consultants on AI, or they're trying to sell their AI product to businesses and consumers. I don't think that's a particularly new phenomenon when it comes to new technology, though perhaps there is some novelty in how many different ways people are attempting to get their slice of the cake (companies cooking up AI models, apps trying to sell AI generation to consumers, hardware and cloud providers selling the compute necessary to do all of the above, etc.).

But once we set the pure profit motive aside, there are, I think, two key areas of difference between people who believe in AI wholeheartedly and those who are neutral to critical.

The first is software development experience. Those who understand what it actually means when people say "AI is thinking" tend to have a more pessimistic view of the ceiling of current AI generation strategies. In a nutshell, all of the current generative models ingest as much as possible of whatever kind of content they're going to be asked to output. Then they are given a "prompt," and they (in simplistic terms) try to piece together the image/string of words/video that looks most likely based on what came before.

This is why these models "hallucinate" - they don't "know" anything in the way you know that Washington, DC is the capital of the United States. They just know that when a sentence starts "The capital of the United States is" it usually ends with the words "Washington, DC."
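To make that mechanism concrete, here's a toy sketch - my own illustration, not how any production model actually works - of completion-by-likelihood: count which word follows each two-word context in some training text, then "complete" a prompt by repeatedly appending the statistically likeliest next word. The corpus and function names here are made up for the example.

```python
from collections import Counter, defaultdict

# Toy "training data": the only knowledge the model will ever have.
corpus = [
    "the capital of the united states is washington dc",
    "the capital of france is paris",
    "the capital of japan is tokyo",
]

# Count which word follows each context (here, the previous two words).
# Real models use learned weights over huge contexts, not raw counts,
# but the principle - predict the likeliest continuation - is the same.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        follows[context][words[i + 2]] += 1

def complete(prompt, max_words=6):
    words = prompt.lower().split()
    for _ in range(max_words):
        context = (words[-2], words[-1])
        if context not in follows:
            break  # no pattern seen; a bigger model would guess anyway
        # Append the statistically likeliest next word - no notion of truth.
        words.append(follows[context].most_common(1)[0][0])
    return " ".join(words)

print(complete("the capital of the united states is"))
# -> "the capital of the united states is washington dc"
```

The toy model gets the capital "right" only because that string appears in its training data; it has no facts to consult, just frequencies, which is exactly why unfamiliar prompts produce confident nonsense.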

And that can be useful in some instances! It's why AI does very well on low-level coding tasks - a lot of the basics of programming are repetitive and pattern-based, so an expert pattern-matcher can do fairly well at guessing the most likely outcome. But it's also why AI developer assistants produce stupid mistakes: they don't "understand" the syntax or the language or even the problem statement as a fundamental unit of knowledge. They simply read a string of text and try to figure out what would most likely come next.

The other thing you learn from experience is edge cases, and specifically what doesn't work. That type of knowledge tends to accumulate only through having worked on a product before and understanding how different pieces come together (or don't). AI lacks this awareness of context, focusing only on what immediately surrounds the section it's working on.

But the other primary differentiator applies to the layperson, who can best be understood as a consumer, and it can be condensed to a single word: Taste.

I'm reminded of a quote from Ira Glass I heard on some podcast:

... all of us who do creative work … we get into it because we have good taste. But it’s like there’s a gap, that for the first couple years that you’re making stuff, what you’re making isn’t so good, OK? It’s not that great. It’s really not that great. It’s trying to be good, it has ambition to be good, but it’s not quite that good. But your taste — the thing that got you into the game — your taste is still killer, and your taste is good enough that you can tell that what you’re making is kind of a disappointment to you ...

I think this is true, and I think it's the biggest differentiator between people who think what AI is capable of right now is perfectly fine and those who think it'll all wind up being a waste of time. People who can't or are unwilling to create text/images/videos on their own think that AI is a great shortcut, either because the quality of what the AI can produce is better than what they can do unassisted, or because they don't have the taste to see the difference in the first place.

I don't know that there's a way to bridge that gap, any more than there's a way to reach people who think that criticism of any artform is "unfair" or that "well, could you do any better?" is a valid counterpoint to cultural criticism. There are simply those people whose taste is better than anything that can be created through an amalgamation of the data used to train a model, and those who think a simulacrum of art is indistinguishable from (or better than) the real thing.

It's amazing how short my attention span for new fads is anymore. I don't want to blame Trump for this one, but my eagerness to ignore any news story he was involved in significantly accelerated the decline of my willingness to cognitively engage with the topic du jour.

#AI