> I’m hesitant to even call this moving the goal posts. Intelligence has never been solidly defined even within humans (see: IQ debate; book smart vs street smart; idiot savants).
> It’s unsurprising that creating machines that seem to do some stuff very intelligently and some other things not very intelligently at all is causing some discontent with regard to our language.
I think I agree about the language.
I don’t have a definition of intelligence. I don’t work in one of those fields that would need to define it, so my first attempt probably wouldn’t be very good, but I’d say intelligence isn’t a single thing; it’s a label we’ve arbitrarily applied to a bunch of behaviors that are loosely related at best. So, trying to say this thing is intelligent and that thing is not is basically hopeless, especially when things we don’t believe are intelligent are being made to exhibit those behaviors, one behavior at a time.
> I see a whole lot more gnashing of teeth about goalposts moving than I do about people proposing actual solid goalposts.
I might not see a ton of explicit “here are the goalposts” type statements. But every time someone says “I’m using the term AI, but actually of course this isn’t intelligence,” they seem, to me at least, to be referencing some implicit goalposts. If there isn’t a way of classifying what is or isn’t intelligent, how can they say something isn’t it? I think the people making the distinction have the responsibility to tell us where they’ve drawn the cutoff.
Maybe I’m just quibbling. Now that I’ve written all that out, I’m beginning to wonder if I just don’t like the wording of the disclaimer. I’d probably be satisfied if, instead of “this isn’t intelligence, but I’m going to call it AI,” people would say “intelligence is too hard to define, so I’m going to call this AI, because why not?”