> Let's think and communicate more clearly regarding intelligence. Stuart Russell offers a nice definition: an agent's ability to do a defined task.
Maybe something about my comment got you riled up? What was it?
You wrote:
> What you're doing, with this whole "let's make a bullshit word logical" is more similar to medieval scholasticism, which was a vain attempt at verbal precision.
Again, I'm not quite sure what to say. You suggest my comment is like a medieval scholar trying to reconcile dogma with philosophy? Wow. That's an uncharitable reading of my comment.
I have five points in response. First, the word intelligence need not be a "bullshit word", though I'm not sure what you mean by the term. One of my favorite definitions of bullshitting comes from "On Bullshit" by Harry Frankfurt:
> Frankfurt determines that bullshit is speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn't care whether what they say is true or false. - Wikipedia
Second, I'm trying to clarify the term intelligence by breaking it into parts. I wouldn't say I'm trying to make it "logical" (in the sense of being about logic or deduction). Maybe you mean "formal"?
Third, regarding the "what you're doing" part... this isn't just me. Many people both clarify the concept of intelligence and explain why doing so is important.
Fourth, are you saying it is impossible to clarify the meaning of intelligence? Why? Not worth the trouble?
Fifth, have you thought about a definition of intelligence that you think is sensible? Does your definition steer people away from confusion?
You also wrote:
> We never would have been able to create science, if it weren't for focusing on the kinds of thinking that can be made logical.
I think you mean _testable_, not _logical_. Yes, we agree, scientists should run experiments on things that can be tested.
Russell's definition of intelligence is testable by defining a task and a quality metric. This is already a big step up from an unexamined view of intelligence, which often carries some arbitrary threshold.* It lets us see a continuum from, say, how a bacterium finds food, to how ants collaborate, to how people both build and use tools to solve problems. It also teases apart sentience and moral worth so we don't mix them up with intelligence. These are simple, doable, and worthwhile clarifications.
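To make "task and quality metric" concrete, here is a minimal sketch in Python. The names (`exact_match_metric`, `score_agent`) are my own illustrations, not from Russell's paper; the point is only that once you fix a task and a metric, "how intelligent is this agent at this task?" becomes a number you can measure.

```python
def exact_match_metric(prediction: str, target: str) -> float:
    """Quality metric: 1.0 if the agent's output matches the target, else 0.0."""
    return 1.0 if prediction.strip() == target.strip() else 0.0

def score_agent(agent, task_inputs, task_targets, metric) -> float:
    """Average metric over a task's test cases -- a testable number,
    not a vague claim about 'intelligence'."""
    scores = [metric(agent(x), t) for x, t in zip(task_inputs, task_targets)]
    return sum(scores) / len(scores)

# Example: a trivial 'agent' that uppercases text, scored on an uppercasing task.
agent = str.upper
inputs = ["cat", "dog"]
targets = ["CAT", "DOG"]
print(score_agent(agent, inputs, targets, exact_match_metric))  # 1.0
```

Swap in a different task or metric and the same agent may score poorly, which is exactly the contextual, task-relative point.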
Finally, I read your quote from Dijkstra. In my reading, Dijkstra's main point is that natural language is a poor programming interface due to its ambiguity. Ok, fair. But what is the connection to this thread? Does it undercut any of my arguments? How?
* A common problem when discussing intelligence is moving the goalposts: whatever quality bar is implied tends to creep upward over time.
I reacted negatively to the earlier idea that agency should be considered an aspect of intelligence. Separating the two concepts helps me better understand people, their unique strengths, and puzzles such as why people who aren't geniuses -- who don't know everything or mentally rotate complex shapes -- are sometimes very successful. Most importantly, it helps explain why LLMs continue to feel like they're lacking something compared to people, even though they're so outrageously intelligent. It's one thing to be smart, another thing entirely to be useful.
In the hopes of clarifying any misunderstandings of what I mean... I said "agent" in Russell's sense -- a system with goals that has sensors and actuators in some environment. This is a common definition in CS and robotics. (I tend to shy away from using the word "agency" because sometimes it brings along meaning I'm not intending. For example, to many, the word "agency" suggests free will combined with the ability to do something with it.)
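A toy example of the agent framing, in case it helps. Everything here (the `Thermostat` class, its names) is my own illustration: a system with a goal (a setpoint), a sensor (a temperature reading), and actuators (heater on/off), embedded in an environment loop. No free will required.

```python
class Thermostat:
    """A minimal goal-directed agent: keep temperature near a setpoint."""
    def __init__(self, setpoint: float):
        self.setpoint = setpoint  # the goal

    def act(self, percept: float) -> str:
        # Sensor reading in, actuator command out.
        if percept < self.setpoint - 1.0:
            return "heat_on"
        if percept > self.setpoint + 1.0:
            return "heat_off"
        return "idle"

# Environment loop: the agent senses, acts, and the world changes.
temp = 15.0
agent = Thermostat(setpoint=20.0)
for _ in range(10):
    action = agent.act(temp)
    temp += 1.0 if action == "heat_on" else -0.2
```

Nobody would call a thermostat sentient, yet it fits the agent definition perfectly, which is why the definition is useful for keeping those concepts apart.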
I recommend Russell to anyone willing to give him a try. I selected part of his writing that explains why his definition is important to his goals. From page 2 of https://people.eecs.berkeley.edu/~russell/papers/aij-cnt.pdf
> My own motivation for studying AI is to create and understand intelligence as a general property of systems, rather than as a specific attribute of humans. I believe this to be an appropriate goal for the field as a whole...
To say my point a different way, intelligence is contextual. I'm not using "contextual" as some sort of vague excuse to avoid getting into the details. I'm not saying that intelligence cannot be quantified at all. Quite the opposite. Intelligence can be quantified fairly well (in the statistical sense) once a person specifies what they are talking about. Like Russell, I'm saying intelligence is multifaceted and depends on the agent (what sensors it has, what actuators it has), the environment, and the goal.
So what language would I use instead? Rather than speaking about "intelligence" as one thing that people understand and agree on, I would point to task- and goal-specific metrics. How well does a particular LLM do on the GRE? The LSAT?
Sooner or later, people will want to generalize over the specifics. This is where statistical reasoning comes in. With enough evaluations, we can start to discuss generalizations in a way that can be backed up with data. For example, we might say things like "LLM X demonstrates high competence on text summarization tasks, provided that it has been pretrained on the relevant concepts" or "LLM Y struggles to discuss normative philosophical issues without falling into sycophancy, unless extensive prompt engineering protocols are used".
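Here's a rough sketch of what "backed up with data" can look like: aggregate many task-level scores into a mean with a simple standard-error band, instead of making a bare claim. The scores below are made up for illustration.

```python
import statistics

def summarize(scores):
    """Mean and approximate 95% interval for a list of task scores."""
    mean = statistics.mean(scores)
    se = statistics.stdev(scores) / len(scores) ** 0.5
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical per-benchmark scores for one model on summarization tasks.
summarization_scores = [0.82, 0.79, 0.88, 0.85, 0.81]
mean, (lo, hi) = summarize(summarization_scores)
print(f"summarization: {mean:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Five scores is far too few for a serious claim, of course; the point is that the generalization ("competent at summarization") is now a statistical statement about measurements rather than an appeal to a single, fuzzy notion of intelligence.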
I think it helps to remember this: if someone asks "Is X intelligent?", one has the option to reframe the question. One can use it as an opportunity to clarify and teach and get into a substantive conversation. The alternative is suboptimal. But alas, some people demand short answers to poorly framed questions. Unfortunately, the answers they get won't help them.