> One of the reasons Cantwell Smith believes that our oversimplified assumptions about ontology can explain the failure of the symbolic logic employed by what’s sometimes called Good Old-Fashioned AI, or GOFAI, is that more recent, and more successful, approaches to AI don’t depend on this kind of symbolic reasoning.
This puts forth a false dichotomy. The fact that classical first-order logic assumes the rules are perfect does not invalidate all symbolic approaches. There are several kinds of modal logic that can manage imperfect rules and imperfect, or subjective, knowledge about the world, even before reaching for fuzzy or probabilistic logic.
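To make the fuzzy/probabilistic point concrete, here is a minimal sketch of how fuzzy logic weakens the "rules are perfect" assumption: truth values live in [0, 1] rather than {True, False}, and a rule can itself hold only to a degree. The predicates and degrees below are hypothetical, chosen purely for illustration.

```python
# Fuzzy connectives using the standard min/max (Gödel) semantics.
def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

# An imperfect rule: "birds fly" holds only to degree 0.9.
is_bird = 1.0          # Tweety is definitely a bird
birds_fly = 0.9        # the rule itself is not fully reliable
flies = f_and(is_bird, birds_fly)   # degree to which "Tweety flies" holds
print(flies)  # 0.9
```

Classical logic is recovered as the special case where every value is exactly 0.0 or 1.0, which is one way of seeing why "the rules are imperfect" doesn't push us outside the symbolic family.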
In general, everything that can be talked about by humans should, in principle, be considered within the reach of symbolic approaches. That's tautological, I know, but people seemingly have difficulty seeing the forest for the trees, restricting their mental model of symbolic logic to a '70s brand of Prolog / Horn-clause-based predicate logic.
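For readers whose mental model of symbolic AI is exactly that '70s brand, here is a toy propositional Horn-clause engine using forward chaining. The facts and rule names are made up for illustration; it also happens to demonstrate the classic brittleness being discussed, since the rigid rule happily concludes that penguins fly.

```python
# Each rule is (body, head): if every proposition in `body` is a known
# fact, then `head` may be derived.
rules = [
    ({"bird", "healthy"}, "can_fly"),   # bird & healthy -> can_fly
    ({"penguin"}, "bird"),              # penguin -> bird
    ({"can_fly"}, "can_travel"),        # can_fly -> can_travel
]

def forward_chain(facts, rules):
    """Repeatedly fire rules until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

print(forward_chain({"penguin", "healthy"}, rules))
# derives "bird", then "can_fly", then "can_travel"
```

Modal, fuzzy, or probabilistic variants keep this same symbolic skeleton while relaxing the assumption that each rule is exceptionless.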
> everything that can be talked about by humans, in principle, should be considered
> within the reach of symbolic approaches
This doesn’t seem obvious to me; do you have any references that support this idea?

If you read War and Peace, and I read War and Peace, even if our experiences of the reading differ slightly, it will be because of how we react to it and which passages we pay more attention to; we will generally agree on what Tolstoy wanted to communicate, not only on the words themselves, but on the world he constructed and the implications of it that are not explicitly described.
Another way to look at it: we don’t have any evidence that we need more than a symbolic approach to replicate what we do with language. And we have built a whole civilization on written education, which is based on symbols.
The article gives example after example of how we repeatedly attempt to make classical AI work and it fails. Its overall goal seems to be to claim that ML is the correct path because hard logic-based AI is too complex and will never work.
I think this is a bad view because it doesn't acknowledge that both are necessary.
It is painfully obvious that humans have an internal dialogue, and that the subconscious also engages in a sort of thought process involving tokens.
There are clearly many possibilities that we consider as people. These possibilities follow something close to logic, but heavily influenced by notions that have been built through a non-logical process (our "beliefs").
What is hopelessly inadequate is pure neural networks and/or trained ML on their own. To ever approach anything similar to human thought, we will have to use tokens.
Doing that is very difficult but not insanely so. Using the word insane implies that such a process is not guided by some predetermined logical system. I agree it is not fully logical, but not that it is insane.
The article is too dismissive of classical AI and seems to imply that those pursuing it are wasting our time.
This is the most common way for humans to experience thought, but it is by no means necessary. [1] Weirder still, we know that only half of a brain is needed to operate a human body plus consciousness, since people with hemispherectomies exist and are basically the same as before their operations.
I think you could roughly characterize capsule networks [2] as approaching something like the communication between sub networks you described.
Honestly, I don't think you're wrong with respect to token passing, but I suspect those tokens are actually compressions of sub-network data rather than expressions themselves.
[1] https://www.dazeddigital.com/science-tech/article/44494/1/li...
Information I've seen/heard/read indicates that people are able to understand language and have an area of the mind dedicated to it from very early in life. The same is true for other sorts of behavior.
This to me indicates that DNA has hardcoded "instructions" on how to configure a mind to be able to process tokenized information from the start.
It is entirely possible that every section of the mind is just a huge free-form FPGA that is initialized by DNA, but I think it is more hardwired than that.
The Game of Life has certain structures that can replicate themselves or consume other pieces, though, so it is entirely possible that a certain initialization leads to things that appear to be hardwired structure.
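As a concrete illustration of how uniform local rules can produce what looks like built-in structure, here is a minimal sketch of a Game of Life update step applied to a glider, a pattern that persists and propagates even though nothing in the rules mentions it:

```python
from collections import Counter

def step(cells):
    """cells: set of (x, y) live coordinates; returns the next generation."""
    # Count live neighbors for every cell adjacent to a live cell.
    neigh = Counter((x + dx, y + dy)
                    for x, y in cells
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    # A cell is alive next step if it has 3 neighbors, or 2 and was alive.
    return {c for c, n in neigh.items()
            if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# After 4 steps the glider reappears, translated diagonally by (1, 1).
print(g == {(x + 1, y + 1) for x, y in glider})  # True
```

The glider isn't hardwired anywhere in `step`; it is an emergent consequence of the initialization, which is the point being made about DNA and the brain.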
I agree that the mind is distributed, and I have also seen the split-brain experiments, where people continued to function after the connection between the brain's halves was severed.
My thought on it is that there are many subconscious thought streams that we are normally unaware of. People who become aware of them are thought to be crazy, but really I think the normal blocks from their conscious reasoning are just weak.
I also agree that some people don't have the same "internal dialogue", and that the stream of tokens is not equal to the words we use to communicate with others. I don't think that invalidates the idea that there is some sort of token stream, though.
The question I've always had is "how much of consciousness shuts down when you sleep" and/or "does staying awake longer mean your conscious thought process gains more access to the rest of your mind". My speculation is that the latter is true, because if you stay awake too long you will appear to be "crazy".