Sorry for the lag. Productive day yesterday and today my friendly neighbourhood
rock band was in a great mood early in the bloody morning.
>> No, we can't even do that. (...)
OK, well, I'm very confused. I thought our disagreement was about whether our brains
actually compute kinematic equations, or arrive at the same results by some
other means. It feels to me like we're arguing the same corner but we don't have
a common language.
>> No, I don't think you do. (...)
"I can't put my finger on it, but I know it when I see it". My claim is that
there is a difference between tacit knowledge, and articulable knowledge. I can
not articulate the knowledge I have of how I am catching a ball; but I certainly
know how I catch a ball, otherwise I wouldn't be able to do it. In machine
learning, we replace explicit, articulable knowledge with examples that
represent our tacit knowledge. I might not be able to manually define the
relation betwen a set of pixels and a class of objects that might be found in a
picture, but I can point to a picture that includes an image of a certain class
and label it, with the class. And so can everyone else, and that's how we get
tons of labelled examples to train image classifiers with, without having to
know how to hand-code an image classifier.
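To make that concrete, here's a minimal sketch of the idea: a 1-nearest-neighbour "classifier" that is specified entirely by labelled examples, with no hand-coded rules relating features to classes. The feature vectors and labels are toy, hypothetical data of my own invention:

```python
def predict(examples, query):
    """Return the label of the labelled example closest to `query`.
    No rule relating features to classes is ever written down;
    the examples *are* the knowledge."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], query))[1]

# Tacit knowledge, encoded as (feature_vector, label) pairs rather than rules:
labelled = [((0.9, 0.1), "cat"), ((0.1, 0.9), "dog"), ((0.8, 0.2), "cat")]
print(predict(labelled, (0.85, 0.15)))  # nearest example is a "cat"
```

The point isn't the algorithm (anything from k-NN to a deep net works the same way here); it's that the labeller never had to articulate *why* those pixels are a cat.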
Here's a little thing I'm working on. Assume that, in order to learn any concept,
we need two things: some inductive bias, i.e. background knowledge of the relevant
concepts; and "forward knowledge" of the target concept. In statistical machine
learning the inductive bias comes in the form of neural net architectures,
function kernels, Bayesian priors etc., and the knowledge of the target concept
comes in the form of labelled examples. Now, there are four learning settings;
tabulating:
Background   Target    Error
----------   -------   --------
Known        Known     Low
Known        Unknown   Moderate
Unknown      Known     Moderate
Unknown      Unknown   High
Where "Error" is the error of a learned hypothesis with respect to the target
theory. In the first setting, where we have knowledge of both the background and
the target, and the error is low, we're not even learning anything: just
calculating. We can equally well match the first three settings to deductive,
inductive, and abductive reasoning. You can also replace "known" and "unknown"
with "certain" and "uncertain".
Now, I'd say that the invention of the kinematic equations by which we can model the
way we move our hands to catch balls etc. is in the setting where the background
theory and the target are both known: the background being our theory of
mathematics, and the target being some observations about the behaviour of humans
catching balls. I don't know if the kinematic equations you speak of were
really derived from such observations, but they could have been. Humans are very
good at modelling the world in this way.
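For a concrete flavour of that known/known setting: with Newtonian mechanics as the background theory, predicting where a thrown ball lands is pure calculation, not learning. A sketch (drag ignored, launch from ground level; the function name is my own):

```python
import math

def landing_distance(v0, theta_deg, g=9.81):
    """Horizontal range of a projectile launched from ground level,
    ignoring air resistance: d = v0^2 * sin(2*theta) / g."""
    theta = math.radians(theta_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# Background theory (mechanics) + known target (where the ball goes)
# = low error, no learning required:
print(round(landing_distance(10.0, 45.0), 2))  # ~10.19 m, the 45-degree maximum
```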
We're in deep trouble when we're in the last setting, where we have no idea of
the right background theory nor the target theory. And that's not a problem
solved by machine learning. We only make progress in that kind of problem very
slowly, with the scientific method, and it can take us thousands of years,
during which we're stuck with bad models. For fifteen centuries the model is
epicycles, until we get the laws of planetary motion and universal gravitation.
And, suddenly, there are no more epicycles.
This also addresses your earlier comment about betting against a scientific
upheaval in the science of computation.
Cool machine, btw, in that video. So you're a roboticist? I work on machine
learning of autonomous behaviour for mobile robotics.