https://lean-lang.org/doc/reference/latest/Basic-Types/Subty...
A crucial difference between type theory (as it's known in Lean) and set theory is that an inhabitant/element belongs to exactly one type.
Object-oriented programming languages are not that different: the types induced by classes can easily be viewed as sets. A child class is a specialized version of its parent class, hence a subtype/subset of it, if you define each set by declaring `instanceof` to be its predicate function.
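As a minimal sketch of the "set defined by a predicate" view, this is literally what a Lean subtype is (the name `EvenNat` is made up for illustration):

```lean
-- A subtype pairs a type with a predicate: an element of `EvenNat`
-- is a `Nat` together with a proof that the predicate holds of it.
abbrev EvenNat := { n : Nat // n % 2 = 0 }

-- `⟨4, rfl⟩`: the value 4 plus a proof (by computation) that 4 % 2 = 0.
def four : EvenNat := ⟨4, rfl⟩
```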
https://bertrandmeyer.com/2015/07/06/new-paper-theory-of-pro...
Technically speaking, the elements of the supertype are all distinct from the elements of the subtype and vice versa. Neither is a subset of the other, which is why it's improper to consider one a subtype of the other.
> For an arbitrary n, compute the table full of values and their proofs, and just pull out the nth proof
if you thought harder about it you'd realize what you're suggesting is impossible
No, this is what would happen _if you ran the proof_, but proofs are not meant to be run in the first place! The usual goal is proving their correctness, and for that it's enough for them to _typecheck_.
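As a minimal illustration: a universally quantified proof is checked once by the kernel and is never executed per input, so "for an arbitrary n" costs nothing at runtime:

```lean
-- One proof covering every `n`; nothing is computed per value of `n`.
-- The proof term only has to typecheck.
theorem n_plus_zero (n : Nat) : n + 0 = n := rfl
```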
> For an arbitrary n, compute the table full of values and their proofs, and just pull out the nth proof
> if you thought harder about it you'd realize what you're suggesting is impossible
1. build a table tab[n]
2. check that for every i, tab[i] == maxDollars_spec[i]

If you take the latter approach I proposed (bottom-up), there is nothing to check the totality of.
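The two steps can be sketched like this; `spec` and the table here are made-up stand-ins for the blog post's `maxDollars_spec`, and the finite check is discharged by computation with `decide`:

```lean
def spec (n : Nat) : Nat := 2 * n                 -- hypothetical spec
def tab : List Nat := (List.range 5).map (2 * ·)  -- the precomputed table

-- Step 2: every table entry agrees with the spec.
-- Bounded ∀ over Nat is decidable, so the kernel just evaluates it.
theorem tab_matches_spec : ∀ i, i < 5 → tab[i]! = spec i := by decide
```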
Well, you at least need dependent types just to state the theorem, which eliminates nearly all other languages.
let rec helperMemo : Nat → HashMap Nat Nat → Nat × HashMap Nat Nat

is a big turnoff to me. I find it annoying to parse mentally. I can do it, but I have to concentrate or it's easy to gloss over an important detail.

let rec helperMemo (n : Nat) (map : HashMap Nat Nat) : Nat × HashMap Nat Nat

This is how it would usually be written. I will update the post accordingly.

def MemoMap := HashMap Nat Nat
def MemoResult := Nat × MemoMap
let rec helperMemo : Nat → MemoMap → MemoResult

Tuples don't really indicate what I can expect from the members.
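One way to address the "tuples don't indicate what I can expect" complaint is a structure with named fields instead of `Nat × MemoMap`; a sketch (the field names are my own invention):

```lean
import Std.Data.HashMap

-- Named fields document the components, unlike an anonymous pair.
structure MemoResult where
  value : Nat                 -- the computed answer
  memo  : Std.HashMap Nat Nat -- the updated memo table

-- Call sites then read `r.value` / `r.memo` instead of `r.fst` / `r.snd`.
```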
Defn. f : X → Y is flat ⇔ O_{Y,f(x)} → O_{X,x} is flat for all x.
Then immediately I dropped that class. Turns out I like real analysis much more.

Induction is often taught as something you do with natural numbers. But it's actually something you do with sets that are defined inductively. Any time you have a set that is defined like so:
1. x is in the set.
2. for all y in the set, f(y) is in the set
(where x and f are constants), you can apply induction. The base case is that you show some property is true of x. The inductive step is that you show that when the property is true of y, it is necessarily true of f(y).

If you have multiple elements that you guarantee will be in the set, each of them must be dealt with as a base case, and if you have multiple rules generating new elements from existing ones, each of them must be dealt with as another inductive step.
For the case of the natural numbers, x is 0, and f is the successor function.
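The schema instantiated for the natural numbers is exactly Lean's induction principle; a small sketch:

```lean
-- Base element: 0. Generator: the successor function.
-- Show P 0 and P n → P (n + 1), and you get ∀ n, P n.
theorem nat_induction (P : Nat → Prop)
    (base : P 0) (step : ∀ n, P n → P (n + 1)) : ∀ n, P n := by
  intro n
  induction n with
  | zero => exact base
  | succ n ih => exact step n ih
```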
If you wanted to apply this model to the Fibonacci sequence, you could say that the tuple (0, 1, 1) is in the set [representing the idea "F_1 is 1"] and provide the generator f((a, b, c)) = (b, a+b, c+1). Then since (0, 1, 1) is in the set, so is (1, 1, 2) [or "F_2 is 1"], and then (1, 2, 3) ["F_3 is 2"], and so on.
(Or you could say that the two tuples (1, 1) and (2, 1) are both in the set, and provide the generator f( (i, x), (i+1, y) ) = (i+2, x+y). Now your elements are simpler, your generator is more complex, and your set has exactly the same structure as before.)
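The first tuple encoding can be written out directly; `fibStep` is my name for the generator described above:

```lean
-- Elements are (F_{k-1}, F_k, k); the generator advances one step.
def fibStep : Nat × Nat × Nat → Nat × Nat × Nat
  | (a, b, c) => (b, a + b, c + 1)

#eval fibStep (0, 1, 1)            -- (1, 1, 2), i.e. "F_2 is 1"
#eval fibStep (fibStep (0, 1, 1))  -- (1, 2, 3), i.e. "F_3 is 2"
```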
The approach taken by the author's "improved solution" is to define a set consisting of the elements of the memoization table, with the generator being the function chain that adds elements to the table. He annotates the type of the elements to note that they must be correct (this annotation is administrative, just making it easy to describe what it is that we want to prove), and then does a proof over the addition operation that this correctness is preserved (the inductive step!).
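The "annotate the element type with correctness" idea can be sketched with a subtype; `spec` here is a hypothetical stand-in for the real specification in the linked code:

```lean
-- Hypothetical stand-in for the post's maxDollars_spec.
def spec (n : Nat) : Nat := n

-- A table entry for key `k` is a value plus a proof it matches the spec;
-- the annotation just states what we want to prove about stored values.
abbrev CorrectEntry (k : Nat) := { v : Nat // v = spec k }

-- Inserting an entry requires supplying that proof (the inductive step).
def entryFor (k : Nat) : CorrectEntry k := ⟨spec k, rfl⟩
```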
https://live.lean-lang.org/#codez=JYWwDg9gTgLgBAZRgEwHQBECGN...
Doing the proof inside the algorithm (i.e. doing inline induction over the algorithm recursion structure) has the advantage that the branching structure doesn't have to be duplicated as in an external proof like the one I did.
In my proof I didn't struggle so much with induction, but much more with basic Lean stuff, such as not getting lost in the number of variables, dealing with `r.fst`/`r.snd` vs `r = (fst, snd)`, and the confusing difference between `.get? k` and `[k]?`.
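For what it's worth, the `r.fst`/`r.snd` vs destructuring friction is smaller than it looks, since structure eta makes the two forms definitionally equal; a tiny sketch:

```lean
def r : Nat × Nat := (3, 4)

-- Projections reduce by computation.
example : r.fst = 3 := rfl

-- Structure eta: a pair equals the pair of its projections, by rfl.
example : r = (r.fst, r.snd) := rfl
```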