My end goal is to use it as one phase of IR for a functional language compiler.
While I cannot find the exact quote from Martin Odersky[0], I do believe he once said something along the lines of, "it takes about ten years to make a complete typed language."
If anyone also recalls this and has a link to the quote, I would much appreciate the pointer to it.
So is there a technical reason why education and implementations always intertwine it with a larger client language, rather than treating it as a complete, self-contained entity in its own right? Or is that lack of decomposition just an oversight?
With only a logic, you can't prove properties of the language. In particular, we like to show that well-typed programs avoid certain errors (like segfaults), or that if an expression (`log2(length(s))`) has a type (`int`), then the value it evaluates to (`4`) has that same type. If you want such (nice!) properties, you need to consider the type system and the language together.
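To make that preservation property concrete, here is a minimal sketch in Python of a toy expression language with a checker and an evaluator. The language, and the names `typeof`, `eval_`, and `typeof_value`, are all made up for illustration:

```python
# A toy language with string/int literals, length, and log2, mirroring
# the log2(length(s)) example above. All names here are illustrative.
import math

def typeof(e):
    """Static types: defined over the same terms the evaluator runs."""
    tag = e[0]
    if tag == "str":
        return "string"
    if tag == "int":
        return "int"
    if tag == "length":
        assert typeof(e[1]) == "string"
        return "int"
    if tag == "log2":
        assert typeof(e[1]) == "int"
        return "int"
    raise TypeError(f"ill-typed term: {e!r}")

def eval_(e):
    tag = e[0]
    if tag in ("str", "int"):
        return e[1]
    if tag == "length":
        return len(eval_(e[1]))
    if tag == "log2":
        return int(math.log2(eval_(e[1])))

def typeof_value(v):
    return "int" if isinstance(v, int) else "string"

# Preservation: a well-typed expression evaluates to a value of its type.
e = ("log2", ("length", ("str", "0123456789abcdef")))
assert typeof(e) == "int"
assert eval_(e) == 4
assert typeof_value(eval_(e)) == typeof(e)
```

The point of the sketch is that `typeof` and `eval_` are defined over the same terms, so the preservation statement can only even be phrased with both in hand.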
For instance, the type ∀ A. (A → A) → A would be empty in the set-theoretic semantics, but in many programming languages there are terms of this type, such as the fixed-point operator.
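As a concrete instance, general recursion is enough to write such a term. A sketch in Python (`fix` and `fact` are illustrative names, not from any library):

```python
# A fixed-point operator: an inhabitant of (A -> A) -> A obtained from
# general recursion. The lambda eta-expands the recursive call so it is
# delayed, which lets this terminate when A is a function type.
def fix(f):
    return f(lambda *args: fix(f)(*args))

# Using fix at A = int -> int to tie the knot for factorial:
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
assert fact(5) == 120
```

Set-theoretically no single function can inhabit this type at every A (take A to be the empty set), yet languages with unrestricted recursion provide one; this mismatch is exactly why plain sets make a poor semantics for them.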
A better framework for semantics of many programming languages is domain theory[0]. But it also has its limitations.
The type system types the terms of the language. Thus it has to be intertwined with them. One of its purposes is to admit only well-typed terms, which then have a better-understood semantics.
Basically, type systems = science (figuring out new things), programming languages = engineering (making new findings useful).
Huh? Rice's Theorem implies that any decidable static analysis of a nontrivial semantic property must either reject some valid programs or accept some invalid ones. This says nothing about whether a type system is rigorous or hacky.
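A toy illustration of that trade-off, as a Python sketch (the checker `naive_check` is invented here, and deliberately crude): a sound but incomplete check for "never divides by zero" that rejects a program which, at runtime, is perfectly fine.

```python
# Illustrative sketch, not a real analysis: a checker for "never divides
# by zero" that is sound but incomplete. It rejects any program text
# mentioning integer division at all -- including this valid one, whose
# division sits in a dead branch and can never execute.
def naive_check(src: str) -> bool:
    return "//" not in src  # sound: accepts only division-free programs

prog = "y = (1 // 0) if False else 0"
ns = {}
exec(prog, ns)                      # runs fine: the bad branch is dead
assert ns["y"] == 0                 # so the program is "valid"...
assert naive_check(prog) is False   # ...yet the checker rejects it
```

Real type checkers sit on the same side of the trade-off as `naive_check`, just with a vastly better approximation: they stay sound and decidable by rejecting some programs that would never actually go wrong.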
> common ones include specifying function parameter types to allow type inference to function
It's true that type inference becomes undecidable without annotations for things like dependent types, and there's nothing hacky about that.
I vaguely remember that there is some maths that tells us that types aren't sets, but can't recall the details. Something to do with Russell's Paradox and function types that take themselves as an argument? (I'm not a mathematician.)
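If it helps, the obstruction you may be half-remembering is usually presented via Cantor's theorem (a close cousin of Russell's paradox). A sketch of the cardinality argument:

```latex
% Untyped lambda calculus wants a domain of values D that contains
% all functions on itself:
\[
  D \;\cong\; (D \to D)
\]
% But by Cantor's theorem, for any set $D$ with $|D| \ge 2$:
\[
  |D \to D| \;=\; |D|^{|D|} \;\ge\; 2^{|D|} \;>\; |D|,
\]
% so no set can satisfy the isomorphism.
```

Scott's domain theory (mentioned upthread) recovers the isomorphism by restricting to continuous functions on suitably ordered domains; and for ∀-types, Reynolds' paper "Polymorphism is not set-theoretic" makes the analogous point.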
Thanks for the inputs, folks. Will cogitate further.