You can execute an ordinary function at compile time to read a DSL from a string or read attributes (reflective metaprogramming) on your program's classes. Take the string it outputs, use mixin(), and you have code. For example:
// Sort a constant declaration at Compile-Time
enum a = [ 3, 1, 2, 4, 0 ];
static immutable b = sort(a);
"a" only appears in the compiler's memory. "sort" is a normal function that runs at compile time. "Allowing access to the full language at compile time" is similar to what dynamic languages such as Python and JavaScript give you, except D is a static language with GCC and LLVM backends.

Proper reflective metaprogramming would be a fairly big step, though - right now, the macro systems happen well before the type system even gets a chance to look at the code, so the data to play with types in an interesting way isn't there at the right step.
Uh oh. I'd hoped the language would settle down.
The Go crowd knows when to stop. Go is mediocre, but stable.
When you need to run an unbounded program each time you want to provide real-time feedback, like type inference or, in Rust's case, lifetime inference, you make the language tooling much less simple and accessible.
How so?
> When you need to run an unbounded program
How is a program that provably terminates but takes 2 years to finish any better for compile-time computation? You want timeouts in either case.
For interpreted languages, there are no excuses; hooking into the interpreter at compile time is trivial. Full macros [0] are nice, but depend a lot on the syntax. A way to evaluate expressions at compile time [1] would go a long way.
If the result of my `sort` function is dependent on `sizeof(void*)` being 8, then when I compile from my x86 hardware for a 32-bit-only architecture, assumptions go awry.
Note that this isn't likely with something like sort, but it definitely is likely with precomputing values/structs etc.
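A minimal Rust sketch of that hazard (the constant name is my own): a value derived from the target's pointer width. Because it goes through the language's const evaluator rather than host-side code generation, it tracks the compilation target, not the build host.

```rust
use std::mem::size_of;

// Hypothetical constant derived from pointer width. Rust's const
// evaluation computes this for the *target* being compiled for,
// not the machine running the compiler, which is what keeps
// cross-compilation assumptions from going awry.
const PTR_BYTES: usize = size_of::<*const ()>();

fn main() {
    // 8 on a 64-bit target, 4 on a 32-bit target.
    println!("pointer size on this target: {PTR_BYTES} bytes");
}
```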
You have procedural macros, which are literally "Rust code is the input, which you write rust code to manipulate, and output more rust code".
You have const fns, which are interpreted at compile time, and are closer to what you're referring to.
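A small sketch of the const-fn flavor, mirroring the D sort example upthread (assuming stable Rust; note `while` loops are allowed in const fns, but `for` loops are not):

```rust
// Bubble-sort a fixed-size array entirely at compile time.
const fn sort5(mut a: [i32; 5]) -> [i32; 5] {
    let mut i = 0;
    while i < a.len() {
        let mut j = 0;
        while j + 1 < a.len() - i {
            if a[j] > a[j + 1] {
                let t = a[j];
                a[j] = a[j + 1];
                a[j + 1] = t;
            }
            j += 1;
        }
        i += 1;
    }
    a
}

// Evaluated by the compiler; only the result lands in the binary.
const SORTED: [i32; 5] = sort5([3, 1, 2, 4, 0]);

fn main() {
    println!("{:?}", SORTED); // prints [0, 1, 2, 3, 4]
}
```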
Fortraith cleverly re-purposes existing features, doing some violence to language usage conventions to achieve it.
Can you please explain what you mean by this?
In addition, C++ has constexpr, which marks a function or variable as evaluable at compile time.
When the capabilities of the constant evaluation system get improved and stabilized, we are going to see the exact feature you are describing. (With the caveat that non-deterministic functions are not allowed to ensure deterministic builds.)
Because "full use of the programming language" implies Turing completeness, which means compilation may require unbounded time and compute resources. You can allow use of a non-Turing complete subset, and this is something that dependently-typed languages can do quite elegantly.
It’s a valid point. If compilation is already Turing complete, why not just drop the pretense and allow arbitrary compile-time expressions?
I really don't want Rust to be crippled with a thousand DSLs just because devs could write them easily.
fn immutable foo() -> long:
    asm(long x: "rax") "mov rax, 7"
    return x
Now try cross-compiling that from a 32-bit ARM machine.

Aside: D is kind of weird in this regard because most of it is designed to work as an interpreted language as well as a compiled one. To the extent that D is good, it's not compiled[0]; to the extent that it's compiled[0], it's not good.
0: in the language design sense, not the language implementation sense.
For example, in CASE ... OF ... ENDOF ... ENDCASE, the input value is consumed when OF is entered. But this means that the default result, positioned before ENDCASE, has to move the input value to the top of stack so that it can be consumed before the result. The examples in the ANS standard prefer using >R ... R> as a scratchpad for this purpose, so that any number of results may be pushed onto the data stack, while the input is moved onto the return stack and then pushed back to the data stack. But the whole existence of the return stack and words that use it is a curious detail that doesn't come up if you start from an RPN calculator instead of a complete Forth system. Someone writing an RPN system in an applicative language, upon seeing this semantic, might be tempted to beef up the syntax rather than add this idiom. And in Forth this idea was likely arrived at only through the numerous iterations Moore made to achieve better expression in fewer words, since the return stack is useful everywhere.
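For readers coming from applicative languages, the >R ... R> scratchpad idiom can be modeled with two explicit stacks (a toy sketch in Rust, not real Forth; all names here are illustrative):

```rust
// Toy two-stack model of the idiom described above: a data stack
// plus a return stack used as scratch space.
struct Forth {
    data: Vec<i32>,
    ret: Vec<i32>,
}

impl Forth {
    // >R : move top of data stack to the return stack
    fn to_r(&mut self) {
        let x = self.data.pop().expect("data stack underflow");
        self.ret.push(x);
    }
    // R> : move top of return stack back to the data stack
    fn r_from(&mut self) {
        let x = self.ret.pop().expect("return stack underflow");
        self.data.push(x);
    }
}

fn main() {
    let mut f = Forth { data: vec![42], ret: vec![] };
    f.to_r();       // stash the CASE input out of the way
    f.data.push(1); // now any number of default results
    f.data.push(2); // can be pushed freely
    f.r_from();     // bring the input back on top to be consumed
    assert_eq!(f.data, vec![1, 2, 42]);
}
```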
// create a word
forth!(: inc 1 + ;);
// ask user for $input
type Stack = forth!($input inc return);
// if you keep the stack around it can be used again
forth!({ Stack } inc .);
// should print $input + 2

C++ is actively replacing its compile-time ML with core language features, but isn't there yet. Still, it has been many years since I needed to code any of my own TMP.
has anyone tried implementing rust in rust's trait system
Turing completeness means using the syntax of your favorite programming language is "merely" a matter of writing the appropriate program.
What GP is proposing is completely impractical, of course. But not unreasonable, and certainly not impossible.
It is neat to see something more practical than a Brainf* interpreter.