Smaller options include Lua and maybe Hedgehog Lisp. Below that there's uLisp, but that's something of a toy. There are also Java Card and J2ME, depending on device size.
Below that you're probably best off forgoing garbage collection and using Forth or a cross compiled language.
Hmmm, transpiling to C/C++ definitely makes sense! I'd wonder if Haxe would work fine too, then..!
very cool project!
That is larger than the average computer through about 1990 ...
(I know people will say I'm being pedantic. However, RAM and Flash define most of the price of a microcontroller. So, a factor of two--especially in RAM--means a significantly smaller and cheaper chip.)
Using a microcontroller to interrogate itself is a really valuable debugging technique.
Generally, you attach to a UART port and run a terminal emulator. This is especially important given the plethora of sleep modes that now exist on modern SoCs. Debug probes generally put the chip in maximum-performance mode, which often wipes out the state you're trying to debug (a real problem for sleep bugs in particular).
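A minimal sketch of the self-interrogation idea: have the firmware hex-dump a memory region (say, a peripheral's register block) over its own UART, so you can read the chip's state from a terminal emulator without a debug probe disturbing it. The `uart_putc` stub here writes to stdio so the sketch runs on a host; on a real part it would poll the UART's TX-ready flag and write the data register (names are illustrative, not from any particular HAL).

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* On a real target this would poll the UART's TX-ready flag and
   write its data register; stdio stands in so the sketch runs on
   a host machine. */
static void uart_putc(char c) { putchar(c); }

/* Render len bytes as space-separated hex ending in '\n'; returns
   the number of characters written. dst must hold 3*len bytes. */
size_t hexify(const uint8_t *src, size_t len, char *dst) {
    static const char digits[] = "0123456789abcdef";
    size_t n = 0;
    for (size_t i = 0; i < len; i++) {
        dst[n++] = digits[src[i] >> 4];
        dst[n++] = digits[src[i] & 0x0f];
        dst[n++] = (i + 1 == len) ? '\n' : ' ';
    }
    return n;
}

/* Let the chip report its own state: point this at a register
   block and read the dump, 16 bytes per line, over the UART. */
void uart_dump(const volatile uint8_t *base, size_t len) {
    while (len > 0) {
        uint8_t chunk[16];
        char line[3 * 16];
        size_t take = len < 16 ? len : 16;
        for (size_t i = 0; i < take; i++) chunk[i] = base[i];
        size_t n = hexify(chunk, take, line);
        for (size_t j = 0; j < n; j++) uart_putc(line[j]);
        base += take;
        len -= take;
    }
}
```

Because the dump runs in whatever power mode the firmware is actually in, it won't perturb the sleep state the way halting the core with a probe does.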
I mean, it's very difficult to compare those things. It's a high-level language runtime weighing in at a small fraction of the JavaScript we load on any silly webpage, and we can run it on a stamp-sized microcontroller that costs a couple of dollars and that you'd typically need to program in a relatively low-level language.
I think it's neat.
Edit: I was trying to remember our computer then. I think we already had an Amstrad PC1512 with 512KB of RAM by '87 or '88, and by 1990 the 286s with 1 or 2MB of RAM were already common.
And by the time of the Amstrad PC1512, BASIC compilers had become an option as well. Dartmouth BASIC was originally compiled, and CP/M systems also had compilers available; they just didn't fit into 8-bit home computers, which had to wait for the 16-bit wave of home computers.
There were other high level dynamic languages with compilers like Lisp subsets, xBase/Clipper.
So if we managed back then, there is no reason to not have a tiny Ruby version nowadays for similar environments.
on the other hand, nowadays, we can just generate C code using ai.. as long as the project doesn't get too big to grasp without abstractions. ;)
also, it's very cool they're still being maintained!
At very least, C++.
"C++14 For The Commodore 64"
This presupposes that you're doing string manipulation and raw memory moves. I don't know about other people's code, but in my own experience neither of those things is likely to be happening in embedded code. You're much more likely to pre-allocate all your memory and use zero-copy patterns wherever possible. I never used a C string in relation to talking to peripherals (I2C, SPI, register-mapped) or when doing DSP. Moreover, if you do need to use memcpy, it's probably because you're doing something very low level for which memcpy is the best choice.
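The pre-allocate/zero-copy pattern can be sketched as a statically allocated ring of fixed-size frames: a driver (say, a SPI DMA completion handler) commits a slot, and the application processes that same memory in place and releases it. No malloc, no copies. This is a hypothetical illustration; the names aren't from any real HAL, and the single-producer/single-consumer indexing assumes one writer and one reader context.

```c
#include <stdint.h>
#include <stddef.h>

#define FRAME_SIZE 64
#define NUM_FRAMES 4

typedef struct {
    uint8_t data[FRAME_SIZE];
    uint16_t len;
} frame_t;

static frame_t pool[NUM_FRAMES];     /* all memory reserved at link time */
static volatile uint8_t head, tail;  /* single producer / single consumer */

/* Producer side: next free slot for the driver to fill, or NULL if
   the ring is full. */
frame_t *frame_acquire(void) {
    uint8_t next = (uint8_t)((head + 1) % NUM_FRAMES);
    return (next == tail) ? NULL : &pool[head];
}
void frame_commit(void) { head = (uint8_t)((head + 1) % NUM_FRAMES); }

/* Consumer side: oldest committed slot, processed in place -- the
   application reads the driver's buffer directly, zero-copy. */
frame_t *frame_peek(void) {
    return (head == tail) ? NULL : &pool[tail];
}
void frame_release(void) { tail = (uint8_t)((tail + 1) % NUM_FRAMES); }
```

Because the consumer gets a pointer into the same pool the producer filled, the data never moves; memcpy only enters the picture at the hardware boundary, if at all.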
I will leave it to others to fill out the cons. One obvious one is that C does not reflect the parallel compute nature of modern targets (e.g. SIMD) but neither do most serious alternatives.
I do think the time is coming (if not already here) where it would be judged legally negligent to professionally employ C in new systems for certain use-cases (network facing, high assurance, ...). At least not without complete formal verification. I'll leave it to the actuaries to work out where the tipping point is.
C trades very fast execution for rather slow development, especially of larger code bases, if you want correctness. It's fine for tiny programs though.
(Ruby, of course, has its own set of jolly footguns, as does any dynamically typed language.)