http://www.cs.trinity.edu/About/The_Courses/cs301/math-for-t...
Or to put it another way, a person can know J in the same sort of way a 7th grader knows math...or a 3rd grader or a Master's candidate. J is an exploratory language and will meet a person where they are. It is useful whether the person is only capable of basic arithmetic or convex hulls.
A programmer may expect < to perform less-than. In J it does, when used as a dyad. Used monadically, it acts as a boxing operator (similar to Scheme's box). That's strange, but livable. The problem, though, is that in J, <. means 'lesser of' (min), and <: means 'less than or equal'. Again, this is a potentially livable arrangement. Except that . means determinant (or dot product, depending on monadic or dyadic use) and : means either 'explicit' or 'monadic/dyadic', a sort of combinator that accepts a monadic and a dyadic operator and yields a new one. That is, in Scheme terms, : is akin to a case-lambda that checks whether it was called with one or two arguments.
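A rough sketch of that combinator in Python (the names here are mine, and J's : is more general than this, but it captures the case-lambda analogy):

```python
def ambivalent(monad, dyad):
    """Combine a one-argument and a two-argument function under one
    name that dispatches on how many arguments it receives -- roughly
    the monadic/dyadic split that J's ':' conjunction expresses."""
    def verb(x, y=None):
        return monad(x) if y is None else dyad(x, y)
    return verb

# '<' as an ambivalent verb: monadic box, dyadic less-than.
box = lambda x: [x]              # stand-in for J's box
less_than = lambda x, y: x < y
lt = ambivalent(box, less_than)

print(lt(3))       # [3]  -- monadic use: box
print(lt(3, 5))    # True -- dyadic use: less-than
```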
If I present to you the program <., what does it do? Well, it certainly does something different from < ., which does something different from < :, which does something different from <:, which does something different from < . :, which again differs from <. :, and so on. This is exacerbated by operations such as 'table', written /, which bleed into other places. For example, </ inserts box, while <:/ inserts a decrement! And < :/ boxes up the result of an 'explicit' being 'inserted', whatever that means. And this syntactic problem plagues the entire vocabulary of the language:
http://www.jsoftware.com/help/dictionary/vocabul.htm
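A toy lexer in Python (my own simplification, not J's actual word-formation rules) shows how much hangs on a single space:

```python
def tokenize(src):
    """Toy J-style lexer: '.' and ':' glue onto the token immediately
    before them unless separated by a space. This is a simplification
    of J's real word formation, just to show the spacing sensitivity."""
    tokens, i = [], 0
    while i < len(src):
        c = src[i]
        if c == ' ':
            i += 1                      # spaces separate words
        elif c in '.:' and tokens and src[i - 1] != ' ':
            tokens[-1] += c             # inflect the previous token
            i += 1
        else:
            tokens.append(c)            # start a new token
            i += 1
    return tokens

print(tokenize('<.'))     # ['<.']        one word: min
print(tokenize('< .'))    # ['<', '.']    two words: box, then determinant
print(tokenize('<:/'))    # ['<:', '/']   decrement, inserted
print(tokenize('< . :'))  # ['<', '.', ':']
```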
A clear improvement would be to just give these things their real names, use text, and write S-expressions. At least then the programs would be readable!
The largest problem, however, is that for being a functional language, J does little if anything to convey the most important idea that has been developed in functional programming: recursion. It simply isn't how things 'are done' in J. So we have a language with bad syntax, good semantics, and a horrible problem from an educational perspective. It has some nice additions above and beyond other APLs, but after that it falls flat. I'd urge anyone who has read this far to learn APL instead of J; it will at least be readable by other APL programmers.
(I didn't mean to go on a rant, but I suffered through Howland's class on J at Trinity, and it wasn't until I had a professor show me Scheme in ~3 hours that I saw why functional programming was truly worthwhile.)
< x
is no less intuitive than box x
in regard to the mathematical concepts. Maybe the latter is slightly easier to google...if we can filter out the use of "box" in all sorts of other contexts, including those in computing where "box" is some sort of graphical element. Which is to say that "box" is far more overloaded than a monadic use of "<". We didn't learn Σ intuitively either.
"Bad syntax" in the case of J makes me beg to differ.
Yes, it's hard to decipher without the dictionary. However, math notation - and J descended from it - isn't that obvious either, unless learned.
A crucial idea of APL is that "good notation enables thinking". This is quite valid for at least some J programmers. And J was born as a better APL, evolving from the latest APL versions of its time, with some problems that are visible in retrospect fixed. Though I think, had Unicode developed earlier, J could have looked different.
Yes, there is recursion in J; yes, it's not used very often. Why should that be a problem?
I too suffered at the hands of Howland. That's why I sat in on Eggen's FP class after he took over the course (he's probably the one who taught you Scheme?).
He was a good guy but he scared years of students away from FP with that course.
The jqt environment that comes with the standard package is polished, and encourages an interactive workflow where you progressively build your sentences while looking at the output. This way also greatly eases the learning curve.
Another useful tool to help learning is the cheat sheet [2]. I'm not really into dead trees but I still printed it on paper because I'm not surrounded by monitors, and it really helps to have a quick reference to the very rich and concise J vocabulary. Applying unexplored verbs to problems that interest you is a fun exercise.
One thing that struck me is that it's difficult to look up examples to help you as you go, in part because the one-letter name is not search-engine-friendly, but more importantly because of the small size of the J corpus out there on the web. This is good because it forces you to find solutions yourself, but it's bad because you never know if you're limiting yourself to a sub-optimal solution.
[1]: https://news.ycombinator.com/item?id=1041500
[2]: http://www.jsoftware.com/jwiki/HenryRich?action=AttachFile&d...
ps: I'm not sure it was J .. maybe K. And I can't find the C source anymore. Haa...
pps: check this http://lambda-the-ultimate.org/node/5075 they discuss performance.
I'm learning to build interpreters for a toy relational language (in F#), so a fast interpreter appeals to me.
I looked at http://keiapl.org/rhui/remember.htm#incunabulum but a) I don't know C and b) I don't know J, so I wonder: does an easy-to-understand interpreter of J exist somewhere? The ones I've looked at (in the link above) are very dense, full of code golf.
First, what exactly is native code? Is it the PythonVM/JVM byte code? The x86 code it is JITted into? The microcode that interprets the "native" x86 code? You probably mean the x86/ARM/whatever bytecode, but that's not a trivial definition.
Practically speaking, compiling to "native code" is mostly useful given a known architecture. "rep movsb" used to be the fastest way to copy memory around; then it was slow as molasses; and then it was fast again. Table lookups were the fastest way to do 6-bit by 8-bit multiplications up until the 386 or 486 - and then the multiplier became faster than memory.
Somewhat surprisingly, J (and APL and K) tend to focus on promoting those things that have always been true - e.g. sequential access is much faster than non-sequential; branch-free code is better than branching code; small code/data that fits in cache/main memory is better than larger code/data that doesn't. The computation model fits modern CPUs and GPUs very well, despite having been approximately the same since the 1950s.
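A small illustration of the branching-vs-branch-free point, in plain Python (the function names are mine; real array languages do this over whole vectors in one pass):

```python
xs = [3, -1, 4, -1, 5, -9, 2, 6]

# Scalar, branching style: one data-dependent branch per element.
def clip_branching(v):
    out = []
    for x in v:
        if x >= 0:
            out.append(x)
        else:
            out.append(0)
    return out

# Branch-free, whole-array style: the same selection written as
# uniform arithmetic, max(x, 0) == (x + |x|) / 2, applied to every
# element identically -- no per-element branch to mispredict.
def clip_branchfree(v):
    return [(x + abs(x)) // 2 for x in v]

assert clip_branching(xs) == clip_branchfree(xs)
print(clip_branchfree(xs))   # [3, 0, 4, 0, 5, 0, 2, 6]
```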
It's a different take on the "better algorithms beat faster implementations" - a good language that makes a good fast solution intuitive is often, in practice, better than one that can be faster with a lot of intricate work.
Now, if you needed to recursively search through a directory full of text files, would you use fgrep, or would you write your own "high performance" search utility with a hard-coded (none of that slow run-time interpreting!) list of files and needles?
If I see

  s =: ({. , }. /: 12"_ o. }. - {.) @ /:~

in my code base, you are going to get smacked. Expressability of programming languages has a cognitive limit and J is flirting with it. I would write it as [1]:

  s←{
      sp←⍵[⍋ ⍵]
      h t←(1↑sp) (1↓sp)
      h , t[⍋ 12○(t-h)]
  }
or I can make it ugly as hell:
s←{h,t[⍋ 12○(t←1↓p)-h←1↑p←⍵[⍋ ⍵]]}
In Dyalog (and most APLs now) ←{...} creates a function that automatically has ⍵ as its right argument (so invoking "s points", ⍵ is points). ⍋ gives the indexes in sorted order and [ ] is how you access elements, so ⍵[⍋ ⍵] is how you sort an array.
↑ and ↓ are take and drop, so (1↑sp) takes the head and (1↓sp) drops the head, leaving the tail. "h t←(1↑sp) (1↓sp)" is just multiple assignment.
J's verb trains are cute but kind of awful
"h , t[⍋ 12○(t-h)]" sorts the tail according to the value of each element's phase (I agree the circle functions are particularly bad) and cats it back onto the head (also, strictly speaking, I don't need the parens around "t-h", but I like them).
[1] well no actually I wouldn't because Dyalog's grade up (⍋) function doesn't work on complex numbers (though how to write a function ( gu ) that DOES is covered here http://dfns.dyalog.com/c_le.htm )
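For comparison, here is a Python rendering of what the dfn computes. One assumption on my part: since grade up doesn't handle complex numbers, I take the initial sort to be lexicographic by (real, imaginary), which pins down which point becomes the head h:

```python
import cmath

def s(points):
    """Sort complex points, take the first as pivot h, then order the
    rest by the phase (angle, APL's 12○) of each point relative to h,
    and cat h back on the front."""
    sp = sorted(points, key=lambda z: (z.real, z.imag))
    h, t = sp[0], sp[1:]
    return [h] + sorted(t, key=lambda z: cmath.phase(z - h))

pts = [1+1j, 0j, 2+0j, 1+2j]
print(s(pts))
```

With these points the pivot is 0j, and the tail comes back ordered by increasing angle around it: 2+0j, then 1+1j, then 1+2j.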
However, this list is incomplete.
H - http://dcoj.wmh3.com/cscos/h/
M is sometimes used to refer to MUMPS (see http://en.wikipedia.org/wiki/MUMPS and http://www.mumps.org/ ). It was also a language at Microsoft, http://www.theregister.co.uk/2008/10/10/dial_m_for_microsoft .
N - http://link.springer.com/chapter/10.1007%2F978-3-642-76153-9...
Z - https://github.com/chrisdone/z
In other words, the answer to your question is either "yes" or "that's already happened."