I think the idea was to make it look "familiar" to engineers by resembling C (Verilog) or Ada (VHDL). But FPGAs are nothing like CPUs, and what you end up with instead is an ill-fitting language where you restrict yourself to a handful of "common constructs", always knowing how each one will be synthesized into hardware. And worse: practically no good way to do abstraction.
Functional languages are a much, much better match, because that's what FPGA designs are: functions combined together. This works with higher-order functions as well, and it plays well with polymorphism!
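To make that concrete, here is a minimal plain-Haskell sketch (not actual Clash code; the block names are made up for illustration): combinational "blocks" modelled as pure functions, combined with ordinary composition and a higher-order pipeline combinator.

```haskell
module Main where

-- Two toy combinational "blocks" as pure functions.
doubler :: Int -> Int
doubler x = x * 2

incrementer :: Int -> Int
incrementer x = x + 1

-- A higher-order combinator: chain any list of same-typed stages.
-- In hardware terms, this wires the stages up back to back.
pipeline :: [a -> a] -> (a -> a)
pipeline = foldr (.) id

main :: IO ()
main = print (pipeline [incrementer, doubler] 5)  -- incrementer (doubler 5) = 11
```

The point is that `pipeline` is an ordinary polymorphic function, so the same combinator works for any stage type, which is exactly the reuse that is hard to express in plain Verilog/VHDL.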
So privately at least, for anything substantial I've since been using Clash, which is essentially a Haskell subset translated to Verilog or VHDL: https://clash-lang.org
The learning curve is steep; it definitely helped that I was already proficient in Haskell. But then the code is so enormously concise and modular, and I now have a small library of abstractions that I can just reuse (for example, adding AXI4 to my designs). It's a joy.
It's worth taking a quick glance at all of them (maybe making a toy or two) to see if any of them strike your fancy. Being able to do metaprogramming and parameterization in an actual programming language rather than a not-quite C-preprocessor pass makes a lot of things easier.
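As one sketch of what "parameterization in an actual programming language" buys you (plain Haskell, nothing HDL-specific): the kind of structure you would otherwise build with `generate` blocks or preprocessor tricks, here a balanced reduction tree whose shape falls out of ordinary recursion over a parameter.

```haskell
module Main where

-- Build a balanced tree of an associative operator over the inputs --
-- the shape an elaborator would unroll into actual gate structure.
reduceTree :: (a -> a -> a) -> [a] -> a
reduceTree _  [x] = x
reduceTree op xs  = reduceTree op (pairUp xs)
  where
    pairUp (a:b:rest) = op a b : pairUp rest
    pairUp rest       = rest

main :: IO ()
main = print (reduceTree (+) [1 .. 8 :: Int])  -- 36, via a 3-level adder tree
```

The width (here 8) is just a value; the metaprogram and the program are the same language.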
Just about anything feels "nicer" to me than VHDL or Verilog, but I particularly like nMigen, and find SpinalHDL to be at least somewhat readable.
I agree: metaprogramming and parameterization (both enabled by polymorphism) are the key, and also what lets me just "plug in" a function that implements, e.g., an AXI4-enabled register, or a whole burst transfer.
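A hedged sketch of the "plug in a register" idea, in plain Haskell with entirely made-up types (this is not the real AXI4 protocol, just the shape of the abstraction): a bus-accessible register as one reusable polymorphic function rather than hand-written, tightly coupled RTL.

```haskell
module Main where

-- A toy read/write request on a made-up bus.
data BusReq a = Read | Write a

-- One step of a bus-attached register: given the current contents and
-- a request, produce the new contents and the read-back response.
busRegister :: a -> BusReq a -> (a, a)
busRegister cur Read      = (cur, cur)  -- contents unchanged, respond with value
busRegister _   (Write v) = (v, v)      -- update contents, echo new value

main :: IO ()
main = do
  let (r1, resp1) = busRegister (0 :: Int) (Write 42)
      (_,  resp2) = busRegister r1 Read
  print (resp1, resp2)  -- (42,42)
```

Because `busRegister` is polymorphic in `a`, the same function attaches any payload type to the bus; a real burst-transfer helper would be the same idea with more state.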
I now shudder at the thought of doing that by hand every time in Verilog/VHDL, tightly coupled to the rest of the logic.
However, even for nMigen, and doubly so for Verilog/VHDL, it feels very 90s when it comes to engineering practices. The tooling is often lacking (module management, automated testing, CI and so on), cryptic acronyms are used everywhere as if each source code byte cost a LUT, you have to keep various things (e.g. signal bit widths) in sync in multiple places, and many more things that make C99 feel modern.
I'll have to give Clash a try; it seems like a big step in the right direction - thanks for mentioning it.
[1] https://m-labs.hk/gateware/nmigen/ [2] https://www.youtube.com/playlist?list=PLEeZWGE3PwbbjxV7_XnPS...
Verilog has a vaguely C-ish syntax, and VHDL has a vaguely Ada-ish syntax, but in my experience neither of them feels "imperative" in any real way once you get a handle on them. The issue with the syntax appearing imperative is superficial: imagine adding a C-like syntax to Haskell, perhaps to spur adoption the way Reason did for OCaml - it wouldn't make it any less functional.
The real issue isn't functional vs. imperative - it's just that VHDL and Verilog are still painfully primitive in terms of their capacity for abstraction.
No, I stand by what I said: the syntax itself is essentially imperative, and it is a bad fit. A purely functional language (like Haskell, not like OCaml) with a C-like, and therefore imperative, syntax would be a bad fit as well. And FPGA designs are naturally pure functions within an applicative functor, which languages like Clash demonstrate well.
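The "pure functions within an applicative functor" claim can be sketched with a toy model in plain Haskell (this is not Clash's actual Signal type, just the idea): a signal as a stream of per-cycle values, whose Applicative instance applies a pure function pointwise, once per clock cycle.

```haskell
module Main where

-- A "signal": one value per clock cycle.
newtype Signal a = Signal [a]

instance Functor Signal where
  fmap f (Signal xs) = Signal (map f xs)

instance Applicative Signal where
  pure x = Signal (repeat x)                        -- a constant signal
  Signal fs <*> Signal xs = Signal (zipWith ($) fs xs)

-- Observe the first n cycles of a signal.
sample :: Int -> Signal a -> [a]
sample n (Signal xs) = take n xs

main :: IO ()
main = do
  let a = Signal [1, 2, 3, 4 :: Int]
      b = Signal [10, 20, 30, 40]
  -- A pure adder lifted into the signal domain: one adder in hardware,
  -- performing one addition per cycle.
  print (sample 4 ((+) <$> a <*> b))  -- [11,22,33,44]
```

The design itself (`(+)`) stays a pure function; the applicative structure is what turns it into per-cycle hardware behavior.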
I'm not so sure. I'm fairly certain that at least VHDL existed before FPGAs were invented. I think they are programming language-like so that they can be somewhat easily verified for syntax errors and undergo some static analysis.
You are 100% correct that these languages don't match up with the kind of work they are used for. I find C# and Go (and other languages) to be pretty intuitive in a lot of ways, and HDLs are the exact opposite of intuitive to me. I am sure it would be easier for me to learn to speak, write, and read Icelandic than to learn VHDL or Verilog.
The concept here is 'inference' or 'synthesis', and it is the fundamental difference between an HDL and an imperative programming language. When you write general-purpose statements in an HDL, the tools have to infer what sort of hardware would give you that behavior, and in a funny twist, the more you lean on high-level language features, the more likely you are to run into something that cannot be synthesized into hardware gates. Perfectly valid HDL with no valid solution!
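A small illustration of the kind of perfectly valid code that has no combinational-hardware equivalent (written in Haskell here rather than an HDL, but the same issue bites Clash and friends): the number of loop iterations depends on the data, with no static bound a synthesizer could unroll into a fixed number of gates.

```haskell
module Main where

-- Count Collatz steps to reach 1. Fine as software, but the recursion
-- depth is data-dependent and unbounded, so there is no fixed circuit
-- that computes it combinationally -- a synthesizer would reject it.
collatzSteps :: Int -> Int
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)

main :: IO ()
main = print (collatzSteps 27)  -- 111
```

To make this synthesizable you would have to restructure it as a state machine that does one step per clock cycle, which is exactly the "you must know what hardware you are describing" burden being discussed.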
The GPA approach is really efficient at what it does, but ultimately it doesn't scale. For one, it needs 1 bit per cell in the 2D torus, but FPGAs only have kilobytes to low megabytes of on-chip memory. That makes it hard to simulate a 10,000 x 10,000 grid, let alone a 1,000,000 x 1,000,000 grid. For two, the FPGA explicitly calculates each iteration one by one. This is pretty fast in the beginning, and it means you can use it to calculate a billion iterations in a few seconds or a trillion iterations in a few hours, but you can't scale past that.
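A quick back-of-the-envelope check of the memory claim, as a throwaway Haskell snippet: bits needed for a 1-bit-per-cell square grid, expressed in megabytes.

```haskell
module Main where

-- Megabytes needed for a side x side grid at 1 bit per cell.
gridMegabytes :: Integer -> Double
gridMegabytes side = fromIntegral (side * side) / 8 / 1e6

main :: IO ()
main = do
  print (gridMegabytes 10000)    -- 12.5 MB: already past typical FPGA block RAM
  print (gridMegabytes 1000000)  -- 125000.0 MB, i.e. 125 GB
```

So the 10,000 x 10,000 case needs 12.5 MB and the million-square case 125 GB, which is why on-chip storage is the hard wall here.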
Hashlife can probably be sped up by GPUs a bit, but it processes a symbolic representation and consequently is quite suited to CPUs. It spends a lot of its time doing hash table lookups (hence the name) which is not a good fit for GPUs and a terrible fit for FPGAs.
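A rough sketch of why Hashlife is lookup-bound (greatly simplified; a real Hashlife also memoizes each node's future, and I'm using Data.Map here where the real thing uses a hash table): quadtree nodes are interned in a table so identical subpatterns share one canonical ID, and most of the work is "have we seen this node before?" queries.

```haskell
module Main where

import qualified Data.Map.Strict as M

-- A node is either a leaf cell or four child node IDs.
data Node = Leaf Bool | Branch Int Int Int Int
  deriving (Eq, Ord, Show)

type Table = M.Map Node Int  -- node -> canonical ID

-- Intern a node: reuse its ID if seen before, else assign a fresh one.
intern :: Node -> Table -> (Int, Table)
intern n tbl = case M.lookup n tbl of
  Just i  -> (i, tbl)  -- the hot path: a table lookup
  Nothing -> (i, M.insert n i tbl) where i = M.size tbl

main :: IO ()
main = do
  let (a, t1) = intern (Leaf True) M.empty
      (b, t2) = intern (Leaf False) t1
      (c, t3) = intern (Leaf True) t2  -- duplicate: gets the same ID as `a`
  print (a, b, c, M.size t3)  -- (0,1,0,2)
```

Pointer-chasing lookups like this are latency-bound and branchy, which is why they suit CPUs far better than the wide, regular dataflow that GPUs and FPGAs want.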
(e.g. tree methods like Barnes-Hut for gravity and perturbation theory for Mandelbrot)
I'm not an expert though. Maybe GPUs have some way of mitigating the high cache latencies.