I understand there is an ongoing major chip shortage, but it has been over 2 years, and historically these were available next-day. I'm sure I'm not the only one who has been waiting for years with finished designs and no information.
I asked Digikey, and they told me that Intel does not communicate with them well enough for them even to populate this page, never mind provide a lead time per product. https://www.digikey.ch/en/resources/reports/lead-time-trends
They had to have a meeting with higher-ups at the distributor to scramble together about 200 industrial-grade pieces (that was basically a year's worth of production for that low-volume product).
So I have no doubt that whatever mil contractors need they will get it.
Well, according to this news from Intel, that's not the case. Intel announces new mid-level FPGAs and even a new FPGA to compete with Xilinx RFSoCs.
It would of course have been preferable for any of them to directly support the open stack (yosys, nextpnr, and the various bitstream generators); Lattice's ECP5 is nice, but not very fast.
Another option: do you want to see for yourself how caching improves code performance? You can do that on an FPGA as well.
Some time ago I took a very nice course called From Nand to Tetris, which teaches how to go all the way from gates to a processor. My plan for the near future is to implement the NAND2Tetris processor on a Basys 3 board and build some of the examples I mentioned (like probing internal registers using HW tools, or seeing for yourself how a cache improves performance). I hope to make some of these things available on my site. I am the OP, but just in case, my website is https://fpgaer.tech
The "problem" is really that you can do a LOT with MCUs, and programming FPGAs is very different from programming microcontrollers, which serially execute your code, so FPGAs remain pretty niche.
Like, if you needed to drive a few thousand neopixels, for example, technically an FPGA would be the perfect use case: just make a blob of memory and a bunch of serializers that feed each chain of LEDs.
... or you can just use some DMA magic on some fast STM32 and hack it out without having to learn a new programming language.
My only relatively reasonable suggestion is making perfect hardware emulators for old hardware, like vintage computers and their parts.
Alas, the FPGA programming model is neither easy nor intuitive, and the tools are mostly proprietary, expensive, and (as I've heard) have a steep learning curve.
Yes, there is, if you're willing to learn.
>Always game to learn a new tech.
Great. A good starting point is NANDLand[0].
Their board is not at all a bad purchase at <$100, particularly to follow their tutorials, but you can do better[1] if you are adventurous. e.g. iCEBreaker or iCESugar.
I'd still recommend sticking with the iCE40 and the open-source yosys/nextpnr stack until you become competent enough to actually know what you want/need.
Many years ago, that would have been the only solution to avoid paying thousands of dollars for a professional instrument.
However, nowadays there exist a number of commercial solutions for such cheap instruments, which are made exactly like this, i.e. with an FPGA surrounded by appropriate I/O circuits and connectors, some USB or Ethernet interface for the PC and some capture software.
These have the advantage that are ready to use out of the box, but making yourself such an instrument can be instructive.
With a bit of luck, someone will reply with better options.
Edit: or look at this list: https://hackaday.io/search?term=fpga
Though most of what you can do with the FPGA would be covered by built-in timer/capture modules or the Pico's PIO blocks on other chips (though IIRC, some of the boards have analog support in the FPGA if you wanted to do real-time audio processing). I have not found a particular hobbyist use for them; I only know them from my father teaching some computer engineering courses and picking them since he could use the same board for both digital logic and microcontroller programming.
ADD: last time I witnessed something actually new and interesting: https://www.efinixinc.com/technology.html
I mean not a scam.
But IMHO this is far more interesting:
https://hardwarebee.com/gowin-semiconductor-new-22nm-high-pe...
This is a real contender from China. Not just some new cheap chip, but actually competitive with Intel/Lattice/Xilinx offerings.
there are even a few in stock https://eu.mouser.com/ProductDetail/GOWIN-Semiconductor/GW1N...
You likely aren't going to see "consumer-oriented" Agilex-powered devices in similar form factors anytime soon; most of the compute power of the fabric would be completely wasted without fast peripherals/storage anyway (the fabric can be clocked very high if you pipeline enough), and the shipping volume needed to make the boards attractively priced would have to be pretty decent. This means the board would actually end up expensive, and why pay so much for the FPGA if all the I/O sucks? But good I/O is expensive. The Agilex parts are also some of Intel's most complex devices, on their latest "Intel 7" process, so they're not cheap.
I guess it's maybe something that would be possible to crowdfund in low volumes, but it's a truly huge amount of work even before you get to price. Designing the board, getting the software and tools working, documentation, that's before you even manufacture. And realistically there are probably other FPGA options that would be cheaper and more readily available. Something like a Kintex-7, or Arria 10 part would probably do just as well for significantly cheaper, I'd think, but it's hard to guess unless you have more concrete requirements in terms of LEs/memory/clock speeds.
Maybe if you pray they will make an Agilex-D PCIe developer kit that "only" costs $2k USD, requires tiny fans that sound like jet engines, and has 1 HDMI out.
The MiSTer project is already inspiring a generation to learn how to program FPGAs, and those skills could be applied to improve performance in a lot of applications.
The MiSTer uses the Cyclone V 5CSEBA6U23I7, built on TSMC 28 nm. It has 110,000 logic elements plus a hard ARM core.
These are on Intel 7, so there is a significant process improvement.
The smallest Agilex D has the same number of logic elements (1) and the largest has 6x as many (2). Note that Agilex tends to be much more expensive than Cyclone.
Realistic maximum clock rates on the Cyclone V are 50-200 MHz, depending on the length of the combinational logic chains. For the Agilex I think it is more like a 600 MHz peak. So I would guess around 25% faster again, say a peak of 750 MHz?
As to fitting all the logic for n64/PS2, I don't know!
1) Comparing these directly is not quite accurate.
2) https://www.intel.com/content/www/us/en/products/docs/progra...
Also, regarding point 1 -- and I'm sure you know this already, but this is for everyone else -- you have to be careful when talking about "logic elements" with most of the industry, because even when describing 6-LUT parts, Intel and Xilinx often list a "logic element count" that means "number of logic elements on a normalized 4-LUT basis." Both vendors do this, and I guess it kind of makes sense: only their highest-end parts are non-4-LUT, so those would be the odd ones out when compared to everything else in the industry. So when you read "number of LEs", it has been normalized to 4-LUTs, since that's what competitors (or older parts) would use, to account for density.
For example, I have a Stratix 10 card in my server that has "millions of logic elements" according to the product tables, but that is on a normalized 4-LUT basis. There are actually "only" about 900,000 ALMs on the device, each ALM being 8-input and fracturable. So, TL;DR, the numbers for the low-end Agilex-D part and the Cyclone part are maybe not that far off.
If you really meant 25%, you probably were not referring to the clock speed/MHz; what were you referring to?