If you want to know how chips are made then I'd highly recommend the book "CMOS Circuit Design, Layout, and Simulation" by Baker. It starts off telling you how silicon is etched to make chips, then goes through how MOSFETs work and how to simulate them using SPICE. By the time you're halfway through the book you'll know how a static CMOS logic gate works (down to the electrons).
If you'd rather learn something that you'll be able to apply yourself (without building a chip fab) then the place to start is Verilog (or VHDL). asic-world.com has some good tutorials. You can simulate what you've written using Icarus Verilog and look at the results using GTKWave. If it works in simulation and you want to put it onto a real FPGA, it's then just a matter of fighting the Xilinx/Altera/Lattice tools until they give you a bitstream. If you have enough money (a lot) you could even get a physical ASIC manufactured.
Haha! So true :-) I have to give some credit to Altera tho, because after a sort of steep learning curve you can start generating "IP Cores" with QSys and Tcl scripts to stop clicking on the same icons all day long!
(Though if you're just making an LED blink when you push a button, you don't need to write good HDL)
This is a ~$120 book. Do you (or anyone else) have a recommendation for something a little easier to tell hobbyists they should get?
Or buy a used second edition at $20: https://www.amazon.com/gp/offer-listing/047170055X/
Advanced Chip Design by Kishore Mishra is very good, and was like 50 bucks last I remember.
If I were you, I would just look at Amazon reviews and buy whatever looks decently rated and is available for cheap used.
[1] http://nand2tetris.org/ [2] https://www.coursera.org/learn/build-a-computer [3] https://www.coursera.org/learn/nand2tetris2
Open Source Needs FPGAs; FPGAs Need an On-Ramp | https://news.ycombinator.com/item?id=14008444 (Apr 2017)
GRVI Phalanx joins The Kilocore Club | https://news.ycombinator.com/item?id=13448166 (Jan 2017)
What It Takes to Build True FPGA as a Service | https://news.ycombinator.com/item?id=13153893 (Dec 2016)
Low-Power $5 FPGA Module | https://news.ycombinator.com/item?id=9863475 (Jul 2015)
The chip design and EDA industry is very closed and niche - there is so much knowledge there that is not part of any manual.
For example, as a newcomer you wouldn't even know what testing and validation in chip design involve - or how formal verification is an essential part of testing.
You wouldn't know what synthesis is, what place and route is, or what GDS masks for foundries are.
There is seriously no place to learn this. The web design or AI worlds work very differently - there you can become a very productive engineer through Udacity. Not with ASICs.
You need to find a job in the chip design or EDA industry. There is seriously no other way.
If I had to make a wild parallel - the only other industry that works like this is compiler development. Same technology, same problems, similar testing steps I guess.
Among the few software systems that need this kind of rigor are control systems for physical installations and trading/finance systems.
I thought the thread was really about domains where the bulk of knowledge is locked up in industry rather than being about rigor, but I'd put control systems in that category as well. Also information retrieval (Google's search algorithms are about 2 decades ahead of the academic state-of-the-art...the folks at Bing/A9/Facebook know them too, but you aren't going to find them on the web), robotics, and aerospace.
In all honesty though, a significant amount of the complexity can be hidden in the EDA layers nowadays, and instead of dealing with the, er, "tenacious" synthesis tools, you can use good alternatives like Chisel.
Does something like Chisel give you competitive apples-to-apples performance? All the way to GDS?
The skills to do front end work are similar but an ASIC design flow generally doesn't use an FPGA to prototype. They are considered slow to work with and not cost effective.
IP cores in ASICs come in a range of formats. "Soft IP" means the IP is not physically synthesised for you. "Hard IP" means it has been. The implications are massive for all the back end work. Once the IP is hard, I am restricted in how the IP is tested, clocked, reset and powered.
For front end work, IP cores can be represented by cycle accurate models. These are just for simulation. During synthesis you use a gate level model.
Hardware emulators are expensive, but a single mask respin at 7, 10, or 16nm is even more expensive.
Nobody in their right mind would produce an ASIC without going through simulation as a form of validation. For anything non-trivial, that means FPGA.
The ability to perform constrained randomised verification is only workable via UVM or something like it. For large designs that is arguably the best verification methodology. Without visibility through the design to observe and record the possible corner cases of transactions, you can't be assured of functional coverage.
While FPGAs can run a lot more transactions, the ability to observe coverage of them is limited.
I have worked on multiple SoCs for Qualcomm, Canon and Freescale. FPGAs don't play a role in any SoC verification that I've worked on.
In this case, I'd guess it's got a lot to do with cost vs relevance of the simulation. If you're Intel or AMD making a processor, I bet FPGA versions of things are not terribly relevant because they don't capture a whole host of physical effects at the bleeding edge. OTOH for simpler designs on older processes, one might get a lot of less-formal verification by demonstrating functionality on an FPGA. But this is speculation on my part.
Process -- This is the science of creating circuits on silicon wafers using lithography, etching, and doping. There is a large body of knowledge around the physics involved here. Materials science and Physics and Silicon Fabrication are all good places to start.
Chip Design -- This is creating circuits which are run through a tool that can lay them out for you automatically. HDLs teach you to describe the semantics of the hardware in such a way that a tool can infer actual circuits. Generally a solid understanding of digital logic design is a pre-requisite, and then you can learn the more intimate details of timing closure, or floor planning, signal propagation and tradeoffs of density and speed.
IP -- Clearly intellectual property law as a whole is a huge body, but most of the IP around chips is patent law (how the chips are made) and copyright law (how they are laid out).
https://lagunita.stanford.edu/courses/Engineering/Nano/Summe...
No substitute for learning the physics, but at least it kind of gives you some idea of what's involved. In addition to all the crazy technology involved in fabricating the chips, the packaging technology has gotten really sophisticated. It can be very confusing what the difference is between BGA, WLCSP, stacked dies, etc. Anyway, the course covered a lot of different types of processing with examples.
Sadly I never got to hear the full story on how they came to be but I did get a good look at the packaging pipeline that Intel used at the time. It was extensive even then with half a dozen entities providing steps in the path.
[1] https://electronics.stackexchange.com/questions/7042/how-muc...
In fact, personally I think this is a much better route for open source hardware. Reverse engineering FPGA bitstreams is impressive, but you're swimming against the tide. If we had good open source tooling for synthesis/place-and-route/DRC checking and good open source standard cell libraries (and these things exist, eg. qflow, they're just not amazing currently) then truly open source hardware could actually be a thing. Even on 10 year old fabs you'll get much better results than you could on an FPGA (you just have to get it right the first time).
Europractice has prices online: http://www.europractice-ic.com/general_runschedule.php
How is the cost calculated? I presume size of final wafer (ie, number of chips produced) at least; does transistor count per chip influence anything too?
Finally, is it possible to produce and maintain a fully open-source design that's the chip-fab equivalent of the book publishing industry's "camera-ready copy"? I get the idea that this is specifically where things aren't 100% yet, but, using entirely open tools, can you make something that is usable?
Unfortunately, someone who has no access to the EDA tools from Cadence/Synopsys, or to standard cell library files from foundries, cannot really follow along all that far; you are limited to working at the RTL level.
There are several good open source RTL simulators available. I have personally used mostly Verilator, which supports (almost) all synthesizable constructs of SystemVerilog and has performance close to the best commercial simulators like VCS. It compiles your design to C++ code which you can then wrap in any way you like.
You should also check out https://chisel.eecs.berkeley.edu/, a hardware description language embedded in Scala. The nice thing about it is that it has a relatively large number of high quality open source examples and designs (https://github.com/freechipsproject/rocket-chip) and a library of standard components available, something which can't really be said of Verilog/VHDL unfortunately. As an added bonus you can actually use IntelliJ as an IDE, which blows any of the commercial IDEs available for SystemVerilog or VHDL out of the water.
Another thing I can recommend is to get yourself a cheap FPGA board, some of them are programmable purely with open source tools, see http://www.clifford.at/icestorm/. Alternatively the Arty Devkit comes with a license for the Xilinx Vivado toolchain.
I'm by no means a veteran but my understanding is that "IP core" refers to a design you buy from someone else. Say you want a video encoder on your smart fridge SoC. You can either spend a whole lot of time, manpower and money developing one yourself or you can license the design from someone else who already has one and just dump it in.
You'd only do this when you want to integrate the design into your own (likely mass manufactured) chip. You can also often buy a packaged chip that serves the same function for much less but doing that is a tradeoff. You can do it at very low volume and cost but you potentially lose a bunch of efficiency in terms of space and power.
I've seen that some FPGA boards have HDMI transceivers that will decode TMDS and get the frame data into the FPGA somehow. That got me thinking about various possibilities.
- I want to build a video capture device that will accept a number of TMDS (HDMI, DVI, DisplayPort) and VGA signals (say, 8 or 10 or so inputs, 4-5 of each), simultaneously decode all of them to their own framebuffers, and then let me pick the framebuffer to show on a single HDMI output. This would let me build a video switcher that could flip between channels with a) no delay, b) no annoying resyncs, and c) because everything's on independent framebuffers, the ability to compensate for resolution differences (eg, a 1280x1024 input on the 1920x1080 output) via eg centering.
- In addition to the above, I also want to build something that can actually _capture_ from the inputs. It's kind of obvious that the only way to do this is by recording to a circular buffer in some onboard DDR3 or DDR4 (256GB would hold 76 seconds of 4K @ 144fps). My problem is actually _dumping_/saving the data in a fast way so I can capture more input.
I can see two ways to build this:
1) a dedicated FPGA board with onboard DDR3, a bunch of SATA controllers and something that implements the equivalent of RAID striping so I can parallelize my disk writes across 5 or 10 SSDs and dump fast enough.
2) A series of FPGA cards, each of which handles say 2 or 3 inputs, and which uses PCI bus mastering to write directly into a host system's RAM. That solves the storage problem, and would probably simplify each card. I'd need a fairly beefy base system, though; 4K 144fps is roughly 3.6GB/s per stream, so eight inputs is nearly 29GB/s, well past PCI-e 3.0 x16's practical limit of about 16GB/s per direction.
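The bandwidth and storage arithmetic above can be sanity-checked in a few lines. This is a rough sketch under my own assumptions (24-bit RGB, no blanking, 3840x2160 "4K"); real links carry blanking and encoding overhead on top.

```python
# Sanity-check uncompressed 4K @ 144fps capture numbers.
# Assumptions (mine, not the original poster's): 3840x2160, 3 bytes/pixel.

width, height, bytes_per_pixel, fps = 3840, 2160, 3, 144

frame_bytes = width * height * bytes_per_pixel   # one raw frame
stream_rate = frame_bytes * fps                  # bytes/second per input

seconds_in_256gb = 256e9 / stream_rate           # circular-buffer depth

print(f"frame size:  {frame_bytes / 1e6:.1f} MB")
print(f"one stream:  {stream_rate / 1e9:.2f} GB/s "
      f"({stream_rate * 8 / 1e9:.1f} Gbit/s)")
print(f"256 GB holds {seconds_in_256gb:.0f} s of one stream")
```

With these assumptions one stream is about 3.6GB/s and 256GB buffers roughly 70 seconds, so the per-stream figure is gigabits-scale and it's the aggregate of several inputs that stresses the PCI-e link.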
I'll admit that this mostly falls under "would be really really awesome to have"; I don't have a commercial purpose for this yet, just to clarify that. That said, my inspiration is capturing pixel-perfect, perfectly-timed output from video cards to identify display and rendering glitches (particularly chronologically-bound stuff, like dropped frames) in software design, so there's probably a market for something like this in UX research somewhere...
No FPGAs have anything to decode TMDS. What they have is transceivers that will take the multi-gigabit serial signal and give it to you 32 or 64 bits at a time. Then it's your job to handle things like bit alignment, 8b10b decoding, etc. You need 6MB of RAM to hold one frame at 1080p, and you'll need to hold one for each input. That requires external RAM (there's not enough on the FPGA), which means you'd need something like 30Gbit/sec of RAM bandwidth before you've even stored anything, which means external DDR2 RAM (DDR3 is really awkward, I'd stick to DDR2 or LPDDR2; DDR4 is going to be impossible).
I'm not even going to comment on the feasibility of implementing SATA.
Another comment mentioned the HDMI2USB project, and the main FPGA target of that project uses DDR3 FWIW. I get the impression that managing RAM is just really really really _unbelievably_ hard; not impossible, just pɹɐɥ ʎllɐǝɹ ʎllɐǝɹ.
https://hdmi2usb.tv/home/ - Open video capture hardware + firmware
I would also recommend this project if someone doesn't have an idea of their own and/or doesn't have any FPGA experience. So if someone is looking for people with both of those, doing work in the open, who might be willing to mentor them, finding a way to help this project is a great way to get started.
It uses an FPGA with DDR3 incidentally, huh.
Thanks very much for this, this is very close to what I'm trying to get at. I agree that figuring out how to help out on this project would be very constructive, I've filed it away.
Actually, you can do this just fine. There are no 'non-commercial' clauses or anything like that in the GPL. You just have to release your own sources.
Hmm, I just found a teardown video: https://www.youtube.com/watch?v=TLs2CNhbKAs
I spy a Xilinx Spartan 6 at 2:16!
It's Altera Cyclone III-based. The MiST wiki says the free "web edition" of the Altera "Quartus II" development environment is sufficient to develop for the unit (albeit I haven't actually gotten around to doing anything with it yet).
I can't say how the MiST board stacks up to dev boards from FPGA manufacturers. I may be going about this the most wrong way possible, but here was my rationale: I was attracted to MiST because tutorials were available for it (https://github.com/mist-devel/mist-board/tree/master/tutoria...), and because the device could be usable as a retro-computing platform if it ended up being unusable for me for anything else. (Chalk that up to rationalization of the purchase, I guess.)
Logic gates are essential to learning digital circuits. After you understand logic gates, you can use them to build many things that are directly related to the application. There are many tools to verify that the logic gates work as designed.
Then based on the project requirements ($$, time, performance, ...), you can choose to use an FPGA or an ASIC to implement your logic. An FPGA uses an array of configurable logic gates, while an ASIC implements the gates directly in CMOS. FPGAs are easier to learn and much cheaper; you can buy a development board which costs only a few hundred dollars. ASICs need much more domain knowledge and many more people involved. An ASIC requires you to understand the electronics in order to build something useful: how the CMOS transistors are implemented (i.e. how a semiconductor becomes conductive), how resistance and capacitance affect performance, how the number of metal layers affects the wiring layout, and more. And don't forget manufacturing can introduce defects which cause the IC to malfunction in unexpected ways. Each step in an ASIC flow needs a specialist.
For my computer architecture class (i.e. the class where we learn how to design basic CPUs -- and ultimately implemented one in Verilog), we used Computer Organization and Design by Patterson and Hennessy. That might also be of interest.
When you want to understand CMOS and integrated circuits, you need some electronics experimenter kit and a lot of practice with Ohm's law. Then read up on multi-gate transistors and (here my experience stops) lithography and small-scale challenges (tunneling loss is a thing, I suppose?)
Of course, "ip core" can mean different things, might be some Verilog source, might be some netlists, might be a hard macro for a particular process. You really need to work with it to get the specifics. (Subscribing to EETimes, going to trade shows, and otherwise keeping up might help)
But at the end of the day, you're asking "how can I become an experienced ASIC engineer," and the truth is that it takes time, education, and dedication.
However, I couldn't find a chip-design job in a country other than the US or UK that wasn't related to military applications.
Now I work in Taiwan, and I see the chips being made! But my work is related to control systems for the testing equipment, which is software instead of hardware design.
I got a Virtex-II FPGA board from a recycling bin, and I wanted to find a good personal project for it. Even now, I'm at a loss for ideas. I can do everything I need with a Raspberry Pi already.
Please can someone suggest some good projects I could only do with an FPGA?
There is an open source FPGA workflow; ASICs are tougher.
You can think of "IP cores" as bundled up (often encrypted/obfuscated) chunks of Verilog or VHDL that you can license/purchase. Modern tools for FPGAs and ASICs allow integrating these (often visually) by tying wires together -- in practice you can typically also just write some Verilog to do this (this will be obvious if you play around with an HDL enough to get to modular design).
Just writing and simulating some Verilog doesn't really give you an appreciation for hardware, though, particularly as Verilog can be not-particularly-neatly divided into things that can be synthesized and things that can't, which means it's possible to write Verilog that (seems to) simulate just fine but gets optimized away into nothing when you try to put it on an FPGA (usually because you got some reset or clocking condition wrong, in my experience). For this I recommend buying an FPGA board and playing with it. There are several cheap options out there -- I'm a fan of the [Arty](http://store.digilentinc.com/arty-a7-artix-7-fpga-developmen...) series from Digilent. These will let you play with non-trivial designs (including small processors), and they've got lots of peripherals, roughly Arduino-style.
If you get that far, you'll have discovered that's a lot of tooling, and the tooling has a lot of options, and there's a lot that it does during synthesis and implementation that's not at all obvious. Googling around for each of the phases in the log file helps a lot here, but given what your stated interest is, you might be interested in the [VLSI: Logic to Layout](https://www.coursera.org/learn/vlsi-cad-logic) course series on Coursera. This talks about all of the logic analysis/optimization those tools are doing, and then in the second course discusses how that translates into laying out actual hardware.
Once you've covered that ground it becomes a lot easier to talk about FPGAs versus ASICs and what does/doesn't apply to each of them (FPGAs are more like EEPROM arrays than gate arrays, and for standard-cell approaches, ASICs look suspiciously like typesetting with gates you'd recognize from an undergrad intro-ECE class and then figuring out how to wire all of the right inputs to all of the right outputs).
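The "FPGAs are more like EEPROM arrays than gate arrays" point is worth making concrete: each logic cell is basically a small lookup table (a tiny memory) whose contents define which boolean function it computes. A toy sketch, with names that are mine rather than from any vendor toolchain:

```python
# Toy model of an FPGA LUT: a k-input lookup table is a 2^k-entry memory,
# so *any* boolean function of its inputs is just one memory read.

def make_lut(truth_table):
    """truth_table[i] is the output when the inputs, read as binary, equal i."""
    def lut(*inputs):
        index = 0
        for bit in inputs:
            index = (index << 1) | bit  # pack input bits into an address
        return truth_table[index]
    return lut

# "Program" a 2-input LUT as XOR: outputs for inputs 00, 01, 10, 11.
xor = make_lut([0, 1, 1, 0])

print([xor(a, b) for a in (0, 1) for b in (0, 1)])
```

Reprogramming the FPGA is then just rewriting those memories (plus the routing), which is why the same fabric can become any circuit that fits.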
Worth noting: getting into ASICs as a hobby is prohibitively expensive. The tooling that most foundries require starts in the tens-of-thousands-per-seat range and goes up from there (although if anyone knows a fab that will accept netlists generated by qflow I'd love to find out about it). An actual prototype ASIC run once you've gotten to packaging, etc. will be in the thousands to tens of thousands at large (>120nm) process sizes.
For <100k, yes, you can absolutely do a small run in that range.
Honestly, you might be better off just buying functional ICs (multi-gate chips, flip flops, shift registers, muxes, etc.) and making a PCB, though. Most crypto stuff is small enough that you can do a slow/iterative solution in fairly small gate counts plus a little SRAM.
I was about to write a thorough answer but jsolson's very good answer mostly covered what I had to say. I'll add my 2 cents anyway
- To understand the manufacturing process you need to know the physics and manufacturing technology. That's very low level stuff that takes years to master. There are books and scientific papers on the subject but it's a game very few players can play (ie. big manufacturers like intel and TSMC, maybe top universities/research centres). You can try some introductory books like Introduction to Microelectronic Fabrication [1] and Digital Integrated Circuits [2] by Rabaey if you're curious
- You can design ASICs without being an "expert" on the low level stuff, but the tools are very expensive (unless you're a student and your college has access to those tools). You need to learn VHDL/Verilog, definitely need the right tools (which, I'll say it again, are too expensive for hobbyists) and extra money to spend for manufacturing.
- FPGAs are different. No need to know the physics and you don't have to bother with expensive tools and foundries, the chip is already manufactured. My advice is to
(a) get an FPGA board. Digilent boards are a good choice as jsolson said, but there are other/cheaper solutions as well [3][4]. You'll still need software tools but (for these boards) they are free
(b) learn VHDL and/or Verilog. Plenty of books and online resources. A good book I recommend is RTL Hardware Design Using VHDL [5]
(c) I assume you know some basic principles of digital design (gates, logic maps, etc.). If not you'll need to learn that first. A book is the best option here, I highly recommend Digital Design [6] by Morris Mano
[1] https://www.amazon.com/Introduction-Microelectronic-Fabricat...
[2] https://www.amazon.com/Digital-Integrated-Circuits-2nd-Rabae...
[3] https://www.scarabhardware.com/minispartan6/
[4] http://saanlima.com/store/index.php?route=product/product&pa...
[5] http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0471720925...
[6] https://www.amazon.com/Digital-Design-3rd-Morris-Mano/dp/013...
Btw, you ask about four almost totally separate areas (RTL, UVM, TB, etc.); only managers/execs/veterans/architects know the whole process from raw silicon to packaging.
I spoke to the creator today and he's planning a tutorial/example IP series - probably open to suggestions if there's anything you're particularly interested in.
1. Processes, which involves lots of materials science, chemistry, and low-level physics. This involves the manufacturing process, as well as the low-level work of designing individual transistors. This is a huge field.
2. Electrical circuits. These engineers take specifications given to them by the foundry (transistor sizes, electrical conductance, etc.) and use them to create circuit schematics and physically lay out the chip in CAD. Once you finish, you send the CAD file to the group from #1 to be manufactured. Modern digital designs have so many transistors that they have to be laid out algorithmically, so engineers spend lots of time creating layout algorithms (a field known as VLSI CAD).
3. Digital design. This encompasses writing SystemVerilog/VHDL to specify registers, ALUs, memory, pipelining etc. and simulating it to make sure it is correct. They turn the dumb circuit elements into smart machines.
It's worth noting that each of the groups primarily deals with the others through abstractions (group 1 sends a list of specifications to group 2; group 3 is given a maximum chip area / clock frequency by group 2), so it is possible to learn them fairly independently. Even professionals tend to have pretty shallow knowledge of the other steps of the process since the field is so huge.
I'm not super experienced with process design, so I'll defer to others in this thread for learning tips.
To get started in #2, the definitive book is The Art of Electronics by Horowitz & Hill. Can't recommend it enough, and most EEs have a copy on their desk. It's also a great beginner's book. You can learn a lot by experimenting with discrete components, and a decent home lab setup will cost you around $100. Sparkfun/Adafruit are also great resources. For VLSI, I'd recommend this Coursera course: https://www.coursera.org/learn/vlsi-cad-logic
To learn #3, the best way is to get an FPGA and start implementing increasingly complicated designs, e.g. basic logic gates --> counters --> hardware implementations of arcade games. This one from Adafruit is good to start: https://www.adafruit.com/product/451?gclid=EAIaIQobChMIhKPax..., though if you want to make games you'll need to pick up one with a VGA port.
Silicon design & manufacturing is super complicated, and I still think that it's pretty amazing that we're able to pull it off. Good luck with learning!
(Source: TA'd a verilog class in college, now work as an electrical engineer)
Example: http://www.ebay.com/itm/ALTERA-FPGA-Cyslonell-EP2C5T144-Mini...
This uses the EP2C5T144. Some basic getting started info here: http://www.leonheller.com/FPGA/FPGA.html [Leon Heller]
You can run a free copy of Quartus and use it to run Verilog / VHDL cpus, or anything else you want to do.
Same applies to the Xilinx FPGAs.
A really cool example project: http://excamera.com/sphinx/fpga-j1.html
I would not worry about vendor lock-in for now -- there are some quite affordable dev boards (like the Arty (http://store.digilentinc.com/arty-a7-artix-7-fpga-developmen...) I've mentioned elsewhere), and no matter what you pick there's a ton of tooling. The concepts from the tools will translate between vendors, though, even if the commands and exact flows change.
That said, for a novice there are no significant differences in features between Altera and Xilinx.
The place where you might see problems is mostly in simulation, where the big three tool vendors support different language features in SystemVerilog and VHDL-2008. Again, this is not likely to be a problem for a hobbyist/novice.
GHDL has good support for VHDL-2008, I don't know how good Icarus supports the corresponding SystemVerilog.
This might help you:
MOSIS Integrated Circuit Fabrication Service
https://www.mosis.com

If you could tell us your background it might be helpful to get started.
I've actually done all-of-the-above, from making SOI wafers to analog circuit design for CMOS image sensors to satellite network simulation in FPGAs to supercomputer architecture and design to XBox GPU place-and-route.
It will honestly take you at least a few years to be able to understand all of this, and I can't even begin to tell you where to begin.
My track started with semiconductor fabrication processing in college - lots of chemistry, lithography, physics, design-of-experiments, etc. I guess that's as good a start as any. But before that I did get into computer architecture in high school, so that gave me some reference goals.
What are you ultimately trying to do? Get a job at a fab? That's a lot of chemistry and science. Do you want to design state-of-the-art chips, as your IP-core question hints at? That's largely an EE degree. Do you want to build a cheap Kickstarter product, as your FPGA question suggests? That's EE and Computer Engineering as well.