Holy cow this is awesome! The integration of project context with an LLM is more than the sum of its parts when it comes to information access.
When designing circuits for fast iterations, you're constantly managing risk and time expenditure; this leads to copying a lot of "flight heritage" circuits so to speak.
With the LLM, I'm able to get highly condensed starting points for better circuits in terms of simplicity, understandability, or performance. As of right now, all I have to do is fact-check the solution, which is much easier than synthesizing one myself. One example: I got it to design a viable high-gain analog frontend that meets my gain, bandwidth, noise, and biasing requirements.
GPT-4 produces a lot of my code now.
I'm leap-frogging my team a bit in productivity because they still don't like it, but it's so close to being undeniable.
Which one of us is lying?...
Also, I find myself having a conversation as I would with a real person, so it's less a machine that spits out the right answer to any technical question I might have and more a design buddy to consult when you're blocked in your search through the massive electronics solution space. Of course, this is a very preliminary use case and I can't wait to see what other people use it for.
1) How to communicate the circuit connections to ChatGPT in the text prompt. We ended up just using the standard netlist format. Apparently it knows about netlists from its training data. It also knows about datasheets.
2) How to get the needed accuracy and precision for questions like "calculate the resistance needed between component A and B". For that, we adopted the popular langchain framework to integrate specific "tools" into the AI for specific recognizable problems.
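For concreteness, a minimal SPICE-style netlist of the kind that can be pasted into a prompt looks like this (the circuit and component values here are arbitrary, chosen only to illustrate the format):

```spice
* RC low-pass filter
* element  node+  node-  value
V1 in 0 DC 5
R1 in out 10k
C1 out 0 100n
.end
```

Each line names a component, the nets it connects to, and its value, which is exactly the connection information the model needs.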
It then also recommends a LiPo battery with a nominal 3.7V which obviously wouldn't be compatible with its LM7805 recommendation.
It also recommends an ESP8266 which is a 3.3V part so neither of the recommended regulators, nor the battery alone, would be appropriate.
Also, it recommends a 168MHz STM ARM micro - but it already has an ESP8266 with GPIO.
I'm not sure what it's suggesting I build but it doesn't really sound like a working environmental radiation logger ;)
How should I think about this?
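Incompatibilities like the ones described above can be caught with a trivial rule check. A sketch, using rough typical dropout figures rather than datasheet values (verify before relying on them):

```python
# Toy rail-compatibility check. The dropout numbers are approximate
# typical values, not datasheet data; they are only illustrative.
def regulator_ok(vin, vout, dropout):
    """A linear regulator needs vin >= vout + dropout to stay in regulation."""
    return vin >= vout + dropout

LIPO_NOMINAL, LIPO_EMPTY = 3.7, 3.0   # volts

# LM7805 (5 V out, ~2 V dropout) can't run from a LiPo:
print(regulator_ok(LIPO_NOMINAL, 5.0, 2.0))   # False
# A 3.3 V LDO with ~0.3 V dropout works at the nominal cell voltage...
print(regulator_ok(LIPO_NOMINAL, 3.3, 0.3))   # True
# ...but drops out near the bottom of the LiPo discharge range:
print(regulator_ok(LIPO_EMPTY, 3.3, 0.3))     # False
```

The last case is why a buck converter, which keeps regulating across the whole battery range, is the safer suggestion here.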
We've had good results by asking follow up questions like "what power supply IC could I use to power a 3.3v microcontroller from a 3.7V nominal LiPo battery over the whole battery voltage range?".
I asked that question and I got: LM3940 and AMS1117-3.3 (LDOs with high dropout, but probably OK for low current; one isn't in stock, though the tool doesn't have stock info yet), and TPS62203 and ADP2108 (bucks, both seem appropriate). These answers should get better as Copilot gets access to more and more tools.
It refused to answer some subjective questions or give suggestions; "only you can decide" was the response.
I'm very suspicious of all this. I'm thinking the only real benefit--some may say--is the chat aspect. You can carry on a conversation rather than having to determine a proper Google search over and over again.
Which makes me think there is some hype involved.
I'm overall bullish on AI systems but I'm not sure this space has enough detailed context to train on. Much of this knowledge is in-house/proprietary. We'll see how fast these are able to improve over time.
For many applications, such as talking to PDFs, you can use a vector DB like Pinecone.
But that doesn't work well for schematics, because for the interesting use cases you need access to the whole thing.
Here is what’s been working for us
1. Extreme compression and summarization of the data we dynamically put in the context
2. Leverage the extreme amount of world knowledge the cutting-edge models have
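As an illustration of the compression step, here is a sketch that reduces a netlist to a few summary tokens before it goes into the context window. The summarization scheme is invented for this example, not Flux's actual pipeline:

```python
# Hypothetical schematic summarizer: boil a SPICE-style netlist down to a
# compact description that fits cheaply in an LLM context window.
from collections import Counter

def summarize_netlist(netlist_lines):
    """Count component kinds (first letter of the ref-des) and distinct nets."""
    kinds = Counter()
    nets = set()
    for line in netlist_lines:
        if not line or line.startswith(("*", ".")):
            continue  # skip comments and control lines
        parts = line.split()
        kinds[parts[0][0].upper()] += 1   # V, R, C, ...
        nets.update(parts[1:3])           # first two fields after the name
    return f"{sum(kinds.values())} components ({dict(kinds)}), {len(nets)} nets"

lines = ["V1 in 0 DC 5", "R1 in out 10k", "C1 out 0 100n"]
print(summarize_netlist(lines))  # 3 components ({'V': 1, 'R': 1, 'C': 1}), 3 nets
```

A real system would keep far more detail, but the principle is the same: spend context tokens on a distilled view, and let the model's world knowledge fill in the rest.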
Can you demo something with a little more bite to it? Seeing that you can help me save time on actually-meaty pieces of design work would make a lot of difference for me. Maybe something like "design a correctly biased class-AB output stage, with rail-to-rail swing on +/-5V rails, intended for use boosting op-amp output current"? That is an interesting one because (1) it's a small but real, useful, meaty problem with (2) a lot of examples out there in the world to copy from with (3) a nontrivial added constraint in rail-to-rail out but (4) even solutions that aren't all the way there are useful.
The thing about LLMs is that they answer based on data they have seen before. So if enough posts on Stack Overflow in 2021 claimed the mps340 is the best CPU for beginners, that's what the AI will recommend.
There are three problems with this: it can and will be gamed; it is usually more conservative and not on the bleeding edge; and finally, you can't ask questions you didn't know you needed to ask, so future products built by some random dude using ChatGPT will have reliability and safety issues.
Modern LLMs are able to perform web searches to make decisions on contemporary data. Once they have proper API support your concerns should be resolved, hopefully in a few weeks.
> reliability and safety issues.
The solution to this is fine-tuning / RLHF. OpenAI has done a pretty extensive job of getting political safety into ChatGPT with RLHF. It seems reasonable that RLHF could achieve a similar result in the hardware domain.
> you can't ask questions you didn't know you needed ....
Solvable by prompt engineering. You can wrap user input in a prompt. As a toy example: "Here is user input $userInput; if you have safety concerns about their project, please respond with questions you think the user forgot to ask". Might also be possible to tweak with fine-tuning/RLHF.
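The toy prompt above, expressed as a trivial wrapper function (the template wording is illustrative only, not a tested production prompt):

```python
# Toy prompt-wrapping sketch: user input is embedded in a fixed template
# that steers the model toward surfacing questions the user didn't ask.
def wrap_for_safety_review(user_input):
    return (
        "Here is user input:\n"
        f"{user_input}\n"
        "If you have safety concerns about their project, please respond "
        "with questions you think the user forgot to ask."
    )

prompt = wrap_for_safety_review("I'm building a mains-powered LED dimmer.")
print(prompt)
```

The wrapped string would then be sent to the model in place of the raw user input.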
a) has there been anything like this before?
b) give it a year and see what it does
But the moment you're asking LLMs to reason about the specs or the applications of specific chips, they will give made-up answers around 80% of the time. This is probably a matter of the data being a bit too sparse. Pick a chip and ask about its supply voltage range, and it will probably get it wrong, even for the most popular stuff.
But isn't your supply voltage example exactly the sort of stuff they should get right? It's just regurgitating data sheets.
I kind of expect a tool like this to be able to operate in one of two, theoretically distinct (but maybe not practically distinct) modes. One is basically "blueprints", very much in line with the example I suggested earlier: draw me up one of these that fits in here. I wouldn't expect creativity, just more or less rote execution of a common plan. The other mode let's call "tuning": continuing with the class-AB stage example, this would be things like setting the biasing resistors correctly. That's a tedious task, possibly requiring simulation to do well, possibly just copyable, but with a large margin for usable results. I may not care exactly what I get as long as it's somewhat workable, which would be a good place for machine help. Or I might just want a decent starting point for my own simulation. I think "AI" techniques could handle either of those general modes and produce useful (if imperfect) results that save overall time.
What I don't expect AI to do is design architectures for me, or pick approaches. A tremendous amount of my value-add as an engineer is just me saying "no, don't choose approach A, I know it looks nearly equivalent to B on paper, but B is going to work out better". AI is not there now and I don't see LLM-style AI getting there any time soon, just by its nature. (At least not when it's a genuine judgement question and not an education/"A is never right" situation.)
What I don't think AI can do but I really, really wish it could is help me pick parts. All it needs to do for this is read and halfassedly-understand a bunch of datasheets, more datasheets than I can read myself. I think LLMs can do that! Though they might need to also be able to read graphs or simple diagrams ("Find me an EEPROM in the 208-mil SOIC package" is a great prompt and seems thoroughly actionable! But sometimes that dimension is only provided as part of the package drawing, which might be an issue.)
Recently I needed a PMOS FET with particular specs and was getting frustrated not finding quite what I wanted. So I asked ChatGPT. It was great at making me laugh and giving me a little break from datasheets! It did not actually help find a part. (It kept suggesting NMOS parts when I needed and asked for PMOS.)
Incredibly stoked to announce this today!
This is a dream a long time in the making. When we first thought of rebuilding the hardware design tooling stack from scratch, AI-driven design seemed like the dream at the end of the tunnel... only 3+ years later, here we are.
Given the open-ended nature of AI Expert agents, we are launching a community preview. We are committed to continuing to build http://flux.ai in the open and are excited to explore this new realm with all of you.
Hit me with questions
I have a personal project similar in concept to flux, but rather than working at the circuit level, it's at the device, firmware, and infrastructure level. I've been trying to use GPT to intuit how to network devices using different protocols, such that it might be easier for lab techs (for example) to discover how to manage and maintain their experiment tooling and infrastructure. It's really hard, though. The outputs and level of integration are mind-bendingly good, but... It's not perfect, which makes it too clumsy or even dangerous for what I want to build.
I think I envision something like GPT transforming human language into prompts for a narrower model with more predictable outputs, which is far more aware and capable with the application's logic and data. The user experience could then give you the conversational ease, yet far better application integration as well.
Anyway, just curious what you're thinking and very excited about projects like this. This is really cool.
We already use a multi-model approach and a mix of non-deterministic and deterministic tooling in the pipeline to optimize for a given task.
Lots more work to be done here.
It makes sense if you're just shopping it around to the EDA incumbents since they'd have you redesign those parts anyway, but the fact that you currently offer a paid subscription to users makes me think that either you've underestimated or I've overestimated the importance of having those features by now. I guess that could also make sense if you want a more manageable user base for now.
We do have symbol and footprint creation and editing capabilities…in fact the vast majority of parts on the platform have been created by users.
Now, what we don't do is offer a modal editor like legacy tools do… instead, it all happens in the same mode, which simplifies a lot of the UX and also provides powerful new abilities, such as using a schematic to create simulation models.
Now, that said, I think we can do a lot more here to further improve the experience, and we definitely will. But you're right that it hasn't been much of an issue, which is because you have access to a vast preexisting library in the platform.
If it's wrong, you can determine that during simulation. You could also assert that in the background, I suppose — prune the bunk suggestions by the language model before presenting them.
In my own experiments I find GPT can succeed in complex tasks if you prompt it multiple times at different steps of the task, and have some level of recursive validation and re-prompting to keep it in line. In a case like this I have a feeling it would do a great job, and would be especially useful for the discovery/exploration phase of a new project.
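A minimal sketch of such a validate-and-re-prompt loop, with stand-in functions for the model call and the validator (in practice those would be a model API and a domain checker such as a simulator or ERC pass):

```python
# Sketch of recursive validation and re-prompting. `ask_llm` and `validate`
# are stand-ins supplied by the caller; everything here is illustrative.
def solve(prompt, ask_llm, validate, max_tries=3):
    """Re-prompt with the validator's complaint until the answer checks out."""
    for _ in range(max_tries):
        answer = ask_llm(prompt)
        ok, complaint = validate(answer)
        if ok:
            return answer
        prompt += f"\nYour previous answer was rejected: {complaint}. Try again."
    return None  # give up after max_tries

# Toy demo: a fake "model" that only gets it right on the third try.
answers = iter(["12V", "5V", "3.3V"])
result = solve(
    "What supply voltage does an ESP8266 need?",
    ask_llm=lambda p: next(answers),
    validate=lambda a: (a == "3.3V", f"{a} is out of spec"),
)
print(result)  # 3.3V
```

Feeding the complaint back into the prompt is what keeps the model "in line" between steps.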
For starters: from our testing, it's extremely good at providing the right formulas to solve the problem at hand.
But you are right: these models are inherently non-deterministic and therefore can struggle with things like math. The solution is to give it access to tools, such as a calculator, which works really well.
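One way the calculator-tool idea can be wired up, as a toy sketch. The CALC marker convention is invented here for illustration; production systems use structured tool/function calls instead:

```python
# Toy "hand the model a calculator" harness. The model is prompted to emit
# CALC(expression) markers instead of doing arithmetic itself; the harness
# then substitutes deterministically computed results.
import re

def run_tools(model_output):
    """Replace CALC(...) markers with computed results (no nested parens)."""
    def calc(match):
        # Numeric-only eval; a real harness would use a safe expression parser.
        return str(eval(match.group(1), {"__builtins__": {}}))
    return re.sub(r"CALC\(([^()]*)\)", calc, model_output)

# e.g. an LED resistor: (5000 mV supply - 2000 mV forward drop) / 20 mA:
out = run_tools("R = 3000 mV / 20 mA = CALC(3000 / 20) ohms")
print(out)  # R = 3000 mV / 20 mA = 150.0 ohms
```

The model supplies the formula, which it is good at; the harness supplies the arithmetic, which it is not.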
e.g. most recently, I needed a boost converter with at least 300 mA sustained current at 3.3V, coming from the lowest-voltage source possible (an alkaline-battery-powered gadget). After an hour or so of searching I had to choose between the TLV61220 and the TPS61021A. Maybe there is a better third chip? At that point I ran out of time to research and just figured the TPS61021A would do.
Digikey/Mouser/etc have parametric filtering (when it works), but optimization among three variables is not the strong suit there, especially when you have, say, an 8mm square to fill and cost is a factor (and availability/flexibility between footprints)
I would love to use this as a fair recommendation engine (that isn't sponsored by TI, skewing all results towards their products) where I can give it my circuit's minimum/recommended/maximum parameters and have it suggest applicable parts, or even take my finished designs and suggest cheaper alternatives for parts that match the design's footprints but have lower capabilities, if that's all my circuit needed in the first place.
(just threw my query into chatgpt and it suggested https://www.ti.com/product/TPS61221) not bad.
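A recommendation engine like the one described above is, at its core, a filter-and-rank over a parametric parts table. A sketch; every part name and number below is an invented placeholder, not a real spec (a real table would come from a distributor API or extracted datasheets):

```python
# Toy multi-variable part picker. All parts and figures are made up.
parts = [
    # name, min startup Vin (V), max Iout at 3.3 V (mA), area (mm^2), price ($)
    ("BOOST_A", 0.9, 200, 4.0, 0.45),
    ("BOOST_B", 0.5, 600, 9.0, 0.95),
    ("BOOST_C", 0.7, 400, 6.3, 0.60),
]

need_iout, max_area = 300, 64.0   # 300 mA sustained, an 8 mm x 8 mm budget

# Hard constraints first: enough current, fits the board area.
viable = [p for p in parts if p[2] >= need_iout and p[3] <= max_area]
# Then rank survivors: lowest startup voltage first, price as tie-breaker.
viable.sort(key=lambda p: (p[1], p[4]))
print([p[0] for p in viable])  # ['BOOST_B', 'BOOST_C']
```

The hard part isn't this logic, it's building an unbiased, complete parametric table, which is exactly where an LLM that can read datasheets would help.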
I worked at Meta for years and knew first hand that ML/AI can only ever get good with loooots of training data.
So we needed a way to produce lots of high quality training data first and a way for users to be able to reason about results and edit them.
There is a graveyard of failed AI hardware design projects, and one thing they all have in common is that they are black boxes… it's very obviously the wrong approach. But every week a new one pops up… I don't even understand why anyone funds this stuff anymore, frankly… it can't work.
So we embarked on building our own eCAD editor first… that was gonna be step number one. It was also the hardest and most expensive way to go about it… but if you have big dreams then you just have to go and do big things. I resent these get-rich-quick schemes personally anyway… I love doing hard things!
LLMs just starting to get good around the time we had the editor ready for public launch (after years of beta) was a happy coincidence that makes us look like geniuses now. But I take it lol.
We had been actively playing on the side for the past 6 months with ideas for the first AI features as we geared up for launch of the core editor, but the critical insight to use our existing projects' comments surface was a last-minute flash of inspiration that came to us about 6 weeks ago.
Things went pretty quick from there and we are moving extremely fast now that we have the right foundation.
Exciting times!
This kind of talk is tacky and repels technical users.
I guess it doesn't matter if a few people are annoyed, if the end result of these things is a new increase in conversions. But I wish "don't annoy people" was actually in the product requirements.
Me: Is there a digital to analog converter with an 8V analog range and serial input?
CGPT: Yes, there are several digital-to-analog converters (DACs) available with an 8V analog range and serial input. One example is the AD5541A DAC from Analog Devices. It has a 16-bit resolution and can accept serial input in the form of SPI or QSPI signals. The output voltage range is programmable and can be set to 0 to 8V using an external reference voltage.
Another example is the MAX5389 DAC from Maxim Integrated. It has an 8-bit resolution and can accept serial input in the form of I2C signals. The output voltage range is also programmable and can be set to 0 to 8V using an external reference voltage.
These are just a couple of examples, but there are many other DACs available from different manufacturers that meet your specifications.
Me: Those parts are limited to 6v max
CGPT: I apologize for the mistake in my previous response. You are correct that the AD5541A and MAX5389 DACs have a maximum output voltage range of 5V, and they cannot meet your requirement of an 8V analog range.
However, there are other DACs that can meet your specifications. For example, the DAC5578 from Texas Instruments is a 16-bit DAC that can operate from a single 5V supply and has a programmable output range up to 10V. It also supports a serial interface with SPI or MICROWIRE protocols.
Another option is the MCP49xx series from Microchip, which includes several DACs with output voltage ranges up to 10V and a serial interface with SPI or QSPI protocols.
I hope this information is helpful, and please let me know if you have any further questions.
It would be interesting if circuit simulation could be used to vet out suggestions, but you'd need a vast database of components and their pinouts, GPT or something else would need to know how to arrange the circuit in order to simulate them properly, and so on. That's the only way I can think to validate outputs at the moment, though.
It's clearly missing experience with magic smoke.
Just signed up for a free account, will give it a try, but expecting hallucination to be a major issue.
Promising, but not quite ready for prime time yet.
I thought ML needs huge datasets? This is great for GitHub's Copilot as there is plenty of high quality, production, open source code they've used without authorisation from the creators (and possibly proprietary code).
But this dataset just doesn't exist for hardware development on the internet. Almost everything on the net is hobbyist stuff. Great for hand building 10 on a desk but absolute rubbish for a (even horribly) manufacturable and COMPLIANT product.
Further: an anecdotal, but to me very telling, one-liner from the video: "the component is 'not use for new designs' but we can ignore that because there's 260,000 of them". I have personally sniped larger quantities than that from Digikey. "Not for new design" means you don't use it in a new design. PERIOD. And ironically, for something like a Murata cap, they've probably literally changed one letter in the PN because they've made a tiny process or recipe change, and it's more a "move everyone to the new iteration" process.
I do sincerely wish them success, and maybe they can carve out a small bit of the hobbyist market, but that's probably it.
It also made some similar mistakes to what you describe, with silly things like the wrong parameter names. It still saved a lot of typing compared to doing it all manually. It didn't take long to fix it either, it just required fairly decent software engineering skills.
How is this relevant to circuit design? Well, in the same way it could potentially be an effort amplifier for people who already thoroughly know what they are doing.
AI is on the verge of becoming a really powerful lever.
This is a first step today; it's neither done nor perfect… but it kept blowing our minds, and our most senior testers' minds, so we decided not to withhold this incredible technology and instead to invite everyone to open experimentation and learning.
There are a lot of space constraints so first thing is figuring out if it is even possible to build as small as needed. Usually I would probably be better off starting to build big first and then scale down. But I feel confident that I can already get quite far with the amazing resources out there.
I spent hours watching videos about soldering, PCB design, some of the components I think I would need (very flat flexible cables, small ESP32 SMD, no space for USB) and then how to start actually doing the work. My last exposure to electronics was in grade 8 and that was very heavy on cables.
This 3h video here was very helpful to understand the process. It's an overview of building your own ESP32 dev board with EasyEDA from start to finish: https://youtu.be/S_p0YV-JlfU
I like these top-down big picture kind of things that go over everything in just enough detail to know what to search for. Does something like this but with flux.ai exist for complete beginners? The tutorials seem more focused on users switching over from other PCB design tools.
Every step of the way I am just amazed how today I can get physical things custom made and shipped within days.
And then there's fun videos like this here: Small scale fully automated PCB production in what is basically a garage -> https://www.youtube.com/watch?v=ZH6ljHUx_IM&t=800s
Also curious: has anyone created an LLM for circuit schematics trained on HDL code?
Very cool application. It's nice to be reminded that people are actually trying to make useful products in a world where it seems everyone is trying to sell you a solution with no problem.
That plus broad world knowledge and reasoning abilities gets you very far
And then you dynamically provide proper context and prompting, as well as deterministic tooling such as a calculator, and voila… you've got your new favorite virtual teammate :)
From this landing page's headline & 1st paragraph, I was still thinking they were talking about the broader-market, better-known Github Copilot – also a for-pay LLM-driven coding assistant! – in some way. In the 2nd paragraph, I saw they were just reusing the name of someone else's LLM coding assistant for something different that they independently trained.
Note also the current submission headline here is "ChatGPT for Hardware Design", not the page's "AI For Hardware Design with Copilot".
Did the submitter not use the usually-requested original page title, because they knew `Copilot` would be misunderstood? (They created another misunderstanding via their change: ChatGPT is only an analogy for what Flux.ai's product is doing, rather than a contributing part of the system.)
What is the process for adding components to the list? Additionally, could rules be user-defined (e.g., to minimize EMI/EMC)?
Flux is designed much like GitHub in this regard…you can fork other people projects and what not.
You can already ask it today to perform EMI/EMC checks for you…and we have started thinking about using it to write static rules for you among other things
Oh and something that can read datasheets for me, and extract important parameters (or convert them to spice models).
A tool that can hunt for cheap components would also be nice.
Apart from that, I don't think I like to have any fuzziness in my workflow.
We are working on all of this right now and have a bunch more announcements over the coming weeks and months.
Stay tuned!
It is becoming possible for AI systems to rapidly design hardware and software systems from the requirements of individuals who do not necessarily understand the subject matter.
I give tech 10 years before most of it is accomplished by AI and most people in tech will be forced out. Like DC elevators, some will remain, but not like it was.
See what I did there? Everyone can make outrageous claims without backing them up whatsoever :)
So def give it a shot!
I'm on call for the team this week.
We'd love to get more information here as we take anything around privacy and authentication very seriously. I'd like to get usernames of profiles involved and rule out being redirected back to a page you were visiting before signing in.
Would you be open to discussing further? You can reach me at robert.herbst@flux.ai. Being able to discuss with you directly would be really helpful.
Alternatively we actively monitor and respond to bug reports filed here: https://feedback.flux.ai/bugreports
When a user visits a Flux project without being signed in we create a temporary anonymous user record for them so that they can interact with the document. These user records are populated with human-like names (which is partially to blame for the confusion here. We'll fix that.)
When a user signs up their anonymous record is updated with the details they provide but the record id remains the same so we can keep the user's interaction history.
In this case when the Github sign up failed the process stopped half way. The anonymous account was never updated with the username neodypsis provided but appeared to be someone else's account (a human name followed by several numbers).
We've verified that the account was indeed linked to neodypsis' Github profile and no-one else had ever had access to it.
We've started some work to improve the OAuth sign up process.