I then tried building a custom AI chatbot for our support team, powered by a vector database (with previous answers, our docs, and community forum data). In my prototypes, the successful response rate was much higher (around 5-7x higher, at ~30%). But it was surprisingly hard to build this bot: I needed to stitch together vector databases, custom integrations with Intercom, cronjobs to sync data, etc.
That’s why we built Retool Vectors. The idea is to build a vector database that has full ETL from whatever inputs you have (e.g. Intercom, a Postgres database, Salesforce, a community forum, a website, etc.) and is always kept up to date. (We’re still working on some of these features, but I decided to try to launch this week because I just wanted to get feedback from HN, haha.) I think the industry has now settled on vector databases as the best way to provide context to LLMs. I hope that Retool Vectors can be a much easier way of getting data into one.
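To make the "provide context to LLMs" step concrete, here is a minimal sketch of the retrieval side, assuming documents have already been embedded (toy 3-dimensional vectors here; a real pipeline would use an actual embedding model, and the document texts below are made up):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, docs, k=2):
    """docs: list of (text, vector) pairs; returns the k most similar texts,
    which would then be pasted into the LLM prompt as context."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

docs = [
    ("How to set up OAuth with Slack",   [0.9, 0.1, 0.0]),
    ("Billing and plan limits",          [0.0, 0.2, 0.9]),
    ("Debugging OAuth redirect errors",  [0.8, 0.3, 0.1]),
]
context = top_k([1.0, 0.2, 0.0], docs, k=2)
```

The ETL part is what keeps `docs` fresh as the sources change; the retrieval itself is just nearest-neighbor search like this.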
If you have any feedback please let me know!
>That’s why we built Retool Vectors.
Reminds me of Tobi Lütke, who founded Shopify like that. When he built an ecommerce store to sell snowboards and realized how hard it was to set that up, he figured it was better to sell ecommerce stores than to sell snowboards.
1. A lot of the useful data for answering questions is in our public docs and community forum answers, which Intercom doesn’t have access to. (And we wouldn’t feel comfortable giving them access to our internal Slack anyhow.) For example, we’ve debugged complicated OAuth issues in Slack, and there is a lot of “context” there that is helpful for answering future OAuth questions (but isn’t available to Intercom).
2. Intercom doesn’t allow you to customize prompts or customize context easily. In our case, for a highly technical product, “prompt engineering” allowed us to radically improve answer quality. We could also use chain-of-thought prompting, which Intercom didn’t support. Together these two improvements probably doubled the answer success rate.
3. We needed to integrate with our data warehouse for in-product context. For example, if a customer has an error with a particular product/feature, knowing what plan they’re on, which features they’re using, which feature flags are enabled, etc. is quite helpful.
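Points 2 and 3 can be sketched together: a hypothetical `build_prompt()` that injects retrieved docs plus warehouse context (plan, feature flags) and asks for chain-of-thought reasoning. The field names and example values are illustrative, not Retool's actual schema:

```python
def build_prompt(question, retrieved_docs, customer):
    """Assemble a support prompt from retrieved context and customer data."""
    context_lines = "\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "You are a support engineer for a highly technical product.\n"
        f"Customer plan: {customer['plan']}\n"
        f"Enabled feature flags: {', '.join(customer['flags'])}\n"
        "Relevant docs and past answers:\n"
        f"{context_lines}\n\n"
        f"Question: {question}\n"
        # Chain-of-thought instruction: ask the model to reason before answering.
        "Think step by step before giving your final answer."
    )

prompt = build_prompt(
    "Why does my Slack OAuth redirect fail?",
    ["OAuth redirect URIs must match exactly", "Slack requires HTTPS callbacks"],
    {"plan": "Enterprise", "flags": ["new-oauth-flow"]},
)
```

The point is that both the instructions and the injected context are fully under your control, which is exactly what a closed product like Intercom's bot doesn't give you.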
And no, this is definitely not making money. (It’s free after all, so I guess we’re probably losing money on this.) To be honest, as an engineer @ Retool, I work on things because I think they’re cool and could be useful to our customers… not because it has to make my company money.
I'd love to hear from you (or anyone else reading this) on how your getting-started experience went, particularly if you found yourself wanting to just eject into writing a script or app elsewhere. I love to pair on apps, whether it's moving existing code into Retool or starting fresh, to see the hurdles. My email is my HN username @retool.com
BTW, just sent a Twitter DM. Thanks again for the feedback.
We are building https://github.com/trypromptly/LLMStack, a low-code platform to build LLM apps, with a goal of making it easy for non-technical people to leverage LLMs in their workflows. Would love to learn about your experience with Retool and incorporate some of that feedback into LLMStack.
It feels like all these "low code" tools are aimed at a tiny audience of technical people who are somehow technical enough to understand these UIs, but not quite technical enough to quickly roll out their scripts and prototypes the "traditional" way.
If you’re looking for external apps, check out Portals (https://retool.com/products/portals), which offers custom volume discounts for many, many users.
Then we have an inbox app (also made in Retool) that our support team uses to manually review any submissions that are isLikelySpam = true. The <reason> helps to understand why it was flagged.
Our use case is for a form builder (https://fillout.com) but I imagine this type of use case is pretty common for any app that has user-generated content
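A sketch of the glue around that review flow: assuming the prompt asks the LLM to reply with JSON like `{"isLikelySpam": true, "reason": "..."}` (the field names mirror the ones mentioned above), you'd parse the verdict and fail closed, routing anything unparseable to the manual review inbox:

```python
import json

def parse_verdict(raw):
    """Parse the model's JSON reply into (flagged, reason)."""
    try:
        data = json.loads(raw)
        return bool(data.get("isLikelySpam", False)), str(data.get("reason", ""))
    except (ValueError, TypeError):
        # Fail closed: anything we can't parse goes to the review inbox too.
        return True, "unparseable model output"

flagged, reason = parse_verdict(
    '{"isLikelySpam": true, "reason": "link farm in answers"}'
)
```

Failing closed matters here: a support person glancing at a false positive is cheap, while silently dropping a malformed verdict is not.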
Getting permission is definitely not a sure thing and might require some luck or savvy, or having a champion with clout (director, VP, etc.)
For a lot of companies, putting logos on homepages is more of an “ask for forgiveness later” type of thing. It’s not uncommon to do it without permission.
It’s also very common for TOS to include the rights to use your company logo for marketing purposes by default.
(I'm not claiming Retool does or does not do this.)
[1]: https://app.retool.com/auth/login
[2]: https://retool-edge.com/app.cd90db16bade51faa017.js
For context, these resources are cached on your machine. So you should find that the actual data transferred significantly decreases after the first visit.
I presume that figure includes a full browser, which is obviously inapplicable if you’re opening it in an existing browser, as you can and as the cited page does.
For comparison, on Arch Linux, the code package is a 15 MiB download, 90 MiB installed, and depends on electron22 (that is, it doesn’t include a browser itself).
> and is considered small in the desktop space.
Citation needed. I don’t think anyone reasonable would consider it small, not by a long shot. Not as huge as some very large things, sure, but certainly not small.
As for Google Docs, I hope no one uses Google products as an example of good or justifiable largeness. I can’t speak for Google Docs at all, but Gmail is one I was historically somewhat familiar with: around five years ago it was generally around 10× the download size and memory usage of Fastmail’s webmail, on reasonably similar functionality (each having more in one direction than the other). I know a more recent change made things at least twice as bad, though they could have clawed that back by now, and Fastmail’s resource requirements have definitely increased somewhat since.
Are you even reading the parent comment? He is pointing to the login page. I don't understand the comparison with VSCode.
Great to hear that! There is hope for the frontend obesity crisis yet. What did you do to slim the login page down to 6 MB? How large was it in the beginning? And what makes up the bulk of that file? The React framework itself would only account for about 0.1% of that room.
Granted these are still early days for LLMs, but it looks like they are primarily going to revolutionize how knowledge work is done, at a scale similar to what computers have done. There are exciting projects coming out every week putting LLMs into the workflows of more and more knowledge workers; open-interpreter is the most recent of the bunch.
Similar to other workflow automation projects in this space, we started building LLMStack (https://github.com/trypromptly/LLMStack) with a goal of scaling knowledge work pipelines. Exciting times ahead!
Who knows, longer-term I'd love to help users fine-tune their own models on llama2/gpt-3.5!
I have been playing around with it for a priority enterprise use case at my org, and it seems promising. We tried using Retool Vectors, but it really struggles to pick up the right context if we dump in data without pre-processing it — for example, adding a URL without stripping headers/footers/disclaimers/cross-links, etc. I think adding some steerability to the context selection process is going to be key.
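One simple form of the pre-processing being asked for: before chunking and embedding crawled pages, drop lines that repeat across most of them (nav headers, footers, disclaimers), since those dominate similarity scores without carrying content. A sketch, with an illustrative threshold and made-up page texts:

```python
from collections import Counter

def strip_boilerplate(pages, max_ratio=0.5):
    """pages: list of page texts. Removes any line that appears on more than
    max_ratio of the pages — almost always header/footer chrome."""
    counts = Counter()
    for page in pages:
        counts.update(set(page.splitlines()))  # count each line once per page
    limit = max_ratio * len(pages)
    cleaned = []
    for page in pages:
        kept = [ln for ln in page.splitlines() if counts[ln] <= limit]
        cleaned.append("\n".join(kept))
    return cleaned

pages = [
    "ACME Docs\nHow OAuth works\n(c) 2023 ACME",
    "ACME Docs\nWebhook retries\n(c) 2023 ACME",
    "ACME Docs\nRate limits\n(c) 2023 ACME",
]
clean = strip_boilerplate(pages)
```

After this pass only the per-page content reaches the embedding step, so "OAuth" questions match the OAuth page rather than whichever page shares the most boilerplate.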
I also have to admit that while Retool is extremely feature-rich, the user experience is clunky and the learning curve is steep.
Zapier is no-code, catering more towards a citizen-developer audience, whereas we’ve built Workflows with professional developers in mind. We typically see customers turn to Workflows when they want scalable self-hosting and more customization options (e.g. complex branching/looping, source control, custom error handling) than what Zapier offers.
Just to be clear, Retool Vectors is completely free (we're even taking on the LLM costs). We put the embeddings in a free Postgres DB (and soon other open source vector DB providers) and give you the connection string so you can take it anywhere.