There will be a paradigm shift where our-customers-are-AI apps appear, and most services will need an API that lets an AI connect and use them effortlessly and without error, because who doesn't want to tell their assistant to "send $5 to Jill for the pizza"? There will be money in base AI models you can choose (subscribe to) and in what they can and cannot do for you. It will still be riddled with ads, except now it's your personal assistant that can push any agenda to you.
Operating systems will become a layer under the assistant AI.
You will still talk on/to your phone.
I guess free software will be more important than ever.
Free AI assistants will be available, and the computing power will be there to run them on your phone, but all the fights we've seen with open vs. closed source, Linux vs. Windows, walled gardens and whatnot will go another round, this time with free, open, public-training-data assistants vs. closed-but-oh-so-much-less-clumsy ones.
Security problems will be plentiful: how do you hide your AI assistant's custom fingerprint? What do authentication and authorization systems for AI look like? How much is someone else's personal assistant worth? How do you steal or defend one?
Right now, if you want to build something like a daemon you will need multiple agents. When a task needs to be delegated, the central control system needs to be able to spin up a few processes and get them going.
You can do this right now. You can create an ensemble of personalities/roles (jr dev, sr dev, project manager) and have them plan.
They do a good job, if you are monitoring it. You can break up the plan into chunks, spin up more instances, distribute chunks and have those instances work on it.
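The delegate-and-distribute pattern described above can be sketched in a few lines. This is a minimal illustration, not a real framework: `ask_llm` is a hypothetical stub standing in for an actual model call, and the role names are just examples.

```python
from concurrent.futures import ThreadPoolExecutor

def ask_llm(role, prompt):
    # Stub standing in for a real model API call (hypothetical).
    return f"[{role}] response to: {prompt}"

def plan_and_delegate(task, roles=("project manager", "sr dev", "jr dev")):
    # The central controller asks a "planner" persona to chunk the work...
    plan = ask_llm("project manager", f"break '{task}' into {len(roles)} chunks")
    chunks = [f"{task} / chunk {i}" for i in range(len(roles))]
    # ...then spins up worker instances and distributes the chunks.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda rc: ask_llm(*rc), zip(roles, chunks)))
    return plan, results

plan, results = plan_and_delegate("build login page")
```

The monitoring burden the comment mentions sits between the `plan` and `results` steps: in practice a human (or verifier) has to check the plan before fanning it out.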
Sadly this can't happen, and I think these are fundamental limits of generative AI. Right now personas just go ahead and "pretend work": they say things like "I am now going to go and come up with a project plan".
You have a dependency where output from all prompts must be actionable, or verifiable. If output is off, then it snowballs.
This is not a tooling or context-window issue. This is a hard limit on what generation can achieve. It's impressive that we got this level of emergence, but when you look at the output in detail, it's flawed.
This is the verification issue, and verification (or disproving something) is the essence of science.
Maybe something can be added to it, or entirely new structures can be built - but generation on its own will not take us over this threshold.
Right now, LLMs are more like actors. They are very good actors, but you don't get your prescriptions from actors.
People will mourn them and there will be real-world AI cemeteries.
Whitfield Diffie envisioned, in 1974, an internet highway where every computer is connected to every other, and then went on to figure out how to make that communication happen over secure channels.[1]
I think my vision is quite tame and deducible from the current situation compared to his.
[1] according to Simon Singh: The Code Book
and snitch.
> It'd need to be only personally accessible and safe/private.
IMO there's very little chance that such a thing won't report or sell every conceivable shred of your existence to governments and corporations.
AI on its current track is going to so radically reshape society that it will be totally unrecognizable to people today. Society in its current form makes no sense when there are smarter and cheaper non-human workers available.
I grew up without the things, and boy was that fun.
But seriously, there will be some communication device or chip implant. And the 'personal' assistant will be just a front end for software running 'in the cloud' and controlled by a big corporation. With no privacy at all. Which makes it spyware through which governments can track and control crowds.
That sounds like dystopia, but having a tracking device on almost every person was unimaginable not so long ago. Today it's the norm. Big corporations have and sell your location in real time.
Don't you mean high-entropy data? High-entropy data would be less orderly, less compressible, and have a lower signal-to-noise ratio ... like TV shows compared to a textbook.
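The entropy/compressibility relationship is easy to demonstrate: orderly, repetitive data compresses well, while random (high-entropy) data barely compresses at all. A quick sketch using Python's standard `zlib`:

```python
import random
import zlib

random.seed(0)
low_entropy = b"abc" * 1000  # orderly, repetitive: 3000 bytes
high_entropy = bytes(random.randrange(256) for _ in range(3000))  # noisy: 3000 bytes

ratio_low = len(zlib.compress(low_entropy)) / len(low_entropy)
ratio_high = len(zlib.compress(high_entropy)) / len(high_entropy)
# The orderly data shrinks dramatically; the random data barely shrinks at all.
```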
I'm honestly a little excited about our Snow Crash future.
I'm also not sure that the recent broadening of media has been a net benefit to society. Look at the degree of polarization in recent years. At a certain point heterogeneity is no longer a societal good.
Not clear to me if the author actually uses LLMs to do meaningful work, or is speculating about how they might be used.
I've written about 2,500 lines of F# for the first time in the past 1.5 weeks, using ChatGPT-4 to guide me. It has been a constant back-and-forth, iterative process. My decades of development experience factored in heavily to guide the process. I would have been at maybe a quarter of the progress without ChatGPT, or given up on F# entirely as my language.
I don't think that iterative aspect will be eliminated any time soon for AI-supported complex, creative processes. It's no different from tweaking inputs to a Photoshop filter until your experienced brain decides things look right.
To that end you need to know roughly what looks "right" before you use an LLM. This will all become second nature to the average developer in the next 5-10 years.
Like you said, you might have given up on F# without ChatGPT assistance, and the main way ChatGPT is able to help with F# is because of all of the example code it's been trained on. If developers rely more and more on LLM aid, then a new language without strong LLM support might be a dealbreaker to widespread adoption. They'll only have enough data once enough hobbyists have published a lot of open-source code using the language.
On the other hand, this could also lead to slowing adoption of new frontend frameworks, which could be a plus, since a lot of people don't like how fast-moving that field can be.
i.e. the advantages of an even-higher-level python that's almost like pseudo-code with assembly-level speed and rust-level safety, where some complexity can be abstracted out to the LLM.
If you try to generate code, you’ll find it underwhelming, and frankly, quite rubbish.
However, if you want an example of what I’ve seen multiple people do:
1) open your code in window a
2) open chatgpt in window b (side by side)
3) you write code.
4) when you get stuck, have a question, need advice, need to resolve an error, ask chatgpt instead of searching and finding a stack overflow answer (or whatever).
You’ll find that it’s better at answering easy questions, translating from x to y, giving high level advice (eg. Code structure, high level steps) and suggesting solutions to errors. It can generally make trivial code snippets like “how do I map x to y” or “how do I find this as a regex in xxx”.
If this looks a lot like the sort of question someone learning a new language might ask, you’d be right. That’s where a lot of people are finding a lot of value in it.
I used this approach to learn kotlin and write an IntelliJ plugin.
…
…but, until there’s another breakthrough (eg. Latent diffusion for text models?) you’re probably going to get limited value from chatgpt unless you’re asking easy questions, or working in a higher level framework. Copy pasting into the text box will give you results that are exactly as you’ve experienced.
(High-level framework, for example: chain of thought, code validation, n-shot code generation, and tests/metrics to pick the best generated code. It's not that you can't generate complex code, but naively pasting into chat.openai.com will not, ever, do it.)
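The n-shot-plus-tests idea above can be sketched concretely: sample several candidate implementations, run each against a small test suite, and keep the first one that passes. This is a toy illustration with a stubbed generator (the hard-coded candidates stand in for actual LLM samples):

```python
def generate_candidates(prompt, n=3):
    # Stub: a real implementation would sample n completions from an LLM.
    return [
        "def add(a, b): return a - b",  # plausible-looking but wrong
        "def add(a, b): return a + b",  # correct
        "def add(a, b): return a * b",  # plausible-looking but wrong
    ][:n]

def passes_tests(code):
    # Execute the candidate in a scratch namespace and check its behavior.
    ns = {}
    try:
        exec(code, ns)
        return ns["add"](2, 3) == 5 and ns["add"](-1, 1) == 0
    except Exception:
        return False

def best_candidate(prompt):
    # Keep the first candidate that survives the test suite.
    for code in generate_candidates(prompt):
        if passes_tests(code):
            return code
    return None

winner = best_candidate("write an add(a, b) function")
```

The point is that verification lives outside the model: generation alone doesn't pick the winner, the tests do.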
- A rough crash course in F#. I'll say "what's the equivalent in F# of this C# concept?". It will often explain that there is no direct concept, and give me a number of alternative approaches to use. I'll explain why I'm asking, and it'll walk through the pros/cons of each option.
- Translating about 800 lines of TypeScript JSON schema structures to F#. A 1:1 translation is not possible since TypeScript has some features F# doesn't, so ChatGPT also helped me understand the different options available to me for handling that.
- Translating pseudo-code/algorithms into idiomatic F# as a complete F# beginner. The algorithms involve regex + AST-based code analysis and pattern matching. This is a very iterative process, and usually I ask for one step at a time and make sure that step works before I move onto the next.
- Planning design at a high-level and confirming whether I've thought through all the options carefully enough.
- Adding small features or modifications to working code: I present part of the function plus relevant type definitions, and ask it for a particular change. This is especially useful when I'm tired - even though I could probably figure it out myself, it's easier to ask the bot.
- Understanding F# compiler errors, which are particularly verbose and confusing when you're new to the language. I present the relevant section of code and the compiler error, and 90% of the time it tells me exactly what the problem and solution are; 5% of the time we figure it out iteratively. The last 5% I tend to stumble through myself.
- Confirming whether my F# code is idiomatic and conforming to F# style.
- Yes it makes mistakes. Just like humans. You need to go back and forth a bit. You need to know what you're doing and what you want to achieve; it's a tool, not magic.
Note: this is the commercial product, ChatGPT-4. If you're using the free ChatGPT 3.5, you will not be anywhere near as productive.
https://chat.openai.com/share/d041af60-b980-4972-ba62-3d41e0... https://github.com/Mk-Chan/gw2combat/blob/master/generate_co...
I had a problem last week where I wanted to extract sheet names and selection ranges from the Numbers app for a few dozen spreadsheets. ChatGPT came up with the idea of using AppleScript and, with a bit of coaxing, wrote a script to do it. I don't know AppleScript and I really don't want to learn it. I want to solve my problem, and its 10 lines of AppleScript did just that.
We're nowhere near LLMs being capable of writing codebases, but we are here for LLMs being able to write valuable code, because those concepts are orthogonal.
1. some, most
But it’s obvious outside of jr dev work and hobby projects there’s no way it could possibly grasp enough context to be useful.
- De-obfuscate obfuscated JS code
- Unminify JS code, asking it to guess function names based on the functionality
- Work with it like a rubber duck to plan out the possible solutions to a code problem
- To suggest function names based on the functionality
- To name repos
- Modify a piece of Go code to add specific functionality to it. I don't know how to write Go; I can read it and grok the high-level functionality
I mean, I don't like intellisense either (but simple autocomplete is fine). Perhaps it is because I only code to help with my job, I don't get paid to have a lot of good quality code output.
It is not intellectually stimulating for me to think about what the syntax for a dict comprehension is, or what exactly I'm supposed to do to map over the values of an array in javascript without screwing it up, or any of a million other kinds of minutia. Computers know the answers to these uninteresting questions.
I love to write code, I love to modify existing code too.
I do not love reading and then fixing code after someone else all the time. With ChatGPT I have to read, then understand, then fix code after ChatGPT every time.
Also, I do not love fixing code that often contains hallucinations.
Make, docker, kubernetes, all fit this pattern. Heck, maybe I won't be so down on autotools if I run into it again now that I can attack it with LLM support.
Or html / css; I haven't written any of that in the LLM era, but maybe I'd enjoy it more now.
> Even if an LLM could provide me with a recipe that perfectly suits what I’m looking for, I wouldn’t want to give up the experience of using a recipe search engine and browsing through a large collection of recipes.
Me too, but allrecipes.com has already switched from “search for what you want” to “we’ll tell you what you want”. This is a UX pattern that I hate but has proven a winner across many apps lately - e.g. TikTok. Streaming music still allows you to build playlists of specific songs, but auto-built radio stations are filled with a suspicious amount of whatever the major labels are pushing this quarter. Netflix/etc has shockingly fuzzy search which largely pushes whatever they want you to watch rather than what you’re searching for. YouTube is mostly also push rather than pull today.
I expect everything to continue moving that direction, against the wishes of the power users. The majority of users seem to go for the simpler UX, even if they sometimes complain about quality.
> In an ideal world, I’d like to have the underlying model be a swappable implementation detail. Llama 2 and similar developments make me optimistic.
This is a pipe dream. LLMs may be hot-swappable by developers but for 99% of apps + OSes this wont be a user-configurable thing.
has anybody ever heard of a cookbook? it's the perfect UX for this. especially if you have a lot of different ones. even better if your collection is mostly physical copies.
I could also quickly filter against recipes which contained ingredients I didn't like, or filter for recipes which used ingredients I already have on hand.
For example, I work on developing a logistics management and route optimization platform. If I try to envision new features that could be unlocked through AI or just LLMs, I basically get nothing back from my feeble brain that would fit into this category. E.g., automate incident handling (driver broke down: handle redirection of another driver, handover of goods, reoptimize routes) - but the implementation would be just a decision tree based on a couple of toggles and parameters - no AI there? Other things come to mind - we already use ML for prediction of travel times and service durations - but that's a known space that I refuse to call AI.
Apart from serving as an alternative and sometimes more efficient interface for data queries through NLP (e.g. "tell me which customers I had margin lower than 20% on, due to long loading times and mispackaged goods" - even then, all the data already needs to be there in appropriate shape, and it's just replacing a couple of clicks), I really fail to see new use-cases / features that the current state / hype for AI / LLMs unlocks.
Am I just lacking vision? Are there opportunities I'm grossly overlooking?
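One way to frame the NLP-query use case above: the LLM's job is only to translate the natural-language question into a structured filter; the application applies it against data it already has. A minimal sketch, with `llm_parse` as a hypothetical stub for the model call and made-up customer data:

```python
import json

def llm_parse(question):
    # Stub: a real LLM, prompted with the schema, would emit this filter spec.
    return json.dumps({"field": "margin", "op": "<", "value": 0.20})

customers = [
    {"name": "Acme", "margin": 0.15},
    {"name": "Globex", "margin": 0.35},
]

spec = json.loads(llm_parse("which customers had margin lower than 20%?"))
ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b}
matches = [c["name"] for c in customers
           if ops[spec["op"]](c[spec["field"]], spec["value"])]
```

This also makes the comment's point concrete: all the data and the filtering logic already exist; the LLM is replacing a couple of clicks, not unlocking a new capability.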
Given all the log data for the last N packages, analyze for anomalies and hypothesize as to their cause. E.g., is there a specific shipper, warehouse, or driver causing problems?
ML does well when you have too much data for a human to wrangle and the search target is well described.
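Even before reaching for ML, the anomaly question above reduces to grouping and comparing against a baseline. A toy sketch with hypothetical per-package delay logs (the data and the one-sigma threshold are illustrative choices, not a recommendation):

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical per-package delay logs, keyed by warehouse (hours of delay).
logs = [("WH-1", 2.0), ("WH-1", 2.2), ("WH-2", 2.1),
        ("WH-2", 2.0), ("WH-3", 9.5), ("WH-3", 10.1), ("WH-1", 1.9)]

by_wh = defaultdict(list)
for wh, delay in logs:
    by_wh[wh].append(delay)

overall = [d for _, d in logs]
mu, sigma = mean(overall), stdev(overall)

# Flag warehouses whose average delay is more than one sigma above the mean.
anomalies = [wh for wh, ds in by_wh.items() if mean(ds) > mu + sigma]
```

Where an LLM could plausibly help is the "hypothesize as to their cause" half: summarizing free-text incident notes attached to the flagged group.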
I guess the question is: how much of our web or software use is leisurely browsing (reading news or HN would be other likely candidates for this category) and how much is more task-like, e.g. send a message to some friends, add a note to a specific list, order some groceries?
We might also want to consider how much of a role such private use of software plays in shaping UX trends. If business software (sheets, Photoshop, CAD etc.) can be sped up with chat input, it will be, and people will be expected to use the quickest UI.
This is not to say that browsing will disappear, but I can totally see it being relegated to a second class UI in the long run, even in applications where it's currently the obvious choice, just because our default UX expectations will be different.
I have a hard time seeing chat input become the primary UI for that class of applications, unless you can delegate complete tasks to it. As an analogy, for driving a car, I can see voice commands replacing the steering wheel if we reach full self-driving capabilities, but absent that, the steering wheel and gas/brake pedals will remain the more efficient and practical UI (even ignoring safety concerns).
I think the author's sentiment here is different. There's a personal subjectiveness when it comes to things like recipes. It could come down to the presentation (photos, narrative), an added ingredient in one that piques your interest and curiosity (a chili recipe with dark cocoa powder?!), or other subjective difference that is experienced differently by each of us.
The other aspect is mental bookmarking or "what ifs". Maybe I'll try this recipe this time, but I might come across other recipes I want to try some other time or I'll find an author that I really vibe with. That process of discovery is lost with LLMs today
But just to voice an opinion, please kill the lengthy paragraph levels of fluff in blog posts. I don't want to say my opinion is influenced by shitty blog recipes and research papers, but it is. So stop it.
Just say what you need to say in bullet points at the beginning and fill in the details further on.
Writing in long form is its own process of mental synthesis.
and for some there will be a "please elaborate" command of course
Strong agree.
I find it very telling that the kind of people who advocate for chat-like software are often managers and executives. Sure, if the kind of person you are is one that enjoys giving verbal orders to people, it makes perfect sense that that's the kind of job you'll seek, and the kind of software you'll want.
But for the rest of us creative hands-on people who like to feel that we're making things through direct manipulation, talking to a computer is just about the least joyful activity I can imagine.
I would like to add my own prediction for 2027. I believe in the next 4 years, much more comfortable and capable mixed reality glasses and goggles may be somewhat common. Also AI generation and streaming of realistic avatars will have advanced. Quite possibly this will use low-latency Wifi to stream from a PC.
So if you want, you will be able to have a fairly realistic representation of the AI as a synthetic person in the room with you when you put the MR device on. It will have eye contact and seem quite similar to a real person (if you want). You will be able to just talk to it.
Another thing that might become popular could be larger 3d monitors. The type that have some stereoscopic effect tuned to your exact position. So your AI helper might just live in a virtual window or doorway or something like that.
You can actually already build something a bit like this, at least in 2d, without really inventing anything complex. You would use things like a HeyGen or D-ID API and maybe Eleven Labs or something. You won't get eye contact or a realistic 3d avatar that seems to be sitting on your couch, and there will be pauses waiting for it to respond. But theoretically, fast-forwarding several years, those things are not at all insurmountable.
But being able to have a verbal communication with a chatbot is of immense help. It can be used while driving, while cleaning or doing anything else which requires the use of your hands or while they are dirty or covered with gloves.
These glasses will be expensive for at least 4 more years and most of us just won't feel the need to invest in it.
Transformers were only invented six years ago after all. Some people are even very optimistically projecting that we'll reach the singularity in the next three.
It's local first, and ties many different AIs into one text editor, any arbitrary text editor in fact.
It does speech recognition, which isn't useful for writing code, but is useful for generating natural language LLM prompts and comments.
It does CodeLlama (and any HuggingFace-based language model)
It does ChatGPT
It does Retrieval Augmented Gen, which is where you have a query that searches through eg PDFs, Youtube transcripts, code bases, HTML, local or online files, Arxiv papers, etc. It then surfaces passages relevant to your query, that you can then further use in conjunction with an LLM.
I don't know how mainstream LLM-powered software looks, but for devs, I love this format of tying in the best models as they're released into one central repo where they can all play off each others' strengths.
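The Retrieval Augmented Generation step described above is, at its core, "search first, then stuff the hits into the prompt." A deliberately naive sketch (real systems use embeddings and a vector store rather than keyword overlap; all names here are illustrative):

```python
def retrieve(query, documents, k=2):
    # Naive keyword-overlap retrieval; real RAG uses embeddings + a vector DB.
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, documents):
    # Surface the most relevant passages and prepend them to the question.
    passages = "\n".join(retrieve(query, documents))
    return f"Answer using only these passages:\n{passages}\n\nQuestion: {query}"

docs = [
    "The parser accepts UTF-8 input only.",
    "Cooking pasta takes about ten minutes.",
    "The parser emits an AST as JSON.",
]
prompt = build_prompt("what does the parser emit", docs)
```

The resulting prompt carries the relevant passages and drops the irrelevant one, which is the entire trick: the LLM answers from surfaced context rather than from its weights.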
I predict llm based agents will be used as a 3rd party layer on top of the modern web to work in users favor against enshitification.
I'm predicting the opposite -- LLMs will be used to turn Dark Patterns into Vantablack Patterns.
I'm more optimistic than the author about how useful LLMs may be for chat based interfaces - I don't think it is appreciated in tech how many people are (still) computer illiterate in the world, and natural language can be a big improvement in usability for the kinds of usecases these users need.
In 5-10 more years models will all run locally on device with weight updates being pushed OTA.
Advertisers will somehow figure out how to train ads into these models so you get subtle references to new products.
This isn't enough to enable a useful interface.
It takes a lot of scaffolding to get an llm powered interface to actually work.
More details here: https://medium.com/evadb-blog/augmenting-postgresql-with-ai-...
So, Language Model UX in 2024 according to me:
- Pervasive: Virtual assistants are omnipresent, and integrate everything together. Like Alexa that is actually intelligent and can for example make you a custom app for interfacing with your banking following your preferences on-the-fly for you only. Web apps become meaningless as you are getting the UI you need customized, tailor-made, live when you need it. Agents go with you in smartphones or project to any device nearby. No need for authentication, bots know you.
- Agentic: They will negotiate with each others all the time and present a unified interface to you no matter which system you are interacting with. The air is full of silent chatter. Things move when you aren't watching. Progress is decoupled from human input, attention and care.
- Web searches and such are for bots. Answering phones is for bots. Making calls is for bots. Any person still grasping at these kinds of interfaces will notice all emails they get are from bots, and all emails they send are instantly answered by bots. Just let go, let the machines handle that, which brings me to:
- Intermediation: No matter what you do, a bot will help you with it. Swiping Tinder? Don't bother, a bot does it better for you. Just lay back and relax. Ads become targeted to bots and truthful, because the bots won't forgive misrepresentation, ever.
This will come together with the downfall of search, a prime driver why so much knowledge was published on the web. Search will start to drown in an explosion of generated AI content, and the case to publish your knowledge for the sake of SEO marketing will diminish.
And if selling knowledge is the main revenue, why would transferring it to an LLM mean losing the opportunity to sell it multiple times? Au contraire.
For any category, I assume the owner of the knowledge stays in control, and my case is that an LLM can be beneficial for them.
Also, for models, being fed data generated by AI seems like an issue to me.
I think my main jobs are getting the prompts right and building a UI and UX experience that makes sense, and the rest is somehow taken care of by magic.
[1] there's good arguments to evolve a business model in that direction, e.g. Apple beating blackberry not through superior hardware but a superior app store.
[2] I'd be remiss not to plug my own recent oeuvre here, https://eucyclos.wixsite.com/eucyclos/contact-8-1 inspired by the difficulty of conveying positive emotions over screens. Advice from people who have built this kind of thing successfully is of course welcome.
A lot of people feel this way. The romanticism of computing is diminished by LLMs. Call me a luddite. I even work in AI.
Let's say I want to change a setting that uses a traditional radio button UI:
- Autoscale servers - Manual scale servers
It's much easier to discover, understand the options, and make a decision via a radio button UI than to ask for my options via chat. That would look like:
"I'm having load issues on my servers. Can you increase the server capacity?" "Sure! Do you want to autoscale or manual scale?" "What's the difference? Are those the only two options or are there more?" "There are only these 2 options. The difference is..."
That's just a worse UX.
There are great examples of this in Westworld and The Expanse, with characters talking to screens to update and refine their queries.
So are touchscreens, especially in cars. Physical buttons are far better for the end user.
Imagine a future when all people know is blah blah to use computers, can't type, never see buttons, and barely read any more.
Now is it a bad UX?
(Before the Internet, the majority of people rarely read. And even now, most people don't read, they tiktok, they youtube.)
-- Create a reference table that maps neighborhoods to zipcodes using ChatGPT
CREATE TABLE reference_table AS
SELECT parkname, parktype,
ChatGPT(
  "Return the San Francisco neighborhood name when provided with a zipcode. The
  possible neighborhoods are: {neighbourhoods_str}. The response should be an item from the
  provided list. Do not add any more words.",
  zipcode)
FROM postgres_db.recreational_park_dataset;

-- Map Airbnb listings to parks
SELECT airbnb_listing.neighbourhood
FROM postgres_db.airbnb_listing
JOIN reference_table ON airbnb_listing.neighbourhood = reference_table.response;

More details on LLM-powered joins and EvaDB: https://medium.com/evadb-blog/augmenting-postgresql-with-ai-..., https://github.com/georgia-tech-db/evadb

What's missing to get all of this now? What revolutionary research or product development that hasn't happened yet will happen in the coming year?
To me it looks like LLM tech is stagnating, after the hype peak we are close to the trough of disillusionment.
Partly it just takes time - it will overall take (I think, based on previous similar changes like the web) 20 years before the ideas from the current generation of LLMs are built out and integrated into products and made into new products and it is all done. People and organisations take time to change.
- install this nVidia driver and make it work with pytorch
- get this ethernet connection to $IP working
- schedule a daily cron job to make an incremental backup of this harddrive, don't store duplicate files
Anyway, if LLMs can write computer programs (see Copilot), then surely they can do simple administrative tasks ...
At the moment I'm sure there are only a couple of user interfaces that people know and will reuse. After all, they'll want to take very little risk and use what they've seen work in other places. The full-screen ChatGPT-style page, or those annoying chatbot pop-ups that sit in the bottom right-hand corner of sites I don't care about. Something to keep an eye on and see what emerges.
The largest problem with a GPT future, IMO, would be bot networks creating manufactured outrage. These will shift the decisions of policymakers towards whoever controls these networks, i.e. big corp.
https://www.npr.org/2022/05/16/1099290062/how-many-of-americ...
> As the U.S. marks one million people dead from COVID-19, scientists suggest that nearly one third of those deaths could have been prevented if more people had chosen to be vaccinated.
From a human moral viewpoint this is indeed despicable and worth considering mitigation.
I am optimistic though that in the grand unfolding that natural selection brings, it will pressure an elevation of rationality, but with the cost of some human suffering.
Seems like a perfect blend of keeping the core personality of the LLM trained on your data local, while still allowing permissioned access.
No mention of the role of software requirements ? Has prompt engineering somehow replaced them ?
Nevertheless a nice and agreeable read.
These paths are not mutually exclusive, but I’m personally more excited about the latter.
I might try it.
This article is close to my heart as I've been working on chatcraft.org from a similar perspective.
1. Chat UX Consolidation: I agree that having crappy chat UIs everywhere is very suboptimal. Perhaps having a complete UX as a component is another solution here. We took many months to get http://chatcraft.org from prototype to an ergonomic productivity tool. Highly unlikely such attention will be paid to every chat UI integration.
2. Persistence Across Uses. This one is tricky. We keep all of our history client-side...but after using it this way, having a shared history server-side and having it pulled in as relevant context would be a nice improvement.
3. Universal Access: It's super weird to have LLMs restricted to providing output that you cut/paste. We have integrated a pretty slick OpenAI function interface to allow calling out to custom modules. So far we've integrated: PDF conversion/ingestion, ClickHouse analytics, and system administration using a webrtc<->shell connector. Demo here: https://www.youtube.com/watch?v=UNsxDMMbm64
I've also investigated teaching LLMs to consume UIs via accessibility interfaces. I think this is underexplored. Blog post on that here: https://taras.glek.net/post/gpt-aria-experiment/
3b. Local LLMs. These have been underwhelming so far vs. the OpenAI ones (except maybe WizardCoder). Industry seems to be standardizing around an OpenAI-compatible REST interface (a la S3 clones). We have some support for this in a WIP pull request, but there's not much reason to use it yet, as the local models are relatively weak for interactive use.
4. Dynamically Generated UI & Higher Level Prompting: I do a lot of exploration by asking http://chatcraft.org to generate some code and run it to validate some idea. Friend of mine built basic UX for recruiting pipelines, where one can ingest resume pdfs into chatcraft and via custom system prompt have chatcraft become a supervised recruiting automation. We also do a lot generation of mermaid architecture diagrams when communicating about code. I think there a lot of room for UX exploration here.
Now a few categories that weren't covered:
1. Multi-modal interaction: It's so nice to be able to have chat with the assistant and then switch to voice while driving or to input some foreign language. I think extending UX from chat to voice and even video-based gestures will make for an even cooler AI assistant experience.
2. Non-linearity in conversations: Bots are not human, so it makes sense to undo steps in a conversation, fork them, and re-run them with different input params and different model params. Most of my conversations in chatcraft are me trying to beat the LLM into submission. Example: tuning a chain-of-density prompt https://www.youtube.com/watch?v=6Vj0zwP3uBs&feature=youtu.be
Overall, really appreciate your blog post. Interesting to see how our intuition overlaps.
would be great to chat on discord https://discord.gg/JsVe9ZuZCn
(updated discord link)
But searching for places to eat and recipes to make is very much not a precise search.
IMO the reason chat and not input text and get an answer is so powerful is that it allows messy search with iterative refinement - just like talking to an expert. Just "chat input, result given" doesn't have that.
I want to tell a recipe search I'm after a healthy light thing as it's hot. I want to get back options and then say "I don't really like cucumber though" and have it get rid of cucumber heavy recipes but leave in some salads and say "look this recipe has cucumber in but it'll be fine without it". Or "you asked for a chicken recipe but here's one with pork that should sub in just fine".
For restaurants I want to get back some options and tell it "that's too expensive, this is a quick thing" and get back more hole-in-the-wall things. Tell it "Hmm, something spicy sounds good but I've been to those Indian restaurants before, I'm after something new" and get a recommendation for the Ethiopian place in town.
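The iterative-refinement interaction described above boils down to maintaining filter state across turns, with soft handling of constraints ("this recipe has cucumber in but it'll be fine without it"). A toy sketch with made-up recipes and an arbitrary dominance threshold:

```python
recipes = [
    {"name": "Greek salad",
     "ingredients": {"cucumber": 0.4, "tomato": 0.4, "feta": 0.2}},
    {"name": "Chicken wrap",
     "ingredients": {"chicken": 0.6, "cucumber": 0.1, "wrap": 0.3}},
]

def refine(recipes, disliked, threshold=0.3):
    # Drop recipes the disliked ingredient dominates; keep ones where it is
    # minor enough to leave out, attaching a note the assistant could voice.
    kept = []
    for r in recipes:
        share = r["ingredients"].get(disliked, 0)
        if share >= threshold:
            continue
        note = f"has {disliked} but fine without it" if share > 0 else ""
        kept.append((r["name"], note))
    return kept

suggestions = refine(recipes, "cucumber")
```

This soft-constraint behavior is exactly what rigid keyword filters lack and what makes the conversational version feel like talking to an expert.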
> The current state of having a chat-ish UX that’s specific to each tool or website (e.g. a Documentation Chat on a library documentation page, a VSCode extension, a Google Bard integration, a really-badly-implemented chatbot in my banking app, etc.) doesn’t make any single one of those experiences more enjoyable, effective, or entertaining;
Coding chat in my editor absolutely makes coding more effective. I want heavy integration around how it edits text, not a general chat widget.
> The idealized role of persistence in LLM UX is also fairly obvious: it’s easy to imagine an LLM-powered experience that remembers and “understands” all my previous interactions with it, and uses that information to better help me with whatever my current task is
I sort of agree, but I absolutely detest hidden state about me. I shouldn't have to alter my behaviour just because I'm worried about how it'll impact things. You see this regularly: I may avoid some weird YouTube video (e.g. to see a weird flat-earth take) because I don't really want the hassle of having loads of weird conspiracy stuff promoted, or having to manually remove it from some list.
Having said that, recipe search that remembers I hate cucumbers is great.
I wonder if manually curated context will in general be better? Or maybe just for me.
> I’m interacting with UXes that can remember and utilize my previously expressed preferences, desires, goals, and information, using that underlying memory across different use cases seems like low-hanging fruit for drastically reducing friction.
The tricky part here is I switch contexts and don't want them bleeding into each other. My preferences for interaction around my kids and while at work are very different.
> A small-scale and developer-centric example: I use GitHub Copilot in VSCode, and I was recently implementing a library with documentation that featured LLM-powered Q&A, and it felt bizarre to have two LLM-mediated experiences open that each had exactly half the info needed to solve my problem.
I think here a split between data sources and frontends is key. This interaction is awkward, and should be combined (copilot should be able to reach out to that documentation).
> locally runnable and open models are no match for GPT-4 (etc.),
It's going to be more efficient to move compute to central places; the less it's used per person, the more efficient it is to have one central location process everyone's requests. A short taxi ride per week is cheaper than owning a car. However, as uses grow (e.g. proactive LLMs), will this shift the equation towards locally runnable models? At a few queries a day you're obviously better off not buying an H100; with things running constantly all day, and if hardware prices fall, maybe that'll change.
- summarise (then paste in the text)
- explain like I'm 10
Gave (the highlights)
-- ai here
1. Chat consolidation
2. Remembering stuff
3. Access to Everything
4. Making things easier
5. Being helpful
-- human here below
I'd love for the author to be right in his predictions. Arguably, current AI has already made this article easier for me to understand (see point 4).