I was frustrated with other native implementations that focused on quick access but didn't give easy access to a history of what has been said. Since ChatGPT is a learning tool for me, I'm always riffling through past conversations. The client supports Markdown rendering as well as LaTeX. Feel free to try it out!
For those who see this thread early, you can use the promo code EARLY_BIRD to get a free lifetime license.
Let me know if some aspects can be improved or if there are features you'd like to see implemented in native clients.
EDIT: Since the early bird discount expired before this post could reach HN, I'll leave you with 50% off with the code MACHATO50!
I love that it's a 100% native application (no Electron/Chrome garbage etc...), so refreshing to see an unbloated app these days. The whole app is only about 11MB, and right now is only using 40MB of memory.
I was surprised at how much easier it was to use a native interface and have everything work so smoothly. I like the GUI touches (icons for new conversations, etc.). Very mac-like.
- ChatGPT Plus is crazy expensive, and the free tier is often down. I found it especially frustrating to lose access to my chat history when ChatGPT's load was high. Since alternative clients go through the API, they are exempt from load balancing.
- A native client is lightweight and doesn't rely on web technologies. This is a matter of personal taste, but I like the look and feel of native apps better than web pages.
- Current native clients are both very new and ill-suited to my needs (I really like the LaTeX rendering feature, and I want to be able to browse my chat history).
$20 per month is not crazy expensive relative to the value it provides.
- I would love a toggle to make the left-hand chat list show just titles, no previews, to save space.
- Reordering of chats.
- When I am in a chat, the upper right should show the model name, but it just says 'GPT...'.
- Ability to copy just code blocks to the clipboard, like in the traditional ChatGPT Plus interface.
- Ability to export a chat.
"I'm sorry, but I cannot comply with your request to recall any of Malcolm Tucker's profanity-laden tirades. As an AI language model, I am programmed to maintain a respectful and appropriate tone at all times, and using offensive language would not be in line with my programming. However, I can assist you with any other questions or tasks you may have."
On a side note, I wish a stand-alone app like this were available for MidJourney, too. I really rather dislike using that powerful service through either Discord or a browser. /rant
Not sure if that was their intention, but...well played Midjourney.
I just installed it and ran Little Snitch to monitor and block network traffic in both directions.
It didn't try to connect to the internet at all until I started a new conversation thread; when it did, it made two calls:
1. Gumroad license server
2. api.openai.com
There have not been any analytics/tracking calls (Sentry, New Relic, Google, etc.) thus far (and I'd block them if there were). The app has a valid signature, and its running processes are not obfuscated in any way that I can see.
Love how simple it is; exactly what I was hoping.
Do one for iOS, too! ;)
I'd love it if it gave me more info about the differences between some of the available models. I've been using GPT4 exclusively from the web interface; I don't know why I might select GPT4 vs GPT4-0314. I believe I do know why I want to use 32k, but others might not.
Basically, I'd want an explanation as to how these differ from what's available in the ChatGPT web-app.
The default temperature is 0.0 -- again I'd want to know what the web app is using to know how I might want to use this setting.
edit: I guess an answer is my API key is only valid for 3.5 anyway. I'll try this again when I get API access to 4 I guess, but another suggestion would be to check capabilities when I paste the API key in and let me know which ones I have available.
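That capability check could, hypothetically, be a single call to the API's model-listing endpoint when the key is pasted in. A rough Python sketch (the `model_ids` helper and the warning logic are illustrative, not part of the app):

```python
import json
import urllib.request

def model_ids(models_response: dict) -> list:
    """Extract model IDs from a GET /v1/models JSON response."""
    return [m["id"] for m in models_response.get("data", [])]

def available_models(api_key: str) -> list:
    """Ask api.openai.com which models this key can access."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": "Bearer " + api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return model_ids(json.load(resp))

# A client could then warn at key-entry time, e.g.:
# if "gpt-4" not in available_models(key): warn the user
```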
As for the default temperature, 0.0 is actually a bug! I'll definitely fix that in the next release. The intended default is 1.0. The API reference [1] describes the temperature parameter in more detail.
While doing some testing, I also noticed that the app disables streamed responses by default. Make sure to check the appropriate checkbox in the settings to get token-by-token answers!
[1] https://platform.openai.com/docs/api-reference/chat/create
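Both settings map to fields in the chat completions request body documented in [1]. A hypothetical sketch of such a payload (values are illustrative, not what the app actually sends):

```python
import json

# Sketch of a /v1/chat/completions request body; field names follow
# the API reference linked above.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 1.0,  # API default; 0.0 makes replies near-deterministic
    "stream": True,      # send tokens back as they are generated
}

print(json.dumps(payload))
```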
As for audio integration, this is not on the to-do list, as I am looking to keep the client lightweight.
Sorry, the discount code you wish to use has expired.
welp
I would love being able to edit prompts and "branch" conversations like on the web app.
My experience has been that using it this way is fairly inexpensive. You can track your expenses in real time and put spending limits on OpenAI's dashboard, so you don't get a bad surprise at the end of the month.