When calling the automation, you need to provide three main parameters:

1. Title (title): A brief descriptive name for the automation. This helps identify it at a glance. For example, "Check for recent news headlines".
2. Prompt (prompt): The detailed instruction or request you want the automation to follow. For example: "Search for the top 10 headlines from multiple sources, ensuring they are published within the last 48 hours, and provide a summary of any recent Russian military strikes in the Lviv Oblast."
3. Schedule (schedule): This uses the iCalendar (iCal) VEVENT format to specify when the automation should run. For example, if you want it to run every day at 8:30 AM, you might provide:
```
BEGIN:VEVENT
RRULE:FREQ=DAILY;BYHOUR=8;BYMINUTE=30;BYSECOND=0
END:VEVENT
```
Optionally, you can also include:

• DTSTART (start time): If you have a specific starting point, you can include it. For example:
```
BEGIN:VEVENT
DTSTART:20250115T083000
RRULE:FREQ=DAILY;BYHOUR=8;BYMINUTE=30;BYSECOND=0
END:VEVENT
```
In summary, the call typically includes:

• title (string): A short name.
• prompt (string): What you want the automation to do.
• schedule (string): The iCal VEVENT defining when it should run.
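To make the shape of the call concrete, here is a small sketch that assembles the three fields described above. Note this is illustrative only: the function name `build_automation` and the dict layout are my own assumptions mirroring the parameter names in the text, not a real OpenAI API.

```python
def build_automation(title, prompt, hour, minute, dtstart=None):
    """Assemble the hypothetical automation payload: title, prompt,
    and a daily iCal VEVENT schedule string. `dtstart` (optional) is
    an iCal timestamp like "20250115T083000"."""
    lines = ["BEGIN:VEVENT"]
    if dtstart:
        lines.append(f"DTSTART:{dtstart}")
    lines.append(f"RRULE:FREQ=DAILY;BYHOUR={hour};BYMINUTE={minute};BYSECOND=0")
    lines.append("END:VEVENT")
    return {"title": title, "prompt": prompt, "schedule": "\n".join(lines)}
```

For the daily 8:30 AM example, `build_automation("Check for recent news headlines", "Search for the top 10 headlines...", 8, 30)` yields a schedule string matching the first VEVENT block above.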
- It assumed UTC instead of EST. I corrected it, and it still continued to bork.
- It added random time deltas to the times I asked for (+2, -10 min).
- A couple of notifications didn't go off at all.
- The one that did go off didn't provide a push notification.
---
On top of that, it was only usable without search mode. In search mode, it was totally confused and gave me a Forbes article.
Seems half baked to me.
Doing scheduled research behind the scenes or sending a push notification to my phone would be cool, but surprised they thought this was OK for a public beta.
Anthropic is ahead in this because they keep their UIs simple, so the failure modes are also simple (bad connection).
OpenAI is just pushing half baked stuff to prod and moving on (GPTs, Canvas).
Find it hilarious and sad that o1-pro just times out thinking on very long or image-intense chats. Need to reload page multiple times after it fails to reply and maybe answer will appear (or not? Or in 5 minutes?). Kinda shows they’re not testing enough and “not eating their own food” and feels like chatgpt 3.5 ui before the redesign
Right now, in fact, my understanding is OpenAI is using their current LLMs to write the next-generation ones, which will far surpass anything a developer can currently do. Obviously we'll need to keep management around to tell these things what to do, but the days of being a paid software engineer are numbered.
That’s the only way I get it to have a halfway decent brain after a web search. Something about that mode makes it more like a PR drone version of whatever I asked it to search, repeating things verbatim even when I ask for more specifics in follow-up.
The same company that touts their super hyper advanced AI tool that can do everyone's (except the C-level's, apparently) jobs to the world can't figure out how to make a functional cron job happen? And we're giving them a pass, despite the bajillions of dollars that M$ and VC is funneling their way?
Quite interesting they wouldn't just throw the "proven to be AGI cause it passes some IQ tests sometimes" tooling at it and be done with it.
But wouldn't a company like OpenAI use a tick-based system in this architecture? i.e. there's an event emitter that ticks every second (or maybe minute), and consumers that operate based on these events in realtime? Obviously things get complicated due to the time consumed by inference models, but if OpenAI knows the task upfront it could make an allowance for the inference time?
If the logic is event driven and deterministic, it's easy to test and debug, right?
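A minimal sketch of what such a deterministic tick loop might look like. Everything here is an illustrative assumption, not OpenAI's actual design: tasks sit in a min-heap keyed by due tick, and an `inference_allowance` fires each task early so the model's output is ready by the due time. Because the loop is deterministic, the fire order is trivially testable.

```python
import heapq

def run_ticks(tasks, until, inference_allowance=0):
    """Toy tick-based scheduler. `tasks` is a list of (due_tick, name)
    pairs; the loop ticks from 0 to `until` - 1 and fires each task
    `inference_allowance` ticks before it is due, to leave room for
    model inference time. Returns [(fire_tick, name), ...] in order."""
    heap = list(tasks)
    heapq.heapify(heap)
    fired = []
    for tick in range(until):
        # Fire every task whose (due - allowance) has arrived.
        while heap and heap[0][0] - inference_allowance <= tick:
            _due, name = heapq.heappop(heap)
            fired.append((tick, name))
    return fired
```

For example, with an allowance of 1 tick, a task due at tick 5 fires at tick 4, so its result can land on time.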
Makes me wonder if they internally have "press releases / Q" as an internal metric to keep up the hype.
some people can't even wrap their heads around it, taking hours and hours of discussion. still my favourite problem though.
Apple has not innovated in years, and a GPT Phone where your lock screen is a FaceTime-call-like UI/UX with your AI Agent who does everything for you would give Apple a run for its money! Pick up your phone & see your agent waiting to assist, & it could be skinned to look like a deceased loved one (mom still guiding you through life).
To get things done it would interface with the AI Agents of businesses, companies, your doctor, friends & family to schedule things, & be used as a knowledgebase.
Maybe this is their step towards creating said agents?
I just… don’t want this. I don’t think anyone I know wants this.
I switched over to the "GPT4o with scheduled tasks" model and there were no UI hints as to how I might use the feature. So I asked it "what you can you follow up later on and how?"
It replied "Could you clarify what specifically you’d like me to follow up on later?"
This is a truly awful way to launch a new product.
Then there are some UI hints.
"Remind me of your mom's birthday on [X] date"
Wow, really maximising that $10bn GPU investment!
Edit: I suppose they'll be here at some point: https://help.openai.com/en/articles/9624314-model-release-no...
These seem like extremely shitty release notes. I have no clue why anybody pays for this model.
It only scheduled the first thing, and that was after having to be specific by saying "7:30pm-11pm". I wanted to say "from now to 11pm" but it couldn't process "now".
https://x.com/karinanguyen_/status/1879270529066262733 https://x.com/OpenAI/status/1879267276291203329
We support just about every other job platform but I’d love to hear from potential users before I hack something together.
I got the best results by not enabling Search the Web when I was trying to create tasks. It confuses the model. But scheduled tasks can successfully search the web.
It's flaky, but looks promising!
Resource URL: https://cdn.oaistatic.com/assets/jbl0aowda306m4s1.js
Source Map URL: jbl0aowda306m4s1.js.map
Also I am getting `Unable to display this message due to an error.` a lot.
Me:
> Give me positive feedback every hour
ChatGPT:
> Provide positive feedback
> Next run Jan 15, 2025
> Got it! I’ll send you positive feedback every hour.
An hour later, I received the following email:
```
Your scheduled task couldn't be completed
ChatGPT tried to complete Provide positive feedback multiple times, but it encountered an error and wasn't able to send. It will try again the next time this task is scheduled.
Open chat If you have any questions, please contact through the help center.
All the best, ChatGPT
```
We already have many implementations where, at a cron interval, one can call the GPT APIs for stuff. And it's nice to monitor it and see how things are working, etc.
So I am curious what the use case is for embedding a scheduler inside the ChatGPT infrastructure. Seems a little off from its true purpose?
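For reference, the existing pattern described here is trivial to set up outside ChatGPT. A hypothetical crontab entry (the script path and log file are made up for illustration) might look like:

```
# Run a script that calls the GPT API every day at 08:30 and log the output.
30 8 * * * /usr/local/bin/fetch_headlines.sh >> /var/log/headlines.log 2>&1
```

The fields are minute, hour, day-of-month, month, and day-of-week, so `30 8 * * *` means 08:30 every day.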
Many existing apps (like Todoist) have already had LLM integrations for a while now, and have more features like calendars and syncing.
Or do I completely not understand what this product is trying to be?
i saw no mention of them on the help article, or the ui
if i ask for a daily early morning news summary will it show up in the middle of the night or around lunch time? will it get updated when i travel? seems interesting if what you're looking for is a reminder that is not time relevant, just a thing that should happen at some point with a time precision of about 1 day.
https://help.openai.com/en/articles/10291617-scheduled-tasks...
I even have an automated x account @alarmsglobal
Otherwise, you'll have a lot of systems dependent on these orchestrators creating hard-to-debug mistakes up and down the pipeline. With software, you can reach a state where it does what you tell it to without having to worry if some model adjustment or API change is going to break the output.
If they solve that, then yes. Otherwise, what I personally expect is a lot of businesses rushing into implementing "agents" only to backpedal later when they start to have negative material effects on bottom lines.