alew1 · 2y ago
But the model ultimately still has to process the comma, the newline, the "job". Is the main time savings that this can be done in parallel (on a GPU), whereas in typical generation it would be sequential?
sebzim4500 · 2y ago
Yes. If you look at the largest models on the OpenAI and Anthropic APIs, prompt tokens are priced significantly cheaper than response tokens.
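To make the parallel-vs-sequential point concrete, here is a minimal NumPy sketch of a single toy attention head (hypothetical weights and sizes, not any real model): prefill runs one matmul over every prompt position at once, while decoding must loop because each new token depends on the previous one.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy model width (illustration only)

# Hypothetical single-head attention projections.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    """Causal attention of query rows q against cached keys/values."""
    scores = q @ K.T / np.sqrt(d)
    T_q, T_k = scores.shape
    # Causal mask: query position i may only see key positions <= i.
    mask = np.tril(np.ones((T_q, T_k)), k=T_k - T_q).astype(bool)
    scores = np.where(mask, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

prompt = rng.standard_normal((5, d))  # embeddings for 5 prompt tokens

# Prefill: ONE pass processes all prompt positions at once --
# on a GPU these positions are computed in parallel.
K_cache, V_cache = prompt @ Wk, prompt @ Wv
prefill_out = attend(prompt @ Wq, K_cache, V_cache)

# Decode: each step consumes the previous output, so it is a
# sequential loop of single-position passes over a growing cache.
x = prefill_out[-1:]
for _ in range(3):  # generate 3 tokens, one at a time
    K_cache = np.vstack([K_cache, x @ Wk])
    V_cache = np.vstack([V_cache, x @ Wv])
    x = attend(x @ Wq, K_cache, V_cache)
```

The prompt's commas, newlines, and words all still pass through the model, but in one batched matmul rather than one forward pass per token, which is consistent with prompt tokens being priced cheaper than response tokens.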