It’s purposefully high level and non-technical for a general audience - my theory was that most people who aren’t into tech/AI don’t care too much about training, or how the system got to be the way that it is.
But they do have some interest in how it actually operates once you’ve typed in a prompt.
Happy to answer any questions or take feedback on board.
Right now we are only seeing the denoising process after it's been morphed by the latent decoder, which looks a lot less intuitive than actual pixel diffusion.
If you can't find a suitable pixel-space model, you can trivially generate the forward (noising) process and play it backwards.
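Something like this would do it, using the DDPM closed form x_t = sqrt(alpha_bar_t)·x_0 + sqrt(1 - alpha_bar_t)·eps and then reversing the frames. The file name, step count and linear beta schedule are just placeholder assumptions:

```python
import numpy as np
from PIL import Image

# Minimal sketch: build the forward (noising) process for one image, then play
# the frames in reverse to mimic what denoising looks like in pixel space.
img = np.asarray(Image.open("photo.png").convert("RGB"), dtype=np.float32) / 255.0
x0 = img * 2.0 - 1.0                       # scale to [-1, 1], as diffusion models usually do

T = 50
betas = np.linspace(1e-4, 0.02, T)         # toy linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)

eps = np.random.randn(*x0.shape)           # one fixed noise sample keeps the animation smooth
frames = []
for t in range(T):
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    frame = ((np.clip(xt, -1, 1) + 1) / 2 * 255).astype(np.uint8)
    frames.append(Image.fromarray(frame))

# Reverse the frames: noise -> image, i.e. a fake "denoising" animation
frames.reverse()
frames[0].save("pixel_denoising.gif", save_all=True, append_images=frames[1:], duration=80)
```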
Has there been any study of grammar and other word order effects in the result? Is "Dog fetches ball with tail" more likely to produce an image of dog with a ball grabbed with its tail than "tail ball dog fetch with"?
Like with search engines, one issue is ambiguity: a user searches for "best price on windows". Do they mean Windows the OS or glass windows?
My impression, at least with the image generation I've used, is that while there is some mapping of words and maybe phrases through the latent space to an image, it's very weak. If you put "red ball" in a long prompt, it's nearly as likely that "red" will get applied to some other part of the description as to the ball.
When I was building this I did have to rework the prompts quite a bit so they worked nicely with the word-by-word reveal visualisation, i.e. they mention the subject early, then add adjectives about setting, light, etc.
Found the manual latent space exploration part really interesting.
Too many LLM/diffusion explanations fall into the proverbial "how to draw an owl" meme without giving a taste of what's going on.
The interpolations between butterfly and snail were pretty horrifying. But with something like Z-Image you could basically concatenate the two prompts and end up with a normal image of both. Is the latent space for "butterfly and snail" just well off the path between the two individually?
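One rough way to poke at that without touching the image model is to look at the text encoder on its own, e.g. CLIP (an assumption here, the diffusion model's actual conditioning space may behave differently): is the embedding of the combined prompt close to the midpoint of the two separate prompts, or off somewhere else?

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Sketch: compare the embedding of "a butterfly and a snail" against the midpoint
# of the two individual prompt embeddings, using CLIP's text encoder as a stand-in
# for the conditioning space.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a butterfly", "a snail", "a butterfly and a snail"]
inputs = processor(text=prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)          # unit vectors for cosine similarity

midpoint = (emb[0] + emb[1]) / 2
midpoint = midpoint / midpoint.norm()

print("combined vs midpoint: ", torch.dot(emb[2], midpoint).item())
print("combined vs butterfly:", torch.dot(emb[2], emb[0]).item())
print("combined vs snail:    ", torch.dot(emb[2], emb[1]).item())
```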
It's hard to imagine what is nearby in latent space and how text contributes, so I did really like the section adding words to the prompt 1-by-1.
This is what I think is missing in most AI (broad sense) learning resources. They focus so much on the math that I miss the intuition behind it.
So different seeds lead to slightly different end points, because you’re just moving closer to the “consistent region” at each step, but approaching from a different angle.
You can't jump to the endpoint because you don't know where it is - all you can compute is 'from where I am, which direction should my next step be.' This is also why the results for few-step diffusion are so poor - if you take big jumps over the velocity field you're only going in approximately the right direction, so you won't end up at a properly stable point which corresponds to a "likely" image.
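A toy sketch of that point, with a made-up 2D spiral field standing in for the network (the real velocity is a learned function of the latent, timestep and prompt): every step only follows the local direction, so a few big Euler jumps cut corners and drift away from the fine-grained trajectory.

```python
import numpy as np

TARGET = np.array([3.0, 1.0])                 # stands in for a "likely image" point
A = np.array([[1.0, -2.0],
              [2.0,  1.0]])                   # gives the field curvature (a spiral)

def velocity(x, t):
    # Points roughly toward TARGET, but curving, so straight-line jumps cut corners.
    return A @ (TARGET - x)

def sample(num_steps, seed=0):
    x = np.random.default_rng(seed).standard_normal(2)   # start from pure noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        x = x + velocity(x, i * dt) * dt                  # Euler step in the local direction
    return x

reference = sample(10_000)                    # many tiny steps ~ the "true" endpoint
for steps in (4, 20, 100):
    err = np.linalg.norm(sample(steps) - reference)
    print(f"{steps:>4} steps -> drift from reference endpoint: {err:.3f}")
```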
Thanks for sharing!!