What is public right now is StableDreamFusion [2]. It produces surprisingly good results on radially symmetrical organic objects like flowers and pineapples. You can run it on your own GPU or in a Colab notebook.
Or, if you just want to type a prompt into a website and see something in 3D, try our demo at https://holovolo.tv.
[1] https://dreamfusion3d.github.io/
They use https://deepimagination.cc/eDiffi/ as the text-to-image diffusion model, which can be replaced with Stable Diffusion or something else.
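The reason the diffusion model is swappable is that DreamFusion only uses it as a critic via Score Distillation Sampling (SDS): noise a rendered image, ask the model to predict that noise, and push the rendering in the direction of the prediction error. Here is a minimal NumPy sketch of that loop; the `toy_denoiser` is a made-up stand-in (it just prefers all-0.5 images), whereas a real pipeline would plug in eDiffi, Stable Diffusion, or another pretrained text-conditioned model, and would optimize NeRF parameters rather than raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(noisy_image, t):
    """Stand-in for a pretrained diffusion model's noise predictor.
    A real model would also condition on the text prompt."""
    # This toy model believes the clean image is all 0.5s, so its
    # "predicted noise" is whatever residual is left after removing that.
    alpha = 1.0 - t
    target = np.full_like(noisy_image, 0.5)
    return (noisy_image - np.sqrt(alpha) * target) / np.sqrt(1.0 - alpha)

def sds_gradient(rendered_image, t):
    """SDS: add noise to the rendering, ask the model to predict it,
    and use (predicted - actual) noise as a gradient on the rendering."""
    eps = rng.standard_normal(rendered_image.shape)
    alpha = 1.0 - t
    noisy = np.sqrt(alpha) * rendered_image + np.sqrt(1.0 - alpha) * eps
    eps_hat = toy_denoiser(noisy, t)
    w = 1.0 - alpha  # one common timestep weighting; schedules vary
    return w * (eps_hat - eps)

# "Rendering" here is just a flat image of learnable pixels;
# DreamFusion renders a NeRF and backprops through the renderer.
image = np.zeros((8, 8))
for _ in range(200):
    t = rng.uniform(0.02, 0.98)  # random diffusion timestep each step
    image -= 0.1 * sds_gradient(image, t)

print(float(image.mean()))  # pixels drift toward the model's preferred 0.5
```

Because the optimized object never sees the model's weights directly, only its noise predictions, any denoiser with the same interface works, which is exactly why Stable-DreamFusion could substitute Stable Diffusion for the closed eDiffi/Imagen models.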
Such a fascinating and upside-down moment.
But also something like Scribblenauts, where arbitrary 3D objects can be created by wizards.
Side note: I'd recommend avoiding 'magic' branding in AI technology, because it's going to be outdated in a week.
The current state of the art tends not to differentiate between gibberish and output of actual value, so that may be a bit of a downer.
But I definitely see it as plausible; I just don't know where you would get the training data, though...