Thank you! I did not use the concepts directly as prompts, but asked the concept model to also output a visual description (which was only used internally). I then piped this description through GPT-4 (which was much better at this than GPT-3), added some modifiers, and only then sent it to Midjourney. You will often notice, though, that language comprehension is not really on point: the image often shows the right elements, but not necessarily in the same relationship to one another as described in the concept...
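Roughly, that GPT-4 rewriting step looks something like this. This is a minimal sketch assuming the official openai Python client; the function name, instruction text, and model string are placeholders, not the exact pipeline code. (Midjourney has no official API, so the resulting prompt still has to be submitted by hand.)

```python
# Sketch of the description -> GPT-4 -> prompt step.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def description_to_prompt(visual_description: str) -> str:
    """Rewrite the concept model's internal visual description
    into a short, concrete image-generation prompt.
    The system instruction below is illustrative only."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Condense the following visual description into a "
                    "short, concrete prompt for an image generator."
                ),
            },
            {"role": "user", "content": visual_description},
        ],
    )
    return response.choices[0].message.content.strip()
```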
For the second part of your question regarding style and postprocessing: in terms of style, what really helped was using "--style raw" and also giving it very specific locations like "…in a contemporary art museum". I had a whole bunch of these, which were randomly added to the prompt. In terms of postprocessing, I upscaled the images locally with Stable Diffusion (using imaginAIry) and then added some grain, but that's it.
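The random-modifier step is trivial but maybe useful to see spelled out. A minimal sketch, assuming a plain list of location strings; the entries besides the museum one are hypothetical examples, not the actual list:

```python
# Append a random location modifier plus Midjourney's raw style flag.
import random

LOCATIONS = [
    "in a contemporary art museum",
    "in a brutalist concrete hall",   # hypothetical entry
    "in an abandoned greenhouse",     # hypothetical entry
]

def finalize_prompt(base_prompt: str) -> str:
    """Attach one randomly chosen location and --style raw."""
    return f"{base_prompt}, {random.choice(LOCATIONS)} --style raw"
```

For the upscaling, imaginAIry ships an `aimg` command-line tool with an upscale subcommand; check the imaginAIry README for the exact invocation on your setup.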