For my "graduation project" in architectural visualisation, I decided to get more familiar with image generating AI, more specifically Stable Diffusion using ComfyUI. 
I wanted to explore how the current state of image-generating AI holds up to the needs of architectural visualisation, particularly with regard to the ability to influence specific details. This is what I perceive as the biggest obstacle to AI's usability at the moment, since image generation often leaves more up to chance than is desirable.
To have a guide and a basis for comparison, I chose to loosely replicate one of my previous renders using AI, with a very simple render serving as the base image and as the source for masks and ControlNets. By building several node-based workflows, providing both text and image prompts, experimenting with different settings, and combining parts of different generated versions, I was able to produce an image with fairly good control over the details.
Even though it took quite a lot of work to arrive at a largely correct, if not overly beautiful, image, the true value lies in reusability. Once the workflows are built and the process is structured, the same principles can be reused to generate many different images of different motifs in a short time.
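The project itself was assembled in ComfyUI's node editor, but the core idea, masked inpainting constrained by a ControlNet that reads the simple base render, can be sketched in a few lines of Python with the diffusers library. This is only an illustrative sketch under assumptions, not the workflow used in the project; the file names, prompt, model identifiers and parameter values are placeholders.

```python
# Minimal sketch (not the project's actual ComfyUI graph): regenerate only the
# masked region of a simple base render, while a Canny-edge ControlNet keeps the
# generated details aligned with the original geometry. File names, prompts,
# model IDs and parameter values below are illustrative assumptions.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

base = Image.open("simple_render.png").convert("RGB")   # plain base render
mask = Image.open("facade_mask.png").convert("RGB")     # white = area to regenerate

# Derive the ControlNet conditioning image (edge map) from the base render.
edges = cv2.Canny(np.array(base), 100, 200)
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="brick facade, large windows, soft morning light, "
           "photorealistic architectural visualisation",
    negative_prompt="blurry, warped geometry, oversaturated",
    image=base,                         # image to edit
    mask_image=mask,                    # only the masked region is regenerated
    control_image=edges,                # edge map preserves the original geometry
    strength=0.85,                      # how far to deviate from the base render
    controlnet_conditioning_scale=0.9,  # how strictly the edges are followed
    num_inference_steps=30,
).images[0]
result.save("variant_01.png")
```

In ComfyUI terms, this corresponds roughly to a graph of a Load Image node, a Canny preprocessor, Apply ControlNet and a KSampler fed with an inpaint mask; the node-based version simply makes it much easier to swap inputs, branch variations and reuse the setup.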
Final image, produced from AI-generated images
Simple render, used as the base image and for ControlNets and masks
"Traditional" render, no AI
"Traditional" render, no AI
Moodboard, partially used for prompts
