Workshop: Generative AI Images

Wednesday, 5 November 2025
2:00–4:00 pm
BR-151
Tim Fransen
Technical Tutor / Researcher / Designer

This hands-on workshop aims to demystify image-based generative AI, providing participants with an entry point for critical and responsible engagement, along with a foundational understanding of the system architecture and its real-world implications.

Focusing on Stable Diffusion as an example, the session introduces participants to the core processes by which such systems generate visual outputs from random noise, guided by text prompts. Through a series of mini-exercises, attendees will explore the basics of prompt engineering, the role of seeds in controlling randomness, and the influence of guidance scales on image generation. These activities offer an accessible and reflective entry point into the inner workings of generative AI systems.
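The three controls introduced above can be illustrated without any machine-learning machinery. The following toy denoising loop is a conceptual sketch only, not Stable Diffusion code: `toy_generate` and its `target` argument (standing in for the direction a text prompt would supply) are inventions for this illustration. It shows how a fixed seed makes runs reproducible and how the classifier-free guidance blend, `pred = uncond + scale * (cond - uncond)`, lets a scale factor control how strongly the "prompt" steers each step.

```python
import numpy as np

def toy_generate(target, seed=0, steps=20, guidance_scale=7.0):
    """Toy 'image' generation: iteratively denoise random noise toward `target`.

    `target` stands in for the direction a text prompt would supply;
    real diffusion models predict the noise with a neural network instead.
    """
    rng = np.random.default_rng(seed)      # seed -> same starting noise every run
    x = rng.standard_normal(target.shape)  # start from pure random noise
    for _ in range(steps):
        cond = x - target                  # "prompt-aware" noise estimate (toy)
        uncond = x                         # "no-prompt" noise estimate (toy)
        # classifier-free guidance blend: scale amplifies the prompt's pull
        pred = uncond + guidance_scale * (cond - uncond)
        x = x - pred / steps               # remove a fraction of the predicted noise
    return x

target = np.ones(4)
a = toy_generate(target, seed=42)
b = toy_generate(target, seed=42)  # same seed, same settings
c = toy_generate(target, seed=7)   # different seed
print(np.allclose(a, b))  # same seed -> identical output
print(np.allclose(a, c))  # different seed -> different output
```

Re-running with the same seed, steps, and guidance scale reproduces the output exactly; changing any one of them changes the result, which is the behaviour the mini-exercises explore with real models.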

Screenshot of ComfyUI, an open-source interface used to build and run AI image-generation workflows locally, providing fine-grained control over each stage and the final output.

Beyond technical exploration, the workshop foregrounds key ethical and environmental considerations, highlighting issues such as dataset bias, intellectual-property concerns, and the significant energy and water demands of generative AI tools. Framed by open-education principles – specifically equity, transparency, and environmental sustainability – the session explores practical mitigation strategies, including open and efficient models (e.g. Public Diffusion, Stable Diffusion 3.5 Large Turbo) and locally run, open-source tools (e.g. ComfyUI, DiffusionBee).

You will learn how to:

  • Explain the system architecture of diffusion models and how they transform random noise into coherent images.
  • Generate images with Stable Diffusion 3.5 Large Turbo (or an efficient equivalent) and control outputs by adjusting prompts, seeds, steps, and guidance scale.
  • Conduct a simple experiment to identify signs of dataset bias and discuss the implications.
  • Distinguish between closed, open-weight and open-source models, and understand their intellectual-property implications.
  • Build and run a simple ComfyUI workflow to generate images locally, reducing environmental impact and increasing creative control over outputs.
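A ComfyUI workflow like the one built in the session is simply a graph of nodes, and in ComfyUI's API export format it is plain JSON. The fragment below is an illustrative sketch, not the workshop's exact workflow: the checkpoint filename is a placeholder, and the loader node needed in practice depends on how the model and its text encoders are packaged. It shows where the controls from the exercises (prompt, seed, steps, guidance/cfg) live in the graph; Turbo-class models typically use very few steps and a low cfg value.

```json
{
  "1": { "class_type": "CheckpointLoaderSimple",
         "inputs": { "ckpt_name": "sd3.5_large_turbo.safetensors" } },
  "2": { "class_type": "CLIPTextEncode",
         "inputs": { "text": "a watercolour fox, autumn leaves", "clip": ["1", 1] } },
  "3": { "class_type": "CLIPTextEncode",
         "inputs": { "text": "", "clip": ["1", 1] } },
  "4": { "class_type": "EmptyLatentImage",
         "inputs": { "width": 1024, "height": 1024, "batch_size": 1 } },
  "5": { "class_type": "KSampler",
         "inputs": { "seed": 42, "steps": 4, "cfg": 1.0,
                     "sampler_name": "euler", "scheduler": "simple", "denoise": 1.0,
                     "model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0] } },
  "6": { "class_type": "VAEDecode",
         "inputs": { "samples": ["5", 0], "vae": ["1", 2] } },
  "7": { "class_type": "SaveImage",
         "inputs": { "images": ["6", 0], "filename_prefix": "workshop" } }
}
```

Each `["node_id", output_index]` pair wires one node's output into another's input, which is what gives the fine-grained, stage-by-stage control mentioned in the caption above.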

Requirements: Laptops will be provided; participants who prefer to bring their own should have Google Chrome installed and a working Internet connection.