Workshop: Private GPTs

Wednesday, 11 March 2026
2:00–4:00 pm
BR-151
Tim Fransen
Technical Tutor / Researcher / Designer

This hands-on, beginner-friendly workshop introduces participants to running large language models locally using Ollama. You’ll explore how to browse and download models to your device, turn your local files into information sources for models, and adjust application settings for drafting, summarising, and ideation – without relying on cloud-based systems.
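
As a taste of what the workshop covers: once Ollama is installed and running, models downloaded with `ollama pull` can be queried entirely on your own machine through a local HTTP API. The sketch below is a minimal illustration, assuming Ollama's default local endpoint (`http://localhost:11434/api/generate`) and a model tag such as `llama3.2`; it is not the workshop's official material.

```python
import json
import urllib.request

# Ollama serves a local HTTP API on this port by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model and return its reply."""
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama pull llama3.2` and a running Ollama server):
# print(generate("llama3.2", "Summarise this paragraph in one sentence: ..."))
```

Because the prompt never leaves your device, the same pattern works for drafting or summarising material you would not want to send to a cloud service.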

Screenshot of Ollama, an open-source interface for running multimodal language models locally.

The session will also unpack the trade-offs of local vs hosted AI (including privacy, bias, performance, and environmental impact), and discuss practical ways this tool can support teaching, learning, and everyday knowledge work.

You will learn how to:

  • Use Ollama to run local large language models
  • Write prompts for common tasks (drafting, rewriting, summarising, planning)
  • Work with your own local documents to help keep sensitive information private
  • Use a vision-language model (Qwen3-VL:4B) to describe visual content, answer questions about it, and draw inferences from images
  • Critically assess the possibilities and constraints of local GPT use (privacy, bias, accuracy, energy use)
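
For the vision-language exercise, multimodal models in Ollama accept base64-encoded images alongside the text prompt. The sketch below shows one plausible way to build such a request, assuming the `images` field of Ollama's `/api/generate` API and the model tag `qwen3-vl:4b`; treat both as illustrative rather than definitive.

```python
import base64

def build_vision_request(model: str, prompt: str, image_path: str) -> dict:
    """Build an /api/generate payload with an attached image.

    Multimodal models take base64-encoded image data in the
    `images` field of the request body.
    """
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "model": model,
        "prompt": prompt,
        "images": [image_b64],
        "stream": False,
    }

# Example payload for describing a local screenshot:
# build_vision_request("qwen3-vl:4b", "Describe this image.", "slide.png")
```

The payload would then be POSTed to the same local endpoint as a text-only request, so the image never leaves your machine.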

Requirements: Laptops will be provided. No prior experience with Ollama is required.


Resources:

Are Large Language Models a Dead End? — The Artificial Human, BBC Radio 4, 25 February 2026 (28 mins)

This Is Not The AI We Were Promised. — The Royal Society Michael Faraday Prize Lecture, Professor Michael John Wooldridge, 18 February 2026 (62 mins)