Experimenting with AI, Part 2: In Search of a Better Sensemaking Pre-read
Written by: Ashley Dresser, The Convive Collective
In our collective journey into the world of AI experimentation, my colleague Monalisa Salib shared some of AI’s most immediate applications in her most recent article: Experimenting with AI: What “Humans in the Lead” Looks Like at Convive. She discusses Convive’s use of AI on projects through the lenses of “The Investigator”, “The Professor”, and “The Author”. And so naturally, as the Learning Experience Designer at Convive, I will now take up the exploratory baton from Monalisa and ask, “What about the designer?”
In part two of this blog series, I share my findings on leveraging AI as a design tool, specifically in the phase that comes after you’ve analyzed the data and are deciding how to present it for sensemaking. I was curious to see if AI is ready to be the new intermediary, either by helping non-designers produce these learning artifacts or by assisting experienced designers in doing their work more efficiently.
I focused my experimentation on a common pain point for our client learning experiences: the sensemaking pre-read.
Pre-reads are meant to level the playing field before a workshop or engagement, but in practice they often struggle in two main areas:
Pre-reads are quick to fall into “TL;DR territory” (too long; didn’t read), impacting the success of a workshop.
A significant amount of context is required in the pre-read, making a session feel heavy before it has even begun.
My goal was to confront “death by pre-read” with my shiny new AI sword and explore formats that could reduce cognitive load upfront while still creating a shared reference point.
I ran two small experiments to explore different kinds of support for sensemaking pre-reads: AI-generated video and AI-designed narrative documents. In the first experiment, I tested whether short videos generated with Runway AI and Sora would be quick and easy to produce. These videos could then be used as a way to help teams “see” each other’s work more easily ahead of sensemaking. In the second experiment, I tested Gamma, a tool that designs web-based, visual documents, to see whether it could offer a more engaging pre-read experience while significantly reducing design time.
Adding AI Video to Sensemaking Pre-reads: Not Convinced (Yet)
I selected five different top stories that I wanted to present for a cross-programmatic team sensemaking session. I asked ChatGPT to turn them into a “Five-Beat Narrative”, a standardized story structure that I could then use to create video prompts and clips in Runway AI. It gave me quality five-beat narratives, but it took multiple iterations to get there, and the generated videos still felt overly generic. I found that Sora (OpenAI’s video generation tool) produced more realistic results with fewer adjustments required.
I wanted to edit the video clips to create one sensemaking video per story, but the Runway AI platform was slow and glitchy, so I had to switch to another video editing platform to finish the job. Given that I couldn’t use it as an “all-in-one” platform, Runway AI doesn’t feel like an improvement in process for me. I was also left with the question of whether these videos and their particular aesthetic would be received as helpful framing or as low-value “AI slop”, particularly in our already saturated content landscape.
As an alternative, I generated a single image to represent each “beat” of the five-beat narrative, using them as visual anchors without compiling full videos. Image generation, which is available even in the free version of ChatGPT, requires significantly less time and technical bandwidth. While less dynamic than video, these visuals still provide a strong anchor for context in sensemaking pre-reads.
AI for Producing Faster and More Dynamic Pre-Reads: Surprisingly Useful
Gamma turned out to be a much faster way to design an engaging sensemaking pre-read. It has a “Canva-like” feel, which means anyone can use it without too much fear. Its visual hierarchy choices worked well overall, though this likely depends on the quality of the information inputs and the content organization done upfront. A ten-page pre-read took about 30 minutes to put together, so my dog (waiting for his walk) was happy about that.
One caveat to keep in mind is that Gamma works best in its own browser environment, which requires a sign-in. You can export it to PDF or Slides and keep editing from there, but some formatting may be lost. For short-term, live use, it’s great. For anything that needs to live on for a long time or move easily across different systems, it takes a bit more wrangling. I will continue to explore use cases for this tool because of the time-savings and design support it offers.
These experiments helped me see past the hype and clarify that while AI can support design work, the role of “AI as the designer” is still some distance off on the horizon. In the meantime, if you want to keep playing in the AI sandbox, we recommend Futurepedia as a tool database, and the newsletter Superhuman AI to learn more about practical AI applications. See you in the sand!