Experimenting with AI: What “Humans in the Lead” Looks Like at Convive

Written by: Monalisa Salib, The Convive Collective

These days, it can feel like if you are not experimenting with AI, you are falling behind. At the same time, there is and should be a healthy dose of skepticism around new tools, especially those that raise ethical and environmental concerns. In this blog series, we'll share what we've been learning from our use of AI, in hopes of giving you a place to start.

At Convive, we have been experimenting with AI tools across a range of tasks: as thought partners, research assistants, and of course, notetakers. On a recent assignment with a European foundation, we tested ChatGPT, Perplexity, and Gemini to support a large-scale qualitative systems change analysis.

This piece is not a review of the tools themselves. Instead, it focuses on how we work with them. At Convive, we follow a “humans in the lead” mindset, which is meaningfully different from the more common “humans in the loop” framing used in AI conversations. 

“Humans in the lead” means that we, the sentient beings, set the goals, define the parameters, ensure ethical use, crosscheck and verify outputs, and ultimately take responsibility for what is produced. The goal is not to replace ourselves, but to supplement our expertise in ways that make us more effective. Left unchecked, AI can quietly reshape how decisions are made, who is accountable, and what gets trusted. 

Here is what humans in the lead looked like on one large-scale qualitative analysis project. In addition to being researchers, writers, facilitators, and communicators, we had to step into a few additional roles when working with AI:


The Investigator: Verify Everything 

Treat AI as Your Informant

I regularly prompted ChatGPT and Perplexity for research support and analysis, then went back to verify what they produced. This step was non-negotiable. I encountered hallucinations and, just as often, confident conclusions drawn from one or two weak data points. With Perplexity in particular, I would follow the cited links only to discover that the evidence did not support the claim being made.

AI offers clues; my job is to verify sources, follow trails, and determine credibility. In that sense, I became the investigator, working to confirm or disprove what the tools had surfaced. Sometimes they offered a single useful breadcrumb that led me to more solid research. The process could feel tedious, and there were moments when I questioned whether AI was speeding me up or slowing me down. In the end, I can say with confidence that I produced more in less time overall, while still staying in control of the products.

This is where Gemini proved useful, combining basic Google search with an AI assistant. Still, the conclusions I ultimately reached were often meaningfully different from what the tools initially generated.


The Professor: Lead With Your Own Thinking

Use AI as a Student Assistant

I come to the data with my own point of view; AI plays the role of a capable but fallible student. Before asking the tools for their perspective, I first formed my own conclusions based on the data. Only then did I prompt ChatGPT and Perplexity, treating them like students in a graduate seminar whose thinking I intended to interrogate and challenge.

This approach runs counter to how many people use AI. Often, AI is positioned as the professor while the human remains the student. That dynamic is risky. It invites us to accept conclusions that may sound authoritative but are not well grounded. My advice is simple: treat AI as a research assistant, not your team leader.


The Author: Retain Your Voice

Treat AI as a Copy Editor

Many people now rely on AI as their editor, but final authorship still matters. AI polishes; I decide what stays, what goes, and what sounds like me. I write my drafts first, then use ChatGPT for targeted copy editing with clear guidance on the kind of feedback I want. After that, I review and revise where needed to ensure the voice is mine. (As I've done with this post!)

ChatGPT is a strong copy editor, but it has limits. Sometimes it sounds polished while saying very little. I have heard that Claude is a strong writer and can mimic a user’s style when given examples. Even so, I am not ready to give up final ownership of my writing.

These are three ways we are making sure humans stay in the lead as we experiment with AI at Convive. For us, experimenting with AI is less about speed and more about judgment. While tools will keep evolving, responsibility cannot be outsourced. We will continue sharing what we are learning through our newsletter and blog posts. Sign up to follow along as our experiments evolve.
