Calm The AI Panic!
Curators, You Are Still In Charge!
- The Biggest Myth About AI in Museum Interpretation
- Why You Don’t Need to ‘Understand AI’ to Stay in Charge
- How to Set the Rules Before Any AI Project Begins
If AI currently makes your team feel more nervous than excited, you are in good company. Surveys across the heritage sector show real enthusiasm for digital tools, but also deep concern about AI’s risks, from budget and skills gaps to ethics and visitor trust.
The aim isn’t to turn curators into coders or AI engineers; it’s to give you enough clarity and control to brief, challenge and steer partners with confidence.
This guide is about moving from “We’re supposed to be doing something with AI…” to “We know where AI helps, where it doesn’t, and what we expect from any partner using it on our behalf.”

What museum teams are really worried about
When you strip away the hype, three anxieties come up again and again in museum AI discussions.
- Losing control of voice and accuracy: fear that AI will oversimplify collections, hallucinate facts, or speak about sensitive histories inappropriately.
- Damaging trust with audiences and communities: concern that visitors will feel tricked if AI isn’t transparent, or that partners and artists will be displaced.
- Getting stuck with something unmanageable: worry about skills, capacity, and what happens when a pilot ends but the tech needs ongoing care.
Recent guidance from the US and UK heritage sectors stresses that these are not reasons to avoid AI entirely, but reasons to put ethics, governance and staff involvement at the centre of any project. In other words: if a proposal doesn’t address these worries up front, it’s not ready.
Reframing AI: you set the rules, your partners do the plumbing
For most curators and learning teams, the realistic role in an AI‑supported project is strategist, author and reviewer – not system builder. Sector articles on ethical AI make the same point: leadership and content teams must define where AI is allowed, how it is checked, and what happens when something goes wrong.
In a Panivox/RichCast project, that typically looks like this:
- You bring the story. You define the themes, objects, communities involved, red lines, and tone of voice.
- Together, we set the AI boundaries. We agree on what AI can and cannot do (for example: support voices, translations and structured Q&A, but never generate new historical claims).
- Panivox builds the experience. Our team uses RichCast and AI tools behind the scenes to produce interactive scripts, audio and branching structures that match your brief.
- You approve and adjust. You review prototypes, test with colleagues or community partners, and sign off only when the interpretation meets your standards.
So, AI isn’t something you “switch on” yourself. It’s something you ask clear questions about when you commission work and insist that partners show how they are using it responsibly.
A curator‑friendly SAFE checklist
S – Story first
Ask: “What are we trying to help visitors understand or feel, and why is this story important now?” If the answer is “because AI is trendy”, you can safely say no.
With Panivox, projects begin with story planning and visitor journeys; only once that foundation is clear do we decide whether AI is useful for tasks such as multilingual audio, question-and-answer segments, or adapting content for different age groups.
A – Authorial control
Decide whose voice is speaking – curator, community collaborator, character – and insist that this voice is scripted and approved by humans.
In practice:
- You control the core script and any alternative wordings.
- Panivox uses AI to perform and deliver the script (for example, via synthetic voices, translations, or timing adjustments) but does not change its meaning.
- Any “conversational” responses are drawn from a curated knowledge base you have reviewed, not from the open internet.
F – Fair, transparent and safe
External guidance emphasises that responsible AI in museums must be transparent and aligned with institutional values. For interpretation, that means:
- Being open with visitors when an experience uses AI‑generated voice or imagery, especially in sensitive contexts.
- Checking for bias, stereotyping and omissions just as rigorously as you would with labels or learning resources.
Panivox builds disclosure and review loops into projects, but the ethical compass comes from you and your colleagues.
E – Easy to pause and improve
Confidence increases when you know you’re not stuck with a black box. Heritage research highlights that lack of expertise and fear of “locking in” bad choices are major barriers to AI adoption.
Because RichCast experiences are browser‑based and centrally managed by Panivox, you can:
- Start with a small pilot (e.g., one interactive portrait) and limit it to a single season or gallery.
- Request quick edits, tone changes or content removals without a full rebuild.
- Treat AI‑supported work as iterative interpretation, not a permanent installation that can never change.
Where AI genuinely lightens the load (without touching your voice)
Once ethics and roles are clear, it’s easier to see specific tasks where AI can help your team without undermining expertise. Sector resources point to some consistent “quick wins”.
- Multi‑voice, multi‑language delivery: AI can help generate and refine audio in multiple voices and languages from your approved text, making it affordable to serve more audiences with the same interpretation budget.
- Adapting complexity: AI can support the creation of shorter or simpler variants of a script for younger visitors or new audiences, which you then review and correct, instead of writing each one from scratch.
- Structured, question‑led experiences: Panivox can build RichCast experiences in which visitors ask common questions and hear pre‑approved answers, powered by AI speech recognition rather than AI‑generated content.
Across all of these, your role is to judge: “Does this feel like us? Is this fair? Would I be comfortable standing next to this in the gallery?” If the answer is yes, AI has done its job. If not, your partner revises it.
How to brief your first AI‑supported interpretation project
If your institution is cautious, aim for a tightly scoped project that demonstrates value and good governance. A practical starting brief might include:
- The story and audience: which object, space or theme you want to focus on, and who it’s for (for example, families, school groups, adults).
- The red lines: topics, phrases or approaches that are off‑limits; communities that must be consulted; how you want AI to be used (or not used).
- The success test: what a “win” would look like (for example, more questions from visitors, longer dwell time, positive teacher feedback, or stronger connection to mission).
Panivox then turns that into a concept and prototype in RichCast, using AI where it clearly supports those goals (for example, to deliver multiple voices or formats), and brings it back to you to critique, refine and approve.
Two useful external reads to share with colleagues
You can deepen this conversation internally with sector‑focused resources (neither promotes competing products):
- Artificial Intelligence in Museums: Discussing Ethics and Protocols – American Alliance of Museums
Why: A recorded AAM session that lays out use cases, ethical questions and practical protocols for AI in museums – ideal background reading for curators and learning teams starting to shape their own stance.
- AI adoption in heritage institutions faces barriers – Heritage Research Hub (SHIFT Project)
Why: EU‑funded survey analysis showing both enthusiasm for AI and the main barriers (budget, skills, resistance to change) in cultural heritage – useful evidence that your institution’s anxieties are normal and solvable.
Handled this way, AI becomes less a source of panic and more a set of behind‑the‑scenes tools that your partners use to amplify what you already do best: careful, human‑centred storytelling rooted in collections and communities.
AI doesn’t have to mean handing over control.
With the right structure and safeguards, it becomes a powerful tool. If you’d like to explore this approach in more detail, feel free to get in touch for an informal conversation.
