Voices in Creative Innovation: Tuki Clavero, Research Coordinator at BUas & Senior Researcher at AGM
Tell us about your journey through the creative industries. How did you become inspired by AI?
I came into the creative industries through games and interactive media. My inspiration comes from using AI to remove friction in workflows, open up access to tools, and give people more space for experimentation rather than replacing their creativity.
Our audiences include emerging talent, students, and educators. What advice would you give to people entering the industry during such a fast-moving, rapidly changing time?
Do not chase every new tool; focus on understanding underlying principles. Treat AI tools as something you will learn, outgrow and swap out, rather than as skills that define you. Be ‘tool agnostic’. Build a portfolio that shows how you think with tools: your process, your decisions. Finally, stay curious but sceptical: ask what a tool is good at, what it is bad at, and whose interests it serves. Always be the human in the loop of whatever tool you use, and make sure you are the one making the decisions.
How do you see AI reshaping the creative industries right now, and what changes feel most exciting to you personally?
AI is shortening the distance between an idea and a first prototype. It is becoming much faster to sketch an experience, visualise a scene, explore variations or test out narrative directions. Personally, I am most excited about AI as an interactive sketching tool and responsive brainstorming surface in the early stages of a project: you put in rough ideas, and the system responds with variations that you can evaluate, edit, combine and refine. That makes your thinking visible more quickly, gives you something concrete to react to, and makes it easier to compare options and make decisions with collaborators.
What do you believe is the most misunderstood aspect of AI in creative work?
Many people still see AI as either ‘magic’ or ‘a threat’, rather than as a set of fallible systems with very specific strengths and limitations. It does not understand meaning or context the way humans do; it predicts patterns based on data. That means it can be brilliant at generating options but terrible at judgement, nuance and responsibility.
Within the AICI, what part of the project excites you the most and why?
Using AI to help people who cannot draw (or do not feel confident drawing) still express rich visual ideas. What motivates me is lowering barriers for students, educators and practitioners so they can test and communicate concepts quickly and iterate on ideas.
What creative disciplines or mediums do you feel AI is especially well-suited to support, and why?
AI is strong in pre-production and concept phases. It is also helpful in writing and rewriting text, from dialogue options to documentation, as long as humans stay firmly in charge of intention and editing. Anywhere that iteration, variation and quick visualisation are important, AI can be a powerful assistant.
What challenges do you think our industry still needs to overcome to adopt AI meaningfully and ethically?
We need clearer, workable standards around data, IP, consent and attribution so that creators are not forced to choose between speed and ethics. There is also a skills gap: many people feel pressure to ‘use AI’ without support to understand its limitations, biases and risks. We must also make sure AI does not deepen inequalities in who gets access to tools, training, and opportunities to shape how these systems are built.
What skills do you think future creators will need to thrive in an AI-enabled world?
Critical thinking and ethical awareness will matter more than ever: being able to question outputs, spot gaps, and understand biases. Communication skills (including ‘prompting’ as a form of design brief because LLMs are not mind-readers) will be key, because working with genAI is often about specifying context and constraints very clearly. Data and information literacy will help creators understand where systems get their knowledge from and what that implies.
How do you see the role of educators evolving as AI tools become more accessible?
Educators will increasingly shift from being primary sources of information to being designers of learning experiences and coaches in judgement. Their role will be to help students use AI critically and reflectively, focusing more on process, decision-making and collaboration, and less on outputs that can easily be automated.
What opportunities do you think Europe has to lead in AI for creative industries?
Europe has a strong mix of cultural diversity, public institutions and funding structures that can support thoughtful experimentation, not just rapid commercialisation. There is an opportunity to combine regulatory leadership with practical support for ethical, human-centred AI in the arts, media and education.
Cross-border collaboration is also a strength: creative industries here are used to working across languages and contexts, which is very relevant for AI. If we invest in open, accessible tools and research, Europe can help define models of AI use that respect creators.
What trends do you believe will shape the intersection of creativity and technology in the next five years?
I hope provenance, authenticity and watermarking will become more important (even mandatory) as synthetic content scales, especially for education and news. I also hope to see more focus on creating tools that support learning and reflection, not just speed and output volume.