
Reinventing the Intelligence Architecture: Crafting Ethical Artificial Intelligence for the Creative Industries

Vision for Ethical AI in Creative Sectors Proposed by AI Pioneer Ve Dewey


In the ever-evolving landscape of technology and design, globally renowned design leader Ve Dewey has been at the forefront, working with iconic brands like Mattel Inc. and Adobe. Today, the focus is shifting towards responsible AI design in the creative industries, guided by four core principles: inclusivity, transparency, accountability, and sustainability.

Inclusivity by Design

The drive for inclusivity in AI systems involves working closely with diverse stakeholders, prioritising underrepresented voices in both design and governance. Initiatives like the Inclusive AI Lab integrate feminist and cross-cultural perspectives through participatory governance and community-led research, ensuring AI respects cultural nuances and diversity rather than eroding them.

Transparency and Accountability

Transparency about data collection and AI usage is crucial for maintaining trust with users and stakeholders. Accountability means humans remain responsible for AI-involved decisions, with a human-in-the-loop approach that augments rather than replaces human creativity and judgment. Organisations are committed to never using generative AI to deceive or spread misinformation, ensuring ethical AI deployment that honours creators’ original contributions.

Sustainability by Design

Sustainability is recognised as essential, promoting AI systems that consider environmental impacts and long-term cultural effects. Sustainable AI design in creative industries aims to support democratic participation and preserve cultural diversity over time, rather than undermining them.

Human-Centered Use

AI should serve as an assistive tool that complements human creativity, intuition, and emotional intelligence, rather than an autonomous system replacing creative professionals. The focus lies on empowering creative workers through AI upskilling and reskilling opportunities, ensuring AI enhances careers and creative expression ethically and respectfully.

Additional aspects include ethical use of AI to promote diversity and inclusion, safeguard user privacy, and comply with data protection regulations. Public policy approaches that apply risk-based oversight to AI allow innovation while protecting creatives’ rights and mitigating associated risks. The creative AI research community is exploring evolving human-machine collaboration dynamics, emphasising the preservation of emotional, cultural, and ethical wisdom during AI integration into creativity.

These frameworks and efforts reflect a multidimensional approach to responsibly leveraging AI in creative industries, ensuring it advances innovation while protecting human values, diversity, and sustainability. However, some concerns remain, such as the lack of structural governance to protect imagination, creative agency, and originality in AI, and the disappearance of responsibility in AI practice.

As AI reshapes how culture is made and shared, it presents a systemic design challenge. It is essential to avoid reproducing extractive colonial logics in our cultural systems, safeguard the conditions that enable cultural expression, and ensure transparency in AI systems to maintain trust. The international context adds urgency, with the U.S. targeting so-called "woke AI" in an Executive Order, potentially impacting AI use in the UK and Europe.

In the UK, the creative industries, a key growth sector, employ 2.4 million people and contribute £124 billion to the economy. At a recent All-Party Parliamentary Group (APPG) on the Future of Work meeting, artists described how their voices were cloned, music sampled, and livelihoods reshaped without consultation or consent. WeTransfer's updated terms of service in summer 2025, permitting user content for AI training, caused a backlash and exposed the importance of transparency.

Singapore's anticipatory workforce policies offer a model of cross-sector, design-led governance that the UK could adopt for its creative industries. EPFL, ETH Zürich, and the Swiss National Supercomputing Centre are building a sovereign open-source multilingual model on public infrastructure, exemplifying AI designed for trust and regeneration.

AI companies like Anthropic claim to make tools with human benefit at their foundation, and OpenAI promises AI that benefits all of humanity, but these ideals often clash with lived experience. The Inclusive AI Lab, co-founded by Laura Herman and Professor Payal Arora, is a leading example of embedding inclusion through participatory governance and community-led research.

As we navigate this new era of AI in the creative industries, it is crucial to remember that AI should serve as a tool to enhance human creativity, not replace it. By prioritising inclusivity, transparency, accountability, and sustainability, we can ensure that AI advances innovation while protecting human values and cultural diversity.

  1. Ve Dewey, an acclaimed design leader, has been advocating for responsible AI design, particularly in the creative industries, as technology continues to evolve.
  2. The Inclusive AI Lab, a pioneering initiative, emphasises the importance of integrating feminist and cross-cultural perspectives into AI design and governance.
  3. Transparency in AI practices is vital for maintaining trust with users and stakeholders, and organisations are committed to ensuring ethical AI deployment that respects original creators.
  4. The creative industries, a significant contributor to the UK economy, are grappling with the impact of AI on artists' work as their voices, music, and livelihoods are altered without consultation or consent.
  5. Singapore's anticipatory workforce policies, focusing on cross-sector, design-led governance, could serve as a model for the UK's creative industries.
  6. EPFL, ETH Zürich, and the Swiss National Supercomputing Centre are developing a sovereign open-source multilingual model on public infrastructure, prioritising AI designed for trust and regeneration.
  7. AI companies like Anthropic and OpenAI claim to focus on tools that benefit humans, but these ideals may sometimes conflict with real-world experiences.
  8. The Inclusive AI Lab, co-founded by Laura Herman and Professor Payal Arora, is a leading example of how engaging diverse communities through participatory governance and community-led research can advance responsible AI design in the creative industries.
