Artificial intelligence has unlocked new business capabilities to understand customers, create content, and deliver personalised experiences. The rise of generative AI, particularly with systems like DALL-E for image generation and ChatGPT for language, has signalled a new era of AI-powered marketing. These generative models can help create marketing assets, engage customers via chatbots, and optimise real-time campaigns based on data and behaviours. The promise is enticing - improved efficiency, lower costs, and better results.

However, AI-driven marketing carries significant risks and ethical challenges if deployed carelessly without proper oversight. Systems built on biased data can produce discriminatory and unfair outcomes. Opaque algorithms make it hard to audit and explain AI decisions that impact people. AI-generated content raises concerns about authenticity, safety, and intellectual property. And misaligned AI incentives can lead to behaviour that deceives or manipulates.

These issues underscore why ethics and building trust should be top priorities for marketers adopting AI. Customers have grown wary of brands that exploit or deceive them for quick gains. Earning genuine trust is critical for long-term success. According to experts like Olivia Gamblin, an AI ethicist, and Tara Dezao, a marketing leader at Pega, ethics is the currency that builds this trust.

This post explores responsible AI strategies from thought leaders like Olivia, Tara, and Pandata CEO Cal Al-Dhubaib. Olivia emphasises how AI must align with human ethics and values. Tara advocates techniques to ensure AI fairness, transparency, empathy and robustness against misuse. Cal provides insights into stress testing AI and focusing on explainability. Marketers can develop trustworthy relationships between brands, customers, and AI by championing organisational ethics.

Defining Ethics

Before exploring responsible AI strategies, it's vital to level-set on what ethics means. Olivia Gamblin, the founder of Ethical Intelligence, explains that ethics is an innately human decision analysis tool. We all make small ethical choices daily, like skipping drinks with a friend to video call parents and spend quality family time. These choices align actions with values in pursuit of a "good life."

Ethics definition

Similarly, ethics provides a tool to build "good tech" that embraces human values. It helps assess if AI decisions match the objectives and values we want to uphold. For marketers, this means evaluating if AI optimisation tactics align with brand purpose, integrity and customer well-being. Ethics is not a box to check but an ongoing exercise of thoughtful analysis rooted in moral principles.

Sources of Bias

However, as Tara Dezao from Pega points out, AI systems don't automatically make ethical decisions. "AI algorithms are not objective; they're blind," she explains. The algorithm's performance depends wholly on the data used to train it and the programmer's instructions. Without careful design, AI will amplify biases in data or coding.

For example, Amazon built an AI recruiting tool that downranked women's resumes since it was primarily trained on male applicant data. Likewise, dermatology algorithms have much higher error rates for darker skin types if not trained on representative datasets. These examples underscore the need for ethical analysis to uncover biases before real-world use.

Risks and Benefits of Generative AI

The rise of generative AI, with its ability to create human-like content and interactions, surfaces new opportunities and ethical quandaries. Systems like ChatGPT promise highly customised conversations that can engage customers and reduce contact centre costs. AI image and video generation can help small brands produce marketing assets at scale.

But Olivia highlights concerns like plagiarism, the manipulation of people through misinformation, and the existential threat of AI replacing human creativity. Generative models optimised purely for efficiency and the bottom line can cause real harm to individuals, communities and society.

Responsible use starts with awareness - understanding what training data and directional prompts were used to build generative AI. Thoughtfully examining ground truths and assumptions is critical before unleashing such powerful technology.

Principles of Responsible AI

With AI fundamentals covered, how can marketers move towards responsible use? Tara Dezao suggests four core principles:

Fairness - AI must avoid unjust bias and discrimination against protected groups. Biased data or algorithms that disadvantage people based on attributes like gender or race are unethical.

Transparency - Opaque "black box" AIs that can't explain reasoning are risky. Ethical AI should reveal its decision-making process to a human audience.

Empathy - AI should adhere to moral and social norms, not just maximise efficiency at the expense of people. Customer well-being should supersede short-term profit goals.

Robustness - AI needs hardening against misuse and real-world variability outside training data. Unconstrained AI optimised purely for metrics is prone to harmful exploitation.

These principles provide guardrails as marketers integrate AI into campaigns and customer interactions. They help avoid biased and high-risk algorithmic systems optimised solely for clicks or conversions over human dignity and welfare.

Testing for Bias

Cal Al-Dhubaib, CEO of Pandata, emphasises rigorously stress testing AI solutions before deployment to uncover biases. His teams continually audit models against specific ethical values and use quantitative metrics to track how outcomes are distributed across demographic groups.

For instance, a CX chatbot could be evaluated for potentially offensive or inappropriate responses. Or an algorithm optimising customer engagement could be analysed for equal representation across demographics. Proactively detecting issues prevents unethical AI from ever negatively impacting people.
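To make this concrete, here is a minimal sketch of the kind of quantitative audit described above: checking whether a targeting model selects customers at similar rates across demographic groups. The column names are hypothetical, and the 0.8 threshold follows the common "four-fifths rule" rather than anything Cal prescribes:

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Share of positive decisions (e.g. 'show offer') per demographic group."""
    return df.groupby(group_col)[pred_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest.
    A ratio below ~0.8 (the 'four-fifths rule') is a common red flag."""
    return rates.min() / rates.max()

# Hypothetical audit frame: model decisions joined with demographics.
audit = pd.DataFrame({
    "gender":   ["f", "f", "f", "m", "m", "m"],
    "targeted": [1, 0, 1, 1, 1, 1],
})

rates = selection_rates(audit, "gender", "targeted")
print(rates)                          # per-group selection rates
print(disparate_impact_ratio(rates))  # flag for human review if below ~0.8
```

The same pattern extends to any group indicator and any yes/no decision, from ad targeting to whether a chatbot escalates a query to a human.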

Enabling Transparency

Transparent AI also builds trust. Tara highlights the dangers of opaque algorithms like the Apple Card's discriminatory credit limits. Yet full transparency isn't always possible or appropriate, depending on the application.

The key is determining the right level of transparency for the use case. Providing explanations that help humans understand AI-driven actions, without revealing confidential IP, reinforces ethics. As Cal says, "We have to make it explainable" - clarifying why AI makes the decisions it does is critical for responsible adoption.
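One lightweight way to provide such explanations without exposing proprietary internals is a model-agnostic technique like permutation importance, which measures how much each input drives predictions. A minimal sketch, assuming a scikit-learn style model and hypothetical marketing features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Stand-in for a real propensity model; the feature names are invented.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["recency", "frequency", "spend", "email_opens"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy. The result
# is a plain-language ranking ("spend matters most") that can be shared
# without revealing the algorithm itself.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```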

Organisational Values

Olivia Gamblin emphasises that responsible AI starts with people, not technology. A critical first step is identifying core brand and team values. What ethical principles does your organisation want to uphold?

Gathering stakeholders to define fairness, accountability, transparency, privacy, etc., establishes a moral compass. Come to a consensus on definitions aligned to company goals. For example, defining safety and well-being is crucial when strategising for a child-focused brand.

Regularly revisiting team values through open conversations fosters an ethical culture. People need psychological safety to surface concerns without fear of repercussions. Repeated discussions that share diverse perspectives cement shared values.

Ethical Policies and Oversight

Next, bake ethics frameworks into workflows to reinforce those values. Documenting AI practices provides accountability. Olivia suggests setting checkpoints to analyse whether decisions align with core values. This keeps ethics top-of-mind throughout the AI lifecycle.

Auditing tools and algorithms against defined values identifies issues to address before launch. Tara Dezao asks, "How often are you testing your models?" to gauge whether testing occurs continually, not just as a one-off compliance exercise.
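One way to make that cadence concrete is to run fairness checks as automated tests on every retrain and on a schedule, rather than as a single pre-launch exercise. A hypothetical pytest-style gate; the column names and the 0.8 floor are illustrative assumptions, not standards from the experts quoted here:

```python
# test_fairness.py - run on every retrain and on a nightly schedule.
import pandas as pd

FAIRNESS_FLOOR = 0.8  # example threshold based on the four-fifths rule

def load_latest_predictions() -> pd.DataFrame:
    # Stand-in for fetching the most recent scored batch from your model store.
    return pd.DataFrame({
        "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
        "targeted": [1, 1, 0, 1, 1, 1, 1, 0],
    })

def test_selection_rate_parity():
    df = load_latest_predictions()
    rates = df.groupby("gender")["targeted"].mean()
    ratio = rates.min() / rates.max()
    assert ratio >= FAIRNESS_FLOOR, (
        f"Disparate impact ratio {ratio:.2f} fell below {FAIRNESS_FLOOR}"
    )
```

Wiring a check like this into CI turns "how often are you testing?" into an answer the pipeline itself enforces.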

Shared responsibility is also vital - one leader can't force-feed ethics. Distributing ownership across teams makes ethical AI a collective effort. But concentrated oversight is still needed to drive action on frameworks.

Ethical Technology Practices

Procurement is another opportunity to ingrain ethics: evaluate new AI tools against brand values and require due diligence from suppliers.

When commissioning AI, mandate options to adjust transparency, from fully revealing logic to providing high-level explanations. Proactively request techniques like in-loop testing and synthetic data to prevent data abuse and bias.
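What synthetic data might look like in practice: statistically plausible stand-in records that let a supplier exercise its model without ever handling real customer information. A minimal sketch; the field names and distributions are illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)
n = 1_000

# Fabricated customer records drawn from plausible distributions - no real
# people, so no privacy exposure during vendor or in-loop testing.
synthetic_customers = pd.DataFrame({
    "age": rng.integers(18, 80, size=n),
    "monthly_spend": rng.lognormal(mean=4.0, sigma=0.6, size=n).round(2),
    "email_opens_30d": rng.poisson(lam=3.5, size=n),
    "region": rng.choice(["north", "south", "east", "west"], size=n),
})

print(synthetic_customers.head())
```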

This section covered tactics like defining values, audits, documentation, training, and ethical procurement to embed ethics within teams, workflows and technology. Establishing accountability and guidelines makes responsible AI possible.

Best Practices for Developing Trustworthy AI

  1. Audit existing data and algorithms for biases before deploying AI.

  2. Actively test for and measure unfairness, especially in models trained on customer data.

  3. Adopt strategies that make AI decision-making transparent and explainable to customers and business leaders alike.

  4. Don't hide behind opaque "black box" models; document AI systems thoroughly.

  5. Stress test AI systems for edge cases and potential misuse before launch.

  6. Ensure the developers and data scientists building AI systems adhere to ethical practices.

  7. Create organisational value definitions aligned to company goals through principle-centred discussions.

  8. Build ethics directly into workflows to establish accountability across the teams that develop and use AI.

  9. Set ethical review and oversight checkpoints throughout AI development.

  10. Enforce responsible AI practices through procurement contracts, demanding options for transparency and synthetic test data.

  11. Emphasise the opportunity to augment human values through ethical technology.

  12. Actively engage in external initiatives advancing AI ethics to stay on top of emerging best practices.

The Rising Imperative of Responsible AI

Artificial intelligence brings tremendous potential to understand customers, personalise engagements, and optimise campaigns. But unchecked AI also poses risks like bias, deception, and unintended harm. Ethical risks multiply as AI becomes more generative, creating content and conversations.

The Critical Role of Trust in AI Adoption

Trust is the currency of marketing. Without ethics, brands lose trust and credibility. Customers expect respect for their rights and welfare, not just personalised sales pitches. They weigh a company's principles alongside price and products.

Strategies to Develop Ethical, Trustworthy AI

Responsible AI that aligns with human values and ethics is imperative for marketing success. Strategies like testing for bias, enabling transparency, stress testing edge cases, and auditing algorithms can help develop trustworthy AI.

People and Culture Enable Responsible AI

But technology alone can't ensure ethical AI. As discussed by experts like Olivia, Tara and Cal, change starts with people and culture. Equipping teams to have integrity-driven discussions, document practices, and make ethical choices at each stage of the AI lifecycle roots out problems early and often.

Unique Opportunity for Marketers to Lead

Marketers have a unique opportunity to champion AI ethics within their organisations. They must translate responsible AI into compelling brand purpose and customer experiences. By implementing the strategies covered, we can build AI that augments human values rather than undermines them.

Guidance for Putting Ethical AI into Practice

Additional resources like the AI Now Institute, Partnership on AI, and Springer Nature's AI and Ethics journal offer guidance on putting ethical AI into practice. The path won't be perfect, but progress over perfection is what's needed to realise AI's potential while protecting people. The time to act purposefully is now.

Download the AI for Marketing Playbook
