First study on generative AI’s impact on communications reveals 80% of content writers use Gen AI, but only 20% say their manager knows

PRESS RELEASE
5th November 2024

‘CheatGPT? Generative text AI use in the UK’s PR and communications profession’ is the first report to provide in-depth insight into how and why PR and communications professionals are integrating generative text AI (Gen AI) into their workflows.

The research was conducted by Magenta Associates and the University of Sussex. It highlights how Gen AI can be both a catalyst for efficiency and a source of ethical concerns.

Based on surveys of 1,100 UK-based content writers and managers, as well as 22 in-depth interviews, the white paper details both the advantages and challenges of AI use in the profession. Key findings include:

  • 80% of UK communications professionals use Gen AI tools frequently, but only 20% have told their line manager, and even fewer (15%) have received any training.
  • 66% of participants agreed that training on the use of Gen AI would be useful.
  • 68% of respondents report that Gen AI improves productivity, primarily in initial drafting and ideation stages.
  • 71% of writers said their organisation had no guidelines on Gen AI use, or that they were not aware of any; among the 29% whose organisations did have guidelines, the advice was often as vague as “use it selectively”.
  • While 68% consider Gen AI use ethical, concerns persist over transparency, especially as only 20% discuss use openly with clients.
  • 95% of managers are to some degree concerned about the legalities of using Gen AI tools like ChatGPT.
  • Nearly half of respondents (45%) are concerned to some degree about the IP implications.

Why this research matters

As Gen AI tools become more sophisticated, the need for industry-specific insights and guidelines grows. Magenta’s research aims to bridge this gap, equipping professionals with actionable recommendations for responsible Gen AI use in content creation. The report advocates for improved ‘critical algorithmic literacy’ – a common language to improve understanding and develop effective and ethical use cases – as well as transparency in AI use and workplace guidelines that align with ethical standards, particularly for smaller businesses that are less equipped to influence an AI landscape dominated by larger tech corporations.

Jo Sutherland, managing director at Magenta Associates, said: “We’re excited to lead the charge on research that directly addresses the realities of Gen AI in our industry. This isn’t just about understanding how AI works, but about navigating its complexities thoughtfully. AI has undeniable potential, but it’s crucial that we use it to support, rather than compromise, the quality and integrity that defines effective communication.”

Dr. Tanya Kant, senior lecturer in media and cultural studies (digital media) at the University of Sussex and research lead, said: “This white paper highlights the urgent need for what we call ‘critical algorithmic literacy’ in the industry – an understanding not just of how to use AI tools, but of the broader impacts these tools have on ethical practice and industry power dynamics. In the UK, there are numerous PR and communications SMEs. It’s vital that these stakeholders can exert more influence in shaping the wider landscape of development, ethics and power in Gen AI technologies.”

This white paper signals the beginning of a vital conversation within the UK communications sector. Magenta and the University of Sussex will continue to collaborate to develop a more ethical, transparent, and inclusive approach to Gen AI – one that respects the profession’s core values while embracing innovation.

Download ‘CheatGPT? Generative text AI use in the UK’s PR and communications profession’ for free.

Greg Bortkiewicz