Generative AI Is Your Reality: PR Pros Must Be Ready to Advise Your Organization on Accountability and Responsibility Guidelines

Submitted by Holly S. Ingram, APR


“When machines make bad decisions, the public and other stakeholders will look to hold organizations accountable and the people in charge of those organizations.”


These words are from the recently released PRSA position and guidance for PR pros from the Board of Ethics and Professional Standards (BEPS), “AI and Leadership Accountabilities,” on generative artificial intelligence (AI) and machine learning.

AI tools like ChatGPT and many other fast-evolving machine learning programs are now a business reality that PR pros and communicators must explore, from uses that enhance our profession to the many benefits and reputational risks they bring.

Yes, uses of AI can be silly and sublime. For example, ChatGPT helped me write a fun, last-minute limerick around St. Patrick’s Day to liven up my welcome for a speaker panel. But when it comes to business uses of this wondrous technology, PR pros need to take seriously their role in advising their organization’s leadership, or their clients’ leadership, to avoid reputational and ethical gaffes while embracing and leveraging these tools.

The BEPS position focuses on one of the fundamental ethical challenges AI can present: determining how your organization will assign accountability for decisions made by machines, account for their impact, and address any issues that arise from AI-generated decisions or actions. After all, only humans can be held responsible for business decisions based on AI, not the machines that provided the assistance.

The BEPS guidance on AI accountability is a quick read and outlines four PRSA Code of Ethics provisions that come into play: Disclosure, Free Flow of Information, Conflicts of Interest and Enhancing the Profession. I encourage you to look it over. As a preview, PRSA BEPS recommends that:

  • Public relations professionals provide counsel to senior leadership on the basis that AI or machine learning processes cannot be held accountable for the decisions they make, and that, ultimately, the human beings charged with active oversight of operations at the time own the situation and the actions that must be taken.

  • When developing communications programs, content and responses to situations involving AI decision-making, clear lines of human responsibility should be drawn and presented forthrightly.

Also, if you missed PRSA’s national webinars (free for members) on AI, ChatGPT, and ethics, bias and diversity considerations, be sure to check the Professional Development tab on PRSA.org. You can learn and earn APR maintenance credits.


And as always, I urge you to LEARN THE CODE and LIVE THE CODE. Just scan the QR code below to find the Code of Ethics and more. 


If you have PR ethics questions, challenges, case studies or experts we can feature in our programming, please reach out to me at holly.ingram@perfettivanmelle.com.