Corporate Affairs Webinar – AI and ChatGPT

Artificial intelligence, though not a new technology, has leapt into mainstream discussion and consciousness with the release of new generative AI programs, such as ChatGPT, which launched in November last year.

AI is the fastest-growing tech sector in the world, and PwC estimates it will add $15.7 trillion to the global economy by 2030. As of January this year, ChatGPT had 100 million monthly users, making it the fastest-growing consumer app in history. The technology has gained such momentum that experts estimate there have been five years’ worth of advancements in the past 12 months.

In fact, the technology is evolving so rapidly that what we discuss today may not be relevant within a week!

In Rowland’s Corporate Affairs Webinar, we asked our expert panel to unpack and explore the potential impacts, opportunities and risks that ChatGPT, and new AI technology in general, can have for their respective sectors.

Our panellists discussed these considerations and answered two pressing questions:

  • How is AI impacting your profession and the sectors you are working in? Are you already embracing this type of technology?
  • From each of your perspectives, what are the threats, opportunities, benefits, and limitations of ChatGPT and other AI tools? What is the potential for AI to change the future of how we work?

Panel:

  • Professor Fahim Khondaker, Partner, Data Analytics and Insights at BDO and Professor of Practice, UNSW
  • Professor Amisha Mehta, School of Advertising, Marketing and Public Relations, QUT Business School
  • Michael Rayner AM, Director, Blight Rayner
  • Shane Rodgers, Head of Media and Platforms & Executive Strategy Adviser, Rowland

Watch or listen to the full webinar here:

Don’t have time to watch? Read our summary here:

Professor Fahim Khondaker:

  • The phenomenon of AI and analytics is not a new concept. For many years we have been using AI for predictive forecasting (modelling) and natural language processing (like the chatbot that appears when you open a website). However, the extent to which people can now apply AI to their everyday lives is new. We haven’t before seen data-processing capabilities like those of ChatGPT, which has been in development for many years.
  • I don’t think ChatGPT and AI are something to fear. Right now, people are enjoying experimenting with the new program and learning how it may be applied to their work, but they remain cautious about its use and the implications for clients.
  • It’s all about getting the balance right and approaching the program with a sense of realism, welcoming new and exciting opportunities while understanding the limitations. We all have a duty to inform ourselves about what ChatGPT is and what it isn’t. ChatGPT predicts what is most likely to come next and, within seconds, draws upon existing information on the internet to give you what you need. Its algorithm can also give you the second most likely answer when it needs to. However, with that come the limitations of inaccurate sources on the internet, and it’s important to evaluate that risk.
  • The tool is a great starting point, and evaluating its output will be a task we all have to do. Essentially, using ChatGPT is no different from Googling a topic prior to a stakeholder consultation. The process of using information and reviewing it to suit our clients’ bespoke needs hasn’t actually changed.
  • As with every new advancement in technology, the transition is going to cause some collateral damage and people may fall through the cracks temporarily. It’s a time for bipartisanship when it comes to AI.

Professor Amisha Mehta:

  • At QUT we are embracing ChatGPT and the university takes the view that ChatGPT is part of the environment — we will not be banning its use like other universities have chosen to do. Of course, unit coordinators can decide to what extent they would like their students to use the platform. Personally, I take the view that students can use it, but there are limitations of course.
  • For education providers, it has already become obvious that ChatGPT has its limitations. It can’t gain access through academic journal paywalls and, though you can ask it to provide references for its information, it tends to make these references up. Students are also asked to provide an evidence base, which ChatGPT cannot do.
  • Particularly for research work, it’s all about providing that evidence base to help organisations and industry make informed decisions, so I don’t see ChatGPT replacing that, at least for now.
  • I believe the core issue, though, is the removal of the grunt work. We are moving into a world where the grunt work is being done for us and we don’t get to learn from it, so it will be interesting to see the impact of that.
  • There will always be a need for human input. When helping communications and marketing students understand what makes a good key message or a good holding statement, human input is needed to make it sound authentic. Humans understand the different nuances and the audiences, and can use this understanding of the external climate to create messages in a way AI cannot.
  • A big challenge, which we are all aware of, is how education facilities will assess exams and assignments moving forward.
  • Opportunities are also evident, particularly for businesses that take on this new cohort of AI-using students. For example, the better the query you pose to ChatGPT, the better the answer you get, which means we may become better at shaping the questions we ask, or at asking the right kinds of questions of clients, who in turn can give us better answers.
  • Overall, ChatGPT and similar AI technologies are great enablers that help support us. However, we are not ‘servants’ to the technology, and it is still important for us to understand how to communicate as humans. There is still the innate need to be judicious about the types of information that inform our decisions. I don’t think ChatGPT changes that; we’ve always had to do it.

Michael Rayner:

  • Like everyone, I’ve been curious about ChatGPT and its potential, and have spent some time playing around in the program. I asked the program for information about my own company, Blight Rayner, and it got the information wrong. I was listed as female. So, I know this technology has its limits and that it doesn’t know everything.
  • That being said, ChatGPT and other AI technology do create potential. We have a club at Blight Rayner that is investigating these tools and how they work. They’re a lot of fun to use and, if anything, they’ve become a big morale booster for our people, who enjoy playing around with the technology.
  • I see opportunities for using AI in the architectural industry. It can take on a lot of the usually time-consuming ‘grunt work’ that comes with the technical work we do, such as working through building codes. It gives us more time to focus on the creative elements of the business.
  • However, AI’s use also carries risks, particularly for graphic designers and product designers. There’s also the potential for the technology to form a bias, as the most popular view will rise to the top of its algorithm, not necessarily the most accurate one.
  • It’s important to note that AI can’t create true originality; it relies on the answers that are already out there and can’t understand what will happen in the future.
  • The ethics and guidelines on how we use the AI technology will also need to be a point of discussion for businesses.
  • Though AI has its clear limits, it is not going away. Businesses need to continue to trial the AI resources they do have, so that we may understand and embrace a new future with these technologies.

Shane Rodgers:

  • The buzz around AI and ChatGPT has been deafening in the news. Companies and individuals alike have been experimenting with the program and determining where to draw the boundaries around the risks associated with these technologies.
  • I think ChatGPT is a great tool for us to use to improve our day-to-day business functioning, but the more we experiment with it, the more we see the need to bring the boundaries in and the more we see how it can sometimes be wrong. The tool is only as good as the information it accesses, and we know that articles, blogs and other online content can easily be published without ever being curated.
  • However, the technology is great at translating and summarising topics, and can be a great add-on for organisations to make existing processes easier (i.e. it removes the need to spend hours Googling a topic when AI can collect the information in seconds). So, it’s an addition to, rather than a replacement of, our usual business processes.
  • As mentioned by others, limitations include its inability to form original ideas and the potential inaccuracy of its data, so implementing it in our daily work while understanding its limitations will be the core focus moving forward.
  • There is also the obvious threat the technology poses to our existing workforce. We don’t want to lose jobs in the long term, but we have seen throughout history that each advancement in technology opens new doors and creates new kinds of jobs. The human race is good at finding new things to do, but it is the transition into this new AI era that will be challenging, as people fall through the cracks.
  • We need to embrace it. We should be alert but not alarmed. We tend to overestimate how quickly these things happen. It will come at a pace that the human race can handle. We are moving into the AI and automation age and that will have its positives and negatives.
  • I’m excited about a new age of enlightenment in which humans are free to move around and change.

Summary of questions from audience members:

Q: Being in teaching, I see the advantages of AI/ChatGPT, but at the same time I am very concerned about the future and can see major difficulties in assessment and verification of student work. Possibly a return to hand-written exams?

A: It is certainly a challenge for educational facilities, and we have to look ahead at how we can overcome it. We assess students’ ability to critically analyse the problem and the process and to reflect on the impacts for stakeholders; AI doesn’t have that capability yet. We also know that people consume information differently now (i.e. less reading and more video), which is causing our brains to shift. At universities, we’re learning from those changes and working to build that into our assessments. In addition, at QUT, if students are using it, they must reference it.

Q: I read recently that “Teams” will soon be launching more AI in the platform that will listen to meetings and create the action list and minutes. Whilst I think this will be useful, does anyone else have concerns about Teams listening in to conversations?

A: Yes, this is a risk, but there are laws in place that require them to make clear exactly how they plan to use the data. I imagine we will have to “enable listening”. It will be important to be informed of the usage policies and terms and conditions that we sign up to when we buy and use apps.

Q: Within the PR/comms/media industry, the skill is in finding a unique angle or crafting comms that are interesting (read: different to what else is out there). If ChatGPT delivers output based on what it sees as the most commonly occurring source data, how do you see its use in drafting press releases, etc?

A: Good communication is about understanding the angles and the audience that will make people most likely to read the content. ChatGPT can inform that decision, but it is a long way from removing the need for human insight. When it comes to flair and style, there’s a flatness about AI.

Q: In terms of content creation – if fast, mediocre content is now the new normal — created at speed by AI and accessible by all organisations for free — will this raise the bar in terms of what expert service providers are expected to deliver?

A: I do think there is a threat that AI could replace the segment of people who do the more technical work. However, you’re not really creative if you’re relying on AI to be creative for you. It’s also important to note that professional services firms offer very different services. So, it depends on the buyer of the service and on what you’re trying to present as your point of difference.

Q: How do you see the use of ChatGPT/AI evolving in real-world situations in regards to the broader advertising and marketing industry? Do you see marketers leveraging ChatGPT in generating ideas or writing briefs etc?

A: It can certainly give you ideas for content, but it doesn’t answer the important question of ‘why’ in your broader marketing strategy or what you’re going to do if something happens in the midst of your strategy. AI is missing that link at the moment.

The other piece is how transparent we are in communicating whether or not we are using it. There are going to be a lot of ethical and intellectual property discussions.

The nature of creativity is also very much a human skill, and creating something that doesn’t exist is not yet possible for AI. That being said, an AI platform won a photo competition with an image that was essentially a synthesis of other people’s photos, so that’s the conversation we need to be having around intellectual property.

Q: You’ve been speaking about ChatGPT as a chatbot tool, but what does the panel think about the opportunities of its API model being integrated into other program workflows?

A: Those integrations are where NLP models will actually be used. The barrier of logging in to openai.com is much higher than having the model embedded in the apps and products we already use daily (e.g. Microsoft Word, email).
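By way of illustration only (this was not demonstrated in the webinar), a minimal sketch of that kind of integration might look like the following. It assumes the openai Python package is installed and an OPENAI_API_KEY environment variable is set; the model name, prompt and function name are illustrative placeholders.

    # Minimal sketch: embedding the ChatGPT API in an everyday workflow,
    # here turning raw meeting notes into an action list.
    # Assumes the `openai` package and an OPENAI_API_KEY environment variable;
    # the model name and prompt are illustrative, not recommendations.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarise_meeting(notes: str) -> str:
        """Ask the model for a short action list drawn from raw meeting notes."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Summarise these meeting notes as a numbered action list."},
                {"role": "user", "content": notes},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(summarise_meeting("Discussed webinar follow-up; slides to be shared; next session in May."))

The point of the sketch is simply that the same model behind the chat interface can sit invisibly inside tools people already use, which is why the panel sees embedded integrations as the more likely path than everyone logging in to a separate website.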

Q: If AI is going to take over the dirty, dull, dangerous and difficult jobs do you think that might create a human underclass?

A: Social inequity is something we need to consider at all times, and for technologies like this it’s even more important given the speed at which inequities may arise. There is a lot of work ahead for policymakers, and for us as humans.