CHATGPT: The End Of The Recruitment Marketing World As We Know It?
3 min read
Jo Perrotta : 07-Jun-2023 10:06:37
The walls are closing in on ChatGPT. Or was that Wall Street?
Less than six months after ChatGPT entered the scene and blew our primitive minds with its capabilities, it’s already finding itself on the banned list.
Back in February, Bloomberg reported that Wall Street banks, including Bank of America, Citigroup, Deutsche Bank, Goldman Sachs and Wells Fargo, had banned the tool, declaring that it needed to be adequately vetted before use.
At the time, the news didn’t stir up much of a reaction. It’s standard practice for banking institutions to come down hard on unauthorised third-party software, with apps like WhatsApp vetoed for the same reason.
However, ChatGPT is facing fresh scrutiny as the Italian Data Protection Authority banned the chatbot in the country following concerns over a data breach and the use of personal data.
The watchdog stated it was imposing an “immediate temporary limitation on the processing of Italian users’ data” by the owner of ChatGPT, OpenAI. While the San Francisco company did disable ChatGPT in Italy, a spokesperson said: “We are committed to protecting people’s privacy, and we believe we comply with GDPR and other privacy laws.”
We’ve previously discussed ChatGPT’s use in marketing and whether it could replace humans. So far, much of the rhetoric around the software has centred on the eventual role of us lowly humans should it become ingrained in our everyday lives. However, it appears that we’ve gotten ahead of ourselves.
Forget the rise of the machines. Forget panicking that we’ll be out of a job in favour of a bot. Let’s even park the fact that students could use it to cheat on their assignments (several universities, including Oxford, Cambridge, Manchester, Bristol and Edinburgh, recently announced bans on ChatGPT).
We should actually be paying attention to the serious issue of data. The Italian watchdog referred to a data breach suffered by OpenAI in March, which exposed some users’ conversations and personal details, including email addresses and the last four digits of their credit cards.
The Italian ban came just days after more than 1,000 AI experts, tech entrepreneurs, researchers, backers and scientists, including Elon Musk, who co-founded OpenAI, signed an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” so that any risks they may pose can be properly studied.
One of the concerns expressed in the letter is that OpenAI, Microsoft and Google have entered a tech ‘space race’ in their quest to develop and release new AI models as quickly as possible. It’s thought that the speed of such developments will mean society and, crucially, the regulators could fail to keep up.
The sprint to release products is a genuine concern. Even if a company like OpenAI agrees to a six-month circuit breaker to explore the risks of ChatGPT further, what about the others? We’ll likely see a scramble to steal market position, which could result in the launch of potentially less stable bots that collect and use our data without permission.
Data laws have been slow to implement (as any laws are), with countries operating under different rules. How will laws govern AI software? Will it fall to individual governments or watchdogs to request that each AI provider turns off the tap when asked? If so, how quickly can we assemble such groups and departments if they’re not in place already?
And don’t get me started on the cybercriminals who will undoubtedly find ways to use and abuse the software. After all, we already know a significant breach occurred that OpenAI has been forced to apologise for. Banks are still struggling to get a handle on cybercrime and fraud, with scams getting more and more sophisticated. What’s next?
As someone who’s worked within the recruitment sector for two decades, I’d usually be excited at the prospect of a new job sector emerging. But given that we’re only just getting up to speed with cybercrime, and that there remains a severe skills shortage in this field, I’m not thrilled by the prospect of a sudden and immediate demand for new specialists.
As I’ve previously said, AI software like ChatGPT will provide significant value across a multitude of sectors, including mine. For recruitment marketers, it can automate routine tasks such as answering frequently asked questions and scheduling interviews in a way that’s timely and personalised to the individual. It can also deliver a plethora of data on candidate interactions, offering insights into their preferences and behaviour, which helps marketers optimise recruitment strategies and more effectively target specific groups.
All valuable stuff, but recent events and developments around ChatGPT reinforce the need for a break. We simply don’t know enough about the risks of such powerful AI and how we can control them effectively. Let’s just hope the horse hasn’t already bolted.