In just a few short years, generative AI has moved from a niche, largely conceptual technology to a near-constant presence in our personal and professional lives. Indeed, Deloitte recently estimated that more than 7 million people across the UK now use some form of AI at work, with ChatGPT the most popular platform. For context, only China has a higher number of regular AI users nationwide.
For many companies, generative AI is now a key component of their customer contact strategies: chatbots are omnipresent on corporate websites, and AI-generated content is streamlining the creation of new blog posts and web pages. At the same time, customers regularly use AI to speed up their own decision-making, collating enormous volumes of online data about the products and solutions on the market and translating it into an easy-to-digest form.
But, as with any new technology, the excitement over AI’s potential benefits can never be allowed to distract from its emerging risks, as a growing number of organisations have already found…
Most of us are now aware of deepfakes – AI-generated audio or video content that appears genuine but is entirely fabricated – and their role in the spread of disinformation. Unfortunately, cyber criminals have already seized upon the technology to make familiar attack vectors, such as phishing, even harder to spot until it’s too late. Consider the following recent cases:
- In late 2024, it was reported that a fake AI video generator, promoted across a number of fraudulent social media accounts, was being utilised to infect corporate infrastructure with the Lumma Stealer and AMOS malware
- In early 2024, a Hong Kong finance worker transferred $25 million to fraudsters who had used a deepfake of the company’s CFO to request the transfer during a video call
- In 2019, the CEO of a British energy provider was tricked into wiring €220,000 to a fraudster who had used AI to imitate the voice of the head of his parent company during a phone call
These are just new spins on familiar criminal strategies, using social engineering to reach companies’ critical data through their employees. But the growing sophistication of generative AI makes such attacks ever harder to identify, even for those well-versed in cyber security best practice. Organisations must therefore take proactive steps to mitigate this risk and avoid the serious financial and reputational costs that inevitably follow a breach.
While cyber best practice around the use of generative AI is still evolving, the following practical steps should be factored into wider cyber security strategies:
Be clear on all compliance obligations
Any potential application of generative AI should be reviewed against all relevant regulations to ensure compliance is maintained. For example, PCI DSS Requirement 3 mandates the robust protection of stored cardholder data, while the GDPR sets out clear standards for how customers’ personal data must be handled. If, for example, a chatbot is incorporated into a website, it should be clear to users that they are not speaking to a real person, with measures in place to protect the integrity and confidentiality of any data shared through such channels, especially if a financial transaction is involved.
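As a minimal illustration of what such a measure might look like in practice, the Python sketch below strips card numbers out of chatbot transcripts before they are logged or stored, in the spirit of PCI DSS Requirement 3. The pattern, function names, and placeholder text are our own assumptions, not a prescribed standard; a Luhn checksum is used so that ordinary long numbers (order IDs, phone numbers) are not masked by mistake:

```python
import re

# Matches 13-19 digit card-number candidates, allowing spaces or dashes.
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:           # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def redact_pans(message: str) -> str:
    """Mask any Luhn-valid card number before the message is logged or stored."""
    def _mask(match: re.Match) -> str:
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            return "[PAN REDACTED]"
        return match.group()     # not a card number; leave untouched
    return PAN_PATTERN.sub(_mask, message)

# Example: sanitise a chatbot transcript before it reaches persistent storage.
print(redact_pans("My card is 4111 1111 1111 1111, can you check my order?"))
# -> "My card is [PAN REDACTED], can you check my order?"
```

The same gate can sit in front of any downstream system – analytics, model fine-tuning pipelines, support ticketing – so that sensitive data never leaves the channel in the first place.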
Implement ongoing training for all staff, and ensure it is regularly updated
Good cyber security is just as much about people as it is about technology.
As the cases cited above demonstrate, no matter how far the technology evolves, simple human error remains the leading cause of data breaches. Regular training for employees is therefore essential, ensuring they understand their individual role in maintaining the organisation’s cyber posture, both in and out of the office. That training should be reviewed regularly against the latest threat intelligence, so all staff can identify and act on the new breed of AI-powered threats – particularly when it comes to spotting deepfakes – and never inadvertently hand over backdoor access to critical data.
A human should always have oversight of any AI implementation
While it is extremely difficult to predict how generative AI will evolve in the years ahead, for both positive and negative applications, organisations at every level would benefit from assigning a dedicated individual to oversee the technology’s potential implementations, with the final say on whether security, privacy, and ethical standards and regulations have been fully met before any new roll-out.
The same approach should be taken to any potential AI-based automation, especially where financial data is involved. By implementing multiple levels of approval, with a human ultimately having the final say, the right balance can be struck between efficiency and security (see the sketch below). As for blog posts or web copy created through generative AI, these should be subject to multiple rounds of review by designated subject matter experts before going live, both to maintain security and compliance and to ensure technical accuracy.
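To make the layered-approval idea concrete, here is a minimal Python sketch. The thresholds, role names, and field names are illustrative assumptions to be replaced by your own policy; the point is simply that an AI-initiated payment cannot execute until the required number of humans has signed off:

```python
from dataclasses import dataclass, field

# Illustrative thresholds - set these according to your own risk appetite.
AUTO_APPROVE_LIMIT = 1_000      # below this, AI output may proceed unreviewed
SENIOR_REVIEW_LIMIT = 25_000    # at or above this, a second, senior sign-off is required

@dataclass
class TransferRequest:
    amount: float
    initiated_by_ai: bool
    approvals: list = field(default_factory=list)  # names of human approvers

def required_approvals(request: TransferRequest) -> int:
    """Decide how many human sign-offs an AI-initiated transfer needs."""
    if not request.initiated_by_ai or request.amount < AUTO_APPROVE_LIMIT:
        return 0
    if request.amount < SENIOR_REVIEW_LIMIT:
        return 1
    return 2  # e.g. line manager plus finance director

def may_execute(request: TransferRequest) -> bool:
    """A transfer only executes once every required human has signed off."""
    return len(request.approvals) >= required_approvals(request)

# Example: an AI-drafted £30,000 payment is blocked until two humans approve.
req = TransferRequest(amount=30_000, initiated_by_ai=True)
assert not may_execute(req)
req.approvals += ["line.manager", "finance.director"]
assert may_execute(req)
```

Had a control of this kind sat in front of the transfers in the Hong Kong and UK cases above, a convincing deepfake alone would not have been enough: at least one further human, through a separate channel, would still have had to approve the payment.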
Ultimately, for all its potential benefits, generative AI should be treated no differently from any other emerging technology: every implementation should deliver a smoother, more secure customer experience and help employees do their best work, with zero compromise on security and compliance.
If you’re ready to explore generative AI’s potential applications further, but are keen to maintain a world-class cyber security posture across your organisation, get in touch. Our experts will work closely with you to ensure your AI journey is a successful one, and that you can offer your customers complete peace of mind that their critical data will always remain secure.