
Learning to live, work, and be safe with ChatGPT and other forms of AI.

How artificial intelligence is likely to affect the online world of crime and why we must act now.

If history has taught us anything, it is that with every innovation and development, whether technological or otherwise, there comes a new criminal endeavour to exploit suddenly created regulatory gaps and legal loopholes. 

ChatGPT, which seemed to appear out of nowhere from the Tech Gods at the latter end of 2022, has been met with unbridled excitement, cautious trepidation, and, most recently in Italy, an outright ban.

In the weeks since the launch of GPT-4, LinkedIn was quickly awash with ways that artificial intelligence could be used to supercharge operations: taking creativity to the next level by outsourcing the grunt work to AI, leaving humans with the important stuff like… erm… strategy, budgets, and making a perfectly crafted coffee every time (although there are tools for those too).

Social media users on less business-focused platforms saw things less optimistically and stopped just short of predicting the imminent End of Days. Many argued that if we rely too heavily on artificial intelligence to create content and take our hands off the wheel for too long, it would weaken the case for wresting power back. Some even questioned whether we would be capable of doing so after ceding a generation of creative control.

The next step in AI?

Despite such emotional public reactions, huge corporations including Apple, Microsoft, Google and HubSpot have all jumped on the ‘AI bandwagon’, or are at least planning to embark very soon, announcing that they will integrate AI technology into their products in the near future.

According to some tech titans and academics, however, humanity itself could be at risk if AI continues to advance too rapidly without adequate guardrails. This could explain why many experts, including Elon Musk and Steve Wozniak, are calling for a six-month ‘time out’ in the AI arms race.

AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

Pause Giant AI Experiments: An Open Letter

They argue that rather than continue to build even more powerful AI systems, all parties should pause work on development, and instead shift focus to making systems more accurate, safe, trustworthy and loyal, while ensuring problematic forms of bias are eradicated. It would also, of course, allow time for policymakers and regulators to roll out governance.

What is the AI Act?

The European Commission is the first major governing body to formally attempt to regulate AI. In its proposed Artificial Intelligence Act, it warns of the need to monitor and “curtail” AI use in social scoring, manipulation, and certain uses of facial recognition (referred to as ‘high-risk’ AI). It also mentions General Purpose AI (GPAI) large language models like ChatGPT, which have applications in a wide variety of use cases. EU lawmakers have thus proposed an amendment to the Act, which would require GPAI providers to comply with the same standards and restrictions as ‘high-risk’ AI.

March 2023 also saw the release of Europol’s ‘ChatGPT: The impact of Large Language Models on Law Enforcement’ report, which detailed how artificial intelligence systems are already being used by criminals and highlighted potential future misuse.

But what exactly have regulators got to fear?

The corruption of innovation.

While there is clearly excitement regarding how ChatGPT can boost productivity and supercharge operations, it is only a matter of time before criminals start using AI for nefarious activities, such as fraud, impersonation and social engineering. In fact, scammers are already utilizing the powerful technology in romance scams and phishing campaigns to churn out avalanches of fake documents (invoices, financial statements, contracts, transaction records) that are often more personalized and accurate than human-made documents (which often contain grammatical and other semantic errors). GPAI models can also be used to reproduce realistic language patterns that replicate the idiosyncrasies of individuals.

According to Europol, one of the other major misuses of ChatGPT, and other artificial intelligence systems, is the potential for criminals with little technical knowledge to engage in cybercrime. This is because, as well as drafting human-like language, AI can quickly produce (and correct) code in different programming languages. With relatively little effort, bad actors can create malware, malicious VBA scripts, and realistic AI chatbots that prime victims for the scammer.

Over the last decade, AI-powered software has achieved significant progress, taking the technology to the next level and offering increasingly robust solutions in many aspects of our daily lives. However, these advancements have also created opportunities for malicious use.

Montaser Awal, Director of Research and Development at IDnow. 

With the rapid improvements in generative AI (a type of artificial intelligence that can create a wide variety of data, such as images, videos, audio, text, and 3D models), it is not inconceivable that we are only a few months away from scammers using AI to create realistic-looking passports and sophisticated official documents in any language, complete with correct letterheads and security features such as holograms, at scale and within a matter of seconds.

Verifying which documents are authentic and, in the case of deepfake technology, which people are real is likely to cause problems for companies without the most rigorous liveness checks and facial recognition technology.

ChatGPT also poses clear concerns over data and privacy, as evidenced in March 2023, when the Italian Data Protection Authority provisionally took the platform offline until it “respected privacy” and fixed a bug that made other users’ chat histories and personally identifiable information (PII) visible. Some major banks, including Goldman Sachs and Deutsche Bank, have also broken ranks and taken steps to limit staff use of ChatGPT and similar platforms until research can be carried out on ways of ensuring “safe and effective usage.”

Controlling who has access to information, and whether that information is a) accurate and b) unbiased, will be a top priority. Of course, this has always been a concern when using the internet, but the danger of unsubstantiated “fake news” is likely to grow exponentially alongside the increasing volume of content. For example, in response to a recent enquiry, ChatGPT claimed that Aung San Suu Kyi had been sanctioned by the European Union. Companies are therefore advised not to rely solely on ChatGPT for Sanctions and Politically Exposed Persons (PEP) checks.
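
To illustrate why, here is a minimal Python sketch of treating an authoritative sanctions list, rather than an LLM’s answer, as the system of record for a screening check. The sample list, the names, and the exact-match rule are hypothetical assumptions for illustration only; a real implementation would query an official, regularly updated source (such as the EU consolidated list) via a dedicated screening platform, typically with fuzzy matching.

```python
# Minimal sketch: never let an LLM's claim be the system of record for a
# sanctions check; verify against an authoritative list instead.
import unicodedata

def normalize(name: str) -> str:
    """Strip accents, case and surplus whitespace so name comparison is stable."""
    decomposed = unicodedata.normalize("NFKD", name)
    ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
    return " ".join(ascii_only.lower().split())

def is_sanctioned(name: str, sanctions_list: list[str]) -> bool:
    """Exact-match screen against the list (real systems add fuzzy matching)."""
    target = normalize(name)
    return any(normalize(entry) == target for entry in sanctions_list)

# Hypothetical export of an official list; it does NOT contain the name below.
official_list = ["Jane Q. Launderer", "ACME Front Company Ltd."]

llm_says_sanctioned = True  # e.g. a chatbot's unverified claim about a person
if llm_says_sanctioned != is_sanctioned("Aung San Suu Kyi", official_list):
    print("LLM claim contradicts the authoritative list; escalate for review.")
```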

Knowledge is power: Fighting fire with fire.

There are clearly dangers when ChatGPT is in the wrong hands. However, in the right hands, AI can also be used for good: to battle digital fraud and other types of financial crime.

Increasing numbers of organizations, including IDnow, are utilizing artificial intelligence to help digitize their Know Your Customer (KYC) processes and prevent fraud. Whether your business is in the Mobility, Financial Services, Gaming, or Crypto sector, if you need to onboard customers and verify their identity, there are major benefits to implementing an AI-enabled identity proofing service:

  • The most important step for any organization that wishes to protect its business from monetary and reputational harm is its KYC process. AI can help automate much of the analysis, including the processing of customer data, personal information, and transaction history.  
  • AI can process huge swathes of information and verify customer identities, quickly, effectively, and securely. By automating tasks, bottlenecks in customer onboarding can be removed, allowing the company to scale at an unprecedented rate.
  • AI can check customers against sanctions and PEP lists (via a dedicated sanctions or PEP platform).
  • Using natural language processing and machine learning algorithms, AI can flag potential fraud in real time. For example, AI can detect suspicious user behavior, such as someone using the same identity across different gambling platform accounts to place simultaneous bets, and alert the platform’s security team (see the sketch after this list).
  • Multinational organizations can focus on serving their international audiences, safe in the knowledge that they are compliant with complex and disparate regulatory requirements.
  • By combining AI with anti-fraud technology, organizations can analyze vast volumes of data, including transactions, locations and times to identify suspicious patterns and activity. 
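
To make the real-time flagging idea above concrete, here is a minimal Python sketch of one such rule: spotting a single verified identity placing bets from different platform accounts within a short window. The field names, the 60-second window, and the sample data are illustrative assumptions rather than a description of any particular vendor’s system; production platforms would combine many such rules with trained models.

```python
# Sketch of a duplicate-identity rule: flag one KYC'd identity betting from
# multiple accounts at (nearly) the same time.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Bet:
    identity_id: str   # the verified (KYC'd) identity behind the account
    account_id: str    # the platform account placing the bet
    timestamp: float   # seconds since epoch

def flag_duplicate_identity_bets(bets: list[Bet], window_s: float = 60.0) -> list[str]:
    """Return identity IDs that bet from multiple accounts within one window."""
    flagged = []
    by_identity: dict[str, list[Bet]] = defaultdict(list)
    for bet in bets:
        by_identity[bet.identity_id].append(bet)
    for identity, events in by_identity.items():
        events.sort(key=lambda b: b.timestamp)
        for a, b in zip(events, events[1:]):
            if a.account_id != b.account_id and b.timestamp - a.timestamp <= window_s:
                flagged.append(identity)  # alert the platform's security team
                break
    return flagged

bets = [
    Bet("id-123", "acct-A", 1_700_000_000.0),
    Bet("id-123", "acct-B", 1_700_000_030.0),  # same identity, 30 s later
]
print(flag_duplicate_identity_bets(bets))  # ['id-123']
```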

The coming years will be crucial for the development, use and regulation of artificial intelligence.

Concerning KYC, the primary danger is identity theft and obfuscation. Specifically, generative models have enabled the creation of relatively user-friendly tools for generating increasingly realistic attacks on both identity documents and biometric characteristics. This same technology, however, is also an important asset that we can use to fight efficiently against different kinds of fraud.

Montaser Awal, Director of Research and Development at IDnow.

Dystopian days. Utopian bytes.

The threat posed by artificial intelligence is likely to be less Terminator, more WALL-E, so we shouldn’t necessarily be worrying about losing our clothes, boots and motorcycles, but rather our creativity, autonomy and will to work.

Clearly, there is no putting the ChatGPT genie back in the bottle now, even if we wanted to. We must learn to live with it, and learn from it, as it learns from us. Only then will we realize its [and our] true potential.

By


Jody Houton
Senior Content Manager at IDnow
Connect with Jody on LinkedIn
