Foodman CPAs and Advisors

On 11/13/24, FinCEN issued an alert to help financial institutions recognize fraudulent schemes involving deepfake media created with generative artificial intelligence (GenAI) tools. The alert outlines the typologies associated with these schemes, offers red flag indicators to aid in detecting and reporting suspicious activity, and reminds financial institutions of their obligations under the Bank Secrecy Act. FinCEN has noted an increase in suspicious activity reports from financial institutions describing the suspected use of deepfake media, including fraudulent identity documents designed to bypass established identity verification and authentication processes. The misuse of GenAI tools is a contributing factor to the rise in cybercrime and fraud, both of which are central to FinCEN's Anti-Money Laundering and Countering the Financing of Terrorism National Priorities. The alert is part of the U.S. Department of the Treasury's broader effort to provide financial institutions with insight into the potential benefits and risks of artificial intelligence. Accordingly, financial institutions are encouraged to collaborate with corporate governance professionals who are experts in fraudulent GenAI tools.

Deepfake Media and Publicly Available GenAI Tools

The FinCEN alert states that the arrival of Generative AI (GenAI) tools has significantly reduced the resources needed to create high-quality synthetic content: media that is either entirely generated through digital or artificial processes, or media that has been altered or manipulated using analog or digital technologies. In many instances, GenAI can now produce synthetic content that is indistinguishable from original, human-generated material. Highly realistic GenAI-generated content is often termed “deepfake” content. Deepfakes can fabricate seemingly authentic events, such as an individual appearing to do or say things they never actually did.

FinCEN’s analysis of BSA data indicates that criminals have used GenAI to:

  • create falsified documents, photographs, and videos to circumvent financial institutions’ customer identification, verification, and customer due diligence controls;
  • alter or generate images used for identification documents, such as driver’s licenses or passport cards and books;
  • combine GenAI images with stolen or entirely fake personally identifiable information to create synthetic identities;
  • open accounts using fraudulent identities suspected to have been produced with GenAI, then use those accounts to receive and launder the proceeds of other fraud schemes, including online scams, check fraud, credit card fraud, authorized push payment fraud, loan fraud, and unemployment fraud; and
  • open fraudulent accounts using GenAI-created identity documents and use them as funnel accounts.

Financial institutions frequently identify GenAI and synthetic content in identity documents by thoroughly re-examining the documentation submitted during the account opening process. That said, FinCEN states that three indicators warrant additional scrutiny:

  • Inconsistencies among multiple identity documents submitted by the customer;
  • A customer’s inability to satisfactorily authenticate their identity, source of income, or another aspect of their profile; and
  • Inconsistencies between the identity document and other aspects of the customer’s profile.

Financial institutions have implemented enhanced due diligence measures to identify deepfake identity documents beyond the initial account opening process. While the following indicators do not definitively indicate suspicious activity, they may prompt further investigation: 

  • Accessing an account from an IP address that does not align with the customer’s established profile.
  • Observable patterns of coordinated actions among multiple similar accounts.
  • Significant payment volumes directed towards potentially high-risk recipients, such as online gambling platforms or digital asset exchanges.
  • A high frequency of chargebacks or declined transactions.
  • Rapid transaction patterns from newly established accounts or those with minimal transaction history.
  • Immediate fund withdrawals following deposits, particularly through methods that complicate reversals in suspected fraud cases, such as international bank transfers or payments to offshore gambling sites and digital asset exchanges.
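The transaction-based indicators above lend themselves to simple monitoring rules. The sketch below illustrates how such checks might be encoded; all field names and thresholds are illustrative assumptions of this article, not values prescribed by FinCEN or the alert:

```python
from dataclasses import dataclass

# Hypothetical account activity snapshot; these fields and cutoffs are
# assumptions for illustration only, not part of the FinCEN alert.
@dataclass
class AccountActivity:
    account_age_days: int
    txn_count_last_24h: int
    chargeback_rate: float          # chargebacks + declines / total transactions
    high_risk_payee_volume: float   # payments to e.g. gambling sites, digital asset exchanges
    total_volume: float

def due_diligence_flags(a: AccountActivity) -> list[str]:
    """Return the enhanced-due-diligence indicators this account trips."""
    flags = []
    # Rapid transaction patterns from a new or low-history account
    if a.account_age_days < 30 and a.txn_count_last_24h > 20:
        flags.append("rapid_transactions_new_account")
    # Significant payment volumes directed toward potentially high-risk recipients
    if a.total_volume > 0 and a.high_risk_payee_volume / a.total_volume > 0.5:
        flags.append("high_risk_payee_volume")
    # High frequency of chargebacks or declined transactions
    if a.chargeback_rate > 0.10:
        flags.append("high_chargeback_rate")
    return flags
```

As the alert notes, none of these indicators definitively establishes suspicious activity; an account that trips one or more would simply be queued for further investigation in context.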

FinCEN has identified nine red flag deepfake media indicators to help financial institutions detect, prevent, and report potential suspicious activity related to the use of GenAI tools for illicit purposes. The following are extracts from the alert:

  1. A customer’s photo is internally inconsistent (e.g., shows visual tells of being altered) or is inconsistent with their other identifying information (e.g., a customer’s date of birth indicates that they are much older or younger than the photo would suggest).
  2. A customer presents multiple identity documents that are inconsistent with each other.
  3. A customer uses a third-party webcam plugin during a live verification check. Alternatively, a customer attempts to change communication methods during a live verification check due to excessive or suspicious technological glitches during remote verification of their identity.
  4. A customer declines to use multifactor authentication to verify their identity.
  5. A reverse-image lookup or open-source search of an identity photo matches an image in an online gallery of GenAI-produced faces.
  6. A customer’s photo or video is flagged by commercial or open-source deepfake detection software.
  7. GenAI-detection software flags the potential use of GenAI text in a customer’s profile or responses to prompts.
  8. A customer’s geographic or device data is inconsistent with the customer’s identity documents.
  9. A newly opened account or an account with little prior transaction history has a pattern of rapid transactions; high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges; or high volumes of chargebacks or rejected payments.
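The nine red flags can be tracked as discrete indicators and combined into a review decision. The sketch below is one hypothetical way to do so; the indicator names and the one-hit threshold are assumptions of this article, since the alert leaves weighting and escalation to each institution's judgment:

```python
# The nine red flags from the alert, as illustrative indicator names.
RED_FLAGS = {
    "photo_internally_inconsistent",        # 1
    "multiple_inconsistent_documents",      # 2
    "webcam_plugin_or_method_change",       # 3
    "declines_multifactor_authentication",  # 4
    "reverse_image_matches_genai_gallery",  # 5
    "flagged_by_deepfake_detection",        # 6
    "genai_text_in_profile",                # 7
    "geo_or_device_data_inconsistent",      # 8
    "risky_transaction_pattern",            # 9
}

def needs_review(observed: set[str], threshold: int = 1) -> bool:
    """Queue a customer for manual review when enough indicators fire.

    No single indicator proves fraud; institutions weigh them in the
    context of the full customer profile before filing a SAR.
    """
    hits = observed & RED_FLAGS
    return len(hits) >= threshold
```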

Know this

Criminals also use deepfake media in social engineering attacks aimed at customers and employees of financial institutions, facilitating scams and fraud such as business email compromise (BEC) schemes, spear phishing, elder financial exploitation, romance scams, and virtual currency investment fraud.

Has your financial institution been a victim of deepfake media fraud?

Does your financial institution have processes in place to prevent or reduce the risk of deepfake identity documents?

Is your financial institution reporting suspicious activity related to the use of GenAI tools for illicit purposes?

Who is your corporate governance advisor? ©