
Singapore: MAS sounds the alarm on deepfake dangers for FIs

The MAS has published a new information paper, Cyber Risks Associated with Deepfakes, addressing emerging cyber threats to the financial sector from deepfake technology. The paper builds on MAS's ongoing focus on digital resilience and sets out recommendations for FIs on how to combat deepfake threats. The recommendations set extremely high standards for FIs, showing that this is a high-priority topic for MAS and emphasising the importance of FIs applying the recommendations to their own business models.

Background

The MAS has paid increasingly close attention to cyber risks and has emphasised that FIs should take measures to mitigate them. Most recently, last year MAS published an information paper on cyber risks associated with generative artificial intelligence. The new information paper continues this direction of travel, focusing specifically on the risks posed by deepfakes.

Key recommendations

The information paper focuses on three key impacts of deepfakes on the financial sector, providing real-world examples of those impacts and recommending respective mitigating measures. It is critical that all types of FIs consider whether they can implement these recommendations (if relevant to their business models), given the MAS clearly expects FIs to take deepfake threats seriously. The recommendations are extensive and set high standards for firms, which will need to think carefully about how to implement them in a practical and risk-proportionate manner. Importantly, the suggested mitigating measures are not exhaustive, so FIs will need to consider how deepfake threats, and these recommendations, apply to their own business models.

  1. Compromising biometric authentication: Deepfakes have the potential to defeat biometric systems such as facial and voice recognition, enabling unauthorised account access or fraudulent transactions. MAS gives examples of fraudulent biometric authentication in Asia. To combat this, MAS recommends the following mitigating measures for FIs:
    • conduct comprehensive checks to detect tampering and verify the authenticity of identification documents and digital content during customer onboarding and verification processes; 
    • implement liveness detection techniques in biometric authentication solutions to counter deepfakes; 
    • conduct regular vulnerability assessments and security testing by simulating deepfake attacks to evaluate the resilience of biometric authentication systems; 
    • implement endpoint-level protection to biometric authentication processes to prevent injection attacks;
    • implement strong encryption for data in motion, storage and use to protect biometric information; and
    • implement cancellable biometrics to protect against biometric data compromise and reuse.
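The last measure above, cancellable biometrics, means storing only a revocable transform of a biometric template rather than the raw biometric, so that a compromised template can be "cancelled" and reissued. The sketch below is purely illustrative and not drawn from the MAS paper: it derives a template as an HMAC of a quantised feature vector under a per-user key, and revocation simply means rotating the key. Real deployments use error-tolerant schemes (e.g. fuzzy extractors or biohashing) that accept noisy captures; exact quantisation here is a simplification.

```python
import hashlib
import hmac
import secrets

def enroll(feature_vector, key=None):
    """Derive a revocable template from a biometric feature vector.

    The raw biometric is never stored: only an HMAC of a quantised
    feature vector under a per-user secret key. If the template leaks,
    the key is rotated and a fresh template issued ("cancelled").
    Exact-match quantisation is a simplification; production schemes
    must tolerate capture noise.
    """
    key = key or secrets.token_bytes(32)
    quantised = bytes(int(round(x * 255)) % 256 for x in feature_vector)
    template = hmac.new(key, quantised, hashlib.sha256).hexdigest()
    return key, template

def verify(feature_vector, key, template):
    """Recompute the template from a fresh capture and compare."""
    _, candidate = enroll(feature_vector, key)
    return hmac.compare_digest(candidate, template)
```

Because the stored template depends on the key as well as the biometric, the same face or voice enrolled under a new key yields an unlinkable new template, which is what makes the scheme resistant to biometric data reuse.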
       
  2. Social engineering and impersonation scams: Malicious actors use deepfake-generated audio, video, and messages to impersonate FI employees or customers, bypassing trust-based controls and triggering unauthorised actions. To combat this, FIs should consider the following mitigating measures:
    • conduct staff and customer awareness campaigns on deepfakes and GenAI-enabled phishing; 
    • implement measures to verify the authenticity of senders; 
    • implement endpoint-based deepfake detection tools on employees’ corporate issued communication devices, such as laptops and mobile phones, to identify and provide alerts on manipulated media in real time; and 
    • strengthen processes and controls for high-risk transactions and for FI staff in highly privileged roles through additional verification and separation of duties. 
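Separation of duties for high-risk transactions is commonly implemented as a maker-checker control: the person who prepares an instruction cannot be the one who releases it, so a single deepfake-deceived employee cannot complete the action alone. The sketch below is a hypothetical illustration of that pattern, not a control prescribed by the MAS paper; names and the release rule are assumptions.

```python
class HighRiskTransaction:
    """Illustrative maker-checker control for a high-risk payment.

    The 'maker' who prepares the transaction can never also approve
    it; release requires a second, independent approver. This limits
    the damage a single impersonated or deceived employee can cause.
    """

    def __init__(self, amount, maker):
        self.amount = amount
        self.maker = maker
        self.approver = None

    def approve(self, approver):
        # Core separation-of-duties check: reject self-approval.
        if approver == self.maker:
            raise PermissionError("maker cannot approve own transaction")
        self.approver = approver

    @property
    def released(self):
        return self.approver is not None
```

A deepfaked voice call purporting to come from the maker would still need to defeat a second, independently verified approver before funds move.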
       
  3. Misinformation and disinformation: Synthetic media may manipulate markets, damage reputations, or trigger operational disruptions by circulating convincing but false information. To combat this, FIs should consider the following mitigating measures:
    • implement monitoring tools to detect deepfake-based brand abuse and impersonation attempts targeting the FI across digital channels, including social media, websites, video platforms and news sources; 
    • include deepfake attack scenarios in incident response; and 
    • enhance sector-wide deepfake defence through collaboration and information sharing. 
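Monitoring for brand abuse often starts with detecting lookalike domains that swap characters for visually similar ones (homoglyphs). The sketch below is a minimal, hypothetical example of that idea, assuming an invented brand name "examplebank"; commercial monitoring tools cover far more signals (registration data, page content, social media handles).

```python
import difflib

# Hypothetical monitored brand; real tooling would track many assets.
BRAND = "examplebank"

# Map common lookalike characters back to ASCII before comparing.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

def suspicion_score(domain: str) -> float:
    """Similarity between a domain's first label and the brand name."""
    label = domain.split(".")[0].translate(HOMOGLYPHS)
    return difflib.SequenceMatcher(None, label, BRAND).ratio()

def flag_lookalikes(domains, own_domain="examplebank.com", threshold=0.8):
    """Return domains that closely resemble the brand but are not ours."""
    return [d for d in domains
            if d != own_domain and suspicion_score(d) >= threshold]
```

Flagged domains would then feed the incident response and information-sharing processes noted above, rather than being blocked automatically.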

Next steps

MAS states in the information paper that FIs should assess their specific exposure to deepfake risks and implement appropriate defensive measures accordingly, including monitoring technological advances, implementing robust defences, and regularly updating and testing incident response plans. However, the paper leaves open how firms can adopt proportionate approaches to deepfakes depending on their business model, so firms will need to consider how to apply these extensive recommendations to their own businesses in an appropriate manner.

Tags

asia, fintech, payments