
AI Under Fire: Deepfake Scams, New Privacy Rules, and the Tools Fighting Back

How deepfakes are changing cybercrime, what NIST’s new privacy framework means for you, and the latest AI security tools.

Good morning. It's Monday, April 21, 2025.

On this day in AI history: April 21, 2011 marked the first public demonstration of IBM Watson's capabilities beyond its Jeopardy! win, showcasing how AI could be applied to healthcare decision support.

In today's email:

  • Deepfake Threats: Why Executives Must Be Proactive

  • NIST Updates Privacy Framework with AI Focus

  • Top AI Privacy Protection Tools for 2025

  • AI Security Research Partnerships

  • Quick Hits: Latest AI Security News

You read. We listen. Let us know what you think by replying to this email.

Today's trending AI security news stories

DEEPFAKE THREATS: WHY EXECUTIVES MUST BE PROACTIVE

AI-driven cyber threats are growing rapidly, and deepfake technology has emerged as a weaponized fraud tool that lets criminals replicate an executive's voice, image, and mannerisms with near-perfect accuracy.

The Alarming Rise of Deepfake Fraud
According to a report from Forbes, deepfake fraud cases surged tenfold globally from 2022 to 2023. In early 2024, a Hong Kong employee was tricked into wiring millions of dollars after scammers used deepfake videos to impersonate senior company representatives.

AI-Powered Data Harvesting
AI-powered bots can compile an executive's entire digital footprint in seconds, gathering everything from home addresses and phone numbers to social media activity. This data can fuel targeted phishing attacks, identity fraud, and doxxing threats.

Financial Impact
Deloitte's Center for Financial Services predicts that generative AI could drive U.S. fraud losses from $12.3 billion in 2023 to $40 billion in 2027, underscoring the urgent need for proactive security measures.

Security Professionals' Concerns
According to a report from Team8, 56% of CISOs now consider deepfake-enhanced fraud the biggest threat to their organizations, signaling a shift in security priorities.

"As AI becomes more sophisticated, so do scams, fraud and other schemes perpetrated by bad actors. Now, AI-driven threats go far beyond reputation and into a direct security risk for executives and high-profile individuals."

— Chad Angle, Head of ReputationDefender at Gen Digital, April 16, 2025

NIST UPDATES PRIVACY FRAMEWORK WITH AI FOCUS

The National Institute of Standards and Technology (NIST) has released a draft update to its Privacy Framework, designed to address current privacy risk management needs and maintain alignment with NIST's recently updated Cybersecurity Framework.

Key Framework Updates
The draft update, titled "NIST Privacy Framework 1.1 Initial Public Draft," includes targeted revisions to its core structure and content, with a focus on the Govern Function (risk management strategy and policies) and the Protect Function (privacy and cybersecurity safeguards).

New AI and Privacy Risk Management Section
A significant addition is a new section on AI and privacy risk management: Section 1.2.2 of the draft Privacy Framework (PFW) briefly outlines how AI and privacy risks relate to one another and how PFW 1.1 can be used to manage AI privacy risks.

Web-Based Use Guidelines
Guidance on using the PFW, formerly housed in Section 3, is now published on the web as an interactive FAQ page designed to help users find answers quickly.

Timeline for Implementation
NIST is accepting public comments on the draft via [email protected] until June 13, 2025. Following the comment period, NIST will consider additional changes and release a final version later this calendar year.

"This is a modest but significant update. The PFW can be used on its own to manage privacy risks, but we have also maintained its compatibility with CSF 2.0 so that organizations can use them together to manage the full spectrum of privacy and cybersecurity risks."

— Julie Chua, Director of NIST's Applied Cybersecurity Division, April 14, 2025

5 NEW AI-POWERED TOOLS FOR PRIVACY PROTECTION

Protecto
Protecto leads the market with AI-driven privacy protection tailored for LLMs and AI applications. Its context-aware tokenization preserves data utility in AI models while maintaining compliance with GDPR, HIPAA, and CCPA. protecto.ai

Granica AI
Granica AI offers real-time sensitive-data discovery, classification, and masking for both data lakes and LLM prompts. Its ML-powered scanning algorithms deliver high accuracy while minimizing compute costs. granica.ai

Nightfall AI
This AI-powered DLP tool scans in real time for sensitive-data leaks across SaaS applications, email, and cloud storage. Its machine-learning detection identifies PII, PHI, and financial data with high accuracy. nightfall.ai

Securiti AI
Securiti AI delivers automated data privacy management, AI-powered risk assessment, and zero-trust access controls for secure data sharing. Its comprehensive suite helps organizations identify and mitigate data vulnerabilities. securiti.ai

Private AI
Private AI specializes in context-aware redaction of unstructured data, including images and voice transcripts. Its low-latency API integration enables real-time data protection for privacy-preserving AI applications. private-ai.com
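To make the core idea behind these tools concrete, here is a minimal, hypothetical sketch of PII masking for LLM prompts. It uses simple regular expressions rather than the ML-based, context-aware detection the vendors above provide, and none of their actual APIs are shown; it only illustrates the detect-tokenize-restore pattern such products automate.

```python
import re

# Simplified patterns; commercial tools use ML-based detection with far
# broader coverage (names, addresses, PHI, financial data).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with placeholder tokens before a prompt is
    sent to an LLM; return the masked text plus a token->value map so
    the model's response can be re-identified locally."""
    vault: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        def _sub(match: re.Match, label: str = label) -> str:
            token = f"[{label}_{len(vault) + 1}]"
            vault[token] = match.group(0)  # keep the original value local
            return token
        text = pattern.sub(_sub, text)
    return text, vault

masked, vault = mask_pii("Contact Jane at jane@example.com or 555-867-5309.")
# masked: "Contact Jane at [EMAIL_1] or [PHONE_2]."
```

The key design point is that the sensitive values never leave the caller's environment: only the placeholder tokens reach the model, and the vault maps them back afterward.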

COMPLIQ AND PURDUE UNIVERSITY PARTNER TO ADVANCE AI SECURITY RESEARCH

Collaborative Digital Innovations (CDI), the company behind COMPLiQ, has entered into a research partnership with Purdue University's Center for Education and Research in Information Assurance and Security (CERIAS) to drive innovation in AI security, governance, and compliance.

Combining Industry and Academic Expertise
The partnership combines CDI's expertise in AI assurance, threat intelligence, and compliance automation with CERIAS's world-class research in cyber and cyber-physical security, privacy, autonomy, and trustworthy AI.

Focus on Regulated Industries
Together, the organizations will develop solutions that enhance AI observability, improve threat detection, and ensure compliance with stringent regulatory requirements across financial services, healthcare, government, and other critical sectors.

Addressing Adversarial Threats
This collaboration aims to reinforce protections against adversarial threats in regulated industries, where AI systems face increasingly sophisticated attacks designed to manipulate their outputs or extract sensitive information.

Research Outcomes
Expected outcomes include new methodologies for AI security testing, enhanced governance frameworks, and practical tools that organizations can implement to secure their AI systems while maintaining regulatory compliance.

"As AI becomes more deeply integrated into critical infrastructure and sensitive operations, the need for robust security measures becomes paramount. This partnership represents a significant step forward in developing practical solutions to these complex challenges."

— CERIAS Research Director, April 8, 2025

Quick Hits

  • NIST's updated Privacy Framework now includes a dedicated section on AI privacy risk management, addressing the growing use of AI tools such as chatbots nist.gov

  • The same AI technologies that help identify cyber threats are also capable of mass surveillance and intrusive data collection, according to Forbes forbes.com

  • AI-generated code is opening up a host of vulnerabilities, with security continually lagging behind software development forbes.com

  • The International AI Safety Report 2025 highlights privacy risks in general-purpose AI, including training data leaks, real-time exposure, and AI-enabled cyber threats private-ai.com

  • Starting in May 2025, new and enriched AI detections for OWASP-identified risks, such as indirect prompt injection attacks, will be generally available in Microsoft Security Copilot microsoft.com

  • The first International Workshop on Artificial Intelligence Security and Privacy (AI Security & Privacy 2025) will be held May 26-27, 2025, in Osaka, Japan sites.google.com

  • Darktrace leads the list of best AI security tools for 2025, utilizing machine learning and AI algorithms to detect and respond to cyber threats in real-time wbcomdesigns.com

  • Watch: "The Future of AI Security: Balancing Innovation and Protection" - A comprehensive overview of emerging AI security challenges and solutions aiconference.org

  • The 2025 edition of the Forbes AI 50 list shows how companies are using agents and reasoning models to start replacing work previously done by people at scale forbes.com

  • Governments are enforcing AI transparency laws, stricter data protection regulations, and AI governance frameworks to ensure ethical AI use and prevent misuse of AI-driven technologies cyberproof.com

  • 📄 "Securing Large Language Models: A Comprehensive Framework for Enterprise Deployment" github.com/tsmotlp/AI-Security-Research

🤖 AI Meme of the Day

Need a quick laugh? Here’s a hilarious AI-themed meme GIF to lighten your day:

Gif by EcrookedletterZ on Giphy

Thank you for reading today's edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become a Salted Hash sponsor, reply to this email!