
🔍 AI Security Weekly — Feb 17, 2025

--

Each week, I share key insights on how AI is being applied in cybersecurity. Join the AI Security group on LinkedIn and follow @AISecHub on X for real-time updates.

⚫ We Need to Integrate and Unify for AI Security

Sven Cattell makes an interesting case for adopting a CWE-like system for AI security, featuring transparent disclosures (similar to CVE processes) to balance innovation and reliability in AI models. https://blog.nbhd.ai/disclosure

🛑 Malicious ML Models Discovered on Hugging Face

ReversingLabs researchers identified nullifAI, a malware injection technique targeting Hugging Face. The attack exploits a Pickle file serialization vulnerability to distribute malicious code while bypassing the platform's existing security scans. Two malicious models were detected, each embedding a Python reverse shell payload that connects to a hardcoded IP address. Hugging Face has since removed the models, but this is not the first time the platform has been targeted this way, underscoring its ongoing risk exposure. https://www.reversinglabs.com/blog/rl-identifies-malware-ml-model-hosted-on-hugging-face
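
For context, the root issue is that Python's pickle format can execute arbitrary code at load time via an object's `__reduce__` hook, which is how payloads like these are smuggled into serialized models. A minimal, harmless sketch (the `echo` command is a stand-in; real payloads open reverse shells):

```python
import pickle

class Payload:
    # pickle calls __reduce__ during serialization; it returns a callable
    # plus arguments, which pickle invokes at DESERIALIZATION time.
    def __reduce__(self):
        import os
        return (os.system, ("echo code ran at pickle.load time",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # merely loading the bytes executes the command
```

This is why scanners flag Pickle-based model files and why safer weight formats such as safetensors are widely recommended.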

🟢 STRIDE GPT: AI-Powered Threat Modeling

Developed by Matthew Adams, STRIDE GPT is an AI-driven threat modeling tool that leverages Large Language Models (LLMs) to generate threat models and attack trees based on the STRIDE methodology. Users provide key application details — such as type, authentication methods, internet exposure, and data sensitivity — allowing the model to generate structured threat assessments tailored to the specific context. 🔗 https://github.com/mrwadams/stride-gpt / https://stridegpt.streamlit.app/
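
As a rough sketch of the general idea (not STRIDE GPT's actual implementation, which supports several model providers and richer inputs), here is the core prompt-and-generate pattern using the OpenAI SDK; the application details and model choice are illustrative assumptions:

```python
from openai import OpenAI  # assumes the openai package and an API key in the environment

client = OpenAI()

# Hypothetical application details of the kind STRIDE GPT collects from the user.
app_details = {
    "type": "web application",
    "authentication": "OAuth 2.0",
    "internet_facing": True,
    "data_sensitivity": "PII and payment data",
}

prompt = (
    "Act as a threat modeler. Using the STRIDE categories (Spoofing, Tampering, "
    "Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege), "
    f"list plausible threats and mitigations for this application: {app_details}"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)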

🟠 Agentic AI Threat Modeling Framework: MAESTRO

MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome) is a new AI threat modeling framework designed for Agentic AI. Developed by Ken Huang, it provides a structured, layer-by-layer approach to identifying, assessing, and mitigating AI security risks across the entire AI lifecycle. Unlike traditional frameworks like STRIDE or PASTA, MAESTRO accounts for adversarial machine learning, data poisoning, and the complexities of autonomous AI interactions. https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro
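
As a rough sketch of what "layer-by-layer" can look like in practice (the layer names and threats below are illustrative, not MAESTRO's official taxonomy; see the CSA post for the framework's actual layers):

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    threats: list[str] = field(default_factory=list)

# Illustrative layered decomposition of an agentic AI stack.
stack = [
    Layer("Foundation model", ["prompt injection", "model extraction"]),
    Layer("Data operations", ["data poisoning", "training-data leakage"]),
    Layer("Agent framework", ["tool misuse", "goal manipulation"]),
    Layer("Deployment & infrastructure", ["credential theft", "supply-chain compromise"]),
    Layer("Agent ecosystem", ["rogue agents", "inter-agent collusion"]),
]

# Walk every layer and enumerate the threats to assess and mitigate.
for layer in stack:
    for threat in layer.threats:
        print(f"[{layer.name}] assess: {threat}")
```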

🟠 6 Essential Security Operations Use Cases for LLMs

This breakdown explores how Large Language Models (LLMs) enhance security operations, from alert triage to threat detection and incident response, offering practical applications for SOC teams. 🔗 https://www.prophetsecurity.ai/blog/6-essential-security-operations-use-cases-for-llms
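
To give a flavor of the alert-triage use case, a minimal sketch (the alert fields and model choice are illustrative assumptions, not from the article):

```python
import json
from openai import OpenAI  # assumes the openai package and an API key in the environment

client = OpenAI()

# Hypothetical SIEM alert; the field names are illustrative.
alert = {
    "rule": "Multiple failed logins followed by a success",
    "user": "svc-backup",
    "source_ip": "203.0.113.42",
    "failure_count": 37,
}

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Triage this SOC alert. Reply with JSON containing verdict "
                   "(benign/suspicious/malicious), confidence (0-1), and "
                   "next_steps (a list of strings):\n" + json.dumps(alert),
    }],
)
print(resp.choices[0].message.content)
```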

🟣 Security Risks and Compliance Considerations in EU Prohibited AI Practices

This piece examines the European Commission's enforcement guidance for Article 5 of the EU AI Act (Regulation (EU) 2024/1689), detailing restricted AI deployments that pose risks to privacy, governance, and fundamental rights. Key security concerns involve biometric data integrity, real-time identification accuracy, and behavioral analysis systems, with fines reaching up to €35 million or 7% of global annual turnover, whichever is higher, for non-compliance.
https://taleliyahu.medium.com/security-risks-and-compliance-considerations-in-eu-prohibited-ai-practices-121a29605558
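
Since the two penalty figures combine as a "whichever is higher" ceiling, a quick illustration of how the cap scales with turnover:

```python
def article_5_fine_cap(global_annual_turnover_eur: float) -> float:
    # AI Act penalty rule for Article 5 violations: up to EUR 35 million
    # or 7% of total worldwide annual turnover, whichever is higher.
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

print(article_5_fine_cap(200_000_000))    # 35000000.0: the fixed cap dominates
print(article_5_fine_cap(2_000_000_000))  # 140000000.0: 7% of turnover dominates
```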

🟡 Well-Architected IaC Analyzer: AI-Powered Infrastructure Evaluation

The Well-Architected Infrastructure as Code (IaC) Analyzer demonstrates how generative AI can evaluate infrastructure code for alignment with AWS Well-Architected best practices. It provides a web-based interface for uploading CloudFormation, Terraform templates, or architecture diagrams for assessment. Leveraging Amazon Bedrock, it analyzes configurations, compares them against AWS Well-Architected whitepapers, and synchronizes findings with the Amazon Bedrock knowledge base. https://github.com/aws-samples/well-architected-iac-analyzer
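
A minimal sketch of the underlying pattern, using boto3's Bedrock Converse API (the model ID, file name, and prompt are illustrative; the actual project adds a web UI, knowledge-base retrieval, and structured reporting):

```python
import boto3                 # assumes AWS credentials with Bedrock access are configured
from pathlib import Path

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# A CloudFormation template to review (hypothetical file name).
template = Path("stack.yaml").read_text()

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    messages=[{
        "role": "user",
        "content": [{
            "text": "Review this CloudFormation template against the AWS "
                    "Well-Architected security pillar and list risks with "
                    f"recommended fixes:\n\n{template}"
        }],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```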

🟠 AWS Security Guardrails & Terraform

Traditional security approaches designed for on-premises environments may not address risks in cloud architectures that operate at scale. This solution applies “paved roads” and “security guardrails” to help engineers integrate security without manually interpreting each requirement. It merges outputs from tools such as Checkov and Prowler with Anthropic’s Claude 3.5 Sonnet model on AWS Bedrock to consolidate AWS service security requirements and create secure Terraform modules. Reusable IaC templates are generated so organizations can streamline remediation and maintain consistent security controls across multiple environments.
https://naman16.github.io/cloud-security/AWS%20Security%20Guardrails%20%26%20Terraform/
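
A rough sketch of the core step under stated assumptions: running Checkov, then handing the findings to Claude on Bedrock to draft a remediated module. The Checkov JSON shape varies by version and framework, and the real project orchestrates much more:

```python
import json
import subprocess
import boto3  # assumes AWS credentials with Bedrock access are configured

# Scan a Terraform directory with Checkov and capture findings as JSON.
scan = subprocess.run(
    ["checkov", "-d", "./terraform", "-o", "json"],
    capture_output=True, text=True,
)
# Single-framework runs return a dict; structure may differ across versions.
failed = json.loads(scan.stdout)["results"]["failed_checks"]

prompt = (
    "Given these Checkov findings, produce a hardened, reusable Terraform "
    f"module that remediates them:\n{json.dumps(failed[:20], indent=2)}"
)

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```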

🟣 UK AI Safety Institute Rebrands to AI Security Institute

The UK's AI Safety Institute has rebranded to the AI Security Institute, reflecting a strategic shift in focus from AI ethics toward AI-related security risks such as cyber-attacks, fraud, and other criminal misuse. The institute will collaborate with the Ministry of Defence's Defence Science and Technology Laboratory and the National Cyber Security Centre (NCSC). https://www.gov.uk/government/news/tackling-ai-security-risks-to-unleash-growth-and-deliver-plan-for-change

💬 Read something interesting? Share your thoughts in the comments.
