2068
All about cloud security Contacts: @AMark0f @dvyakimov About DevSecOps: @sec_devops
📤 Threat Actors Abuse Railway.com PaaS as Microsoft 365 Token Attack Infrastructure
Railway PaaS is being weaponized as a clean token replay engine in an active AiTM and device code phishing campaign impacting 268+ M365 organizations and 100+ MSPs.
https://www.huntress.com/blog/railway-paas-m365-token-replay-campaign
#PaaS
🔴 Remote Command Execution in Google Cloud with Single Directory Deletion - GMO Flatt Security Research
A race condition in Google Cloud Looker's directory deletion API allows deleting the ".git" directory while concurrent Git operations proceed, causing Git to use attacker-controlled worktree configs for RCE. Kubernetes service account misconfigurations further enabled cross-instance privilege escalation.
https://flatt.tech/research/posts/remote-command-execution-in-google-cloud-with-single-directory-deletion
#gcp
🔶 Simulating Ransomware with AWS KMS
Post that demonstrates how attackers can abuse AWS KMS by importing malicious key material to encrypt RDS/EBS resources, then deleting the material to make data inaccessible without ransom payment.
https://heilancoos.github.io/research/2025/09/02/aws-kms-ransomware.html
#aws
🤖 OpenSandbox
OpenSandbox is a general-purpose sandbox platform for AI applications, offering multi-language SDKs, unified sandbox APIs, and Docker/Kubernetes runtimes for scenarios like Coding Agents, GUI Agents, Agent Evaluation, AI Code Execution, and RL Training.
https://github.com/alibaba/OpenSandbox
#AI
🔶 Pwning AI Code Interpreters in AWS Bedrock AgentCore
Phantom Labs discovered that AWS Bedrock AgentCore Code Interpreter's sandbox mode allows DNS queries, enabling bypass of network isolation through DNS-based command-and-control. This research details the discovery, proof-of-concept exploit, disclosure timeline, and defensive guidance for organizations using Code Interpreter workloads.
https://www.beyondtrust.com/blog/entry/pwning-aws-agentcore-code-interpreter
#aws
⚙ trajan
A multi-platform CI/CD vulnerability detection and attack automation tool for identifying security weaknesses in pipeline configurations. You can also check out the companion blog post.
https://github.com/praetorian-inc/trajan
#cicd
🔶 Introducing account regional namespaces for Amazon S3 general purpose bucket
AWS launches a new feature of Amazon S3 that lets you create general purpose buckets in your own account regional namespace simplifying bucket creation and management as your data storage needs grow in size and scope.
https://aws.amazon.com/ru/blogs/aws/introducing-account-regional-namespaces-for-amazon-s3-general-purpose-buckets
#aws
🔶 Bucketsquatting is (Finally) Dead
AWS introduced account-regional namespaces for S3 (<prefix> - <accountid> - <region> - an) to eliminate bucketsquatting, where attackers claim deleted bucket names.
https://onecloudplease.com/blog/bucketsquatting-is-finally-dead
#aws
🤖 The Reach Pattern
The "Reach" pattern is a personal CLI that hijacks existing browser sessions to query SaaS APIs (Slack, Jira, Confluence, etc.) on your behalf, feeding structured organizational context to your AI coding assistant.
https://jackdanger.com/the-reach-pattern
#AI
🔶 Inside AWS Security Agent: A multi-agent architecture for automated penetration testing
AWS Security Agent's penetration testing uses a multi-agent architecture: specialized swarm agents handle reconnaissance, managed/guided exploration, and exploit validation. The system achieves 80% attack success rate on CVE Bench under real-world conditions, with assertion-based validation reducing false positives and CVSS-scored reporting.
https://aws.amazon.com/ru/blogs/security/inside-aws-security-agent-a-multi-agent-architecture-for-automated-penetration-testing/
(Use VPN to open from Russia)
#aws
🤖 How "Clinejection" Turned an AI Bot into a Supply Chain Attack
A prompt injection in a GitHub issue title gave attackers code execution inside Cline's CI/CD pipeline, leading to cache poisoning, stolen npm credentials, and an unauthorized package publish affecting the popular AI coding tool's 5 million users. Here's the full technical breakdown and what developers should do now.
https://snyk.io/blog/cline-supply-chain-attack-prompt-injection-github-actions/
(Use VPN to open from Russia)
#AI
🔴 Google API Keys Weren't Secrets. But then Gemini Changed the Rules
Enabling the Gemini API on a GCP project silently grants existing public AIza... keys (e.g., Maps/Firebase) access to sensitive Gemini endpoints. Truffle Security found 2,863 such exposed keys via Common Crawl, enabling data access, billing abuse, and quota exhaustion, including against Google's own infrastructure.
https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
#gcp
🤖 caterpillar
Caterpillar is a security scanning library for AI agent skill files (e.g., Claude Code skills) for dangerous or malicious behavior.
https://github.com/alice-dot-io/caterpillar
#AI
🔶🤖 Building an AI-powered defense-in-depth security architecture for serverless microservices
This AWS blog demonstrates implementing a seven-layer AI-powered defense-in-depth security architecture for serverless microservices using AWS Shield, WAF, Cognito, API Gateway, VPC, Lambda, Secrets Manager, and DynamoDB, enhanced with GuardDuty and Amazon Bedrock for intelligent threat detection and automated response.
https://aws.amazon.com/ru/blogs/security/building-an-ai-powered-defense-in-depth-security-architecture-for-serverless-microservices/
(Use VPN to open from Russia)
#aws #AI
🤖 augustus
LLM security testing framework for detecting prompt injection, jailbreaks, and adversarial attacks. See also the companion blog post.
https://github.com/praetorian-inc/augustus
#AI
👨💻 Widespread GitHub Campaign Uses Fake VS Code Security Alerts to Deliver Malware
A large-scale phishing campaign is targeting developers directly inside GitHub, using fake Visual Studio Code security alerts posted through Discussions to trick users into installing malicious software.
https://socket.dev/blog/widespread-github-campaign-uses-fake-vs-code-security-alerts-to-deliver-malware
#github
🔶 Locking down AWS principal tags with RCPs and SCPs
A post explaining how to use SCPs to restrict sensitive IAM actions to tagged principals, RCPs to block unauthorized "scp-*" session tags from external/non-tagger principals, and SCPs to protect the "tagger" role itself via CloudFormation StackSets.
https://awsteele.com/blog/2026/02/21/locking-down-aws-principal-tags-with-rcps-and-scps.html
#aws
🔶 Cracks in the Bedrock: Bypassing SCP Enforcement with Long-Lived API Keys
Sonrai Security researcher discovered that AWS "bedrock-mantle" IAM permissions could bypass SCP enforcement when using long-lived Service Specific Credential API keys. IAM policy denials worked correctly, but SCP denials were bypassed. AWS patched this between Jan–Feb 2026; no customer action required.
https://sonraisecurity.com/blog/cracks-in-the-bedrock
#aws
🤖 Securing our codebase with autonomous agents
Cursor's security team built a fleet of security agents to find and fix vulnerabilities across a fast-changing codebase.
https://cursor.com/blog/security-agents
#AI
🔶 Pentesting a pentest agent - Here's what I've found in AWS Security Agent
A researcher pentested AWS Security Agent, finding 4 issues: DNS confusion enabling unauthorized domain pentesting, a full reverse shell/container escape chain to host root + AWS credentials via prompt injection, unnecessary destructive actions (e.g., DROP TABLE probes, exploit-based cleanup deleting /etc/crontab), and unredacted secrets in pentest reports.
https://blog.richardfan.xyz/2026/03/14/pentesting-a-pentest-agent-heres-what-ive-found-in-aws-security-agent.html
#aws
🤖 When an AI agent came knocking: Catching malicious contributions in Datadog’s open source repos
How Datadog discovered malicious issues and PRs in two of their public repositories as the result of attacks by hackerbot-claw, an AI agent designed to target GitHub Actions and LLM-powered workflows.
https://www.datadoghq.com/blog/engineering/stopping-hackerbot-claw-with-bewaire
#AI
🔶 Behind the console: Active phishing campaign targeting AWS console credentials
Datadog Security Research identified an active adversary-in-the-middle (AiTM) phishing campaign targeting AWS Console credentials via typosquatted domains that mimic AWS infrastructure.
https://securitylabs.datadoghq.com/articles/behind-the-console-aws-aitm-phishing-campaign
#aws
🤖 How AI Agents Automate CVE Vulnerability Research
A technical deep-dive into Praetorian's multi-agent CVE research pipeline, exploring how orchestrated AI agents transform vulnerability data into validated detection templates.
https://www.praetorian.com/blog/how-ai-agents-automate-cve-vulnerability-research/
#AI
🤖 hackerbot-claw: An AI-Powered Bot Actively Exploiting GitHub Actions
A week-long automated attack campaign targeted CI/CD pipelines across major open source repositories, achieving remote code execution in at least 4 out of 5 targets. The attacker, an autonomous bot called hackerbot-claw, used 5 different exploitation techniques and successfully exfiltrated a GitHub token with write permissions from one of the most popular repositories on GitHub. This post breaks down each attack, shows the evidence, and explains what you can do to protect your workflows.
https://www.stepsecurity.io/blog/hackerbot-claw-github-actions-exploitation#attack-6-aquasecuritytrivy---evidence-cleared
#AI
🤖 Running OpenClaw safely: identity, isolation, and runtime risk
OpenClaw, a self-hosted agent runtime, lacks built-in security controls, enabling credential exfiltration, memory/state manipulation, and host compromise via indirect prompt injection and malicious skills. Microsoft recommends isolated deployment, least-privilege identities, continuous monitoring, and Defender XDR hunting queries.
https://www.microsoft.com/en-us/security/blog/2026/02/19/running-openclaw-safely-identity-isolation-runtime-risk/
#AI
🤖 Using threat modeling and prompt injection to audit Comet
Trail of Bits used ML-centered threat modeling and adversarial testing to identify four prompt injection techniques that could exploit Perplexity's Comet browser AI assistant to exfiltrate private Gmail data. The audit demonstrated how fake security mechanisms, system instructions, and user requests could manipulate the AI agent into accessing and transmitting sensitive user information.
https://blog.trailofbits.com/2026/02/20/using-threat-modeling-and-prompt-injection-to-audit-comet/
#AI
🔶 AWS Incident Response: IAM Containment That Survives Eventual Consistency
Standard AWS IR containment fails against attackers exploiting IAM eventual consistency. This article presents an SCP-enforced technique that makes identity-level containment attacker-resistant.
https://www.offensai.com/blog/eventual-consistency-resistant-iam-containment-aws-incident-response
(Use VPN to open from Russia)
#aws
🤖 MCP Server Security: The Hidden AI Attack Surface
MCP servers connecting AI assistants to external tools create significant attack surfaces enabling arbitrary code execution, data exfiltration, and social engineering. Both local and remote MCP servers can be exploited through server chaining, supply chain attacks, and malicious tool implementations.
https://www.praetorian.com/blog/mcp-server-security-the-hidden-ai-attack-surface/
#AI
🤖 3 Principles for Designing Agent Skills
Block Engineering discusses designing agent skills using three principles: make deterministic outputs script-based, let agents handle interpretation and conversation, and write explicit constitutional constraints. Skills codify tribal knowledge into executable documentation for AI agents across their organization.
https://engineering.block.xyz/blog/3-principles-for-designing-agent-skills
#AI
🏗 Encrypting Files with Passkeys and age
A post explaining how to encrypt files with passkeys, using the WebAuthn prf extension and the TypeScript age implementation.
https://words.filippo.io/passkey-encryption
#build