#AIOps
#CogSec
#MLSecOps
"Cognitive Control Architecture (CCA): A Lifecycle Supervision Framework for Robustly Aligned AI Agents", Dec.2025.
// The method is predicated on a core insight: no matter how subtle an indirect prompt injection (IPI) attack is, its pursuit of a malicious objective will ultimately manifest as a detectable deviation in the action trajectory, distinct from the expected legitimate plan
See also:
]-> Dynamic Environment to Evaluate Prompt Injection Attacks and Defenses for LLM Agents
]-> https://agentdojo.spylab.ai
InfoSec Write-ups - Medium
Securing AI Agents with Information Flow Control (Part I)
#Whitepaper
#Offensive_security
"API Security Testing (Penetration Testing) Guide", 03.03.2025.
// This comprehensive guide explores the methodologies, techniques, and best practices for conducting thorough API security testing, also known as API penetration testing
#reversing
#Whitepaper
#Cyber_Education
#Hardware_Security
"Embedded Hacking", Nov. 2025.
]-> Repo
// A comprehensive step-by-step embedded hacking tutorial covering Embedded Software Development to Reverse Engineering
API Pentesting Series — Part 7
Before you attack APIs, you need a solid lab.
This part covers:
• Tooling (Burp, DevTools, Postman)
• Discovery tools (Kiterunner, Nikto)
• Docker-based vulnerable APIs
• Full environment setup
Notion Notes 🔗: https://notion.so/aacle/PART-7-API-PenTesting-Series-LAB-SETUP-2b9f7b9ea30e809f8e8ddc938eb0fb1a
✎ Common Rate Limit Bypass Techniques
IP Spoofing
Altering a request’s source IP makes it appear to come from another device, and rotating IPs lets an attacker bypass per-IP limits. The following Burp extensions can be used for IP spoofing:
• BurpFakeIP: GitHub
• IP-Rotate: GitHub
Changing User-Agent
Rate-limit systems often track the User-Agent header; changing or randomizing it makes requests appear from different clients, and attackers may brute-force the User-Agent field (e.g., with tools like Burp Suite Intruder).
Header Manipulation
Header manipulation alters HTTP headers (e.g., X-Forwarded-For, X-Real-IP) to trick servers — bypassing IP restrictions, evading rate limits, or hiding the real IP from logs and filters.
• Common Headers by 🕷Spix0r
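The header tricks above (forwarded-IP spoofing, User-Agent rotation) can be sketched with the standard library alone. This is a minimal illustration of what the Burp extensions automate, not their actual implementation; the target URL and User-Agent pool are hypothetical placeholders:

```python
import random
import urllib.request

# Illustrative User-Agent pool — not a vetted fingerprint list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/133.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 14_0) Safari/605.1.15",
    "curl/8.5.0",
]

def random_ip() -> str:
    """Generate a random IPv4 address to feed into forwarded-IP headers."""
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

def build_probe(url: str) -> urllib.request.Request:
    """Build a request whose per-client fingerprint changes on every call."""
    spoofed = random_ip()
    headers = {
        "User-Agent": random.choice(USER_AGENTS),
        # Headers some backends trust for client identification:
        "X-Forwarded-For": spoofed,
        "X-Real-IP": spoofed,
    }
    return urllib.request.Request(url, headers=headers)

req = build_probe("https://target.example/api/login")  # hypothetical target
print(req.get_header("X-forwarded-for"))
```

In a live test you would send each probe and watch whether 429 responses stop correlating with your real client, which tells you which header the rate limiter actually trusts.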
Requesting with Different HTTP Methods
Some rate-limiters monitor only certain HTTP methods (e.g., GET/POST); attackers may bypass them by sending requests with other methods (PUT, DELETE, OPTIONS) and testing alternatives (e.g., with Burp Suite Repeater).
• HTTP request methods
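The method-switching idea can be sketched without Burp: build the same request once per verb and diff the responses. A minimal stdlib sketch (the endpoint is a hypothetical placeholder; in a real engagement you would send each variant and compare status codes and rate-limit headers):

```python
import urllib.request

METHODS = ["GET", "POST", "PUT", "DELETE", "OPTIONS", "PATCH", "HEAD"]

def build_variants(url: str) -> list:
    """One otherwise-identical request per HTTP verb; only the method differs."""
    return [urllib.request.Request(url, method=m) for m in METHODS]

variants = build_variants("https://target.example/api/items")  # hypothetical
for req in variants:
    print(req.get_method(), req.full_url)
    # In a live test: urllib.request.urlopen(req), then compare
    # status codes / Retry-After headers across methods.
```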
Parameter Name Variation
Some backends accept alternate parameter names and still process requests, enabling attackers to bypass input filters, WAFs, or login restrictions.
username=admin&password=1234
user=admin&pass=1234
uname=admin&pwd=1234
login=admin&passwd=1234
u=admin&p=1234
email=admin&key=1234
id=admin&token=1234
user=admin%20 # space after admin
user=admin%00 # null byte injection
user=%61%64%6d%69%6e # 'admin' percent-encoded in hex
user=ad%6Din # only 'm' is encoded
user=%2561%2564%256d%2569%256e # double-encoded 'admin'
Email: Test@Example.com # Mixed case
Email: test@example.com # Lowercase
Email: TEST@example.com # Uppercase
Email: t3st@3xample.com # '3' instead of 'e'
Email: t@est@example.com # Extra '@' inserted into the local part
email=" test@example.com " # Adding spaces at the beginning and end
email=test@example.com%20 # Adding a space encoded as %20
email=test@example.com%E2%80%8B # Injecting a zero-width space
email=test@example.com%09 # Tab character
email=test@example.com%0A # Newline character
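The encoding variants listed above can be generated programmatically instead of typed by hand. A minimal sketch (the variant names are illustrative; in practice you would substitute each payload into the target parameter and diff the responses):

```python
from urllib.parse import quote

def full_encode(value: str) -> str:
    """Percent-encode every byte, e.g. 'admin' -> '%61%64%6d%69%6e'."""
    return "".join(f"%{b:02x}" for b in value.encode())

def value_variants(value: str) -> dict:
    """Generate common filter-evasion encodings of a parameter value."""
    single = full_encode(value)
    return {
        "plain": value,
        "trailing_space": value + "%20",           # encoded trailing space
        "null_byte": value + "%00",                # null-byte injection
        "url_encoded": single,                     # every char hex-encoded
        "double_encoded": quote(single, safe=""),  # '%61' -> '%2561'
        "zero_width": value + "%E2%80%8B",         # zero-width space appended
    }

for name, payload in value_variants("admin").items():
    print(f"{name}: user={payload}")
```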
How I track the latest CVEs — top 20, fast 🔥
curl -s 'https://cvedb.shodan.io/cves' \
| jq -r '.cves[:20][]?.cve_id'
==> Want id+summary?
curl -s 'https://cvedb.shodan.io/cves' \
| jq '[.cves
| sort_by(.published? // .Published? // .modified? // "1970-01-01")
| reverse
| .[:20][]? | {cve_id, summary}]'
Tool: cvedb.shodan.io
#exploit
1⃣ CVE-2025-50165:
Critical Flaw (RCE) in Windows Graphics Component
// Windows 11 24H2 x64/ARM64, Windows Server 2025
2⃣ CVE-2025-9491:
Windows UI misrepresentation vulnerability
// PoC tool for demonstrating the Windows Shortcut (LNK) file vulnerability
3⃣ CVE-2025-60718:
Windows 11 Insider Preview EoP
// The vulnerability exists in the Windows Administrator Protection feature and allows a low-privileged process to gain full access to a UI Access process, which can be leveraged to reach a shadow administrator process, leading to EoP
#Research
#MLSecOps
"Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations", 2025.
]-> Repo
// This work presents a systematic taxonomy of existing jailbreak defenses across prompt-level, model-level, and training-time interventions, followed by three proposed defense strategies
#Analytics
#Threat_Research
An analytical review of the main cybersecurity events for the week (November 15-22, 2025)
1⃣ With blazing-fast WiFi 7 speeds come extra security risks
// Bitdefender's Practical Tips for Protecting Your Data on WiFi 7 Networks
2⃣ New RCE vulnerabilities in D-Link DIR-878 routers
// CVE-2025-60672, CVE-2025-60673, CVE-2025-60674, CVE-2025-60676. The device is still available for purchase, but support ended in 2021...
3⃣ Oracle E-Business Suite RCE (CVE-2025-61882)
// PoC + Detect Scripts
4⃣ BADAUDIO Malware
// This nearly three-year campaign is a clear example of the continued evolution of APT24’s operational capabilities
5⃣ IBM AIX NIMSH High Criticality Vulnerabilities
// CVE-2025-36251, CVE-2025-36250, CVE-2025-36096, CVE-2025-36236
6⃣ Cloudflare outage on Nov. 18, 2025
// The issue was not caused, directly or indirectly, by a cyber attack or malicious activity of any kind
7⃣ Multiple OS command injection in Fortinet API and CLI
// CVE-2025-64446 and CVE-2025-58034
]-> Analytical review (Nov. 8-15, 2025)
When comments aren't just comments
<script>
// After keywords like delete/typeof/void/throw, a '/' starts a REGEX
// literal, which ends at the NEXT '/'. So the '//alert(n)' below is not
// a comment: the first '/' closes the regex and the second is a division
// operator — alert(n) is evaluated and fires.
delete/delete; //alert(1)
typeof/typeof; //alert(2)
void/void; //alert(3)
throw/throw; //alert(4)
</script>
#tools
#Mobile_Security
"A Comprehensive Study on Static Application Security Testing (SAST) Tools for Android", 2024.
]-> A Unified Platform for Evaluating SAST Tools for Android
// We propose a unified platform named VulsTotal, supporting various vulnerability types, enabling comprehensive and versatile analysis across diverse SAST tools. We also redefine and implement a standardized reporting format, ensuring uniformity in presenting results across all tools. Additionally, to mitigate the problem of benchmarks, we conducted a manual analysis of huge amounts of CVEs to construct a new CVE-based benchmark
#SCA
#tools
#cryptography
"Automated Side-Channel Analysis of Cryptographic Protocol Implementations", Nov. 2025.
]-> Automated Side-Channel Analysis of Cryptographic Protocol Implementations + PoC attack implementation
// Key contributions: (1) the first formal model of WhatsApp, extracted from its binary, (2) a framework to integrate side-channel leakage contracts into protocol models for the first time, (3) revealing critical vulnerabilities invisible to specification-based methods
#tools
#Cloud_Security
#Offensive_security
"Azure Pentest: Tools and Techniques", 2025.
#Malware_analysis
1⃣ Ghostframe Phishing Kit
https://blog.barracuda.com/2025/12/04/threat-spotlight-ghostframe-phishing-kit
2⃣ EtherRAT Ethereum implant in React2Shell attacks
https://www.sysdig.com/blog/etherrat-dprk-uses-novel-ethereum-implant-in-react2shell-attacks
3⃣ BYOVD loader behind DeadLock ransomware attack
https://blog.talosintelligence.com/byovd-loader-deadlock-ransomware
4⃣ BRICKSTORM/WARP PANDA Malware
https://www.crowdstrike.com/en-us/blog/warp-panda-cloud-threats
#MLSecOps
#Offensive_security
"Multi-Faceted Attack: Exposing Cross-Model Vulnerabilities in Defense-Equipped Vision-Language Models", Nov. 2025.
// Multi-Faceted Attack (MFA) - framework that systematically uncovers general safety vulnerabilities in leading defense-equipped VLMs, including GPT-4o, Gemini-Pro, and Llama 4. Central to MFA is the Attention-Transfer Attack, which conceals harmful instructions inside a meta task with competing objectives. We offer a theoretical perspective grounded in reward-hacking to explain why such an attack can succeed
#Threat_Modelling
"Advanced Threat Modeling: Methodologies and Implementation Strategies for Security Architects",
June 2025.
// This comprehensive guide explores advanced threat modeling methodologies, practical implementation strategies, and integration approaches for security architects and development teams seeking to build security into the fabric of their systems
#Research
#MLSecOps
"Evaluating the Robustness of Large Language Model Safety Guardrails Against Adversarial Attacks", Nov. 2025.
// This study evaluated ten publicly available guardrail models from Meta, Google, IBM, NVIDIA, Alibaba, and Allen AI across 1,445 test prompts spanning 21 attack categories
#Tech_book
"Artificial Intelligence for Cybersecurity:
Develop AI approaches to solve cybersecurity problems in your organization", 2024.
// This book is for cybersecurity or general IT professionals or students who are interested in AI technologies and how they can be applied in the cybersecurity context
"Evasion Attacks on LLMs - Countermeasures in Practice: A Guide to face Prompt Injections, Jailbreaks and Adversarial Attacks", Nov. 2025.
#MLSecOps
"InfoFlood (Information Overload) Attack:
Jailbreaking Large Language Models with Information Overload", Jun 2025.
// In this work, we identify a new vulnerability in which excessive linguistic complexity can disrupt built-in safety mechanisms-without the need for any added prefixes or suffixes-allowing attackers to elicit harmful outputs directly
#OSINT
#AppSec
#Research
"Hey there! You are using WhatsApp: Enumerating Three Billion Accounts for Security and Privacy", NDSS 2026.
]-> https://github.com/sbaresearch/whatsapp-census
// To initiate conversations, users must first discover whether their contacts are registered on the platform. This is achieved by querying WhatsApp's servers with mobile phone numbers extracted from the user's address book. This architecture inherently enables phone number enumeration, as the service must allow legitimate users to query contact availability. While rate limiting is a standard defense against abuse, we revisit the problem and show that WhatsApp remains highly vulnerable to enumeration at scale
#AIOps
#MLSecOps
#Offensive_security
#Red_Team_Tactics
"AutoBackdoor: Automating Backdoor Attacks via LLMAgents", Nov. 2025.
]-> Code, datasets, and experimental configurations
// AutoBackdoor - general framework for automating backdoor injection, encompassing trigger generation, poisoned data construction, and model fine-tuning via an autonomous agent-driven pipeline. Unlike prior approaches, AutoBackdoor uses a powerful language model agent to generate semantically coherent, context-aware trigger phrases, enabling scalable poisoning across arbitrary topics with minimal human effort
https://coal-memory-97b.notion.site/Android-Pentest-1f6923af30cc80bdafa4f3c581f4c5f8
#tools
#cryptography
Critical cryptography vulnerabilities in the JavaScript elliptic library
https://blog.trailofbits.com/2025/11/18/we-found-cryptography-bugs-in-the-elliptic-library-using-wycheproof
// CVE-2024-48949, CVE-2024-48948 (unresolved)
See also:
]-> repository (updated) of test vectors of cryptographic libraries for known attacks
#AIOps
#MLSecOps
#RAG_Security
#Offensive_security
AI pentest scoping playbook
https://devansh.bearblog.dev/ai-pentest-scoping
// Scoping AI security engagements is harder than traditional pentests because the attack surface is larger, the risks are novel, and the methodologies are still maturing
#CogSec
#MLSecOps
Inside OpenAI Sora 2 - Uncovering System Prompts Driving Multi-Modal LLMs
https://mindgard.ai/resources/openai-sora-system-prompts
// By chaining cross-modal prompts and clever framing, researchers surfaced hidden instructions from OpenAI’s video generator
#Research
"How Can We Effectively Use LLMs for Phishing Detection?: Evaluating the Effectiveness of Large Language Model-based Phishing Detection Models", 2025.
// This study investigates how to effectively leverage LLMs for phishing detection by examining the impact of input modalities (screenshots, logos, HTML, URLs), temperature settings, and prompt engineering strategies. We evaluate seven LLMs - two commercial models (GPT 4.1, Gemini 2.0 flash) and five open-source models (Qwen, Llama, Janus, DeepSeek-VL2, R1) - alongside two DL-based baselines (PhishIntention and Phishpedia). Our findings reveal that commercial LLMs generally outperform open-source models in phishing detection, while DL models demonstrate better performance on benign samples