The world’s most widely used web app scanner. Free and open source. ZAP is a community project actively maintained by a dedicated international team, and a GitHub Top 1000 project.
Although we can already buy commercial transceiver solutions that allow us to use PCIe devices like GPUs outside of a PC, these use an encapsulating protocol like Thunderbolt rather than straight P…
GitOps policy-as-code: Securing Kubernetes with Argo CD and Kyverno
A hands-on guide to deploying Kyverno with Argo CD and enforcing custom policies As Kubernetes environments develop, GitOps with Argo CD has become the standard for declarative…
Increasing the VRAM allocation on AMD AI APUs under Linux
Since I saw some posts calling out the old (now deprecated) way to increase GTT memory allocations for the iGPU on AMD APUs (like the AI Max+ 395 / Strix Halo I am testing in the Framework Mainboard AI Cluster), I thought I'd document how to increase the VRAM allocation on such boards under Linux—in this case, Fedora:
# To remove an arg: `--remove-args`
# Calculation: ([size in GB] * 1024 * 1024) / 4.096
sudo grubby --update-kernel=ALL --args='amdttm.pages_limit=27648000'
sudo grubby --update-kernel=ALL --args='amdttm.page_pool_size=27648000'
sudo reboot

The old way, amdgpu.gttsize, will throw the following warning in the kernel log:
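The calculation in the comment above can be scripted so you don't have to do the math by hand. A minimal sketch, assuming a 4 KiB page size and an example target of 108 GB (the `SIZE_GB` value is an illustration, not a recommendation):

```shell
#!/bin/sh
# Hypothetical helper: turn a desired VRAM size into the page count
# expected by amdttm.pages_limit / amdttm.page_pool_size, using the
# formula from the comment above: (size_in_GB * 1024 * 1024) / 4.096
SIZE_GB=108   # example target size; adjust for your board
PAGES=$(awk -v gb="$SIZE_GB" 'BEGIN { printf "%d", gb * 1024 * 1024 / 4.096 }')
echo "amdttm.pages_limit=$PAGES"
echo "amdttm.page_pool_size=$PAGES"
```

Feed the printed values to the two `grubby --update-kernel=ALL --args=…` commands above, then reboot.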
I Set Up My Own Proxy Server—Here's Why, and How You Can Do It Yourself
Want more control over your internet traffic? Follow these simple steps to set up a proxy server, block unwanted websites, mask your IP address, and more.
Complete guide to installing Ubuntu on GMKtec EVO-X2 with Ryzen AI 9 HX 395 MAX via PXE boot. Includes memory management comparison, BIOS configuration, SSH setup, and practical troubleshooting for AI inference workloads.
The First Mini-PC to Run 70B LLMs Locally: GMK EVO-X2 Unveiled
The GMK EVO-X2, which was recently showcased at AMD’s “ADVANCING AI” Summit, is designed to meet this need, packing impressive AI processing capabilities into a small form factor.
Running 122B-Parameter LLMs Locally on AMD Strix Halo for OpenClaw: A Deep Dive
Over the past few months, I've been building a local LLM inference stack on an AMD Ryzen AI Max+ 395 (Strix Halo) — a chip with 128GB of unified LPDDR5X memory shared between CPU and GPU. My original goal was to play with local LLMs and learn more about the hardware and software stacks required to r…
Public Wi-Fi exposes Linux systems to monitoring, spoofed networks, and data interception. This guide shows how to secure your device with VPNs, firewalls, and browser protections.