Over the past month, I ran three very different projects with AI as my primary collaborator (OpenClaw + Claude Opus 4.6). Each project used a different approach: AI as refactoring assistant, AI as autonomous builder, AI as research partner. Not a tool I prompted occasionally, but a coworker I paired with for hours.
The projects:
GIA — refactoring and translating an existing app. AI as assistant.
Chat my Resume — building a chatbot from scratch. AI as autonomous builder.
LLM Bench Lab — benchmarking GPUs and writing a technical blog. AI as research partner.
Three approaches. Three very different results. None of them went smoothly.
For about a year, I have been working daily with various coding assistants, choosing different tools depending on my mood, needs and constraints. My journey has included testing Windsurf and Tabnine professionally, while personally transitioning from being a fervent Copilot user to adopting Claude Code.
During this exploration, I discovered Devstral 2, which ultimately replaced Claude Code in my workflow for several compelling reasons:
Aesthetic Excellence: The tool offers a beautiful user experience.
From the blog post announcement to the API documentation and Vibe itself, the color scheme, visual effects, and overall polish create a distinctly pleasant working environment.
Comparable Performance: In my "me, myself & I" benchmark, Devstral 2's code suggestions are on par with Claude Code's.
While both occasionally overlook framework documentation, they deliver excellent results overall when refactoring, suggesting commit messages, or tweaking CSS.
Cost-Effective and Open Source: Devstral 2 is significantly more affordable than Claude Code and is open source.
Users receive 1 million trial tokens, with pricing at $0.10/$0.30 for Devstral Small 2 past the first million.
With Claude Code, I frequently hit usage limits, even after employing /compact commands and tracking my /usage.
And even if you blow past Vibe's usage limits, there is another option:
Local Execution Capability: Although Vibe's time to first token can be slower than Claude's, Mistral offers a crucial advantage!
Both Devstral 2 and its small version are open source and can run entirely on local machines, providing greater control, privacy, and, if you have the gear, blazing-fast performance ⚡.
The documentation for running it locally is rather sparse, and Devstral Small 2 is still relatively resource-intensive, so some tweaks are needed.
Here are the instructions for running Devstral Small 2 + Vibe on Ubuntu 24.04 with an NVIDIA L40S with 24GB VRAM, hosted by Scaleway.
Welcome to LaFabrique.AI! An evolution of storage-chaos.io. This blog tracks and documents the beginning of a journey through the world of artificial intelligence.
Managing multiple networks for storage workloads on OpenShift is not optional: it is essential for performance and isolation. Dell PowerFlex, with its CSI driver, delivers dynamic storage provisioning, but multi-network setups require proper configuration.
This guide explains how to enable multi-network support for CSI PowerFlex on OpenShift, including prerequisites, network attachment definitions, and best practices for high availability.
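To give a rough idea of what such a setup involves, a dedicated data network is typically declared as a Multus NetworkAttachmentDefinition that the CSI node pods can attach to. The manifest below is an illustrative sketch only: the name, namespace, interface, CNI plugin choice, and IP range are made-up placeholders, not values from the guide.

```yaml
# Hypothetical NetworkAttachmentDefinition for a dedicated PowerFlex data network.
# Interface name, IPAM plugin, and address range are illustrative assumptions.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: powerflex-data-1
  namespace: vxflexos
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens224",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.10.0/24"
      }
    }
```

A second definition on a separate interface and subnet would provide the redundant path that the high-availability guidance relies on.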
PowerStore Metro enables active-active synchronous replication between two PowerStore clusters, delivering zero data loss (RPO 0) and continuous availability. Integrated with the Dell CSI driver, it brings high availability to Kubernetes applications that lack native HA by presenting a single Metro-replicated volume accessible from both sites.
Dell PowerScale is a scale-out NAS solution designed for high-performance, enterprise-grade file storage and multi-tenant environments. In multi-tenant environments, such as shared Kubernetes clusters, isolating workloads and data access is critical.
PowerScale addresses this need through Access Zones, which logically partition the cluster to enforce authentication boundaries, export rules, and quota policies. The Dell CSI driver maps Kubernetes StorageClass resources to specific Access Zones, providing per-tenant isolation at the storage layer.
This setup is particularly useful when multiple teams share a common PowerScale backend but require strict separation of data and access controls. This approach proved extremely valuable when building a GPU-as-a-Service AI Factory .
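As a sketch of how that StorageClass-to-Access-Zone mapping can look (parameter names follow the Dell CSI PowerScale driver; the zone name, path, and service IP are hypothetical placeholders, not taken from the post):

```yaml
# Per-tenant StorageClass pinned to a dedicated PowerScale Access Zone.
# Zone name, SmartConnect IP, and base path are made-up examples.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: isilon-team-a
provisioner: csi-isilon.dellemc.com
parameters:
  AccessZone: team-a            # tenant-specific Access Zone
  AzServiceIP: "10.0.0.10"      # SmartConnect service IP for that zone
  IsiPath: /ifs/data/team-a     # base path for dynamically provisioned volumes
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Each tenant then gets its own StorageClass, so volumes, exports, and quotas stay confined to that tenant's zone.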
Dell CSI drivers for PowerStore, PowerMax, PowerFlex, and PowerScale have all been tested and are compatible with KubeVirt. This guide provides instructions for installing Dell CSI for PowerMax on Harvester, though the steps are very similar regardless of the storage backend.
Tested on:
Harvester v1.3.1
CSM v2.11
PowerMax protocols: Fibre Channel, iSCSI, and NFS
Kubernetes is no longer just a container orchestrator. As organizations modernize infrastructure, there’s growing interest in using Kubernetes to manage virtual machines (VMs) alongside cloud-native workloads—while still meeting familiar expectations like disaster recovery (DR).
In this post, we’ll walk through a practical, GitOps-friendly DR approach for VMs running on Kubernetes using:
KubeVirt to run VMs on Kubernetes
Dell Container Storage Modules (CSM) for storage and replication
CSM Replication to replicate VM disks across clusters
Argo CD + Kustomize to manage deployment and failover via GitOps
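To give a flavor of the storage side of this stack, a replication-enabled StorageClass using Dell CSM Replication parameters might look roughly like the sketch below. The remote cluster ID, RPO value, and names are illustrative assumptions, not the post's actual manifests.

```yaml
# Sketch: StorageClass for VM disks with CSM Replication enabled.
# Remote cluster ID, remote class name, and RPO are placeholder values.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-replicated
provisioner: csi-powerstore.dellemc.com
parameters:
  replication.storage.dell.com/isReplicationEnabled: "true"
  replication.storage.dell.com/remoteClusterID: "cluster-dr"
  replication.storage.dell.com/remoteStorageClassName: "powerstore-replicated"
  replication.storage.dell.com/rpo: "Five_Minutes"
reclaimPolicy: Retain
```

VM disks provisioned from this class are paired with replicated volumes on the DR cluster, which is what makes the Argo CD-driven failover possible.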
Many Kubernetes storage drivers still rely on the powerful—and notoriously over‑broad—Linux capability CAP_SYS_ADMIN to perform host‑level operations. While it enables critical actions like filesystem mounts, it also substantially expands the attack surface of your cluster.
This post explains why CSI node plugins often end up needing CAP_SYS_ADMIN, what breaks when you remove it, and several concrete hardening strategies using tools like seccomp, AppArmor, SELinux, and controlled privilege elevation.
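As a minimal sketch of the hardening direction (not the post's exact configuration), a CSI node-plugin container can drop all capabilities, add back only what it demonstrably needs, and opt into the runtime's default seccomp profile:

```yaml
# Illustrative container securityContext for a CSI node plugin.
# Whether SYS_ADMIN can actually be removed depends on the driver,
# which is exactly the trade-off the post explores.
securityContext:
  privileged: false
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
    add: ["SYS_ADMIN"]   # still required by many drivers for mount(2)
  seccompProfile:
    type: RuntimeDefault
```

Even when SYS_ADMIN must stay, dropping everything else and constraining syscalls via seccomp meaningfully narrows the attack surface.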