<?xml version="1.0" encoding="utf-8"?><?xml-stylesheet type="text/xsl" href="rss.xsl"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>HyperSDK Blog</title>
        <link>https://hypersdk.cloud/blog</link>
        <description>HyperSDK Blog</description>
        <lastBuildDate>Tue, 07 Apr 2026 00:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <item>
            <title><![CDATA[Eleven Products for the Complete Infrastructure Lifecycle]]></title>
            <link>https://hypersdk.cloud/blog/eleven-products</link>
            <guid>https://hypersdk.cloud/blog/eleven-products</guid>
            <pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[From VM export to GPU compute, HyperSDK now offers eleven purpose-built products covering every stage of the infrastructure lifecycle.]]></description>
            <content:encoded><![CDATA[<p>Infrastructure teams face a fragmented tooling landscape. VM migration requires one vendor, disk conversion another, Kubernetes management a third, and GPU compute yet another. Each tool brings its own API surface, its own authentication model, and its own operational overhead.</p>
<p>Today we are announcing that the HyperSDK platform has grown to eleven products -- a complete infrastructure lifecycle from export through deployment, observation, and optimization.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-complete-pipeline">The Complete Pipeline<a href="https://hypersdk.cloud/blog/eleven-products#the-complete-pipeline" class="hash-link" aria-label="Direct link to The Complete Pipeline" title="Direct link to The Complete Pipeline" translate="no">​</a></h2>
<p><strong>Export and Convert.</strong> HyperSDK Platform and hyper2kvm Engine handle the heavy lifting of extracting VMs from vSphere, AWS, Azure, GCP, Hyper-V, and six other providers. hyper2kvm automates guest OS fixing -- VirtIO driver injection, bootloader repair, network reconfiguration -- so every VM boots on the first try. VMCraft adds format conversion across QCOW2, VMDK, VDI, and raw images.</p>
<p><strong>Build and Deploy.</strong> VirtCraft provides 44 OS templates and multi-VM blueprints for KubeVirt. Orchestr8 handles deployment across Podman, Kubernetes, KubeVirt, and bare metal from a single manifest.</p>
<p><strong>Manage and Observe.</strong> v9s brings the k9s experience to KubeVirt -- terminal UI and web dashboard for full VM lifecycle management. VirtSpawn does the same for libvirt environments. KubeVM Studio layers AI-driven automation and cost optimization on top of Kubernetes VMs. Cilium Flow provides eBPF-based network observability with ML-powered policy automation. VMSpawn rounds out management with 480+ API endpoints for systemd-vmspawn environments, exposed through five interfaces: CLI, TUI, Web UI, Kubernetes Operator, and Terraform Provider.</p>
<p><strong>Inspect and Repair.</strong> GuestKit enables offline disk inspection and repair across QCOW2, VMDK, VDI, and more. AI-powered diagnostics identify boot failures, missing drivers, and configuration drift without ever starting the VM.</p>
<p><strong>GPU Compute for AI.</strong> KubeFabric is our second new product: an enterprise GPU compute fabric built for AI/ML workloads. NVIDIA-native GPU orchestration with RDMA networking and parallel filesystem support, deployed on bare metal. It replaces the functionality teams currently cobble together from EKS, OpenShift AI, Databricks, and CoreWeave -- fully self-hosted and enterprise-ready.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="one-platform-one-api">One Platform, One API<a href="https://hypersdk.cloud/blog/eleven-products#one-platform-one-api" class="hash-link" aria-label="Direct link to One Platform, One API" title="Direct link to One Platform, One API" translate="no">​</a></h2>
<p>Every product in the HyperSDK portfolio shares a common design philosophy: enterprise-grade security (RBAC, audit logging, SOC2 readiness), API-first architecture, and deployment flexibility across on-premises, edge, and cloud environments.</p>
<p>The eleven products map to the natural lifecycle of infrastructure operations:</p>
<ol>
<li class=""><strong>Export</strong> -- HyperSDK Platform, hyper2kvm Engine</li>
<li class=""><strong>Convert</strong> -- VMCraft, GuestKit</li>
<li class=""><strong>Build</strong> -- VirtCraft</li>
<li class=""><strong>Deploy</strong> -- Orchestr8, VMSpawn</li>
<li class=""><strong>Manage</strong> -- v9s, VirtSpawn, KubeVM Studio</li>
<li class=""><strong>Observe</strong> -- Cilium Flow</li>
<li class=""><strong>Compute</strong> -- KubeFabric</li>
</ol>
<p>Teams adopt the products they need today and expand as their infrastructure evolves. No rip-and-replace required.</p>
<p>To learn more about KubeFabric, visit <a class="" href="https://hypersdk.cloud/kube-fabric">KubeFabric</a>, or <a class="" href="https://hypersdk.cloud/contact">schedule a demo</a> with our team.</p>]]></content:encoded>
            <category>Products</category>
            <category>Platform</category>
            <category>Announcement</category>
        </item>
        <item>
            <title><![CDATA[Build a $2.8M Migration Practice: The HyperSDK Partner Program]]></title>
            <link>https://hypersdk.cloud/blog/hypersdk-partner-program-msp-revenue</link>
            <guid>https://hypersdk.cloud/blog/hypersdk-partner-program-msp-revenue</guid>
            <pubDate>Sat, 04 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[There are 300,000+ VMware customers who need to migrate off VMware. Most of them do not have the internal expertise to do it. That is your opportunity.]]></description>
            <content:encoded><![CDATA[<p>There are 300,000+ VMware customers who need to migrate off VMware. Most of them do not have the internal expertise to do it. That is your opportunity.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-market-is-wide-open">The Market Is Wide Open<a href="https://hypersdk.cloud/blog/hypersdk-partner-program-msp-revenue#the-market-is-wide-open" class="hash-link" aria-label="Direct link to The Market Is Wide Open" title="Direct link to The Market Is Wide Open" translate="no">​</a></h2>
<p>The VMware ecosystem represents over $50 billion in annual spending, and it is in transition. Broadcom's acquisition triggered license increases of 2x to 12x across the customer base. Organizations that were paying $350,000 per year for 500 VMs are now facing $1.2 million annual bills under VMware Cloud Foundation.</p>
<p>These companies need help. They need a partner who understands migration, who has the tooling, and who can execute.</p>
<p>Here is the part that matters most: 60% of VMware channel partners were dropped by Broadcom. The customers those partners served are still running VMware. They still need support. And they are actively looking for a new partner who can help them exit.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-economics">The Economics<a href="https://hypersdk.cloud/blog/hypersdk-partner-program-msp-revenue#the-economics" class="hash-link" aria-label="Direct link to The Economics" title="Direct link to The Economics" translate="no">​</a></h2>
<p>The HyperSDK Partner Program is built on a simple model:</p>
<ul>
<li class=""><strong>Competitive partner pricing.</strong> Contact us for partner-tier pricing on HyperSDK platform access. No per-VM licensing fees.</li>
<li class=""><strong>100% service margin.</strong> Every dollar of migration services you deliver is yours. We provide the tools. You provide the expertise.</li>
<li class=""><strong>Year 1 revenue potential: $550,000.</strong> Based on a typical partner engaging 5 to 10 mid-market customers in their first year.</li>
<li class=""><strong>Year 3 revenue potential: $2.8M+.</strong> As your practice matures, recurring migration and managed service contracts compound.</li>
</ul>
<p>This is not a reseller program. You are not marking up software licenses. You are building a professional services practice around the largest infrastructure transition in a decade.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-you-get">What You Get<a href="https://hypersdk.cloud/blog/hypersdk-partner-program-msp-revenue#what-you-get" class="hash-link" aria-label="Direct link to What You Get" title="Direct link to What You Get" translate="no">​</a></h2>
<p>Partners receive full access to the HyperSDK platform, including the export engine, conversion tools, and enterprise dashboard. You also get:</p>
<ul>
<li class="">Technical enablement and certification</li>
<li class="">Co-marketing support and lead sharing</li>
<li class="">Priority access to engineering for complex migrations</li>
<li class="">A dedicated partner success manager</li>
</ul>
<p>The platform handles the hard parts -- vSphere export, automated VirtIO driver injection, bootloader repair, and guest OS fixing. Your team focuses on customer relationships, project management, and the high-value consulting that commands premium rates.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="who-this-is-for">Who This Is For<a href="https://hypersdk.cloud/blog/hypersdk-partner-program-msp-revenue#who-this-is-for" class="hash-link" aria-label="Direct link to Who This Is For" title="Direct link to Who This Is For" translate="no">​</a></h2>
<p>The program is designed for managed service providers, IT consultancies, and systems integrators who serve mid-market and enterprise customers. If your clients run VMware and are facing renewal increases, you already have the relationships. We give you the tools to act on them.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="get-started">Get Started<a href="https://hypersdk.cloud/blog/hypersdk-partner-program-msp-revenue#get-started" class="hash-link" aria-label="Direct link to Get Started" title="Direct link to Get Started" translate="no">​</a></h2>
<p>The application process is straightforward. Contact our partnerships team, and we will schedule a 30-minute call to discuss your practice, your customer base, and how we can support your growth.</p>
<p>The market will not stay this wide open forever. The partners who move now will capture the lion's share of a $50B+ transition.</p>
<p><a class="" href="https://hypersdk.cloud/contact">Join the Partner Program</a> -- competitive pricing, maximum opportunity.</p>]]></content:encoded>
            <category>Partners</category>
            <category>MSP</category>
            <category>Revenue</category>
        </item>
        <item>
            <title><![CDATA[Migrating VMs in Air-Gapped Environments: A Complete Guide]]></title>
            <link>https://hypersdk.cloud/blog/airgap-disconnected-migration</link>
            <guid>https://hypersdk.cloud/blog/airgap-disconnected-migration</guid>
            <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Air-gapped networks -- environments with no physical or logical connection to the internet -- present unique challenges for VM migration. Standard migration tools assume network connectivity for package downloads, driver repositories, and cloud API calls. HyperSDK was designed from the ground up to operate in fully disconnected environments, making it the platform of choice for government, defense, and compliance-restricted organizations.]]></description>
            <content:encoded><![CDATA[<p>Air-gapped networks -- environments with no physical or logical connection to the internet -- present unique challenges for VM migration. Standard migration tools assume network connectivity for package downloads, driver repositories, and cloud API calls. HyperSDK was designed from the ground up to operate in fully disconnected environments, making it the platform of choice for government, defense, and compliance-restricted organizations.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-air-gap-means-in-practice">What Air-Gap Means in Practice<a href="https://hypersdk.cloud/blog/airgap-disconnected-migration#what-air-gap-means-in-practice" class="hash-link" aria-label="Direct link to What Air-Gap Means in Practice" title="Direct link to What Air-Gap Means in Practice" translate="no">​</a></h2>
<p>An air-gapped network has no connection to the internet or to any untrusted network. This is not the same as a firewall-restricted network where outbound connections are blocked -- air-gap means there is no physical path for data to traverse. These environments are found in SCIFs (Sensitive Compartmented Information Facilities), classified defense networks, certain financial trading floors, and critical infrastructure control systems.</p>
<p>For VM migration, air-gap creates several immediate challenges. There is no access to package repositories (apt, yum, pip) for installing tools or dependencies. There is no access to driver download sites for VirtIO or guest agent packages. There is no access to cloud APIs for authentication or storage. There is no way to pull container images from registries. Every component needed for migration must be pre-staged on portable media and physically carried across the air gap.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="how-hypersdk-handles-air-gap-migration">How HyperSDK Handles Air-Gap Migration<a href="https://hypersdk.cloud/blog/airgap-disconnected-migration#how-hypersdk-handles-air-gap-migration" class="hash-link" aria-label="Direct link to How HyperSDK Handles Air-Gap Migration" title="Direct link to How HyperSDK Handles Air-Gap Migration" translate="no">​</a></h2>
<p>HyperSDK's air-gap migration workflow operates in three stages, each designed for complete offline operation.</p>
<p><strong>Stage 1: Offline Export.</strong> On the source network (typically a VMware vSphere environment), HyperSDK connects to vCenter using only local network access. VMs are exported to local storage with full manifest tracking. Each exported artifact -- disk image, configuration file, metadata -- receives a SHA-256 checksum. The export manifest records the operator identity, timestamp, source VM identifier, and hash of every file. No outbound network access is required.</p>
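<p>A minimal sketch of that manifest step, assuming a flat export directory. The field names here are illustrative, not HyperSDK's actual schema:</p>

```python
import getpass
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MB blocks so multi-GB disk images never load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def build_manifest(export_dir: str, source_vm: str, operator: str = None) -> dict:
    """Hash every exported artifact and record operator, timestamp, and source VM,
    mirroring the manifest contents described above."""
    files = {p.name: sha256_of(p)
             for p in sorted(Path(export_dir).iterdir()) if p.is_file()}
    return {
        "operator": operator or getpass.getuser(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source_vm": source_vm,
        "files": files,
    }
```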
<p><strong>Stage 2: Physical Transfer.</strong> Exported VM images are transferred to encrypted portable media. HyperSDK generates a chain-of-custody manifest that tracks every file from source to destination. The manifest includes tamper-evident digital signatures so that any modification during physical transport is detectable. Supported media includes encrypted USB drives, removable NVMe drives, and optical media for smaller workloads.</p>
<p><strong>Stage 3: Offline Import.</strong> On the destination network, hyper2kvm reads exported images directly from the portable media. All conversion tools, VirtIO drivers, and guest OS fixup scripts are pre-packaged in the hyper2kvm installation -- nothing is downloaded at runtime. Disk images are converted from VMDK to qcow2, VirtIO drivers are injected, bootloaders are repaired, and the VM is deployed to libvirt. The entire process runs without a single DNS lookup.</p>
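<p>The import side can be sketched the same way: verify every checksum against the manifest before touching the images, then convert locally with qemu-img. The verification helper below is an illustration, not HyperSDK code:</p>

```python
import hashlib
import subprocess
from pathlib import Path

def verify_media(media_dir: str, manifest: dict) -> list:
    """Return the names of files whose SHA-256 no longer matches the manifest.
    Any non-empty result means the media was altered in transit.
    (Stream the hash, as in the export sketch, for multi-GB images.)"""
    mismatched = []
    for name, expected in manifest["files"].items():
        digest = hashlib.sha256(Path(media_dir, name).read_bytes()).hexdigest()
        if digest != expected:
            mismatched.append(name)
    return mismatched

def convert_offline(vmdk_path: str, qcow2_path: str) -> None:
    """qemu-img ships with the pre-staged tooling; conversion is fully local."""
    subprocess.run(
        ["qemu-img", "convert", "-f", "vmdk", "-O", "qcow2",
         vmdk_path, qcow2_path],
        check=True)
```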
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="fips-and-compliance-considerations">FIPS and Compliance Considerations<a href="https://hypersdk.cloud/blog/airgap-disconnected-migration#fips-and-compliance-considerations" class="hash-link" aria-label="Direct link to FIPS and Compliance Considerations" title="Direct link to FIPS and Compliance Considerations" translate="no">​</a></h2>
<p>Air-gapped environments typically operate under strict compliance frameworks. HyperSDK uses FIPS 140-2 compatible cryptographic modules for all hashing and signature operations. Audit logs are structured JSON, suitable for ingestion into SIEM platforms operating on the same air-gapped network. Every operation is logged with sufficient detail to satisfy NIST SP 800-53 audit requirements, including operator identification, action performed, objects affected, and result status.</p>
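<p>A sketch of what one such structured audit record might look like, carrying the detail listed above (operator, action, objects affected, result). The field names are an assumption; adapt them to your SIEM's schema:</p>

```python
import json
from datetime import datetime, timezone

def audit_record(operator: str, action: str, objects: list, result: str) -> str:
    """Emit one JSON line per operation, suitable for offline SIEM ingestion."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator": operator,
        "action": action,
        "objects": objects,
        "result": result,
    })
```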
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="real-world-use-cases">Real-World Use Cases<a href="https://hypersdk.cloud/blog/airgap-disconnected-migration#real-world-use-cases" class="hash-link" aria-label="Direct link to Real-World Use Cases" title="Direct link to Real-World Use Cases" translate="no">​</a></h2>
<p>We have deployed HyperSDK in air-gapped environments across several sectors: defense contractors migrating from VMware to KVM on classified networks, government agencies modernizing infrastructure in SCIF environments, energy companies migrating control systems on isolated OT networks, and maritime organizations operating on vessels with no satellite connectivity.</p>
<p>In each case, the key to success was pre-staging. All tools, drivers, and dependencies must be validated and packaged before they cross the air gap. HyperSDK provides a single self-contained package that includes everything needed for the complete migration pipeline. If your organization operates in a disconnected environment and needs to migrate VM workloads, <a class="" href="https://hypersdk.cloud/contact">contact our team</a> to discuss your specific requirements.</p>]]></content:encoded>
            <category>Air-Gap</category>
            <category>Security</category>
            <category>Government</category>
        </item>
        <item>
            <title><![CDATA[Industry First: Carbon-Aware VM Migration Scheduling]]></title>
            <link>https://hypersdk.cloud/blog/carbon-aware-migration</link>
            <guid>https://hypersdk.cloud/blog/carbon-aware-migration</guid>
            <pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Data centers account for roughly 1-1.5% of global electricity consumption, and that number is rising. Every VM migration involves sustained disk I/O, network transfer, and CPU-intensive format conversion -- all of which consume energy. What if your migration platform could automatically schedule those workloads for times when the electrical grid is cleanest? That is exactly what HyperSDK's carbon-aware scheduling does.]]></description>
            <content:encoded><![CDATA[<p>Data centers account for roughly 1-1.5% of global electricity consumption, and that number is rising. Every VM migration involves sustained disk I/O, network transfer, and CPU-intensive format conversion -- all of which consume energy. What if your migration platform could automatically schedule those workloads for times when the electrical grid is cleanest? That is exactly what HyperSDK's carbon-aware scheduling does.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-data-center-carbon-problem">The Data Center Carbon Problem<a href="https://hypersdk.cloud/blog/carbon-aware-migration#the-data-center-carbon-problem" class="hash-link" aria-label="Direct link to The Data Center Carbon Problem" title="Direct link to The Data Center Carbon Problem" translate="no">​</a></h2>
<p>A typical VM export and conversion job runs for 10-30 minutes depending on disk size. During that time, the host machine draws significant power for sustained sequential reads, compression, format conversion, and network transfer. When you are migrating hundreds of VMs in a fleet-wide transition away from vSphere, the cumulative energy consumption is substantial.</p>
<p>The carbon intensity of that energy depends entirely on when and where you consume it. A kilowatt-hour of electricity at 2 AM in Norway (dominated by hydropower) produces a fraction of the CO2 compared to the same kilowatt-hour at 5 PM in Poland (dominated by coal). By shifting migration jobs to low-carbon windows, you can dramatically reduce your environmental impact with zero changes to the migration itself.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="electricitymap-integration">ElectricityMap Integration<a href="https://hypersdk.cloud/blog/carbon-aware-migration#electricitymap-integration" class="hash-link" aria-label="Direct link to ElectricityMap Integration" title="Direct link to ElectricityMap Integration" translate="no">​</a></h2>
<p>HyperSDK integrates with ElectricityMap, the leading provider of real-time carbon intensity data for electrical grids worldwide. The integration covers 12 global grid zones spanning North America, Europe, and Asia-Pacific. For each zone, HyperSDK receives real-time carbon intensity measurements in grams of CO2 equivalent per kilowatt-hour (gCO2eq/kWh), along with 24-hour forecasts.</p>
<p>The 12 supported grid zones are:</p>
<ul>
<li class=""><strong>North America</strong>: US-CAL (California), US-TEX (Texas), US-NY (New York), CA-ON (Ontario)</li>
<li class=""><strong>Europe</strong>: DE (Germany), FR (France), GB (Great Britain), NO (Norway), PL (Poland)</li>
<li class=""><strong>Asia-Pacific</strong>: JP-TK (Tokyo), AU-NSW (New South Wales), IN-DL (Delhi)</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="how-it-works">How It Works<a href="https://hypersdk.cloud/blog/carbon-aware-migration#how-it-works" class="hash-link" aria-label="Direct link to How It Works" title="Direct link to How It Works" translate="no">​</a></h2>
<p>When you submit a migration job with carbon-aware scheduling enabled, HyperSDK queries the ElectricityMap API for the current carbon intensity and the 24-hour forecast for your grid zone. If the current intensity is below your configured threshold, the job starts immediately. If not, the scheduler holds the job and monitors the forecast, releasing it during the next predicted low-carbon window.</p>
<p>You can configure the carbon threshold per job or set a global default. The scheduler respects your maximum delay tolerance -- if you set a 12-hour window, the job will run within 12 hours regardless of grid conditions, ensuring migrations complete on time even during high-carbon periods.</p>
<p>The scheduling logic is implemented behind the <code>ScheduleManager</code> port interface, which means it works with any job type -- single VM exports, batch migrations, backup jobs, or any custom workflow you build on the API.</p>
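<p>The decision logic described above can be sketched in a few lines. This is an illustration of the behaviour, not the actual <code>ScheduleManager</code> implementation:</p>

```python
from datetime import datetime, timedelta

def pick_start_time(now, current_intensity, forecast, threshold, max_delay_hours):
    """Decide when to release a job.

    forecast: list of (datetime, gCO2eq_per_kWh) pairs for the next 24 hours.
    """
    if current_intensity <= threshold:
        return now                      # grid is already clean enough
    deadline = now + timedelta(hours=max_delay_hours)
    eligible = [(t, g) for t, g in forecast if now < t <= deadline]
    clean = [t for t, g in eligible if g <= threshold]
    if clean:
        return min(clean)               # next predicted low-carbon window
    # No window below threshold before the deadline: run at the cleanest
    # forecast hour so the job still completes within the delay tolerance.
    return min(eligible, key=lambda tg: tg[1])[0] if eligible else now
```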
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="measured-impact-30-50-co2-reduction">Measured Impact: 30-50% CO2 Reduction<a href="https://hypersdk.cloud/blog/carbon-aware-migration#measured-impact-30-50-co2-reduction" class="hash-link" aria-label="Direct link to Measured Impact: 30-50% CO2 Reduction" title="Direct link to Measured Impact: 30-50% CO2 Reduction" translate="no">​</a></h2>
<p>In testing across multiple grid zones, carbon-aware scheduling reduced migration-related CO2 emissions by 30-50% compared to immediate execution. The savings vary by region -- grids with high renewable penetration like France and Norway see smaller absolute reductions (because the baseline is already clean), while coal-heavy grids like Poland and parts of the US see the largest improvements.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-numbers-100-vms-262-kg-co2-13-trees">The Numbers: 100 VMs, 262 kg CO2, 13 Trees<a href="https://hypersdk.cloud/blog/carbon-aware-migration#the-numbers-100-vms-262-kg-co2-13-trees" class="hash-link" aria-label="Direct link to The Numbers: 100 VMs, 262 kg CO2, 13 Trees" title="Direct link to The Numbers: 100 VMs, 262 kg CO2, 13 Trees" translate="no">​</a></h2>
<p>Here is a concrete example. Migrating 100 VMs with an average disk size of 50 GB each:</p>
<ul>
<li class=""><strong>Total energy consumption</strong>: approximately 85 kWh (including disk I/O, conversion, and transfer)</li>
<li class=""><strong>Without carbon scheduling</strong> (US average grid, 400 gCO2eq/kWh): 34 kg CO2</li>
<li class=""><strong>With carbon scheduling</strong> (shifting to low-carbon windows): 14.6 kg CO2</li>
<li class=""><strong>Annual savings</strong> (assuming quarterly migration batches): 262 kg CO2 per year</li>
<li class=""><strong>Equivalent</strong>: planting 13 trees per year</li>
</ul>
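<p>As a sanity check, the baseline figure follows directly from energy times grid intensity:</p>

```python
def emissions_kg(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """CO2 in kg = energy (kWh) x carbon intensity (gCO2eq/kWh) / 1000."""
    return energy_kwh * intensity_g_per_kwh / 1000

baseline = emissions_kg(85, 400)   # US average grid: 34.0 kg, as above
```

<p>Working backwards, the 14.6 kg scheduled figure corresponds to the batch running at an average intensity of roughly 172 gCO2eq/kWh.</p>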
<p>For organizations running thousands of VMs, these numbers scale linearly. A 1,000-VM migration saves over 2.6 metric tons of CO2 annually -- roughly half the annual emissions of a typical passenger car.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="esg-compliance-reporting">ESG Compliance Reporting<a href="https://hypersdk.cloud/blog/carbon-aware-migration#esg-compliance-reporting" class="hash-link" aria-label="Direct link to ESG Compliance Reporting" title="Direct link to ESG Compliance Reporting" translate="no">​</a></h2>
<p>HyperSDK tracks carbon metrics for every migration job and aggregates them into reports suitable for ESG (Environmental, Social, and Governance) compliance. The carbon dashboard in the web interface shows cumulative emissions, emissions avoided through scheduling, and per-job breakdowns. You can export these reports as CSV or JSON for integration with your organization's sustainability reporting tools.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="api-endpoints-for-carbon-status">API Endpoints for Carbon Status<a href="https://hypersdk.cloud/blog/carbon-aware-migration#api-endpoints-for-carbon-status" class="hash-link" aria-label="Direct link to API Endpoints for Carbon Status" title="Direct link to API Endpoints for Carbon Status" translate="no">​</a></h2>
<p>The carbon tracking system is fully accessible through the REST API:</p>
<ul>
<li class=""><code>GET /api/v1/carbon/status</code> -- current grid carbon intensity for your configured zone</li>
<li class=""><code>GET /api/v1/carbon/forecast</code> -- 24-hour carbon intensity forecast</li>
<li class=""><code>GET /api/v1/carbon/report</code> -- aggregated carbon savings report</li>
<li class=""><code>GET /api/v1/carbon/zones</code> -- list of supported grid zones</li>
<li class=""><code>POST /api/v1/jobs</code> with <code>carbon_aware: true</code> -- submit a carbon-aware job</li>
</ul>
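<p>A minimal client sketch using only the Python standard library. The endpoint paths come from the list above, while the bearer-token auth header and every job field other than <code>carbon_aware</code> are illustrative assumptions:</p>

```python
import json
from urllib import request

BASE = "https://hypersdk.example/api/v1"   # assumption: your deployment's URL

def carbon_status(token: str) -> dict:
    """GET /api/v1/carbon/status -- current intensity for the configured zone."""
    req = request.Request(f"{BASE}/carbon/status",
                          headers={"Authorization": f"Bearer {token}"})
    with request.urlopen(req) as resp:
        return json.load(resp)

def carbon_aware_job(vm_id: str, max_delay_hours: int = 12) -> bytes:
    """Build the body for POST /api/v1/jobs with carbon-aware scheduling on."""
    return json.dumps({
        "vm_id": vm_id,
        "carbon_aware": True,
        "max_delay_hours": max_delay_hours,
    }).encode()
```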
<p>Carbon-aware scheduling is optional and off by default. Enable it by setting <code>carbon.enabled: true</code> in your configuration file and providing an ElectricityMap API key. Once enabled, every migration job can opt in to carbon-aware scheduling individually or you can set it as the default for all jobs.</p>]]></content:encoded>
            <category>Carbon</category>
            <category>Sustainability</category>
            <category>ESG</category>
            <category>Scheduling</category>
        </item>
        <item>
            <title><![CDATA[GPU Passthrough on KVM: Running AI/ML Workloads After Migration]]></title>
            <link>https://hypersdk.cloud/blog/gpu-passthrough-kvm</link>
            <guid>https://hypersdk.cloud/blog/gpu-passthrough-kvm</guid>
            <pubDate>Sat, 28 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[One of the most common concerns we hear from organizations migrating from VMware to KVM is GPU support. VMware's vSphere has mature GPU passthrough and vGPU capabilities, and teams running AI/ML training, inference, VDI, or scientific computing workloads need assurance that these capabilities transfer to KVM. The answer is straightforward: KVM's GPU passthrough via VFIO delivers 98%+ of bare-metal GPU performance, and HyperSDK automates the configuration that traditionally requires manual kernel and libvirt setup.]]></description>
            <content:encoded><![CDATA[<p>One of the most common concerns we hear from organizations migrating from VMware to KVM is GPU support. VMware's vSphere has mature GPU passthrough and vGPU capabilities, and teams running AI/ML training, inference, VDI, or scientific computing workloads need assurance that these capabilities transfer to KVM. The answer is straightforward: KVM's GPU passthrough via VFIO delivers 98%+ of bare-metal GPU performance, and HyperSDK automates the configuration that traditionally requires manual kernel and libvirt setup.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="how-gpu-passthrough-works-on-kvm">How GPU Passthrough Works on KVM<a href="https://hypersdk.cloud/blog/gpu-passthrough-kvm#how-gpu-passthrough-works-on-kvm" class="hash-link" aria-label="Direct link to How GPU Passthrough Works on KVM" title="Direct link to How GPU Passthrough Works on KVM" translate="no">​</a></h2>
<p>GPU passthrough on KVM uses the VFIO (Virtual Function I/O) framework to assign a physical PCI device directly to a virtual machine. The guest VM gets exclusive access to the GPU hardware, bypassing the hypervisor for all GPU operations. This is fundamentally the same approach used by VMware's DirectPath I/O, but with the advantage of being built into the Linux kernel rather than requiring a proprietary hypervisor.</p>
<p>The process involves four steps:</p>
<ol>
<li class="">Enable IOMMU (Intel VT-d or AMD-Vi) in the host BIOS and kernel parameters. IOMMU provides the memory isolation that allows a PCI device to be safely assigned to a VM.</li>
<li class="">Unbind the GPU from the host graphics driver (nouveau or nvidia) and bind it to the vfio-pci driver. This tells the kernel that the GPU is reserved for VM passthrough.</li>
<li class="">Assign the GPU to a VM via libvirt XML configuration, specifying the PCI bus, slot, and function addresses.</li>
<li class="">Install the NVIDIA drivers inside the guest VM, which then sees the GPU as if it were physically installed.</li>
</ol>
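<p>As a sketch, the libvirt assignment step uses a <code>hostdev</code> entry in the domain XML. The PCI address shown is an example only; substitute your GPU's address as reported by <code>lspci</code>:</p>

```xml
<!-- Requires IOMMU enabled on the host kernel command line,
     e.g. intel_iommu=on or amd_iommu=on, and the GPU bound to vfio-pci. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```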
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="performance-near-native-results">Performance: Near-Native Results<a href="https://hypersdk.cloud/blog/gpu-passthrough-kvm#performance-near-native-results" class="hash-link" aria-label="Direct link to Performance: Near-Native Results" title="Direct link to Performance: Near-Native Results" translate="no">​</a></h2>
<p>We have benchmarked GPU passthrough on KVM against bare metal across multiple workloads and GPU models. The results are consistently within 2% of bare-metal performance for compute workloads.</p>
<p>On an NVIDIA A100 80GB running PyTorch ResNet-50 training, KVM with VFIO passthrough delivered 98.3% of bare-metal throughput. CUDA memory bandwidth tests showed 99.1% of native performance. For inference workloads using TensorRT on an NVIDIA T4, latency was within 1% of bare metal at all batch sizes.</p>
<p>The negligible overhead comes from the IOMMU address translation layer, which adds a small fixed cost to DMA operations. For GPU-bound workloads where the vast majority of time is spent on GPU compute, this overhead is effectively invisible.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="vgpu-for-multi-tenant-scenarios">vGPU for Multi-Tenant Scenarios<a href="https://hypersdk.cloud/blog/gpu-passthrough-kvm#vgpu-for-multi-tenant-scenarios" class="hash-link" aria-label="Direct link to vGPU for Multi-Tenant Scenarios" title="Direct link to vGPU for Multi-Tenant Scenarios" translate="no">​</a></h2>
<p>For environments that need multiple VMs to share a single physical GPU -- common in VDI and inference serving -- NVIDIA vGPU provides time-sliced or MIG (Multi-Instance GPU) partitioning. Each VM receives a guaranteed allocation of GPU memory and compute resources.</p>
<p>HyperSDK supports vGPU configuration for NVIDIA GPUs that support it (A100, A30, H100 with MIG; Tesla T4, L40S with time-slicing). The hyper2kvm conversion engine can pre-configure VMs for vGPU profiles during migration, so GPU-accelerated workloads are ready to run immediately after deployment on KVM.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="migrating-gpu-workloads-from-vmware">Migrating GPU Workloads from VMware<a href="https://hypersdk.cloud/blog/gpu-passthrough-kvm#migrating-gpu-workloads-from-vmware" class="hash-link" aria-label="Direct link to Migrating GPU Workloads from VMware" title="Direct link to Migrating GPU Workloads from VMware" translate="no">​</a></h2>
<p>When migrating GPU-accelerated VMs from vSphere to KVM, the GPU assignment changes from VMware's DirectPath I/O to KVM's VFIO passthrough. The guest-side NVIDIA drivers remain the same -- a Windows or Linux VM running CUDA workloads needs only the standard NVIDIA driver package, regardless of whether the underlying hypervisor is ESXi or KVM.</p>
<p>HyperSDK handles the infrastructure side automatically. During export from vSphere, the VM's GPU configuration is captured in the migration manifest. During deployment on KVM, HyperSDK generates the correct libvirt XML with VFIO hostdev entries, verifies IOMMU group isolation, and configures the necessary kernel module parameters. For organizations running AI/ML workloads on VMware, the migration to KVM preserves full GPU performance while eliminating VMware licensing costs.</p>
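<p>For reference, passthrough of a single GPU in libvirt domain XML takes roughly the following shape. This fragment is illustrative; the actual PCI address and device details come from the captured migration manifest:</p>

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- Host PCI address of the GPU; illustrative values -->
    <address domain='0x0000' bus='0x3b' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```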
<p>If your organization runs GPU-accelerated workloads on VMware and is evaluating a KVM migration, <a class="" href="https://hypersdk.cloud/contact">talk to our team</a> about GPU passthrough configuration and performance validation.</p>]]></content:encoded>
            <category>GPU</category>
            <category>AI/ML</category>
            <category>Performance</category>
        </item>
        <item>
            <title><![CDATA[Chunked File Upload with Resume: Browser to Server]]></title>
            <link>https://hypersdk.cloud/blog/chunked-upload-resume</link>
            <guid>https://hypersdk.cloud/blog/chunked-upload-resume</guid>
            <pubDate>Sat, 21 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[When you need to upload a 50 GB VM disk image through a web browser, a single HTTP POST is not going to work. Network interruptions, browser tab crashes, and corporate proxy timeouts all conspire to make large uploads fail. HyperSDK solves this with a chunked upload protocol that splits files into 10 MB pieces and supports resume from the last successful chunk.]]></description>
            <content:encoded><![CDATA[<p>When you need to upload a 50 GB VM disk image through a web browser, a single HTTP POST is not going to work. Network interruptions, browser tab crashes, and corporate proxy timeouts all conspire to make large uploads fail. HyperSDK solves this with a chunked upload protocol that splits files into 10 MB pieces and supports resume from the last successful chunk.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-problem-with-large-uploads">The Problem with Large Uploads<a href="https://hypersdk.cloud/blog/chunked-upload-resume#the-problem-with-large-uploads" class="hash-link" aria-label="Direct link to The Problem with Large Uploads" title="Direct link to The Problem with Large Uploads" translate="no">​</a></h2>
<p>Standard HTML file uploads use a single multipart/form-data request. This works for megabyte-sized files but breaks down at scale. Most reverse proxies have request body limits (often 10-100 MB). Browser memory usage spikes when loading a multi-gigabyte file into an ArrayBuffer. Network drops at 80% completion mean starting over from zero. And there is no way to show granular progress -- the browser either knows the request is in-flight or it does not.</p>
<p>We needed an approach that works reliably for files up to 50 GB, provides real-time progress feedback, and recovers gracefully from network failures.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="protocol-design">Protocol Design<a href="https://hypersdk.cloud/blog/chunked-upload-resume#protocol-design" class="hash-link" aria-label="Direct link to Protocol Design" title="Direct link to Protocol Design" translate="no">​</a></h2>
<p>Our chunked upload protocol uses four API endpoints:</p>
<ol>
<li class="">
<p><strong><code>POST /upload/init</code></strong> -- Initialize a session. The client sends the filename, total size, and preferred chunk size. The server allocates an upload ID and returns the total chunk count.</p>
</li>
<li class="">
<p><strong><code>POST /upload/{id}/chunk/{n}</code></strong> -- Upload a single chunk. The body is raw bytes (<code>application/octet-stream</code>). The server writes the chunk to a temporary file and records it as received.</p>
</li>
<li class="">
<p><strong><code>GET /upload/{id}/status</code></strong> -- Query progress. Returns the count of received chunks, bytes received, and overall percentage. This is the key endpoint for resume -- the client checks which chunks are missing and picks up where it left off.</p>
</li>
<li class="">
<p><strong><code>POST /upload/{id}/complete</code></strong> -- Finalize the upload. The server reassembles all chunks into the final file, computes a SHA-256 checksum, and returns the file path.</p>
</li>
</ol>
<p>Each chunk upload is idempotent. If the same chunk is uploaded twice (e.g., after a retry where the server received it but the client did not get the acknowledgment), the server simply overwrites the existing chunk data. This eliminates an entire class of duplication bugs.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="client-side-implementation">Client-Side Implementation<a href="https://hypersdk.cloud/blog/chunked-upload-resume#client-side-implementation" class="hash-link" aria-label="Direct link to Client-Side Implementation" title="Direct link to Client-Side Implementation" translate="no">​</a></h2>
<p>On the browser side, we use the File API's <code>Blob.slice()</code> method to split the file into chunks without loading the entire file into memory. For a 50 GB file with 10 MB chunks, we create 5,000 slice references without allocating any additional memory -- the browser reads each slice from disk on demand.</p>
<div class="language-typescript codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#F8F8F2;--prism-background-color:#282A36"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-typescript codeBlock_bY9V thin-scrollbar" style="color:#F8F8F2;background-color:#282A36"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#F8F8F2"><span class="token keyword" style="color:rgb(189, 147, 249);font-style:italic">const</span><span class="token plain"> </span><span class="token constant" style="color:rgb(189, 147, 249)">CHUNK_SIZE</span><span class="token plain"> </span><span class="token operator">=</span><span class="token plain"> </span><span class="token number">10</span><span class="token plain"> </span><span class="token operator">*</span><span class="token plain"> </span><span class="token number">1024</span><span class="token plain"> </span><span class="token operator">*</span><span class="token plain"> </span><span class="token number">1024</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"> </span><span class="token comment" style="color:rgb(98, 114, 164)">// 10 MB</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain"></span><span class="token keyword" style="color:rgb(189, 147, 249);font-style:italic">const</span><span class="token plain"> totalChunks </span><span class="token operator">=</span><span class="token plain"> Math</span><span class="token punctuation" style="color:rgb(248, 248, 242)">.</span><span class="token function" style="color:rgb(80, 250, 123)">ceil</span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token plain">file</span><span class="token punctuation" style="color:rgb(248, 248, 242)">.</span><span class="token plain">size </span><span class="token operator">/</span><span class="token plain"> </span><span class="token 
constant" style="color:rgb(189, 147, 249)">CHUNK_SIZE</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain" style="display:inline-block"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain"></span><span class="token keyword" style="color:rgb(189, 147, 249);font-style:italic">for</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token keyword" style="color:rgb(189, 147, 249);font-style:italic">let</span><span class="token plain"> i </span><span class="token operator">=</span><span class="token plain"> </span><span class="token number">0</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"> i </span><span class="token operator">&lt;</span><span class="token plain"> totalChunks</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"> i</span><span class="token operator">++</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">{</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token keyword" style="color:rgb(189, 147, 249);font-style:italic">const</span><span class="token plain"> start </span><span class="token operator">=</span><span class="token plain"> i </span><span class="token operator">*</span><span class="token plain"> </span><span class="token constant" style="color:rgb(189, 147, 249)">CHUNK_SIZE</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"></span><br></span><span 
class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token keyword" style="color:rgb(189, 147, 249);font-style:italic">const</span><span class="token plain"> end </span><span class="token operator">=</span><span class="token plain"> Math</span><span class="token punctuation" style="color:rgb(248, 248, 242)">.</span><span class="token function" style="color:rgb(80, 250, 123)">min</span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token plain">start </span><span class="token operator">+</span><span class="token plain"> </span><span class="token constant" style="color:rgb(189, 147, 249)">CHUNK_SIZE</span><span class="token punctuation" style="color:rgb(248, 248, 242)">,</span><span class="token plain"> file</span><span class="token punctuation" style="color:rgb(248, 248, 242)">.</span><span class="token plain">size</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token keyword" style="color:rgb(189, 147, 249);font-style:italic">const</span><span class="token plain"> chunk </span><span class="token operator">=</span><span class="token plain"> file</span><span class="token punctuation" style="color:rgb(248, 248, 242)">.</span><span class="token function" style="color:rgb(80, 250, 123)">slice</span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token plain">start</span><span class="token punctuation" style="color:rgb(248, 248, 242)">,</span><span class="token plain"> end</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span 
class="token plain" style="display:inline-block"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token keyword" style="color:rgb(189, 147, 249);font-style:italic">await</span><span class="token plain"> </span><span class="token function" style="color:rgb(80, 250, 123)">uploadChunk</span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token plain">uploadId</span><span class="token punctuation" style="color:rgb(248, 248, 242)">,</span><span class="token plain"> i</span><span class="token punctuation" style="color:rgb(248, 248, 242)">,</span><span class="token plain"> chunk</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token function" style="color:rgb(80, 250, 123)">onProgress</span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token punctuation" style="color:rgb(248, 248, 242)">(</span><span class="token plain">i </span><span class="token operator">+</span><span class="token plain"> </span><span class="token number">1</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token plain"> </span><span class="token operator">/</span><span class="token plain"> totalChunks </span><span class="token operator">*</span><span class="token plain"> </span><span class="token number">100</span><span class="token punctuation" style="color:rgb(248, 248, 242)">)</span><span class="token punctuation" style="color:rgb(248, 248, 242)">;</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain"></span><span class="token punctuation" style="color:rgb(248, 248, 
242)">}</span><br></span></code></pre></div></div>
<p>Each chunk is uploaded using <code>XMLHttpRequest</code> rather than <code>fetch()</code>. The reason is XHR's <code>upload.onprogress</code> event, which fires during the request body transmission and provides bytes-sent granularity. With <code>fetch()</code>, you only know when the response arrives -- for a 10 MB chunk on a slow connection, that could be 30 seconds of silence. XHR gives us a smooth progress bar even within a single chunk.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="resume-on-network-failure">Resume on Network Failure<a href="https://hypersdk.cloud/blog/chunked-upload-resume#resume-on-network-failure" class="hash-link" aria-label="Direct link to Resume on Network Failure" title="Direct link to Resume on Network Failure" translate="no">​</a></h2>
<p>When the network drops mid-upload, the React component catches the XHR error and enters a retry state. Before retrying the failed chunk, it queries the status endpoint to confirm the server's view of progress. This handles the case where the chunk was actually received but the response was lost.</p>
<p>The status response includes the count of received chunks. The client computes the set of missing chunks (which may not be contiguous if earlier retries partially succeeded) and uploads only those. In practice, since we upload sequentially, the missing chunks are always a contiguous range from the last received chunk to the end.</p>
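<p>The gap computation itself is simple. A sketch in Go (the daemon's language; the browser client does the equivalent in TypeScript), with illustrative names:</p>

```go
package main

import "fmt"

// missingChunks returns the indices the server has not confirmed, given
// the received set reported by the status endpoint. With sequential
// uploads the result is a contiguous tail, but arbitrary gaps from
// partially successful retries are handled too.
func missingChunks(received map[int]bool, total int) []int {
	var missing []int
	for i := 0; i < total; i++ {
		if !received[i] {
			missing = append(missing, i)
		}
	}
	return missing
}

func main() {
	// Server confirmed chunks 0, 1 and 3 of 5; re-upload only 2 and 4.
	received := map[int]bool{0: true, 1: true, 3: true}
	fmt.Println(missingChunks(received, 5)) // [2 4]
}
```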
<p>The dashboard UI shows the retry state clearly: a yellow progress bar with a "Resuming..." label and the count of remaining chunks. The user can also manually trigger a resume if they closed and reopened the tab -- the upload ID is persisted in <code>localStorage</code>, and the component checks for incomplete uploads on mount.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="server-side-handling">Server-Side Handling<a href="https://hypersdk.cloud/blog/chunked-upload-resume#server-side-handling" class="hash-link" aria-label="Direct link to Server-Side Handling" title="Direct link to Server-Side Handling" translate="no">​</a></h2>
<p>On the server side, each upload session creates a temporary directory with one file per chunk, named by index (e.g., <code>chunk_0000</code>, <code>chunk_0001</code>). The session metadata (filename, total size, expected chunk count, received chunks) is stored in memory and periodically flushed to a JSON file in the temporary directory.</p>
<p>When the client calls the complete endpoint, the server opens the final output file and copies each chunk file in order using <code>io.Copy</code>. After reassembly, it computes the SHA-256 checksum of the final file and verifies that the assembled size matches the expected total. If everything checks out, the temporary directory is removed and the upload is marked as ready.</p>
<p>We chose sequential chunk files over a single sparse file because it is simpler to reason about, works on any filesystem, and makes the status endpoint trivial to implement -- just count the files in the directory.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="progress-tracking-with-callbackprogressreader">Progress Tracking with CallbackProgressReader<a href="https://hypersdk.cloud/blog/chunked-upload-resume#progress-tracking-with-callbackprogressreader" class="hash-link" aria-label="Direct link to Progress Tracking with CallbackProgressReader" title="Direct link to Progress Tracking with CallbackProgressReader" translate="no">​</a></h2>
<p>HyperSDK's <code>pkg/ioutil</code> package provides a <code>CallbackProgressReader</code> that wraps an <code>io.Reader</code> and invokes a callback function on every read. We use this throughout the codebase for tracking progress on both uploads and exports. On the upload path, it feeds the dashboard's progress bar. On the export path, it drives the job progress percentage that appears in the Jobs Table view.</p>
<p>The callback receives the number of bytes read so far and the total expected bytes. The dashboard component uses this to compute transfer rate (bytes per second), estimated time remaining, and a percentage for the progress bar. All of this updates in real time as chunks flow through the reader.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-we-would-do-differently">What We Would Do Differently<a href="https://hypersdk.cloud/blog/chunked-upload-resume#what-we-would-do-differently" class="hash-link" aria-label="Direct link to What We Would Do Differently" title="Direct link to What We Would Do Differently" translate="no">​</a></h2>
<p>If we were starting over, we would add parallel chunk uploads. Currently, chunks are uploaded sequentially because it simplifies the server-side reassembly and avoids complications with upload ordering. But for high-bandwidth connections, uploading 3-4 chunks in parallel would significantly reduce total upload time for large files. The protocol already supports it -- chunks can arrive in any order since they are written to separate files -- but the client currently does not take advantage of this.</p>
<p>We would also add server-side resumability across daemon restarts. Currently, the in-memory session state is lost when the daemon restarts. The chunk files survive on disk, but the metadata needs to be reconstructed. Adding a simple JSON metadata file (which we partially do for crash recovery) and a scan-on-startup routine would make the upload truly durable.</p>
<p>Even without these improvements, the current implementation handles the common case well: upload large files through the browser with progress feedback and automatic recovery from transient network failures.</p>]]></content:encoded>
            <category>Upload</category>
            <category>Features</category>
            <category>React</category>
        </item>
        <item>
            <title><![CDATA[Building Production-Grade System Observability in Go]]></title>
            <link>https://hypersdk.cloud/blog/system-observability</link>
            <guid>https://hypersdk.cloud/blog/system-observability</guid>
            <pubDate>Sat, 14 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[When we set out to build HyperSDK's observability layer, we had a clear constraint: no external dependencies. No Prometheus node exporter, no collectd, no StatsD sidecar. The daemon had to collect, store, analyze, and serve system metrics entirely on its own. This is the story of how we built a self-contained observability stack in Go using nothing but /proc, /sys, and a ring buffer.]]></description>
            <content:encoded><![CDATA[<p>When we set out to build HyperSDK's observability layer, we had a clear constraint: no external dependencies. No Prometheus node exporter, no collectd, no StatsD sidecar. The daemon had to collect, store, analyze, and serve system metrics entirely on its own. This is the story of how we built a self-contained observability stack in Go using nothing but <code>/proc</code>, <code>/sys</code>, and a ring buffer.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="collecting-metrics-from-proc-and-sys">Collecting Metrics from /proc and /sys<a href="https://hypersdk.cloud/blog/system-observability#collecting-metrics-from-proc-and-sys" class="hash-link" aria-label="Direct link to Collecting Metrics from /proc and /sys" title="Direct link to Collecting Metrics from /proc and /sys" translate="no">​</a></h2>
<p>Linux exposes virtually everything about system state through its pseudo-filesystems. CPU utilization comes from <code>/proc/stat</code>, memory from <code>/proc/meminfo</code>, disk I/O from <code>/proc/diskstats</code>, and network throughput from <code>/proc/net/dev</code>. We parse these files at a configurable interval (default 15 seconds) and compute derived metrics like CPU percentage, memory pressure, and I/O wait.</p>
<p>The key insight is that most <code>/proc</code> files report cumulative counters, not instantaneous values. CPU times in <code>/proc/stat</code> are monotonically increasing tick counts. To compute utilization percentage, you need two readings and some arithmetic: <code>usage = (active_delta / total_delta) * 100</code>. We store the previous reading in memory and compute deltas on each collection cycle. The same pattern applies to disk I/O (sectors read/written) and network (bytes received/transmitted).</p>
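<p>The delta arithmetic can be sketched in Go. The field layout follows the first line of <code>/proc/stat</code> (user, nice, system, idle, iowait, irq, softirq, steal); here idle and iowait count as inactive time:</p>

```go
package main

import "fmt"

// cpuPercent computes utilization from two cumulative samples of the
// first line of /proc/stat. Idle and iowait (indices 3 and 4) count as
// inactive; everything else is active time.
func cpuPercent(prev, cur []uint64) float64 {
	var prevTotal, curTotal, prevIdle, curIdle uint64
	for i, v := range prev {
		prevTotal += v
		if i == 3 || i == 4 {
			prevIdle += v
		}
	}
	for i, v := range cur {
		curTotal += v
		if i == 3 || i == 4 {
			curIdle += v
		}
	}
	totalDelta := curTotal - prevTotal
	if totalDelta == 0 {
		return 0
	}
	activeDelta := totalDelta - (curIdle - prevIdle)
	return float64(activeDelta) / float64(totalDelta) * 100
}

func main() {
	prev := []uint64{100, 0, 50, 800, 50, 0, 0, 0}
	cur := []uint64{150, 0, 75, 850, 50, 0, 0, 0}
	fmt.Printf("%.1f%%\n", cpuPercent(prev, cur)) // 60.0%
}
```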
<p>One subtlety we discovered early: <code>/proc/meminfo</code> reports <code>MemAvailable</code> on modern kernels (3.14+), which is a much better indicator of actual available memory than <code>MemFree</code>. The latter ignores buffers, caches, and reclaimable slab memory that the kernel will happily give back under pressure. We use <code>MemAvailable</code> when present and fall back to <code>MemFree + Buffers + Cached</code> on older kernels.</p>
<p>For per-process metrics, we read <code>/proc/[pid]/stat</code> and <code>/proc/[pid]/status</code> for each process. This gives us per-process CPU time, resident set size, virtual memory size, and thread count. We sort by CPU and memory usage and expose the top N processes through the API and dashboard.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-health-score-algorithm">The Health Score Algorithm<a href="https://hypersdk.cloud/blog/system-observability#the-health-score-algorithm" class="hash-link" aria-label="Direct link to The Health Score Algorithm" title="Direct link to The Health Score Algorithm" translate="no">​</a></h2>
<p>Raw metrics are useful for monitoring tools, but operators want a quick answer: is this system healthy? We distill all collected metrics into a single health score from 0 to 100.</p>
<p>The algorithm applies weighted penalties for resource exhaustion. Each resource category (CPU, memory, disk, network) has a threshold and a penalty function. If CPU usage exceeds 90%, the penalty is proportional to how far above the threshold it is. If disk usage exceeds 90%, the penalty is higher because disk exhaustion is harder to recover from.</p>
<p>The formula is straightforward: start at 100, subtract penalties. A system running at 95% CPU, 60% memory, 40% disk, and normal network gets a penalty of roughly 10 points for CPU, resulting in a score of 90. A system at 95% CPU and 92% disk gets penalties from both, dropping to around 70. The score naturally reflects the severity and breadth of resource pressure.</p>
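<p>A toy version of the scoring function, with illustrative thresholds and weights chosen to reproduce the worked examples above (the shipped values may differ):</p>

```go
package main

import "fmt"

// healthScore applies weighted penalties per resource category. The
// thresholds (90% CPU, 85% memory, 90% disk) and per-percent weights
// are illustrative; disk is weighted more heavily because exhaustion is
// harder to recover from.
func healthScore(cpuPct, memPct, diskPct float64) float64 {
	score := 100.0
	if cpuPct > 90 {
		score -= (cpuPct - 90) * 2
	}
	if memPct > 85 {
		score -= (memPct - 85) * 3
	}
	if diskPct > 90 {
		score -= (diskPct - 90) * 10
	}
	if score < 0 {
		score = 0
	}
	return score
}

func main() {
	fmt.Println(healthScore(95, 60, 40)) // 90: only CPU is over threshold
	fmt.Println(healthScore(95, 60, 92)) // 70: CPU and disk both penalized
}
```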
<p>We also track which resources are bottlenecks and expose them in the API response. When the health score drops, the operator immediately knows whether the problem is CPU, memory, disk, or network without digging into charts.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="explain-mode-why-is-cpu-high">Explain Mode: Why Is CPU High?<a href="https://hypersdk.cloud/blog/system-observability#explain-mode-why-is-cpu-high" class="hash-link" aria-label="Direct link to Explain Mode: Why Is CPU High?" title="Direct link to Explain Mode: Why Is CPU High?" translate="no">​</a></h2>
<p>The most interesting feature we built is the explain mode. Instead of just showing that CPU is at 95%, explain mode answers why. It identifies the top contributing processes, correlates them with known patterns (e.g., "qemu-img process suggests a disk conversion is running"), and generates actionable recommendations.</p>
<p>The explain engine works in three stages. First, it collects current and recent metrics. Second, it ranks contributing factors by impact -- for CPU, this means listing processes sorted by CPU usage with context about what they are doing. Third, it applies a rules engine that matches patterns and generates recommendations. If the top CPU consumer is a qemu-img process, the recommendation might be to schedule disk conversions during off-peak hours or to use carbon-aware scheduling to shift the workload.</p>
<p>Explain mode is available for CPU, memory, disk, and network, each with its own set of patterns and recommendations. The rules are implemented as a simple Go slice of rule structs with match functions and response templates. Adding new rules is a matter of adding a struct to the slice.</p>
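<p>The structure is easy to picture. A minimal sketch with one illustrative rule; the actual rule set and field names differ:</p>

```go
package main

import "fmt"

// metricSample carries the context the rules inspect; a simplified
// stand-in for the real diagnostic data.
type metricSample struct {
	TopProcess string
	CPUPercent float64
}

// rule pairs a match function with a recommendation template, mirroring
// the slice-of-structs design described above.
type rule struct {
	Name      string
	Match     func(metricSample) bool
	Recommend string
}

var cpuRules = []rule{
	{
		Name:      "disk-conversion-running",
		Match:     func(s metricSample) bool { return s.TopProcess == "qemu-img" },
		Recommend: "A disk conversion is consuming CPU; consider scheduling conversions off-peak.",
	},
	// Adding a rule is just appending another struct to this slice.
}

func explain(s metricSample) []string {
	var recs []string
	for _, r := range cpuRules {
		if r.Match(s) {
			recs = append(recs, r.Recommend)
		}
	}
	return recs
}

func main() {
	s := metricSample{TopProcess: "qemu-img", CPUPercent: 95}
	for _, rec := range explain(s) {
		fmt.Println(rec)
	}
}
```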
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="time-series-storage-the-ring-buffer">Time-Series Storage: The Ring Buffer<a href="https://hypersdk.cloud/blog/system-observability#time-series-storage-the-ring-buffer" class="hash-link" aria-label="Direct link to Time-Series Storage: The Ring Buffer" title="Direct link to Time-Series Storage: The Ring Buffer" translate="no">​</a></h2>
<p>We store 24 hours of metric history in a ring buffer. The implementation is a fixed-size slice with a write pointer that wraps around. Each data point is a timestamp-value pair. With a 15-second collection interval, we store approximately 5,760 points per metric per day, consuming roughly 92 KB per metric (16 bytes per point).</p>
<p>The ring buffer has several advantages over a database. It requires no external dependencies, has O(1) insert and O(n) scan performance, naturally evicts old data without cleanup jobs, and uses a fixed, predictable amount of memory. For a system that stores 10 metrics, the total memory footprint is under 1 MB.</p>
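<p>A minimal version of the structure, with illustrative names (the shipped implementation adds locking and per-metric bookkeeping):</p>

```go
package main

import "fmt"

// point is one timestamped sample: 16 bytes, matching the sizing in the text.
type point struct {
	ts  int64
	val float64
}

// ring is a fixed-size buffer whose write pointer wraps around; old
// points are overwritten in place, so eviction needs no cleanup job.
type ring struct {
	buf   []point
	next  int // index of the next write
	count int // number of valid points (at most len(buf))
}

func newRing(size int) *ring { return &ring{buf: make([]point, size)} }

func (r *ring) push(p point) {
	r.buf[r.next] = p
	r.next = (r.next + 1) % len(r.buf)
	if r.count < len(r.buf) {
		r.count++
	}
}

// scan returns the stored points oldest-first.
func (r *ring) scan() []point {
	out := make([]point, 0, r.count)
	start := (r.next - r.count + len(r.buf)) % len(r.buf)
	for i := 0; i < r.count; i++ {
		out = append(out, r.buf[(start+i)%len(r.buf)])
	}
	return out
}

func main() {
	r := newRing(3)
	for i := 1; i <= 5; i++ {
		r.push(point{ts: int64(i), val: float64(i)})
	}
	fmt.Println(r.scan()) // only the newest three points survive: 3, 4, 5
}
```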
<p>Time-series queries support a <code>step</code> parameter for downsampling. When querying with <code>step=5m</code>, the API averages all data points within each 5-minute window and returns one point per window. This keeps response sizes manageable when graphing 24 hours of data in the dashboard.</p>
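<p>The windowed averaging can be sketched as follows, assuming points are sorted by timestamp, as a ring buffer scan naturally is (illustrative code, not the shipped query path):</p>

```go
package main

import "fmt"

type point struct {
	ts  int64 // unix seconds
	val float64
}

// downsample averages all points that fall into the same fixed window of
// step seconds and returns one point per window, timestamped at the
// window start.
func downsample(pts []point, step int64) []point {
	var out []point
	for i := 0; i < len(pts); {
		window := pts[i].ts / step
		sum, n := 0.0, 0
		j := i
		for j < len(pts) && pts[j].ts/step == window {
			sum += pts[j].val
			n++
			j++
		}
		out = append(out, point{ts: window * step, val: sum / float64(n)})
		i = j
	}
	return out
}

func main() {
	pts := []point{{0, 1}, {100, 2}, {200, 3}, {300, 4}}
	// Two 300-second windows: avg(1,2,3)=2 at t=0, then 4 at t=300.
	fmt.Println(downsample(pts, 300))
}
```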
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="smart-alerts">Smart Alerts<a href="https://hypersdk.cloud/blog/system-observability#smart-alerts" class="hash-link" aria-label="Direct link to Smart Alerts" title="Direct link to Smart Alerts" translate="no">​</a></h2>
<p>The alert engine evaluates rules against collected metrics on each collection cycle. Default rules cover common failure modes: CPU above 90% for 2 minutes, memory above 85%, disk above 90%, swap above 50%, and OOM kills. Each alert has a severity (info, warning, critical) and a suppression window to prevent duplicate alerts from flooding the notification system.</p>
<p>Alerts are delivered through two channels: the REST API (polled by the dashboard) and webhooks (pushed to external systems). Webhook delivery is resilient to transient failures with exponential backoff retry. The webhook payload includes the alert details, current metric values, and a link to the explain mode endpoint for the affected resource.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="lessons-learned">Lessons Learned<a href="https://hypersdk.cloud/blog/system-observability#lessons-learned" class="hash-link" aria-label="Direct link to Lessons Learned" title="Direct link to Lessons Learned" translate="no">​</a></h2>
<p>Building observability from scratch taught us several things. First, <code>/proc</code> parsing is cheap -- reading and parsing all system metrics takes under a millisecond on modern hardware. There is no reason to rely on external agents for basic system metrics. Second, a ring buffer is an excellent data structure for bounded time-series storage when you do not need persistence across restarts. Third, the explain mode concept -- turning raw metrics into structured diagnostics with recommendations -- is far more useful to operators than raw dashboards. It eliminates the step where someone has to look at five charts and reason about what they mean together.</p>
<p>The observability layer now serves as the foundation for carbon-aware scheduling (which needs to know current system load), the health check endpoint (which powers the dashboard home page), and the alert system (which drives webhook notifications). Building it self-contained means the entire stack deploys as a single binary with zero runtime dependencies.</p>]]></content:encoded>
            <category>Observability</category>
            <category>Go</category>
            <category>Monitoring</category>
        </item>
        <item>
            <title><![CDATA[45 Dashboard Views: A Tour of HyperSDK's Web Interface]]></title>
            <link>https://hypersdk.cloud/blog/45-dashboard-views</link>
            <guid>https://hypersdk.cloud/blog/45-dashboard-views</guid>
            <pubDate>Sat, 07 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[VM migration is inherently complex. You are dealing with multiple source and target hypervisors, disk format conversions, network reconfiguration, and dozens of jobs running in parallel. A CLI is great for automation, but when you need to understand the state of a migration at a glance, a well-designed web interface makes all the difference. That is why HyperSDK ships with 45 dashboard views covering every aspect of the migration lifecycle.]]></description>
            <content:encoded><![CDATA[<p>VM migration is inherently complex. You are dealing with multiple source and target hypervisors, disk format conversions, network reconfiguration, and dozens of jobs running in parallel. A CLI is great for automation, but when you need to understand the state of a migration at a glance, a well-designed web interface makes all the difference. That is why HyperSDK ships with 45 dashboard views covering every aspect of the migration lifecycle.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="five-navigation-groups">Five Navigation Groups<a href="https://hypersdk.cloud/blog/45-dashboard-views#five-navigation-groups" class="hash-link" aria-label="Direct link to Five Navigation Groups" title="Direct link to Five Navigation Groups" translate="no">​</a></h2>
<p>The dashboard organizes its 45 views into five top-level navigation groups. Each group maps to a distinct phase or concern in the migration workflow, so you always know where to find what you need.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="migration-views-20-views">Migration Views (20 views)<a href="https://hypersdk.cloud/blog/45-dashboard-views#migration-views-20-views" class="hash-link" aria-label="Direct link to Migration Views (20 views)" title="Direct link to Migration Views (20 views)" translate="no">​</a></h3>
<p>The Migration group is the largest, with 20 views dedicated to the core workflow of moving VMs between providers. The VM browser displays all discovered virtual machines with OS-specific icons -- Windows, Linux distributions, BSD variants -- so you can visually identify workloads at a glance. Each VM row shows CPU, memory, disk size, and provider-specific metadata.</p>
<p>From the VM browser you can trigger single-click exports. The export workflow view guides you through selecting a target format (qcow2, VMDK, raw, VHD), choosing a destination provider, and configuring conversion options. For vSphere sources, a dedicated VSphere Export Workflow view handles vCenter authentication, datacenter selection, and datastore browsing.</p>
<p>Upload and download views let you push disk images directly through the browser or pull exported artifacts to your local machine. A readiness check view scans source VMs for compatibility issues -- unsupported disk controllers, snapshots that need consolidation, or guest tools that should be removed before migration.</p>
<p>The jobs table shows all active, queued, completed, and failed migration jobs with real-time progress bars, elapsed time, and estimated completion. You can filter by provider, status, or date range, and drill into any job for detailed logs.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="observability-views-8-views">Observability Views (8 views)<a href="https://hypersdk.cloud/blog/45-dashboard-views#observability-views-8-views" class="hash-link" aria-label="Direct link to Observability Views (8 views)" title="Direct link to Observability Views (8 views)" translate="no">​</a></h3>
<p>The Observability group provides system-wide visibility into HyperSDK's health and performance. The health score view displays a single 0-100 number that aggregates metrics from all connected providers, storage backends, and internal services. When the score drops, color-coded indicators show which component is degraded.</p>
<p>The explain mode view is one of the most powerful debugging tools in the dashboard. Select any failed or degraded component and the explain view walks you through the root cause using structured diagnostic data. Instead of dumping raw logs, it presents a chain of causation: which API call failed, what the provider returned, and what you can do to fix it.</p>
<p>Additional observability views include an alerts list with configurable severity thresholds, a metrics dashboard with time-series charts for throughput and latency, a provider status matrix showing connectivity to every supported provider, and debug tools for inspecting internal state.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="infrastructure-views-4-views">Infrastructure Views (4 views)<a href="https://hypersdk.cloud/blog/45-dashboard-views#infrastructure-views-4-views" class="hash-link" aria-label="Direct link to Infrastructure Views (4 views)" title="Direct link to Infrastructure Views (4 views)" translate="no">​</a></h3>
<p>The Infrastructure group covers resources that exist outside the migration workflow but are essential to it. The snapshots view lets you manage VM snapshots across providers -- create, delete, revert, and compare. The storage view shows disk utilization across all configured storage backends with capacity forecasting.</p>
<p>The ISO manager view handles boot media for target VMs, letting you upload, catalog, and attach ISO images. The VM create view provides a form-driven interface for provisioning new virtual machines on any supported provider, useful for testing migration targets before committing to a full cutover.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="tools-views-11-views">Tools Views (11 views)<a href="https://hypersdk.cloud/blog/45-dashboard-views#tools-views-11-views" class="hash-link" aria-label="Direct link to Tools Views (11 views)" title="Direct link to Tools Views (11 views)" translate="no">​</a></h3>
<p>The Tools group contains utility views that support the migration process. The cost estimator lets you compare running costs across providers before migrating -- enter your VM specifications and see monthly estimates for AWS, Azure, GCP, OCI, and others side by side.</p>
<p>The backup scheduler view lets you configure recurring backup jobs with cron-like scheduling. The manifest builder provides a visual editor for creating migration manifests that describe multi-VM migration plans as structured YAML. The API playground is a built-in HTTP client that lets you test any of the 205 API endpoints directly from the browser with auto-populated authentication headers.</p>
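<p>To illustrate the kind of request the API playground issues, here is a minimal sketch using only the Python standard library. The endpoint path, port, and token are hypothetical placeholders, not part of a documented HyperSDK API surface; the point is the auto-populated bearer-token header.</p>

```python
import urllib.request

# Hypothetical endpoint and token for illustration only -- not a
# documented HyperSDK API path.
def build_api_request(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Construct an authenticated GET request, as the playground does."""
    req = urllib.request.Request(f"{base_url}{path}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

req = build_api_request("http://localhost:8080", "/api/v1/jobs", "example-token")
print(req.get_header("Authorization"))  # Bearer example-token
```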
<p>Additional tools include a carbon emissions tracker, a webhook manager for configuring notification endpoints, an audit log viewer, a secrets manager for storing provider credentials securely, and an RBAC configuration view for managing user roles and permissions.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="system-views-2-views">System Views (2 views)<a href="https://hypersdk.cloud/blog/45-dashboard-views#system-views-2-views" class="hash-link" aria-label="Direct link to System Views (2 views)" title="Direct link to System Views (2 views)" translate="no">​</a></h3>
<p>The System group includes the login view with session management and a settings view for configuring global preferences like theme, default provider, and notification channels.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="keyboard-shortcuts-and-theming">Keyboard Shortcuts and Theming<a href="https://hypersdk.cloud/blog/45-dashboard-views#keyboard-shortcuts-and-theming" class="hash-link" aria-label="Direct link to Keyboard Shortcuts and Theming" title="Direct link to Keyboard Shortcuts and Theming" translate="no">​</a></h2>
<p>Every major action in the dashboard has a keyboard shortcut. Press <code>?</code> to see the full shortcut reference. Common shortcuts include <code>n</code> for new migration, <code>j/k</code> for navigating lists, <code>Enter</code> to drill into details, and <code>Escape</code> to go back.</p>
<p>The dashboard supports both dark and light themes with a toggle in the top navigation bar. Theme preference is persisted in local storage and respects the operating system's color scheme preference on first visit.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="built-with-react-18-and-tailwind-css">Built with React 18 and Tailwind CSS<a href="https://hypersdk.cloud/blog/45-dashboard-views#built-with-react-18-and-tailwind-css" class="hash-link" aria-label="Direct link to Built with React 18 and Tailwind CSS" title="Direct link to Built with React 18 and Tailwind CSS" translate="no">​</a></h2>
<p>The dashboard is built with React 18, using functional components and hooks throughout. Styling is handled entirely by Tailwind CSS utility classes, which keeps the design consistent and makes customization straightforward. The chart components use a dedicated <code>ChartContainer</code> wrapper that handles responsive sizing and loading states. State management uses React's built-in context and reducer patterns -- no external state library is required.</p>
<p>The entire dashboard builds to a single static bundle that is embedded in the HyperSDK binary, so there is no separate frontend server to deploy. Just start the daemon and open your browser.</p>]]></content:encoded>
            <category>Dashboard</category>
            <category>React</category>
            <category>Features</category>
        </item>
        <item>
            <title><![CDATA[VMCraft: Why We Built a Pure Python VM Engine (And Ditched libguestfs)]]></title>
            <link>https://hypersdk.cloud/blog/vmcraft-pure-python</link>
            <guid>https://hypersdk.cloud/blog/vmcraft-pure-python</guid>
            <pubDate>Sat, 28 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[When we started building hyper2kvm, the VM conversion engine that powers HyperSDK migrations, we assumed we would use libguestfs for disk image manipulation. It is the standard tool, backed by Red Hat, with a mature API and wide Linux distribution support. Six months in, we replaced it entirely with VMCraft -- a pure Python VM manipulation engine that reads and writes disk images directly without booting an appliance. This post explains why, and what we gained.]]></description>
            <content:encoded><![CDATA[<p>When we started building hyper2kvm, the VM conversion engine that powers HyperSDK migrations, we assumed we would use libguestfs for disk image manipulation. It is the standard tool, backed by Red Hat, with a mature API and wide Linux distribution support. Six months in, we replaced it entirely with VMCraft -- a pure Python VM manipulation engine that reads and writes disk images directly without booting an appliance. This post explains why, and what we gained.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-libguestfs-problem">The libguestfs Problem<a href="https://hypersdk.cloud/blog/vmcraft-pure-python#the-libguestfs-problem" class="hash-link" aria-label="Direct link to The libguestfs Problem" title="Direct link to The libguestfs Problem" translate="no">​</a></h2>
<p>libguestfs works by booting a lightweight virtual machine (the "appliance") every time you need to access a disk image. This appliance contains a Linux kernel, a minimal userspace, and filesystem drivers. Your application communicates with the appliance over a socket, sending commands to read files, write data, or modify configurations inside the disk image.</p>
<p>This architecture has three fundamental problems that became deal-breakers for hyper2kvm.</p>
<p><strong>Startup latency.</strong> Every time you open a disk image, libguestfs boots the appliance. This takes 15-30 seconds depending on hardware. For a migration pipeline that processes hundreds of VMs, this adds hours of idle waiting. We measured 22 seconds average appliance boot time on our reference hardware -- multiplied by 200 VMs, that is 73 minutes spent doing nothing but waiting for appliances to start.</p>
<p><strong>Memory consumption.</strong> The appliance requires 512 MB or more of memory per instance. Running parallel conversions -- essential for large-scale migrations -- means allocating gigabytes of memory just for the disk manipulation layer, before you account for the actual conversion work.</p>
<p><strong>Deployment complexity.</strong> The appliance depends on supermin, a specific kernel version, and /dev/kvm access on the host. In containerized deployments, this means running privileged containers with device access -- a non-starter for many enterprise security policies. In air-gapped environments, pre-staging the appliance kernel and initrd adds another layer of complexity to an already constrained deployment process.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-vmcraft-architecture">The VMCraft Architecture<a href="https://hypersdk.cloud/blog/vmcraft-pure-python#the-vmcraft-architecture" class="hash-link" aria-label="Direct link to The VMCraft Architecture" title="Direct link to The VMCraft Architecture" translate="no">​</a></h2>
<p>VMCraft replaces the appliance with direct block device access. The architecture is simple: Python application code calls VMCraft APIs, which use qemu-nbd to expose the disk image as a block device, then parse filesystem structures (ext4, NTFS, XFS, FAT32) directly in Python.</p>
<p>This sounds like it should be slower -- Python parsing filesystem metadata instead of a native kernel driver. In practice, it is dramatically faster for the operations that matter in a migration pipeline, because there is no 22-second appliance boot. VMCraft opens a disk image in under one second.</p>
<p>For bulk operations (read a registry key, inject a driver file, modify fstab), the per-operation overhead of Python parsing versus kernel drivers is negligible compared to the I/O time. The bottleneck is always disk I/O, not CPU time spent parsing superblock structures.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="480-api-functions">480+ API Functions<a href="https://hypersdk.cloud/blog/vmcraft-pure-python#480-api-functions" class="hash-link" aria-label="Direct link to 480+ API Functions" title="Direct link to 480+ API Functions" translate="no">​</a></h2>
<p>VMCraft provides 480+ functions organized across six categories that cover every operation needed for VM migration.</p>
<p><strong>Disk image operations</strong> handle format support (QCOW2, VMDK, VHD, VHDX, RAW), partition table parsing (MBR and GPT), and image manipulation (resize, compact, convert, snapshot).</p>
<p><strong>Filesystem access</strong> provides read/write access to ext2/3/4, XFS, NTFS, and FAT32 filesystems. You can list directories, copy files in and out, modify permissions, and create or delete entries -- all without mounting the filesystem on the host.</p>
<p><strong>Windows-specific operations</strong> include full Windows registry hive read/write, VirtIO driver injection into the driver store, BCD (Boot Configuration Data) editing for bootloader repair, service configuration, and Sysprep/unattend.xml generation.</p>
<p><strong>Linux-specific operations</strong> cover fstab modification, GRUB and systemd-boot configuration, kernel module injection, network configuration across NetworkManager and systemd-networkd, and cloud-init seed injection.</p>
<p><strong>Guest OS detection</strong> automatically identifies the operating system type, version, installed applications, hardware drivers, network configuration, and boot method (BIOS vs. UEFI) -- critical metadata for migration planning.</p>
<p><strong>Security and integrity</strong> functions provide SHA-256 checksum generation and verification, disk image encryption, secure wipe of free space, and certificate injection.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="performance-numbers">Performance Numbers<a href="https://hypersdk.cloud/blog/vmcraft-pure-python#performance-numbers" class="hash-link" aria-label="Direct link to Performance Numbers" title="Direct link to Performance Numbers" translate="no">​</a></h2>
<p>We benchmarked VMCraft against libguestfs across the operations most common in our migration pipeline.</p>
<p>Opening a disk image and reading a single file: libguestfs averaged 23.4 seconds (dominated by appliance boot), VMCraft averaged 0.8 seconds -- a 29x improvement. Injecting VirtIO drivers into a Windows VM: libguestfs took 31.2 seconds, VMCraft took 4.6 seconds -- 6.8x faster. Modifying fstab and GRUB configuration in a Linux VM: libguestfs took 24.1 seconds, VMCraft took 3.2 seconds -- 7.5x faster.</p>
<p>For batch processing of 200 VMs (the common enterprise migration scenario), the cumulative time savings exceeded 90 minutes. More importantly, VMCraft's lower memory footprint (under 50 MB versus 512 MB+) allows running more parallel conversions on the same hardware, further reducing total migration time.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="container-friendly-by-design">Container-Friendly by Design<a href="https://hypersdk.cloud/blog/vmcraft-pure-python#container-friendly-by-design" class="hash-link" aria-label="Direct link to Container-Friendly by Design" title="Direct link to Container-Friendly by Design" translate="no">​</a></h2>
<p>VMCraft runs inside standard containers without /dev/kvm, without privileged mode, and without device access. The only requirement is qemu-nbd, which operates as a regular userspace process. This makes VMCraft compatible with Kubernetes pods, rootless Podman containers, and CI/CD pipeline runners -- environments where libguestfs appliance boot is typically impossible or requires security exceptions.</p>
<p>For organizations evaluating hyper2kvm for VM migration, VMCraft's container-friendly architecture means the conversion pipeline can run anywhere your container orchestrator runs, without special host configuration. <a class="" href="https://hypersdk.cloud/contact">Schedule a demo</a> to see VMCraft's 480+ APIs in action.</p>]]></content:encoded>
            <category>VMCraft</category>
            <category>Engineering</category>
            <category>Python</category>
        </item>
        <item>
            <title><![CDATA[The Complete VM Migration Checklist (2026 Edition)]]></title>
            <link>https://hypersdk.cloud/blog/migration-checklist</link>
            <guid>https://hypersdk.cloud/blog/migration-checklist</guid>
            <pubDate>Sat, 21 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[VM migration projects fail most often due to inadequate planning, not technical limitations. Whether you are migrating 20 VMs or 2,000, following a structured checklist ensures nothing falls through the cracks. This guide covers every phase of a production VM migration, from initial assessment through post-migration validation.]]></description>
            <content:encoded><![CDATA[<p>VM migration projects fail most often due to inadequate planning, not technical limitations. Whether you are migrating 20 VMs or 2,000, following a structured checklist ensures nothing falls through the cracks. This guide covers every phase of a production VM migration, from initial assessment through post-migration validation.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="phase-1-pre-migration-assessment">Phase 1: Pre-Migration Assessment<a href="https://hypersdk.cloud/blog/migration-checklist#phase-1-pre-migration-assessment" class="hash-link" aria-label="Direct link to Phase 1: Pre-Migration Assessment" title="Direct link to Phase 1: Pre-Migration Assessment" translate="no">​</a></h2>
<p>Before touching a single VM, your team needs a complete picture of what you are working with and where you are going.</p>
<p><strong>Inventory and Discovery</strong></p>
<ul class="contains-task-list containsTaskList_mC6p">
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Generate a complete VM inventory from vCenter, Hyper-V Manager, or your current hypervisor management tool</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Document each VM's resource allocation: vCPUs, memory, disk size, and network configuration</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Identify OS versions and editions for every VM (Windows Server 2016/2019/2022, RHEL 7/8/9, Ubuntu 20.04/22.04, etc.)</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Map network dependencies: VLANs, static IPs, DNS entries, firewall rules, and load balancer configurations</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Identify storage dependencies: shared storage, NFS mounts, iSCSI targets, and local disk layouts</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Catalog installed applications and their licensing requirements on the target platform</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Flag VMs with hardware dependencies: USB passthrough, GPU passthrough, SR-IOV, or TPM requirements</li>
</ul>
<p><strong>Risk Classification</strong></p>
<ul class="contains-task-list containsTaskList_mC6p">
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Classify each VM by criticality: Tier 1 (mission-critical), Tier 2 (important), Tier 3 (non-critical)</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Identify VMs that require zero-downtime migration vs. those that can tolerate a maintenance window</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Document compliance requirements: HIPAA, PCI-DSS, SOX, FedRAMP, or internal security policies</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Establish rollback criteria: what conditions trigger a rollback, and what is the rollback procedure</li>
</ul>
<p><strong>Target Environment Preparation</strong></p>
<ul class="contains-task-list containsTaskList_mC6p">
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Provision target hypervisor infrastructure (KVM hosts, Proxmox cluster, or cloud accounts)</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Configure networking on target: bridges, VLANs, DNS, and DHCP</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Set up shared storage if required: Ceph, NFS, or local ZFS pools</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Install and configure HyperSDK on a management node with API access to both source and target environments</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Verify connectivity between source hypervisor, HyperSDK management node, and target infrastructure</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Configure backup strategy for the target environment before migration begins</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="phase-2-pilot-migration">Phase 2: Pilot Migration<a href="https://hypersdk.cloud/blog/migration-checklist#phase-2-pilot-migration" class="hash-link" aria-label="Direct link to Phase 2: Pilot Migration" title="Direct link to Phase 2: Pilot Migration" translate="no">​</a></h2>
<p>Never go straight to production. A pilot migration with non-critical VMs validates your process and surfaces issues early.</p>
<p><strong>Pilot Execution</strong></p>
<ul class="contains-task-list containsTaskList_mC6p">
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Select 5-10 Tier 3 VMs representing your most common OS and application configurations</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Run a test export using HyperSDK to verify connectivity and credential configuration</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Export and convert each pilot VM, documenting the time required per VM and any errors encountered</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Verify disk image integrity using SHA-256 checksums from the export manifest</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Deploy pilot VMs to the target environment and verify first-boot success</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Test application functionality on each migrated VM: services running, network connectivity, data integrity</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Measure performance baselines: CPU utilization, memory usage, disk I/O, and network throughput</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Compare performance baselines against pre-migration measurements</li>
</ul>
<p><strong>Pilot Review</strong></p>
<ul class="contains-task-list containsTaskList_mC6p">
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Document any VMs that required manual intervention after migration</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Identify OS-specific issues: driver installation, bootloader configuration, or service startup failures</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Calculate actual migration throughput (GB/hour) to estimate production migration timelines</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Update the migration plan based on pilot findings</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Obtain stakeholder sign-off on pilot results before proceeding to production migration</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="phase-3-production-migration">Phase 3: Production Migration<a href="https://hypersdk.cloud/blog/migration-checklist#phase-3-production-migration" class="hash-link" aria-label="Direct link to Phase 3: Production Migration" title="Direct link to Phase 3: Production Migration" translate="no">​</a></h2>
<p>With pilot results validated, execute the production migration in planned waves.</p>
<p><strong>Wave Planning</strong></p>
<ul class="contains-task-list containsTaskList_mC6p">
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Group VMs into migration waves based on application dependencies and criticality</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Schedule each wave during approved maintenance windows</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Notify application owners and support teams of migration schedules</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Ensure rollback resources are available: source VMs remain running until target validation is complete</li>
</ul>
<p><strong>Migration Execution</strong></p>
<ul class="contains-task-list containsTaskList_mC6p">
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Execute pre-migration VM snapshots on the source hypervisor as a safety net</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Run HyperSDK export jobs for each wave, monitoring progress through the dashboard</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Verify export manifests: disk checksums, metadata integrity, and conversion status</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Deploy converted VMs to target infrastructure</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Validate first-boot success for each VM (HyperSDK reports this automatically)</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Run application-level health checks immediately after deployment</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Update DNS records, load balancer configurations, and monitoring systems to point to migrated VMs</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Verify backup jobs are running on the target environment for each migrated VM</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="phase-4-post-migration-validation">Phase 4: Post-Migration Validation<a href="https://hypersdk.cloud/blog/migration-checklist#phase-4-post-migration-validation" class="hash-link" aria-label="Direct link to Phase 4: Post-Migration Validation" title="Direct link to Phase 4: Post-Migration Validation" translate="no">​</a></h2>
<p>The migration is not complete until everything is verified and documented.</p>
<p><strong>Validation Checklist</strong></p>
<ul class="contains-task-list containsTaskList_mC6p">
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Confirm all VMs are running and accessible on the target platform</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Verify application functionality with end-users or application owners</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Compare performance metrics against pre-migration baselines</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Confirm backup and disaster recovery procedures are operational on the target</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Validate monitoring and alerting: all migrated VMs are visible in your monitoring system</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Update CMDB and asset management records with new infrastructure details</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Verify compliance controls: audit logging, access controls, and encryption are configured on the target</li>
</ul>
<p><strong>Cleanup</strong></p>
<ul class="contains-task-list containsTaskList_mC6p">
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Retain source VM snapshots for a defined rollback period (typically 7-30 days)</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Decommission source VMs after the rollback window expires</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Revoke source hypervisor credentials from HyperSDK once migration is finalized</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Generate a final migration report: VMs migrated, success rate, time elapsed, and cost savings</li>
<li class="task-list-item"><input type="checkbox" disabled=""> <!-- -->Conduct a retrospective with the migration team to document lessons learned</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="timeline-guidance">Timeline Guidance<a href="https://hypersdk.cloud/blog/migration-checklist#timeline-guidance" class="hash-link" aria-label="Direct link to Timeline Guidance" title="Direct link to Timeline Guidance" translate="no">​</a></h2>
<p>Based on production migrations using HyperSDK:</p>
<table><thead><tr><th>Deployment Size</th><th>Pilot Phase</th><th>Production Migration</th><th>Total Timeline</th></tr></thead><tbody><tr><td>20-50 VMs</td><td>1 week</td><td>1-2 weeks</td><td>3-4 weeks</td></tr><tr><td>50-200 VMs</td><td>1-2 weeks</td><td>3-4 weeks</td><td>5-6 weeks</td></tr><tr><td>200-500 VMs</td><td>2 weeks</td><td>4-8 weeks</td><td>6-10 weeks</td></tr><tr><td>500+ VMs</td><td>2-3 weeks</td><td>8-16 weeks</td><td>10-20 weeks</td></tr></tbody></table>
<p>The single most important factor in migration success is thorough pre-migration assessment. Invest the time upfront, and the execution phases will run predictably.</p>]]></content:encoded>
            <category>Migration</category>
            <category>Checklist</category>
            <category>Guide</category>
        </item>
        <item>
            <title><![CDATA[From vSphere to KVM in 5 Minutes: A Step-by-Step Guide]]></title>
            <link>https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes</link>
            <guid>https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes</guid>
            <pubDate>Sat, 14 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[This guide walks you through migrating a virtual machine from VMware vSphere to KVM using HyperSDK. The entire process -- from connecting to vCenter to booting the converted VM on libvirt -- takes about five minutes for a typical workload.]]></description>
            <content:encoded><![CDATA[<p>This guide walks you through migrating a virtual machine from VMware vSphere to KVM using HyperSDK. The entire process -- from connecting to vCenter to booting the converted VM on libvirt -- takes about five minutes for a typical workload.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="prerequisites">Prerequisites<a href="https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes#prerequisites" class="hash-link" aria-label="Direct link to Prerequisites" title="Direct link to Prerequisites" translate="no">​</a></h2>
<p>Before you begin, make sure you have:</p>
<ul>
<li class=""><strong>HyperSDK installed</strong> and the <code>hypervisord</code> daemon running (see the installation guide)</li>
<li class=""><strong>vCenter Server credentials</strong> with at least read-only access to the VMs you want to migrate</li>
<li class=""><strong>libvirt and qemu-kvm</strong> installed on the target host</li>
<li class=""><strong>Sufficient disk space</strong> for the exported qcow2 image (roughly the same size as the source VMDK)</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="step-1-configure-vcenter-connection">Step 1: Configure vCenter Connection<a href="https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes#step-1-configure-vcenter-connection" class="hash-link" aria-label="Direct link to Step 1: Configure vCenter Connection" title="Direct link to Step 1: Configure vCenter Connection" translate="no">​</a></h2>
<p>Edit your HyperSDK configuration file at <code>/etc/hypersdk/config.yaml</code> (or <code>~/.config/hypersdk/config.yaml</code> for user-level configuration) and add your vCenter credentials:</p>
<div class="language-yaml codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#F8F8F2;--prism-background-color:#282A36"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-yaml codeBlock_bY9V thin-scrollbar" style="color:#F8F8F2;background-color:#282A36"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#F8F8F2"><span class="token key atrule">providers</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token key atrule">vsphere</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token key atrule">enabled</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token boolean important">true</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token key atrule">endpoint</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"vcenter.example.com"</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token key atrule">username</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"administrator@vsphere.local"</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token key atrule">password</span><span 
class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"your-password"</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token key atrule">datacenter</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"DC1"</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token key atrule">insecure</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token boolean important">false</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain" style="display:inline-block"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token key atrule">kvm</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token key atrule">enabled</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token boolean important">true</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token key atrule">connection</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"qemu:///system"</span><span class="token plain"></span><br></span><span class="token-line" 
style="color:#F8F8F2"><span class="token plain">    </span><span class="token key atrule">storage_pool</span><span class="token punctuation" style="color:rgb(248, 248, 242)">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"default"</span><br></span></code></pre></div></div>
<p>After saving the file, restart the daemon:</p>
<div class="language-bash codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#F8F8F2;--prism-background-color:#282A36"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-bash codeBlock_bY9V thin-scrollbar" style="color:#F8F8F2;background-color:#282A36"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#F8F8F2"><span class="token function" style="color:rgb(80, 250, 123)">sudo</span><span class="token plain"> systemctl restart hypervisord</span><br></span></code></pre></div></div>
<p>HyperSDK will validate the connection to vCenter on startup and report any issues in the system log.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="step-2-browse-vms-in-the-dashboard">Step 2: Browse VMs in the Dashboard<a href="https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes#step-2-browse-vms-in-the-dashboard" class="hash-link" aria-label="Direct link to Step 2: Browse VMs in the Dashboard" title="Direct link to Step 2: Browse VMs in the Dashboard" translate="no">​</a></h2>
<p>Open your browser and go to <code>http://localhost:8080/web/dashboard/</code>. Log in with your configured credentials, then open <strong>Migration &gt; VMs</strong> in the left sidebar.</p>
<p>The VM browser displays all virtual machines discovered from your vCenter. Each VM is shown with its operating system icon (Windows, Ubuntu, CentOS, Debian, and others are all recognized automatically), along with CPU count, memory allocation, total disk size, and power state. You can filter by name, OS type, or power state using the search bar at the top.</p>
<p>Find the VM you want to migrate and verify its details. HyperSDK shows a readiness indicator for each VM -- a green checkmark means the VM is ready for export, while a yellow warning indicates potential issues like active snapshots that should be consolidated first.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="step-3-export-the-vm">Step 3: Export the VM<a href="https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes#step-3-export-the-vm" class="hash-link" aria-label="Direct link to Step 3: Export the VM" title="Direct link to Step 3: Export the VM" translate="no">​</a></h2>
<p>Click the <strong>Export</strong> button next to your chosen VM. The export dialog opens with the following options:</p>
<ul>
<li class=""><strong>Target format</strong>: Select <code>qcow2</code> for KVM (this is the default)</li>
<li class=""><strong>Compression</strong>: Enable to reduce transfer size (recommended for network transfers)</li>
<li class=""><strong>Destination</strong>: Choose <code>local</code> to save to the HyperSDK server, or select a configured target provider</li>
</ul>
<p>Click <strong>Start Export</strong> to begin. HyperSDK connects to vCenter, creates a temporary snapshot (if the VM is running), streams the disk data, and converts from VMDK to qcow2 format in a single pass. The original VM is not modified or powered off during this process.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="step-4-monitor-progress">Step 4: Monitor Progress<a href="https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes#step-4-monitor-progress" class="hash-link" aria-label="Direct link to Step 4: Monitor Progress" title="Direct link to Step 4: Monitor Progress" translate="no">​</a></h2>
<p>Navigate to <strong>Migration &gt; Jobs</strong> in the sidebar to watch the export progress. The jobs table shows:</p>
<ul>
<li class=""><strong>Progress bar</strong> with percentage complete</li>
<li class=""><strong>Transfer speed</strong> in MB/s</li>
<li class=""><strong>Elapsed time</strong> and <strong>estimated time remaining</strong></li>
<li class=""><strong>Current phase</strong> (snapshot, stream, convert, finalize)</li>
</ul>
<p>For a 50 GB VM on a gigabit network, expect the export to take roughly 7 minutes at line rate -- less if compression is enabled or the disks are thin-provisioned. The progress updates in real time through the dashboard's WebSocket connection -- no need to refresh the page.</p>
<p>If anything goes wrong, click on the job row to see detailed logs. The explain mode (available under <strong>Observability &gt; Explain</strong>) can diagnose common issues like network timeouts, permission errors, or storage space problems.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="step-5-download-the-converted-image">Step 5: Download the Converted Image<a href="https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes#step-5-download-the-converted-image" class="hash-link" aria-label="Direct link to Step 5: Download the Converted Image" title="Direct link to Step 5: Download the Converted Image" translate="no">​</a></h2>
<p>Once the job completes, navigate to <strong>Migration &gt; Downloads</strong>. Your converted qcow2 file is listed with its size, checksum, and creation timestamp. Click <strong>Download</strong> to transfer it to your local machine, or note the server-side path if your KVM host is the same machine running HyperSDK.</p>
<p>The exported file includes a manifest (<code>export-manifest.json</code>) with metadata about the source VM: original disk layout, network configuration, boot firmware (BIOS or UEFI), and any guest customization that was applied.</p>
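<p>The manifest schema is not documented in this post, but based on the fields described above, an illustrative sketch might look like the following (all field names are assumptions, not a documented format):</p>

```json
{
  "source": {
    "provider": "vsphere",
    "vm_name": "my-source-vm",
    "firmware": "uefi"
  },
  "disks": [
    { "device": "disk-0", "format": "qcow2", "size_gb": 50 }
  ],
  "networks": [
    { "name": "VM Network", "mac": "00:50:56:aa:bb:cc" }
  ],
  "guest_customization": ["virtio-drivers", "bootloader-repair"]
}
```

<p>Whatever the exact schema, the firmware and network fields are the ones to check before deployment, since they determine the <code>virt-install</code> options you will need.</p>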
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="step-6-deploy-to-libvirt">Step 6: Deploy to libvirt<a href="https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes#step-6-deploy-to-libvirt" class="hash-link" aria-label="Direct link to Step 6: Deploy to libvirt" title="Direct link to Step 6: Deploy to libvirt" translate="no">​</a></h2>
<p>On your KVM host, use <code>virt-install</code> to create a new VM from the exported qcow2 image:</p>
<div class="language-bash codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#F8F8F2;--prism-background-color:#282A36"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-bash codeBlock_bY9V thin-scrollbar" style="color:#F8F8F2;background-color:#282A36"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#F8F8F2"><span class="token plain">virt-install </span><span class="token punctuation" style="color:rgb(248, 248, 242)">\</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--name</span><span class="token plain"> my-migrated-vm </span><span class="token punctuation" style="color:rgb(248, 248, 242)">\</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--memory</span><span class="token plain"> </span><span class="token number">4096</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">\</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--vcpus</span><span class="token plain"> </span><span class="token number">2</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">\</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--disk</span><span class="token plain"> </span><span class="token assign-left variable" style="color:rgb(189, 147, 
249);font-style:italic">path</span><span class="token operator">=</span><span class="token plain">/var/lib/libvirt/images/exported-vm.qcow2,format</span><span class="token operator">=</span><span class="token plain">qcow2 </span><span class="token punctuation" style="color:rgb(248, 248, 242)">\</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--import</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">\</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  --os-variant ubuntu22.04 </span><span class="token punctuation" style="color:rgb(248, 248, 242)">\</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--network</span><span class="token plain"> </span><span class="token assign-left variable" style="color:rgb(189, 147, 249);font-style:italic">bridge</span><span class="token operator">=</span><span class="token plain">br0 </span><span class="token punctuation" style="color:rgb(248, 248, 242)">\</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--noautoconsole</span><br></span></code></pre></div></div>
<p>The <code>--import</code> flag tells <code>virt-install</code> to skip installation and boot directly from the existing disk image. Adjust the <code>--os-variant</code>, <code>--memory</code>, and <code>--vcpus</code> values to match your source VM's configuration (these are listed in the export manifest).</p>
<p>The VM should boot successfully on the first attempt. Linux VMs generally require no additional changes. The kernel detects the new virtio hardware and loads the appropriate drivers automatically.</p>
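<p>Under the hood, <code>virt-install</code> generates a libvirt domain definition; the virtio disk attachment it produces looks roughly like this (the path is illustrative, and the exact XML varies by version):</p>

```xml
<disk type='file' device='disk'>
  <!-- qcow2 image attached via the paravirtualized virtio bus -->
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/exported-vm.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

<p>The <code>bus='virtio'</code> target is what the guest kernel detects at boot; you can inspect the full definition afterwards with <code>virsh dumpxml my-migrated-vm</code>.</p>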
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="batch-migration-for-multiple-vms">Batch Migration for Multiple VMs<a href="https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes#batch-migration-for-multiple-vms" class="hash-link" aria-label="Direct link to Batch Migration for Multiple VMs" title="Direct link to Batch Migration for Multiple VMs" translate="no">​</a></h2>
<p>For migrating multiple VMs at once, use the <strong>Manifest Builder</strong> under <strong>Tools &gt; Manifest Builder</strong> in the dashboard. Create a migration manifest that lists all the VMs you want to export, their target formats, and any per-VM configuration overrides. Submit the manifest and HyperSDK will process all exports as a batch job, running multiple conversions in parallel up to your configured concurrency limit.</p>
<p>Alternatively, use the CLI for scripted batch migrations:</p>
<div class="language-bash codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#F8F8F2;--prism-background-color:#282A36"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-bash codeBlock_bY9V thin-scrollbar" style="color:#F8F8F2;background-color:#282A36"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#F8F8F2"><span class="token plain">hyperctl </span><span class="token builtin class-name" style="color:rgb(189, 147, 249)">export</span><span class="token plain"> </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--provider</span><span class="token plain"> vsphere </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--format</span><span class="token plain"> qcow2 </span><span class="token parameter variable" style="color:rgb(189, 147, 249);font-style:italic">--batch</span><span class="token plain"> manifest.yaml</span><br></span></code></pre></div></div>
<p>The batch job respects dependency ordering, so if you have VMs that must be migrated in a specific sequence (for example, a database server before its application servers), you can define those dependencies in the manifest.</p>
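<p>The manifest format itself is not shown in this post; a sketch of what a batch manifest with dependency ordering could look like (field names are illustrative assumptions, not the documented schema):</p>

```yaml
# Illustrative batch manifest -- field names are assumptions.
exports:
  - vm: db-server-01
    format: qcow2
  - vm: app-server-01
    format: qcow2
    depends_on:
      - db-server-01   # migrate the database before its app servers
  - vm: app-server-02
    format: qcow2
    depends_on:
      - db-server-01
concurrency: 4          # parallel conversions, up to the configured limit
```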
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="windows-vm-considerations">Windows VM Considerations<a href="https://hypersdk.cloud/blog/vsphere-to-kvm-5-minutes#windows-vm-considerations" class="hash-link" aria-label="Direct link to Windows VM Considerations" title="Direct link to Windows VM Considerations" translate="no">​</a></h2>
<p>Windows VMs require VirtIO drivers to run on KVM. HyperSDK handles this automatically when it detects a Windows guest operating system. During the conversion process, it injects the VirtIO storage and network drivers into the Windows image so the VM can boot without manual driver installation.</p>
<p>The auto-detection works for Windows Server 2016 and later, as well as Windows 10 and 11 desktop editions. For older Windows versions, you may need to install VirtIO drivers manually before or after migration. The dashboard's readiness check view will warn you if a Windows VM requires manual driver intervention.</p>
<p>After booting a migrated Windows VM on KVM, you will need to reactivate the Windows license, as the hardware fingerprint will have changed. This is expected behavior for any hypervisor migration.</p>]]></content:encoded>
            <category>Tutorial</category>
            <category>vSphere</category>
            <category>KVM</category>
            <category>Migration</category>
        </item>
        <item>
            <title><![CDATA[VMware Exit: How We Saved $113K/Year for a Fortune 500 Client]]></title>
            <link>https://hypersdk.cloud/blog/vmware-exit-cost-savings</link>
            <guid>https://hypersdk.cloud/blog/vmware-exit-cost-savings</guid>
            <pubDate>Sat, 07 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[When Broadcom completed its acquisition of VMware, our client -- a Fortune 500 financial services company -- received a renewal quote that was 4.2x their previous annual spend. What had been a manageable $29,000 per year for vSphere Enterprise Plus licensing was now $122,000, with no option to return to perpetual licensing. This is the story of how we helped them exit VMware entirely and reduce their virtualization costs by 93%.]]></description>
            <content:encoded><![CDATA[<p>When Broadcom completed its acquisition of VMware, our client -- a Fortune 500 financial services company -- received a renewal quote that was 4.2x their previous annual spend. What had been a manageable $29,000 per year for vSphere Enterprise Plus licensing was now $122,000, with no option to return to perpetual licensing. This is the story of how we helped them exit VMware entirely and reduce their virtualization costs by 93%.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-vmware-licensing-problem">The VMware Licensing Problem<a href="https://hypersdk.cloud/blog/vmware-exit-cost-savings#the-vmware-licensing-problem" class="hash-link" aria-label="Direct link to The VMware Licensing Problem" title="Direct link to The VMware Licensing Problem" translate="no">​</a></h2>
<p>The client operated 200 virtual machines across 12 ESXi hosts running a mix of Windows Server 2019, RHEL 8, and Ubuntu 22.04 workloads. Their VMware stack included vSphere Enterprise Plus, vCenter Server, vSAN, and vRealize Operations -- a common enterprise configuration.</p>
<p>Under the new Broadcom licensing model, several changes hit simultaneously. Per-socket licensing was replaced with per-core subscriptions, dramatically increasing costs for their dual-socket servers. Perpetual licenses were eliminated entirely, forcing a move to annual subscriptions. Products were bundled into suites, requiring purchase of components they did not use. Support tiers were consolidated, removing the option for lower-cost support plans.</p>
<p>The combined effect was a renewal quote of $122,000 per year -- a 320% increase that the IT budget simply could not absorb.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-migration-approach">The Migration Approach<a href="https://hypersdk.cloud/blog/vmware-exit-cost-savings#the-migration-approach" class="hash-link" aria-label="Direct link to The Migration Approach" title="Direct link to The Migration Approach" translate="no">​</a></h2>
<p>We designed a 10-week migration plan using three tools in sequence: HyperSDK for VM export from vSphere, hyper2kvm for disk conversion and guest OS preparation, and libvirt for deployment on KVM hosts.</p>
<p><strong>Week 1-2: Assessment.</strong> HyperSDK connected to the client's vCenter Server and cataloged all 200 VMs, mapping CPU, memory, disk, and network configurations. We identified 15 VMs with VMware-specific dependencies (VMware Tools custom scripts, vSphere API integrations) that required additional preparation. Dependency mapping revealed that 80% of VMs could be migrated independently.</p>
<p><strong>Week 3-4: Pilot.</strong> We selected 10 non-critical VMs spanning Windows Server, RHEL, and Ubuntu for pilot migration. Each VM was exported from vSphere using HyperSDK's manifest-tracked export, converted from VMDK to qcow2 with automatic VirtIO driver injection via hyper2kvm, and deployed on KVM hosts running libvirt. All 10 VMs booted successfully on the first attempt. Application owners validated functionality within 48 hours.</p>
<p><strong>Week 5-8: Production Migration.</strong> We migrated the remaining 190 VMs in four waves of approximately 50 VMs each. Changed Block Tracking (CBT) enabled incremental exports for VMs that could not tolerate extended downtime -- the final sync required transferring only the delta, reducing cutover windows to under 15 minutes per VM. Source VMs remained running on vSphere during validation.</p>
<p><strong>Week 9-10: Validation and Decommission.</strong> Performance benchmarks confirmed that migrated VMs matched or exceeded their vSphere baseline. Backup and recovery procedures were tested end-to-end. After sign-off from all application owners, the vSphere infrastructure was decommissioned and licenses were not renewed.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="cost-before-and-after">Cost Before and After<a href="https://hypersdk.cloud/blog/vmware-exit-cost-savings#cost-before-and-after" class="hash-link" aria-label="Direct link to Cost Before and After" title="Direct link to Cost Before and After" translate="no">​</a></h2>
<p>The annual cost comparison tells the story clearly. VMware licensing, support, and management tools totaled $122,000 per year. The equivalent KVM infrastructure -- including RHEL subscriptions for host OS, basic monitoring, and the HyperSDK migration license -- came to $8,800 per year. That is a $113,200 annual savings, or 93% reduction.</p>
<p>Over a three-year horizon, the savings exceed $339,000. The one-time migration cost (engineering time and HyperSDK licensing) was recovered within the first 6 weeks of operation.</p>
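<p>The savings figures above are easy to reproduce with shell arithmetic, using the numbers from this case study:</p>

```shell
vmware_annual=122000   # VMware licensing, support, and management tools per year
kvm_annual=8800        # RHEL subscriptions, monitoring, HyperSDK license per year
savings=$((vmware_annual - kvm_annual))
echo "Annual savings: \$$savings"                        # $113200
echo "Reduction: $((100 * savings / vmware_annual))%"    # 92% (integer division; 92.8% exact)
echo "Three-year savings: \$$((savings * 3))"            # $339600
```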
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="lessons-learned">Lessons Learned<a href="https://hypersdk.cloud/blog/vmware-exit-cost-savings#lessons-learned" class="hash-link" aria-label="Direct link to Lessons Learned" title="Direct link to Lessons Learned" translate="no">​</a></h2>
<p>Three lessons stood out from this engagement. First, start with a thorough VM inventory -- we discovered 23 VMs that were powered off and had not been used in over a year. These were decommissioned rather than migrated, saving additional storage costs. Second, VirtIO driver injection is critical for Windows VMs. Without the correct storage and network drivers, Windows VMs will not boot on KVM. hyper2kvm handles this automatically, but it must be verified during the pilot phase. Third, keep source VMs running during validation. The ability to fall back to vSphere if any issue arose gave application owners confidence in the migration process.</p>
<p>The VMware exit is not just possible -- for most enterprises, it is the financially responsible decision. If your renewal quote has doubled or tripled, <a class="" href="https://hypersdk.cloud/contact">schedule an assessment</a> to see what your organization could save.</p>]]></content:encoded>
            <category>VMware</category>
            <category>Cost Savings</category>
            <category>Case Study</category>
        </item>
        <item>
            <title><![CDATA[315% -- The Real Cost of VMware Renewal Under Broadcom]]></title>
            <link>https://hypersdk.cloud/blog/broadcom-315-percent-vmware-renewal</link>
            <guid>https://hypersdk.cloud/blog/broadcom-315-percent-vmware-renewal</guid>
            <pubDate>Sat, 31 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[A mid-market financial services company received their VMware renewal notice last quarter. The previous year, they paid $45,000. The new quote: $187,000. That is a 315% increase -- and they are far from alone.]]></description>
            <content:encoded><![CDATA[<p>A mid-market financial services company received their VMware renewal notice last quarter. The previous year, they paid $45,000. The new quote: $187,000. That is a 315% increase -- and they are far from alone.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="this-is-not-an-outlier">This Is Not an Outlier<a href="https://hypersdk.cloud/blog/broadcom-315-percent-vmware-renewal#this-is-not-an-outlier" class="hash-link" aria-label="Direct link to This Is Not an Outlier" title="Direct link to This Is Not an Outlier" translate="no">​</a></h2>
<p>Since Broadcom completed its acquisition of VMware, enterprise customers across every industry have reported renewal increases ranging from 200% to over 500%. Perpetual licenses have been eliminated entirely. Every customer has been forced onto subscription pricing with mandatory 3-year minimum commitments.</p>
<p>For a 500-VM enterprise, the math is brutal. VMware Cloud Foundation now costs approximately $1.2 million per year. Before the acquisition, that same environment ran on $350,000 annually. That is not a rounding error. That is a budget-breaking change that forces difficult conversations at the executive level.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="small-businesses-hit-hardest">Small Businesses Hit Hardest<a href="https://hypersdk.cloud/blog/broadcom-315-percent-vmware-renewal#small-businesses-hit-hardest" class="hash-link" aria-label="Direct link to Small Businesses Hit Hardest" title="Direct link to Small Businesses Hit Hardest" translate="no">​</a></h2>
<p>Large enterprises at least have the leverage to negotiate. Small and mid-market organizations have no such luxury. Broadcom has implemented minimum order thresholds of $100,000 or more. Customers who fall below that line have been told, in effect, to find a different vendor.</p>
<p>The licensing model itself has also changed. Per-core licensing now enforces a 16-core minimum per socket. If you are running older servers with 8-core processors, you are paying for twice the cores you actually have -- a 100% licensing overhead on legacy hardware before you even consider the subscription price increases.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-3-year-trap">The 3-Year Trap<a href="https://hypersdk.cloud/blog/broadcom-315-percent-vmware-renewal#the-3-year-trap" class="hash-link" aria-label="Direct link to The 3-Year Trap" title="Direct link to The 3-Year Trap" translate="no">​</a></h2>
<p>Broadcom's mandatory 3-year commitment compounds the problem. Organizations that sign today are locked into inflated pricing through 2029. If costs continue to rise -- and there is no indication they will not -- customers who wait will face even steeper renewals at the end of their term.</p>
<p>The window to act is now, while migration options are mature and proven.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-500-vm-enterprises-are-doing">What 500-VM Enterprises Are Doing<a href="https://hypersdk.cloud/blog/broadcom-315-percent-vmware-renewal#what-500-vm-enterprises-are-doing" class="hash-link" aria-label="Direct link to What 500-VM Enterprises Are Doing" title="Direct link to What 500-VM Enterprises Are Doing" translate="no">​</a></h2>
<p>The organizations that have moved fastest are seeing the best outcomes. One Fortune 500 financial services company migrated 350 VMs from VMware to KVM in 6 weeks, reducing annual virtualization costs from $1.4 million to $96,000. That is a 93% reduction and a payback period measured in weeks, not years.</p>
<p>The migration path is straightforward: export from vSphere, convert with automated guest OS fixing, and deploy on KVM or KubeVirt. Modern tooling achieves a 99.7% first-boot success rate, meaning minimal disruption to production workloads.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-decision-framework">The Decision Framework<a href="https://hypersdk.cloud/blog/broadcom-315-percent-vmware-renewal#the-decision-framework" class="hash-link" aria-label="Direct link to The Decision Framework" title="Direct link to The Decision Framework" translate="no">​</a></h2>
<p>Every VMware customer now faces three options:</p>
<ol>
<li class=""><strong>Pay the increase.</strong> Accept the 315% renewal and lock in for 3 years. Budget for further increases at the next renewal.</li>
<li class=""><strong>Negotiate.</strong> Possible for large enterprises, but Broadcom has shown limited flexibility. Most customers report single-digit percentage reductions at best.</li>
<li class=""><strong>Migrate.</strong> Move to KVM-based infrastructure and eliminate VMware licensing entirely. Proven at scale, with documented cost reductions of 90% or more.</li>
</ol>
<p>The math speaks for itself. At $187,000 per year and rising, the cost of staying on VMware now exceeds the cost of leaving.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="next-steps">Next Steps<a href="https://hypersdk.cloud/blog/broadcom-315-percent-vmware-renewal#next-steps" class="hash-link" aria-label="Direct link to Next Steps" title="Direct link to Next Steps" translate="no">​</a></h2>
<p>If you are facing a VMware renewal, get a cost analysis before you sign. Understand what your environment would cost on KVM. Compare the 3-year total cost of ownership for both paths.</p>
<p>The organizations that act now will save millions over the next three years. The ones that wait will pay Broadcom's price.</p>
<p><a class="" href="https://hypersdk.cloud/contact">Schedule a VMware Exit Assessment</a> to see your specific numbers.</p>]]></content:encoded>
            <category>VMware</category>
            <category>Broadcom</category>
            <category>Licensing</category>
            <category>TCO</category>
        </item>
        <item>
            <title><![CDATA[KVM vs VMware: A 2026 Comparison for Enterprise IT]]></title>
            <link>https://hypersdk.cloud/blog/kvm-vs-vmware-2026-comparison</link>
            <guid>https://hypersdk.cloud/blog/kvm-vs-vmware-2026-comparison</guid>
            <pubDate>Sat, 24 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[The enterprise hypervisor landscape has shifted dramatically. With VMware's pricing restructured under Broadcom and KVM continuing to mature as the backbone of every major public cloud, IT leaders are re-evaluating their virtualization strategy. Here is a direct comparison of KVM and VMware in 2026, covering the dimensions that matter most to enterprise teams.]]></description>
            <content:encoded><![CDATA[<p>The enterprise hypervisor landscape has shifted dramatically. With VMware's pricing restructured under Broadcom and KVM continuing to mature as the backbone of every major public cloud, IT leaders are re-evaluating their virtualization strategy. Here is a direct comparison of KVM and VMware in 2026, covering the dimensions that matter most to enterprise teams.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="performance">Performance<a href="https://hypersdk.cloud/blog/kvm-vs-vmware-2026-comparison#performance" class="hash-link" aria-label="Direct link to Performance" title="Direct link to Performance" translate="no">​</a></h2>
<p>KVM runs as a kernel module within Linux, giving it near-native performance for CPU-intensive workloads. There is no separate hypervisor layer consuming resources. VMware ESXi is a purpose-built bare-metal hypervisor with decades of optimization, and its performance is excellent. In practice, benchmark comparisons between the two show KVM and VMware within 2-5% of each other for most workloads. For I/O-intensive applications, KVM's VirtIO paravirtualized drivers often outperform VMware's PVSCSI, particularly for storage throughput.</p>
<p>The performance gap that existed a decade ago has effectively closed. Both hypervisors deliver enterprise-grade performance for database servers, application servers, and general-purpose workloads.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="cost">Cost<a href="https://hypersdk.cloud/blog/kvm-vs-vmware-2026-comparison#cost" class="hash-link" aria-label="Direct link to Cost" title="Direct link to Cost" translate="no">​</a></h2>
<p>This is where the comparison diverges sharply. KVM is included in every Linux distribution at no additional cost. RHEL, Ubuntu, SUSE, and Fedora all ship with KVM built into the kernel. The total licensing cost for the hypervisor itself is zero.</p>
<p>VMware's post-Broadcom pricing starts at approximately $250 per CPU per year for VMware Cloud Foundation, which is the minimum offering now available. For a 200-CPU deployment, that translates to $50,000 annually just for the hypervisor license, before adding support contracts. Organizations previously running vSphere Standard at $600 per CPU (perpetual) are now paying recurring subscription fees that exceed their previous one-time purchase within two years.</p>
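<p>The per-deployment math is straightforward (figures are the approximations quoted above):</p>

```shell
cost_per_cpu=250   # approximate VCF subscription, USD per CPU per year
cpu_count=200
echo "Annual hypervisor license: \$$((cost_per_cpu * cpu_count))"   # $50000
```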
<p>Management tooling adds to the cost differential. vCenter Server requires its own license. KVM management options include Proxmox VE (open-source with optional enterprise support), oVirt (open-source), and Cockpit (included with RHEL). HyperSDK provides a commercial management layer with 45 dashboard views and 205 API endpoints for organizations that need enterprise-grade operational tooling.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="management-and-ecosystem">Management and Ecosystem<a href="https://hypersdk.cloud/blog/kvm-vs-vmware-2026-comparison#management-and-ecosystem" class="hash-link" aria-label="Direct link to Management and Ecosystem" title="Direct link to Management and Ecosystem" translate="no">​</a></h2>
<p>VMware's strongest advantage has always been its management ecosystem. vCenter, vMotion, DRS, and HA provide a mature, integrated management experience that IT teams know well. The tooling is polished and the documentation is extensive.</p>
<p>KVM's management ecosystem has caught up significantly. Proxmox VE provides a web-based management interface with clustering, live migration, backup, and high availability. For organizations running Kubernetes, KubeVirt enables VMs to run as Kubernetes pods with full lifecycle management through standard Kubernetes tooling. Red Hat's OpenShift Virtualization (built on KubeVirt) provides enterprise support for this approach.</p>
<p>The gap remains in advanced features like distributed resource scheduling, though Proxmox's HA manager and QEMU's live migration capabilities cover the most critical use cases.</p>
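<p>To make the KubeVirt approach concrete, here is a minimal <code>VirtualMachine</code> manifest of the kind that tooling manages. The image reference is a placeholder and the disk bus and resource values are illustrative, not a prescription:</p>

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: migrated-app-server
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio      # paravirtualized disk, same driver model as plain KVM
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: registry.example.com/vm-disks/app-server:latest  # placeholder image
```

<p>Applying this with <code>kubectl apply</code> schedules the VM as a pod, after which standard Kubernetes tooling (<code>kubectl get vm</code>, <code>kubectl delete</code>) manages its lifecycle.</p>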
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="security-and-compliance">Security and Compliance<a href="https://hypersdk.cloud/blog/kvm-vs-vmware-2026-comparison#security-and-compliance" class="hash-link" aria-label="Direct link to Security and Compliance" title="Direct link to Security and Compliance" translate="no">​</a></h2>
<p>Both platforms provide the security features enterprises require. VMware offers vSphere Trust Authority, encrypted vMotion, and VM encryption. KVM leverages Linux's built-in security stack: SELinux, AppArmor, sVirt for mandatory access control, and LUKS for disk encryption. The Linux kernel's security track record is well-established, and patches are typically available faster than for proprietary hypervisors.</p>
<p>For compliance frameworks like PCI-DSS, HIPAA, and FedRAMP, both platforms have established certification paths. KVM's open-source nature provides an advantage for organizations that require source code review as part of their security evaluation.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="migration-path">Migration Path<a href="https://hypersdk.cloud/blog/kvm-vs-vmware-2026-comparison#migration-path" class="hash-link" aria-label="Direct link to Migration Path" title="Direct link to Migration Path" translate="no">​</a></h2>
<p>Moving from VMware to KVM is no longer the multi-month, high-risk project it once was. Tools like HyperSDK automate the export of VMs from vSphere, convert VMDK disk images to QCOW2 format, inject VirtIO drivers into Windows and Linux guests, and deploy to KVM targets. The process achieves a 99.7% first-boot success rate, meaning most VMs require zero manual intervention after migration.</p>
<p>A typical migration timeline for a 200-VM environment is 60-90 days, including planning, pilot migrations, and production cutover. Organizations running HyperSDK's automated pipeline have completed migrations of 350+ VMs in as little as six weeks.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-verdict">The Verdict<a href="https://hypersdk.cloud/blog/kvm-vs-vmware-2026-comparison#the-verdict" class="hash-link" aria-label="Direct link to The Verdict" title="Direct link to The Verdict" translate="no">​</a></h2>
<p>For new deployments, KVM is the clear choice in 2026. The cost savings are substantial, performance is equivalent, and the management ecosystem has matured to enterprise standards.</p>
<p>For existing VMware customers, the decision depends on your renewal timeline and migration readiness. If your VMware renewal is approaching and costs are increasing, migrating to KVM offers 60-93% cost reduction with minimal operational disruption when using automated migration tooling.</p>
<p>The hypervisor is no longer the differentiator it once was. The value has shifted to the management, automation, and migration tooling that sits above it. Choose the platform that gives your team the most operational capability at the lowest total cost of ownership.</p>]]></content:encoded>
            <category>KVM</category>
            <category>VMware</category>
            <category>Comparison</category>
        </item>
        <item>
            <title><![CDATA[VMware Licensing After Broadcom: What You Need to Know]]></title>
            <link>https://hypersdk.cloud/blog/vmware-broadcom-licensing-costs</link>
            <guid>https://hypersdk.cloud/blog/vmware-broadcom-licensing-costs</guid>
            <pubDate>Sat, 17 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Broadcom's acquisition of VMware has fundamentally reshaped the enterprise virtualization landscape. For thousands of organizations that built their infrastructure on vSphere, the licensing changes have been nothing short of seismic. Here is what you need to know about the new reality and what your options are.]]></description>
            <content:encoded><![CDATA[<p>Broadcom's acquisition of VMware has fundamentally reshaped the enterprise virtualization landscape. For thousands of organizations that built their infrastructure on vSphere, the licensing changes have been nothing short of seismic. Here is what you need to know about the new reality and what your options are.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-broadcom-effect">The Broadcom Effect<a href="https://hypersdk.cloud/blog/vmware-broadcom-licensing-costs#the-broadcom-effect" class="hash-link" aria-label="Direct link to The Broadcom Effect" title="Direct link to The Broadcom Effect" translate="no">​</a></h2>
<p>When Broadcom finalized its $61 billion acquisition of VMware, the immediate impact was felt across the customer base. Perpetual licenses were eliminated entirely. Every customer was forced onto subscription-based pricing, and the cost increases have been staggering. Reports from enterprise IT teams consistently cite renewal quotes that are 200% to 500% higher than their previous annual spend.</p>
<p>The bundling strategy has made things worse. VMware's product portfolio was consolidated into fewer, larger bundles. Organizations that previously purchased only vSphere now find themselves paying for a full VMware Cloud Foundation suite, regardless of whether they use the additional components. A mid-market company that was spending $30,000 per year on vSphere licensing might now face a $120,000 annual bill for VMware Cloud Foundation.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="who-is-most-affected">Who Is Most Affected<a href="https://hypersdk.cloud/blog/vmware-broadcom-licensing-costs#who-is-most-affected" class="hash-link" aria-label="Direct link to Who Is Most Affected" title="Direct link to Who Is Most Affected" translate="no">​</a></h2>
<p>Small and mid-market customers have been hit hardest. Enterprise accounts with direct Broadcom relationships have some negotiating leverage, but organizations running fewer than 500 CPUs often find themselves with no room to negotiate. Channel partners that previously offered competitive pricing have seen their margins squeezed, leaving fewer options for customers seeking discounts.</p>
<p>Government agencies and educational institutions, which often operated under favorable licensing agreements, are seeing those agreements expire without renewal options. Healthcare organizations running critical EMR systems on vSphere face difficult choices between absorbing massive cost increases or undertaking complex migrations under tight timelines.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-real-numbers">The Real Numbers<a href="https://hypersdk.cloud/blog/vmware-broadcom-licensing-costs#the-real-numbers" class="hash-link" aria-label="Direct link to The Real Numbers" title="Direct link to The Real Numbers" translate="no">​</a></h2>
<p>Based on data from organizations that have approached HyperSDK for migration assistance, the average cost increase after Broadcom's changes breaks down as follows:</p>
<ul>
<li class=""><strong>Small deployments (under 100 VMs):</strong> 300-500% increase in annual licensing costs</li>
<li class=""><strong>Mid-market (100-500 VMs):</strong> 200-400% increase, often with reduced support tiers</li>
<li class=""><strong>Enterprise (500+ VMs):</strong> 150-300% increase, with pressure to adopt VMware Cloud Foundation</li>
<li class=""><strong>Government and education:</strong> Loss of preferential pricing, effective increases of 250-600%</li>
</ul>
<p>These numbers translate to real budget impact. A 200-host VMware environment that cost $180,000 annually might now cost $540,000 or more, with no additional functionality.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-are-your-alternatives">What Are Your Alternatives<a href="https://hypersdk.cloud/blog/vmware-broadcom-licensing-costs#what-are-your-alternatives" class="hash-link" aria-label="Direct link to What Are Your Alternatives" title="Direct link to What Are Your Alternatives" translate="no">​</a></h2>
<p>The good news is that the hypervisor market has matured significantly. KVM, the kernel-based virtual machine technology built into Linux, now powers the majority of public cloud infrastructure worldwide. AWS and Google Cloud both run on KVM-based hypervisors (Azure is the exception, running Microsoft's own Hyper-V). The technology is proven at a scale that dwarfs any VMware deployment.</p>
<p>Migration platforms like HyperSDK have emerged specifically to address this transition. HyperSDK can export VMs directly from vSphere, convert disk images to KVM-compatible formats, inject the necessary VirtIO drivers, and deploy to your target infrastructure. The entire process is automated, tracked, and verified with checksum validation.</p>
<p>Proxmox VE offers a mature, open-source virtualization platform with a web interface that many administrators find comparable to vCenter. For organizations moving toward cloud-native infrastructure, KubeVirt enables VM workloads to run alongside containers on Kubernetes.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-migration-window">The Migration Window<a href="https://hypersdk.cloud/blog/vmware-broadcom-licensing-costs#the-migration-window" class="hash-link" aria-label="Direct link to The Migration Window" title="Direct link to The Migration Window" translate="no">​</a></h2>
<p>Organizations that have not yet renewed their VMware agreements are in the best position to evaluate alternatives. The typical migration timeline for a mid-size deployment (200-500 VMs) is 60-90 days when using automated tooling. HyperSDK's 99.7% first-boot success rate means most VMs require no manual intervention after migration.</p>
<p>The key is starting the evaluation before your renewal deadline. Running a proof-of-concept migration with 10-20 non-critical VMs takes less than a week and provides the data your team needs to make an informed decision.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="next-steps">Next Steps<a href="https://hypersdk.cloud/blog/vmware-broadcom-licensing-costs#next-steps" class="hash-link" aria-label="Direct link to Next Steps" title="Direct link to Next Steps" translate="no">​</a></h2>
<p>If your organization is facing a VMware renewal, consider these immediate actions:</p>
<ol>
<li class=""><strong>Inventory your VMware footprint</strong> -- document every VM, its resource requirements, and its criticality</li>
<li class=""><strong>Calculate your true cost of ownership</strong> -- include licensing, support, training, and operational overhead</li>
<li class=""><strong>Run a proof-of-concept</strong> -- use HyperSDK to migrate a small batch of test VMs to KVM or Proxmox</li>
<li class=""><strong>Build a business case</strong> -- compare 3-year TCO of staying on VMware versus migrating to open-source alternatives</li>
</ol>
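<p>Step 4's comparison is simple arithmetic. Here is a back-of-the-envelope sketch using the subscription figure cited in this post; the KVM support cost is an assumed placeholder, not a quoted price, so substitute your own CPU counts and vendor quotes:</p>

```shell
CPUS=200
VCF_PER_CPU_YEAR=250         # VMware Cloud Foundation, per CPU per year
KVM_SUPPORT_PER_CPU_YEAR=40  # assumption: optional enterprise Linux support

vmware_3yr=$((CPUS * VCF_PER_CPU_YEAR * 3))
kvm_3yr=$((CPUS * KVM_SUPPORT_PER_CPU_YEAR * 3))
savings_pct=$((100 - kvm_3yr * 100 / vmware_3yr))

echo "VMware 3-year licensing: \$$vmware_3yr"
echo "KVM 3-year support:      \$$kvm_3yr"
echo "Savings:                 $savings_pct%"
```

<p>Under these assumptions the licensing line alone lands at 84% savings; a full business case should add hardware, training, and migration effort to both sides.</p>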
<p>The Broadcom acquisition has created urgency, but it has also created opportunity. Organizations that act now can reduce their virtualization costs by 60-93% while gaining independence from single-vendor lock-in.</p>]]></content:encoded>
            <category>VMware</category>
            <category>Licensing</category>
            <category>Broadcom</category>
        </item>
        <item>
            <title><![CDATA[Introducing HyperSDK: Multi-Cloud VM Migration Platform]]></title>
            <link>https://hypersdk.cloud/blog/introducing-hypersdk</link>
            <guid>https://hypersdk.cloud/blog/introducing-hypersdk</guid>
            <pubDate>Sat, 10 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[Today we are publicly releasing HyperSDK, an enterprise platform for migrating virtual machines across cloud providers and hypervisors. HyperSDK was born out of a real-world pain point: the sudden and dramatic shift in VMware licensing that left thousands of organizations scrambling for alternatives. We built a tool that makes leaving vSphere straightforward, repeatable, and observable.]]></description>
            <content:encoded><![CDATA[<p>Today we are publicly releasing HyperSDK, an enterprise platform for migrating virtual machines across cloud providers and hypervisors. HyperSDK was born out of a real-world pain point: the sudden and dramatic shift in VMware licensing that left thousands of organizations scrambling for alternatives. We built a tool that makes leaving vSphere straightforward, repeatable, and observable.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-vmware-licensing-problem">The VMware Licensing Problem<a href="https://hypersdk.cloud/blog/introducing-hypersdk#the-vmware-licensing-problem" class="hash-link" aria-label="Direct link to The VMware Licensing Problem" title="Direct link to The VMware Licensing Problem" translate="no">​</a></h2>
<p>When Broadcom acquired VMware and restructured its licensing model, many enterprises saw their virtualization costs increase by 3x to 10x overnight. Perpetual licenses disappeared, bundled products were forced into expensive suites, and smaller customers lost access to affordable tiers entirely. Organizations that had built their entire infrastructure on vSphere were suddenly locked into unsustainable costs with no clear migration path.</p>
<p>HyperSDK exists to solve that problem. It provides a unified platform for exporting VMs from vSphere and deploying them to KVM, Proxmox, AWS, Azure, GCP, OCI, OpenStack, Alibaba Cloud, Hyper-V, or any combination of these providers.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="10-cloud-providers-one-interface">10 Cloud Providers, One Interface<a href="https://hypersdk.cloud/blog/introducing-hypersdk#10-cloud-providers-one-interface" class="hash-link" aria-label="Direct link to 10 Cloud Providers, One Interface" title="Direct link to 10 Cloud Providers, One Interface" translate="no">​</a></h2>
<p>HyperSDK supports 10 cloud providers and hypervisors through a consistent provider interface. Each provider implements the same set of port interfaces defined in our hexagonal architecture, meaning you interact with AWS the same way you interact with Proxmox or KVM. The supported providers are:</p>
<ul>
<li class=""><strong>VMware vSphere</strong> -- source provider for exports</li>
<li class=""><strong>KVM/libvirt</strong> -- the most common migration target</li>
<li class=""><strong>Proxmox VE</strong> -- popular open-source hypervisor</li>
<li class=""><strong>AWS EC2</strong> -- import as AMI or run directly</li>
<li class=""><strong>Microsoft Azure</strong> -- managed disk import</li>
<li class=""><strong>Google Cloud Platform</strong> -- Compute Engine integration</li>
<li class=""><strong>Oracle Cloud Infrastructure</strong> -- custom image import</li>
<li class=""><strong>OpenStack</strong> -- private cloud deployments</li>
<li class=""><strong>Alibaba Cloud</strong> -- ECS instance support</li>
<li class=""><strong>Microsoft Hyper-V</strong> -- Windows-based hypervisors</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="45-dashboard-views">45 Dashboard Views<a href="https://hypersdk.cloud/blog/introducing-hypersdk#45-dashboard-views" class="hash-link" aria-label="Direct link to 45 Dashboard Views" title="Direct link to 45 Dashboard Views" translate="no">​</a></h2>
<p>The web dashboard is not an afterthought. It ships with 45 distinct views organized into five navigation groups: Migration, Observability, Infrastructure, Tools, and System. You can browse VMs with OS-specific icons, trigger exports with a single click, monitor job progress in real time, estimate costs across providers, schedule carbon-aware migrations, and debug issues with an integrated explain mode -- all from your browser.</p>
<p>The dashboard is built with React 18 and Tailwind CSS, supports dark and light themes, and includes keyboard shortcuts for power users. Every view communicates with the backend through the same versioned REST API that CLI users and automation scripts consume.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="205-api-endpoints">205 API Endpoints<a href="https://hypersdk.cloud/blog/introducing-hypersdk#205-api-endpoints" class="hash-link" aria-label="Direct link to 205 API Endpoints" title="Direct link to 205 API Endpoints" translate="no">​</a></h2>
<p>HyperSDK exposes 205 REST API endpoints under the <code>/api/v1/</code> prefix. These cover the full lifecycle of VM migration: discovery, capability detection, export, import, job management, scheduling, cost estimation, carbon tracking, RBAC, audit logging, secrets management, webhooks, backups, and system health. Every endpoint is documented, versioned, and enforces authentication through a middleware chain that includes rate limiting, request size limits, CORS, and security headers.</p>
<p>The API is designed for automation. You can script an entire fleet migration using <code>curl</code> or integrate HyperSDK into your existing CI/CD pipelines. The <code>hyperctl</code> CLI wraps the same API for interactive use.</p>
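<p>As a sketch of what that automation can look like: the endpoint paths and payload fields below are illustrative placeholders, not documented names, so consult your deployment's API reference before scripting against it. The live calls are left commented so the snippet is safe to paste:</p>

```shell
API="https://hypersdk.example.internal/api/v1"   # placeholder instance URL

# Hypothetical export-job request body: source VM, target provider, disk format.
payload='{"vm_id":"vm-1042","target":"kvm","format":"qcow2"}'
echo "$payload"

# Against a live instance, submit the job and poll until it finishes:
# job_id=$(curl -sf -X POST "$API/jobs" \
#     -H "Authorization: Bearer $HYPERSDK_TOKEN" \
#     -H "Content-Type: application/json" \
#     -d "$payload" | jq -r '.id')
# while [ "$(curl -sf -H "Authorization: Bearer $HYPERSDK_TOKEN" \
#     "$API/jobs/$job_id" | jq -r '.status')" = "running" ]; do sleep 10; done
```

<p>Wrapping a loop over an inventory file around the submit call is all it takes to turn this into a fleet migration script.</p>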
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="hexagonal-architecture">Hexagonal Architecture<a href="https://hypersdk.cloud/blog/introducing-hypersdk#hexagonal-architecture" class="hash-link" aria-label="Direct link to Hexagonal Architecture" title="Direct link to Hexagonal Architecture" translate="no">​</a></h2>
<p>Under the hood, HyperSDK follows a hexagonal (ports and adapters) architecture. The domain layer in <code>internal/domain/</code> defines 70+ canonical types across 12 files. Port interfaces in <code>internal/ports/</code> define 17 contracts that adapters must implement. This separation means you can swap out any infrastructure component -- database, provider, auth system -- without touching business logic.</p>
<p>The architecture was implemented across 10 phases, each adding a new layer of abstraction. The result is a codebase where every dependency points inward, every adapter is independently testable, and every provider satisfies compile-time interface assertions.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="upload-and-deploy-from-the-browser">Upload and Deploy from the Browser<a href="https://hypersdk.cloud/blog/introducing-hypersdk#upload-and-deploy-from-the-browser" class="hash-link" aria-label="Direct link to Upload and Deploy from the Browser" title="Direct link to Upload and Deploy from the Browser" translate="no">​</a></h2>
<p>One of the most requested features during development was the ability to upload VM disk images directly through the browser and deploy them to a target hypervisor. HyperSDK supports this end-to-end. You can upload a VMDK, qcow2, or raw disk image through the dashboard, and the platform will handle format conversion, storage allocation, and deployment. Progress is tracked in real time with a callback-based progress reader that streams updates to the UI.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="system-observability">System Observability<a href="https://hypersdk.cloud/blog/introducing-hypersdk#system-observability" class="hash-link" aria-label="Direct link to System Observability" title="Direct link to System Observability" translate="no">​</a></h2>
<p>Migrating hundreds of VMs is complex, and things will go wrong. HyperSDK includes a system health score that aggregates metrics from all connected providers, storage backends, and internal services into a single 0-100 score. When something degrades, the explain mode walks you through exactly what went wrong and why, using structured diagnostic data rather than raw log dumps.</p>
<p>Debug tools in the dashboard let you inspect individual jobs, replay failed exports, view audit trails, and check provider capabilities without leaving the browser. Alerts are configurable through webhooks, so you can pipe notifications to Slack, PagerDuty, or any HTTP endpoint.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-is-next">What is Next<a href="https://hypersdk.cloud/blog/introducing-hypersdk#what-is-next" class="hash-link" aria-label="Direct link to What is Next" title="Direct link to What is Next" translate="no">​</a></h2>
<p>HyperSDK is under active development. On the roadmap:</p>
<ul>
<li class=""><strong>Kubernetes operator</strong> for managing migrations as custom resources, with condition-based status reporting already implemented in <code>pkg/operator/controllers/</code></li>
<li class=""><strong>Parallel batch exports</strong> to migrate entire clusters in hours rather than days</li>
<li class=""><strong>Incremental sync</strong> for near-zero-downtime migration of production workloads</li>
<li class=""><strong>Terraform provider</strong> so you can declare migration workflows as infrastructure-as-code</li>
<li class=""><strong>Extended carbon reporting</strong> with historical trend analysis and ESG compliance dashboards</li>
</ul>
<p>We believe VM migration should be boring infrastructure -- reliable, predictable, and invisible. HyperSDK is our contribution toward making that a reality. Try it out, file issues, and let us know what you think.</p>]]></content:encoded>
            <category>Announcement</category>
            <category>Release</category>
            <category>Multi-Cloud</category>
        </item>
    </channel>
</rss>