OpenShift Virtualization for vSphere admins: Introduction to network configurations

May 16, 2024
Alan Cowles
Related topics: Containers, Kubernetes, Virtualization
Related products: Red Hat OpenShift, Red Hat OpenShift Virtualization

    Note: This post was originally published on the Red Hat Blog.

    Red Hat OpenShift Virtualization is Red Hat's solution for companies that are modernizing by adopting a containerized architecture for their applications but find that virtualization remains a necessary part of their data center deployment strategy.

    Frankly, some applications simply cannot be containerized, and that's okay. OpenShift Virtualization addresses this by providing a more modern paradigm that manages containers and virtual machines cohesively in a unified platform.

    One concern we at Red Hat must address is this: in addition to providing feature parity, how do we maintain a customer experience similar to that of the products and configurations the customer is used to, so that adopting the new solution is essentially pain-free?

    Once comfortable working within an information technology ecosystem, people generally attempt to adapt familiar principles to new systems. For many system administrators, a common solution for data center virtualization has been VMware vSphere. People working in this environment introduced many design solutions that have been adopted as best practices for the deployment and configuration of virtual machines or virtual data centers. These design decisions often seem like second nature to those who work in this sphere of IT, and they would likely attempt to apply similar ones to other environments they are planning.

    This series will explore the differences between how enterprises and their virtualization administrators have interacted with previous hypervisor environments and how they can expect to perform similar actions or configurations in the world of OpenShift Virtualization.

    Figure 1: A familiar look at a virtual distributed switch (VDS) with multiple uplinks, VLANs, and vNICs as pictured in VMware vSphere.

    Multiple network design

    First, focus on the multiple network design that is popular with other virtualization solutions such as VMware vSphere. Those familiar with that environment know that ESXi hosts often support several networks, each dedicated to a different purpose within the virtual data center. This network segregation can be accomplished in several ways, from cabling different physical NICs on the hosts to different upstream switches or networks, to trunking additional VLANs over an uplink and tagging them appropriately on the vSwitch or Distributed vSwitch within the hypervisor itself to indicate their purpose. Most commonly, the networking is subdivided into at least the following networks:

    • A management network for the ESXi hosts and vCenter virtual machine, as well as other third-party management hosts that may be deployed.
    • A host migration network for the ESXi hosts to support the hot migration of guests, known as vMotion, between hosts to provide high availability.
    • A host storage network for the ESXi hosts to access storage resources that provide backing storage or guest-mapped storage for the virtual machines.
    • A virtual machine network that provides network connectivity to the virtual guests themselves.

    In addition to the networks listed above, it's common to come across unique networking configurations and purposes depending on the use cases of a particular customer's environment. Virtual switching, by its nature, allows a wide variety of configurations to be implemented easily. It's no surprise, then, that people who are used to defining their networks in a commonly understood way may be caught off guard when adopting a solution like OpenShift Virtualization, which does things differently.

    The following sections look at each of the primary use cases identified above and demonstrate how OpenShift Virtualization addresses these needs.

    By default, OpenShift Virtualization is installed with a single internal pod network. Like any other application deployed in a Kubernetes environment, this internal network is used to communicate with the pod itself for the duration of its lifecycle. By using the OpenShift console, the virtctl CLI utility, or the oc command, you can perform all essential management functions for virtual guests over the internal API network. Essentially, this network in OpenShift Virtualization performs the same actions that an administrator would expect from their internal management network in VMware vSphere.

    While all of the networking functions in an OpenShift environment, including the management functions detailed above, can be handled over the default pod network in a simple configuration, this is often not desirable. Some tasks, such as virtual guest migrations and storage access, benefit greatly from direct access over dedicated networks. While there are ways to tune the performance of certain actions, such as limiting the bandwidth used by guest migrations through migration policies, most enterprise users would agree that having dedicated networks for functions like guest migrations, storage, and virtual guest access is both desirable and more familiar to those who have operated in a VMware vSphere environment before.

    Figure 2: Creating a MigrationPolicy to limit the amount of bandwidth available for virtual machine live migrations.
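    As a concrete illustration, here is a minimal MigrationPolicy sketch that caps migration bandwidth for virtual machines matching a label. The resource name, bandwidth value, and label key/value are hypothetical and would be adapted to your own workloads.

    ```yaml
    apiVersion: migrations.kubevirt.io/v1alpha1
    kind: MigrationPolicy
    metadata:
      name: limited-bandwidth-policy      # hypothetical name
    spec:
      # Cap the bandwidth each live migration may consume.
      bandwidthPerMigration: 64Mi
      # Abort a migration that has not completed within this many seconds per GiB of guest memory.
      completionTimeoutPerGiB: 800
      allowAutoConverge: false
      allowPostCopy: false
      selectors:
        # Apply this policy only to VMIs carrying this (hypothetical) label.
        virtualMachineInstanceSelector:
          workload-class: database
    ```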

    For this purpose, OpenShift uses the Multus CNI plug-in, which allows you to configure additional network interfaces, SR-IOV devices, or Linux bridges that are available to the worker nodes, enabling them to be attached to individual pods as needed.

    Before Multus can use them, the underlying networks must be configured appropriately. In the case of Linux bridges, this is done with the NMState Operator; for SR-IOV devices, you would use the SR-IOV Network Operator. With the networking environment configured, the administrator can create a NetworkAttachmentDefinition, which instructs OpenShift how to use the additional physical interfaces, SR-IOV devices, or Linux bridges it defines.
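    As a sketch of what that looks like for a Linux bridge: the NodeNetworkConfigurationPolicy below (handled by the NMState Operator) creates a bridge named br1 on top of a physical NIC on every worker node, and the NetworkAttachmentDefinition then exposes that bridge to OpenShift Virtualization. The NIC name ens3, the bridge name, and the VLAN ID are placeholders for your environment.

    ```yaml
    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: br1-worker-policy              # hypothetical name
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: ""
      desiredState:
        interfaces:
          - name: br1                      # Linux bridge created on every matching worker
            type: linux-bridge
            state: up
            ipv4:
              enabled: false
            bridge:
              options:
                stp:
                  enabled: false
              port:
                - name: ens3               # placeholder physical NIC backing the bridge
    ---
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: vm-network                     # referenced later when attaching VMs
      annotations:
        k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "vm-network",
          "type": "cnv-bridge",
          "bridge": "br1",
          "vlan": 100
        }
    ```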

    Next, take a deeper look at the configuration for live migrations. Many deployments of VMware vSphere have a dedicated network defined to support vMotion due to the bandwidth demands of live migration. When a live migration event occurs in an infrastructure without a network dedicated to that purpose, it can negatively affect shared network resources, such as virtual guests. It can also impact the management of the environment itself should a node become unavailable and several guests be forced to migrate simultaneously. In OpenShift Virtualization, there are several options that protect live migration events from adversely affecting the primary networking or performance of the cluster.

    The first is to define a migration policy with rules that limit the amount of bandwidth available for migrations. It's also possible to define a dedicated network for live migrations by modifying the cluster settings under the OpenShift Virtualization overview. In that menu, you can limit the number of live migrations that can take place at any given time, preserving bandwidth, and you can change the network used for those live migrations.

    Figure 3: Configuring the limits for virtual machine live migration, including setting migration limitations and defining a dedicated network for live migration.

    To do this, create a network attachment definition for a dedicated network interface on your worker nodes, defining a private network that provides connectivity only between the worker nodes configured to support virtualization, and select it in those settings. This way, when a guest is selected for migration, or a host is placed into maintenance and all of its guests are forced to vacate, the dedicated live migration network is used, preventing additional strain on the primary OpenShift networking infrastructure.
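    Sketched below, under the assumption of a spare NIC (here eth1) on each virtualization worker, is a NetworkAttachmentDefinition in the openshift-cnv namespace for that private network, together with the liveMigrationConfig stanza of the HyperConverged resource that points live migrations at it and caps how many run in parallel. The address range and limits are illustrative only.

    ```yaml
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: migration-network
      namespace: openshift-cnv
    spec:
      config: |
        {
          "cniVersion": "0.3.1",
          "name": "migration-network",
          "type": "macvlan",
          "master": "eth1",
          "mode": "bridge",
          "ipam": {
            "type": "whereabouts",
            "range": "10.200.5.0/24"
          }
        }
    ---
    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
      namespace: openshift-cnv
    spec:
      liveMigrationConfig:
        network: migration-network           # use the dedicated network defined above
        parallelMigrationsPerCluster: 5      # illustrative limits
        parallelOutboundMigrationsPerNode: 2
        completionTimeoutPerGiB: 800
        progressTimeout: 150
    ```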

    Networking for storage resources will depend on the storage provider being used for the OpenShift cluster. Many storage providers have native CSI-compliant drivers for their storage systems that automatically apply their specified best practices for storage connectivity.

    From the OpenShift Virtualization perspective, there are a few suggested best practices for configuring the storage classes that support virtual guests. For example, persistent volumes provisioned for VMs should use ReadWriteMany (RWX) access mode so that additional nodes can access the persistent volume should the virtual guest need to be migrated to another node. As with migration, the worker nodes can access storage over the default management network, or they can have a separate network attached to provide connectivity for NFS or iSCSI-based persistent volumes. In many cases, the management or default pod network in OpenShift handles the API calls to the storage system, and the CSI driver ensures that the provisioned volume is mapped correctly over the available storage network.
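    For example, a PersistentVolumeClaim backing a VM disk might look like the sketch below. The claim name and storage class are placeholders for whatever RWX-capable class your CSI driver provides; block volume mode is shown because it is commonly used for VM disks.

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rhel9-root-disk                  # hypothetical VM disk claim
    spec:
      accessModes:
        - ReadWriteMany                      # RWX so the disk can follow a migrating VM
      volumeMode: Block
      resources:
        requests:
          storage: 30Gi
      storageClassName: my-rwx-storageclass  # placeholder; supplied by your CSI driver
    ```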

    Finally, consider public access to applications running in the VM, or to the virtual machine directly. For users who are not quite ready to containerize their applications, it's possible to treat applications currently running on virtual machines like those that run alongside them in containers. To accomplish this, apply a label to the virtual machine that identifies the application you wish to expose. Using that label as a selector, create a Kubernetes service for the ports the application on the virtual machine needs. This service can then be exposed by creating an OpenShift route, allowing access to the application directly, rather than to the virtual guest as a whole, over the default OpenShift pod network, exactly as a containerized application would be.
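    As a rough sketch, assuming the VM's template carries a hypothetical app: legacy-web label and the guest serves HTTP on port 8080, the Service and Route below would expose that application over the default pod network:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: legacy-web
    spec:
      selector:
        app: legacy-web          # hypothetical label set on the VM's pod template
      ports:
        - name: http
          protocol: TCP
          port: 80
          targetPort: 8080       # port the application listens on inside the guest
    ---
    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: legacy-web
    spec:
      to:
        kind: Service
        name: legacy-web
      port:
        targetPort: http         # route traffic to the service port named above
    ```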

    Accessing a virtual guest this way is a transitional step between virtualization and containerization of applications. For those who would like a more traditional approach with direct access to the guest operating system of a virtual machine, create a NetworkAttachmentDefinition bound to a Linux bridge or SR-IOV device that is available on each worker node enabled for hosting virtual guests. This ensures the virtual guest gets an external IP assigned to it and that users can connect directly to the guest using SSH or their remote access protocol of choice.
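    To show roughly what that attachment looks like on the VM side, here is an excerpt of a VirtualMachine spec (other required fields omitted) that binds a secondary interface to the vm-network NetworkAttachmentDefinition sketched earlier; the VM and interface names are hypothetical.

    ```yaml
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: rhel9-vm                        # hypothetical VM name
    spec:
      template:
        spec:
          domain:
            devices:
              interfaces:
                - name: external
                  bridge: {}                # bridge binding to the secondary network
          networks:
            - name: external
              multus:
                networkName: vm-network     # NetworkAttachmentDefinition from the earlier sketch
    ```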

    One caveat to keep in mind when attaching additional networks to virtual guests: the Linux bridge, the available physical interfaces, and the VLAN tags need to be defined identically and available on each worker node in the cluster to support high availability of the guest virtual machines should they need to migrate to a different node in the cluster.

    Identifying ways that the network architecture of a deployed OpenShift Virtualization solution can be configured to provide performance and behavior similar to other popular hypervisor environments like VMware vSphere can help customers have an engaging experience with the product. At Red Hat, we believe that positive experiences increase customer satisfaction and product adoption, which in turn facilitates application modernization.

    To learn more about application modernization and how OpenShift Virtualization can help, consider attending one of our upcoming live events, such as Red Hat Summit Connect or the OpenShift Virtualization Road Show, where we feature a product overview and hands-on lab dedicated to OpenShift Virtualization.

    Read Part 2 in this series: OpenShift Virtualization for vSphere admins: A change in the traditional storage paradigm

    Last updated: June 5, 2025
