VCF 4 on VxRail Architecture Diagrams

This post contains some sample high-level, non-official VCF 4 on VxRail architecture diagrams and notes that I have put together for enablement purposes; they may be of use to others.

These diagrams are reflective of VCF 4.2 on VxRail, and are based on the following supported architectures:

  • Standard Architecture

  • Consolidated Architecture

  • Multi-AZ Architecture

  • Multi-Region (tbc)

Before diving into the architectures, first some high-level details to consider with VCF 4 on VxRail:


  • Architecture

      • VCF 4.0 is New Deployments/Greenfield only

      • No upgrade path to VCF 4.0 from 3.9.x

      • vSphere 7 only

      • Single Embedded PSC

      • Consolidated Architecture now supported

      • Remote VI WLDs supported

  • NSX Updates

      • NSX-T only

      • NSX-T Edges in Mgmt WLD if AVN configured

      • NSX-T:VI WLD mapping options:

          • 1:1

          • 1:Many

  • Hardware

      • Must use VxRail systems only (no non-VxRail hardware)

      • Min 2 NICs per VxRail node

      • Standard Architecture minimum of 7 nodes (4+3)

          • 4-node VxRail minimum for VCF Mgmt

          • 3-node VxRail minimum for VI WLDs

      • Consolidated Architecture requires a 4-node VxRail minimum
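As a quick sanity check, the node minimums above can be sketched in a few lines of Python (illustrative only: the function and dictionary names are my own, not part of any VCF or VxRail tooling):

```python
# Illustrative check of the VCF on VxRail node minimums listed above.
# Names here are my own shorthand, not a VCF/VxRail API.

MINIMUMS = {
    "standard_mgmt": 4,   # Standard Architecture: 4-node VxRail for VCF Mgmt
    "standard_vi": 3,     # Standard Architecture: 3-node VxRail min per VI WLD
    "consolidated": 4,    # Consolidated Architecture: 4-node VxRail min
}

def meets_minimum(domain: str, nodes: int) -> bool:
    """Return True if the node count meets the documented minimum."""
    return nodes >= MINIMUMS[domain]

# A Standard Architecture deployment needs 7 nodes in total (4 + 3):
print(meets_minimum("standard_mgmt", 4))  # True
print(meets_minimum("standard_vi", 2))    # False, below the 3-node minimum
```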

Please refer to the official VCF 4.2 on VxRail 7.0 Architecture Guide for more details.


The architectures in this post are available in Visio format to download at the end of this post.


For general VCF 4.0 on VxRail release details, take a look at this post DELL EMC VXRAIL – INTRODUCING VCF 4.0 from my colleague @DavidCRing.

 

Sample Standard Architecture

Management with a Single Virtual Infrastructure (VI) WLD

In the above example, note the following details:

  • All Management WLD and VI WLD vCenter Server and NSX-T Manager instances are located in Management WLD

  • VI WLD VxRail(s) use External vCenter Server(s)

  • NSX-T Edges deployed in Mgmt WLD only if AVN configured

  • NSX-T can be 1:1 or 1:many for VI Workload Domains. In this example it is 1:1 as we have only a single VI WLD.

  • The 1st VI WLD hosts Shared Edge devices

  • Each cluster hosts its own VxRail Mgr

 

Management WLD + Single VI WLD including multiple Clusters

In the above example, note the following details:

  • Single instance of NSX-T for VI Workload Domains

  • Shared VI WLD vCenter and NSX-T instance

  • 2nd cluster in VI WLD01 uses the existing Edges

  • Each cluster hosts its own VxRail Mgr

 

Management WLD + Two VI WLDs, using single NSX-T (1:many)

In the above example, note the following details:

  • Single NSX-T for VI Workload Domains (1:Many)

  • 1st VI WLD hosts Shared Edge devices

  • 2nd vCenter for VI WLD02

  • Shared VI WLD NSX-T instance

  • Uses existing Edges in VI WLD01

  • Each cluster hosts its own VxRail Mgr

 

Management WLD + Two VI WLDs, using two NSX-T (1:1) instances

In the above example, note the following details:

  • Each VI WLD has own vCenter Server

  • Each VI WLD has own NSX-T (1:1) instance

  • Each VI WLD has own Edge devices

  • Each cluster hosts its own VxRail Mgr
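The difference between the 1:many and 1:1 NSX-T options shown in the last two examples can be summarised as a simple mapping sketch (illustrative; the instance and WLD labels are my own shorthand, not product identifiers):

```python
# Illustrative mapping of NSX-T Manager instances to VI Workload Domains.
# Labels are my own shorthand, not product identifiers.

# 1:many — a single NSX-T instance shared by both VI WLDs:
one_to_many = {"NSX-T-01": ["VI-WLD01", "VI-WLD02"]}

# 1:1 — each VI WLD gets its own NSX-T instance:
one_to_one = {"NSX-T-01": ["VI-WLD01"], "NSX-T-02": ["VI-WLD02"]}

def nsx_instances_needed(mapping: dict) -> int:
    """Number of NSX-T Manager instances a given layout requires."""
    return len(mapping)

print(nsx_instances_needed(one_to_many))  # 1
print(nsx_instances_needed(one_to_one))   # 2
```

In both layouts the NSX-T Managers themselves run in the Management WLD, as noted earlier.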

 

added/updated March 2nd 2021

Management WLD + Two VI WLDs, 1 Remote, using single NSX-T (1:many)

In the above example, note the following details:

  • Single NSX-T for VI Workload Domains (1:Many)

  • 1st VI WLD hosts Shared Edge devices for VI WLD01

  • 2nd vCenter for the remote VI WLD02R

  • Shared VI WLD NSX-T instance

  • Requires Edges in the remote VI WLD02R

  • Each cluster hosts its own VxRail Mgr

 

Sample Consolidated Architecture

In the above example, note the following details:

  • Single WLD

  • Single vCenter Server

  • Single NSX-T instance

  • Single VxRail Mgr

  • Resource Pools used for workload isolation

  • Single WLD for VCF Mgmt and Tenant workloads (4-node minimum; 8 nodes recommended)

  • Scales to 64 hosts, in line with vSphere maximums

  • Multi-cluster is possible

  • Cannot manage additional VI WLDs

 

Sample Multi-AZ Architectures


Note that for all VCF on VxRail Multi-AZ architectures, the Management WLD MUST be stretched.

  • VI WLDs, and the clusters within them, can be stretched or local to either AZ

  • The Mgmt WLD requires a min of 4 nodes per site

  • The 1st VI WLD requires a min of 4 nodes per site

  • Secondary VI WLD clusters require a min of 3 nodes per site

  • Node count must be balanced per site
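The Multi-AZ sizing rules above (per-site minimums plus balanced node counts) can be sketched as a small check, assuming nothing beyond the bullets themselves (function and key names are my own):

```python
# Illustrative Multi-AZ sizing check based on the bullets above.
# Names are my own shorthand, not a VCF/SDDC Manager API.

PER_SITE_MIN = {
    "mgmt": 4,          # Mgmt WLD: min 4 nodes per site
    "first_vi": 4,      # 1st VI WLD: min 4 nodes per site
    "secondary_vi": 3,  # Secondary VI WLD clusters: min 3 nodes per site
}

def valid_stretched_cluster(kind: str, az1_nodes: int, az2_nodes: int) -> bool:
    """Check the per-site minimum and that the node count is balanced per site."""
    minimum = PER_SITE_MIN[kind]
    balanced = az1_nodes == az2_nodes
    return balanced and az1_nodes >= minimum

print(valid_stretched_cluster("mgmt", 4, 4))          # True
print(valid_stretched_cluster("secondary_vi", 4, 3))  # False: unbalanced
```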

Stretched Mgmt WLD + Stretched VI WLD

In the above example, note the following details:

  • Stretched Mgmt WLD

  • 1 x Stretched VI WLD

  • 2 x vCenter Servers

  • 2 x NSX-T instances

      • Mgmt 1:1

      • VI WLDs (1:1)

  • 2 x VxRail Mgr

 

Stretched Mgmt WLD + 2 x Stretched VI WLD, using single NSX-T


In the above example, note the following details:

  • Stretched Mgmt WLD

  • 2 x Stretched VI WLD

  • 3 x vCenter Servers

  • 2 x NSX-T instances

      • Mgmt 1:1

      • VI WLDs (1:many)

  • 2 x VxRail Mgr

 

Stretched Mgmt WLD + 2 VI WLDs with mix of stretched and local WLDs and clusters, using single NSX-T instance

In the above example, note the following details:

  • Stretched Mgmt WLD

  • 1 x Stretched VI WLD

  • 2 x Local VI WLDs

  • 4 x vCenter Servers

  • 2 x NSX-T instances

      • Mgmt 1:1

      • VI WLDs (1:many)

  • 3 x NSX-T Edges

      • Mgmt

      • VI WLD – local to each AZ

  • 4 x VxRail Mgr

 

Stretched Mgmt WLD + Stretched VI WLD, 2 x Local VI WLDs, using Two NSX-T instances

In the above example, note the following details:

  • Stretched Mgmt WLD

  • 1 x Stretched VI WLD

  • 2 x Local VI WLDs

  • 4 x vCenter Servers

  • 3 x NSX-T instances

      • Mgmt 1:1

      • VI WLDs (1:1)

  • 3 x NSX-T Edges

      • Mgmt

      • VI WLD – local to each AZ

  • 4 x VxRail Mgr

 

Multi-Region / DR Architecture

Coming soon on Steve's blog.

