Posts

NSX ALB LetsEncrypt with DNS-01 challenge - BIND example

In some of my previous posts, LetsEncrypt script integration and Script parameter usage, I explained how useful this kind of approach can be for NSX Advanced Load Balancer deployments: free, automatic certificate handling, especially in environments with a large number of web services. That approach uses the HTTP-01 challenge with LetsEncrypt and L7 HTTP/S virtual services on the NSX ALB side.

For some time now there has been an enhancement in this area, developed on the official Avi Networks DevOps page, that adds support for the DNS-01 challenge as well. I tried the on-prem option using BIND as the DNS server and it works very well.

The steps are similar to the HTTP-01 option and can be summarised as follows:
- Create an L7 virtual service with a publicly resolvable FQDN - the resulting certificates can be either RSA or ECDSA, as configured during creation;
- Download the required DNS-01 challenge script HERE
- Useful help
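Against BIND, the core job of any DNS-01 hook is an RFC 2136 dynamic update of the `_acme-challenge` TXT record. Below is a minimal sketch in Python (dnspython), not the official Avi script, assuming a TSIG key named `letsencrypt-key` is allowed to update the zone in named.conf; zone name, server IP and token value are placeholders:

```python
# Minimal sketch: publish a DNS-01 challenge TXT record on a BIND server
# via RFC 2136 dynamic update with a TSIG key. All names and the secret
# below are placeholders for your own environment.
import dns.query
import dns.rcode
import dns.tsigkeyring
import dns.update

keyring = dns.tsigkeyring.from_text({
    "letsencrypt-key": "base64-encoded-secret=="            # TSIG key from named.conf
})

update = dns.update.Update(
    "example.com",                                          # zone served by BIND
    keyring=keyring,
    keyalgorithm="hmac-sha256",
)
# LetsEncrypt validates this TXT record during the DNS-01 challenge
update.replace("_acme-challenge.app1", 60, "TXT", "token-from-acme-server")

response = dns.query.tcp(update, "192.0.2.53", timeout=10)  # BIND server IP
print(dns.rcode.to_text(response.rcode()))                  # expect NOERROR
```

After validation, the same mechanism with `update.delete("_acme-challenge.app1", "TXT")` cleans the record up again.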

NSX ALB (Avi Networks) HTTP policy set removal

As I wrote in some of my previous posts, Lets Encrypt automation of certificate renewal inside the NSX ALB (Avi) platform is very useful for customer environments with a large number of web-oriented services that need to run on free HTTPS. These integration posts can be found here: LINK-1, or LINK-2 for DNS-based configurations.

What I have seen on some operational systems is that, occasionally, a small leftover remains under HTTP policies (for HTTP-01 based challenges) in parent-child virtual service setups: the HTTP policy is not removed properly from the parent virtual service, and that blocks further renewal of the certificate when it is needed. Removing that leftover the usual way causes trouble, because it is still associated with the virtual service and the system does not allow the change.

What is needed is to de-associate the problematic HTTP policy set from the virtual service; it can then be removed from the system without any issue. This can be accomplished in 2 d
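As a rough illustration of the de-association step, here is a minimal Python sketch against the controller REST API (not an official Avi tool). Controller address, credentials, object names and the API version are placeholders, and the field names reflect the object model as I understand it, so verify them against your controller's API documentation:

```python
# Sketch: remove a stale HTTP policy set reference from a virtual service,
# then delete the now-unreferenced policy set. Names and credentials are
# placeholders; run against a test tenant first.
import requests

CTRL = "https://avi-controller.example.com"
VS_NAME = "parent-vs"
POLICY_NAME = "leftover-letsencrypt-policy"

s = requests.Session()
s.auth = ("admin", "password")
s.headers.update({"X-Avi-Version": "22.1.3"})
s.verify = False  # lab only; use proper certificates in production

# 1) Fetch the virtual service and drop the reference to the stale policy set
vs = s.get(f"{CTRL}/api/virtualservice", params={"name": VS_NAME}).json()["results"][0]
vs["http_policies"] = [
    p for p in vs.get("http_policies", [])
    if POLICY_NAME not in p.get("http_policy_set_ref", "")
]
s.put(f"{CTRL}/api/virtualservice/{vs['uuid']}", json=vs).raise_for_status()

# 2) The HTTP policy set is no longer associated and can be deleted
pol = s.get(f"{CTRL}/api/httppolicyset", params={"name": POLICY_NAME}).json()["results"][0]
s.delete(f"{CTRL}/api/httppolicyset/{pol['uuid']}").raise_for_status()
```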

NSX ALB routing - multiple floating IP and BGP setup

Recently I had a very interesting scenario around an NSX ALB (ex Avi Networks) setup with multiple networks, NAT and no-NAT traffic, and, more importantly, a specific routing requirement inside the customer environment. As you may know, NSX ALB service engines have multiple NICs - to be precise, 1 management + 9 data interfaces - which can be used in different configurations depending on actual needs and infrastructure. In my specific case, the following assumptions were successfully deployed across the virtual service configuration:
- external network (from the NSX ALB perspective) - based on the Cisco ACI SDN solution, where different L3-outs (an ACI-specific construct) for multiple NSX ALB needs were configured directly on the Cisco platform. For this purpose we introduce a VRF named XYZ, created specifically for the connections mentioned above;
- there is a need for multiple floating IPs + BGP configuration in place on the NSX ALB SEs, which can be found on this link  Default Gatew
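For the BGP part of that setup, the peering towards the ACI L3-outs lives in the SE data VRF. Below is a minimal, hedged sketch of attaching a BGP profile with two peers to that VRF via the controller REST API; controller address, credentials, AS numbers and peer subnets are placeholders, and the field names follow the Avi object model as I recall it, so check them against your controller's API docs:

```python
# Sketch: configure eBGP peers on the SE data VRF "XYZ" so that VIPs placed
# in this VRF can be announced towards the ACI L3-out. Values are placeholders.
import requests

CTRL = "https://avi-controller.example.com"
s = requests.Session()
s.auth = ("admin", "password")
s.headers.update({"X-Avi-Version": "22.1.3"})
s.verify = False  # lab only

vrf = s.get(f"{CTRL}/api/vrfcontext", params={"name": "XYZ"}).json()["results"][0]

vrf["bgp_profile"] = {
    "local_as": 65001,
    "ibgp": False,                      # eBGP towards the ACI L3-out
    "peers": [
        {
            "remote_as": 65000,
            "peer_ip": {"addr": "10.10.10.1", "type": "V4"},
            "subnet": {"ip_addr": {"addr": "10.10.10.0", "type": "V4"}, "mask": 24},
            "advertise_vip": True,      # announce VIPs in this VRF via BGP
        },
        {
            "remote_as": 65000,
            "peer_ip": {"addr": "10.10.20.1", "type": "V4"},
            "subnet": {"ip_addr": {"addr": "10.10.20.0", "type": "V4"}, "mask": 24},
            "advertise_vip": True,
        },
    ],
}

s.put(f"{CTRL}/api/vrfcontext/{vrf['uuid']}", json=vrf).raise_for_status()
```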

VMware SD WAN (Velocloud) on prem lab guide

For the purpose of a lab playground and exploring different features of the VMware SD-WAN (ex Velocloud) solution, it is relatively easy to deploy the required components inside a demo environment. A full on-prem production infrastructure requires VMware professional services for proper deployment and installation, with the cloud-based option being the one preferred by the vendor itself.

The setup requires a couple of OVA files for deployment, as in typical VMware environments:
- vCO - orchestrator, providing the configuration and management plane,
- vCG - gateway, providing the control plane function,
- vCE - edge, establishing the data plane and possibly the only hardware piece in an SD-WAN setup (also available as an OVA, of course).

A successful setup comprises the following steps (a scripted sketch of step 1 follows below):
1) classic OVA deployment of the vCO and vCG components. For vCO and vCG you have the option to dedicate 1 or 2 interfaces for communication with the external/internal (i.e. a second vCO or vCG) world
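If you want to script the classic OVA deployment of step 1, a minimal Python sketch driving VMware's ovftool could look like the following; the OVA filenames, VM names, datastore, port group and vCenter inventory path are all placeholders for your lab:

```python
# Sketch: deploy the vCO and vCG OVAs into a lab vCenter using ovftool.
# ovftool must be installed and on PATH; all names below are placeholders.
import subprocess

VCENTER = "vi://administrator%40vsphere.local@vcenter.example.com/LabDC/host/LabCluster"

def deploy_ova(ova_path: str, vm_name: str, mgmt_portgroup: str) -> None:
    """Deploy one OVA (vCO or vCG) with its first network mapped to a management port group."""
    cmd = [
        "ovftool",
        "--acceptAllEulas",
        "--powerOn",
        f"--name={vm_name}",
        "--datastore=lab-datastore",
        "--diskMode=thin",
        f"--net:Network 1={mgmt_portgroup}",   # map the OVA's first network to your PG
        ova_path,
        VCENTER,
    ]
    subprocess.run(cmd, check=True)

deploy_ova("vco.ova", "lab-vco01", "PG-Mgmt")
deploy_ova("vcg.ova", "lab-vcg01", "PG-Mgmt")
```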

VMware SD WAN - multiple locations - LAN IP address space overlapping with NAT

Different scenarios are possible in terms of routing, NAT and IP overlap setups using VMware Velocloud SD-WAN technology in customer environments.

Recently I ran a PoC with my customer for the on-prem option of this VMware solution, where several use cases were interesting to demonstrate. One of them I would like to share here: the possibility of LAN-side NAT on Edge (branch) locations, with the purpose of allowing overlapping IP ranges in these setups. The next picture shows a typical hub-and-spoke setup where this type of configuration is possible:

Picture 1. VMware SD WAN lab on-prem environment

Basically, what needs to be accomplished is an appropriate NAT solution for the LANs on every branch Edge, which are and need to remain the same (192.168.1.0/24 in this example), as shown in Picture 1.

Honestly speaking, NAT is not one of the most powerful features of the Velocloud SD-WAN solution if you compare it to traditional netwo

NSX ALB (Avi) BGP scaling using ECMP and RHI

The NSX ALB (ex Avi Networks) solution supports different methods for scaling the data plane, providing the required throughput, high availability and scalability both vertically and horizontally. In the typical scale-out mechanism for a virtual service, which by the way can be manual or automatic, service engines (SEs) are added to the group while the existing VIP (Virtual Service IP) is maintained (the scale-in process is, of course, the opposite). Different options are available for these scaling methods:

L2 scale-out mode - the VIP is always mapped to a "primary" SE, which means that distributing traffic to the other SEs in the group is also the responsibility of the primary SE --> heavier load on and dependency on this node are possible. Return traffic can go directly from the secondary SEs via Direct Secondary Return mode, or via the primary SE (Tunnel mode). Direct Secondary Return (DSR) mode gives the option for return traffic to use the VIP as
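For the BGP/ECMP alternative with RHI, the relevant knobs can be flipped through the controller REST API as well. The following is a minimal, hedged sketch, not a complete procedure: names, credentials and scale-out limits are placeholders, and the field names should be verified against your controller's API documentation:

```python
# Sketch: enable Route Health Injection (RHI) on a virtual service so each SE
# advertises the VIP as a host route over BGP, and raise the SE group
# scale-out limits so upstream routers can ECMP across multiple SEs.
import requests

CTRL = "https://avi-controller.example.com"
s = requests.Session()
s.auth = ("admin", "password")
s.headers.update({"X-Avi-Version": "22.1.3"})
s.verify = False  # lab only

# 1) Enable RHI on the virtual service
vs = s.get(f"{CTRL}/api/virtualservice", params={"name": "web-vs"}).json()["results"][0]
vs["enable_rhi"] = True
s.put(f"{CTRL}/api/virtualservice/{vs['uuid']}", json=vs).raise_for_status()

# 2) Allow the SE group to place the VS on more engines (more ECMP paths)
seg = s.get(f"{CTRL}/api/serviceenginegroup", params={"name": "Default-Group"}).json()["results"][0]
seg["min_scaleout_per_vs"] = 2
seg["max_scaleout_per_vs"] = 4
s.put(f"{CTRL}/api/serviceenginegroup/{seg['uuid']}", json=seg).raise_for_status()
```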

VMware NSX - Multi-site design options - recommendations - scenarios

DISCLAIMER - THIS IS A REMINDER-BASED POST - ALL INFORMATION IS AVAILABLE IN THE OFFICIAL MULTI-LOCATION DESIGN GUIDE HERE

One of the fundamental scenarios that can be fulfilled with a datacenter SDN solution like VMware NSX is the ability to implement a multisite setup, giving the customer the option to utilise multiple DC locations over any L1/L2/L3 inter-site link that supports the larger MTU required by the overlay encapsulation inside NSX. It is worth mentioning that the security incorporated inside NSX is also capable of following the multi-location logic needed for these types of implementations. There are two main concepts behind the multisite setup offered by NSX:
NSX Multisite - a single NSX manager cluster managing transport nodes in different locations, and
NSX Federation - introduced with NSX v3.x, with 1 central global management cluster (global manager) and 1 NSX manager cluster per location (local manager).
In this article I will follow up on design decisions and recommendations for the first option, as I