load balancing Archives – NGINX
https://www.nginx.com/blog/tag/load-balancing/

Use Infrastructure as Code to Deploy F5 NGINX Management Suite
https://www.nginx.com/blog/use-infrastructure-as-code-to-deploy-f5-nginx-management-suite/ (August 8, 2023)

Unlocking the full potential of F5 NGINX Management Suite can help your organization simplify app and API deployment, management, and security. The new NGINX Management Suite Infrastructure as Code (IaC) project aims to help you get started as quickly as possible, while also encouraging the best practices for your chosen deployment environment.

If you are responsible for building software infrastructure, you’re likely familiar with IaC as a modern approach to getting consistent results. However, because there are many ways to achieve an IaC setup, it can be daunting to get started or time-consuming to create from scratch.

This blog post introduces the NGINX Management Suite Infrastructure as Code repository and outlines how to set up its individual modules to quickly get them up and running.

Project Overview

There are two established methods to design your IaC. One method is the baked approach, where images are created with the required software and configuration. The other method, the fried approach, is to deploy your servers and continuously configure them using a configuration management tool. You can watch this NGINX talk to learn about immutable infrastructure, including the differences between baked and fried images.

In the NGINX Management Suite IaC repository, we take the baked approach – using Packer to bake the images and then Terraform to deploy instances of these images. By creating a pre-baked image, you can speed up the deployment of your individual NGINX Management Suite systems and improve the consistency of your infrastructure.

Baked Approach – using Packer to bake and then Terraform to deploy instances.

Working with the GitHub Repo

The Packer output is an image/machine with NGINX Management Suite and all supported modules installed (at the time of writing, these are Instance Manager, API Connectivity Manager, Security Monitoring, and Application Delivery Manager). The license you apply determines which modules you can use. You can find your license information in the MyF5 Customer Portal or, if you’re not already a customer, you can request a 30-day free trial of API Connectivity Stack or App Delivery Stack to get started.

Confidential information, such as passwords or certificates, is removed during the image generation process. The images can be built for any OS that NGINX Management Suite supports, and the build parameters can be modified. NGINX provides support for several cloud and on-premises environments for both image building and deployment, with the intent to actively add support for more. At the time of writing, the following setups are supported:

The supported environments are AWS, GCP, Azure, and vSphere, covering Packer image builds for NGINX Management Suite and for NGINX Plus, and Terraform deployments of both the basic reference architecture and a standalone NGINX Management Suite instance. The exact combinations supported on each platform are listed in the repository.

The basic reference architecture deploys an NGINX Management Suite instance with the required number of NGINX Plus instances. The network topology deployed adheres to best practices for the targeted cloud provider.

For example, if you are using Amazon Web Services (AWS), you can deploy this infrastructure:

AWS Infrastructure example

How to Get Started

To start using IaC for NGINX Management Suite, clone this repository and follow the README for building your images. For the basic reference architecture, you will need to follow the Packer guides to generate an NGINX Management Suite and NGINX Plus image.

After you have generated your images, you can use them to deploy your reference architecture. The Terraform stack uses sensible defaults with configuration options that can be edited to suit your needs.

How to Contribute

This repository is in active development and we welcome contributions from the community. For more information please view our contributing guide.


Optimizing MQTT Deployments in Enterprise Environments with NGINX Plus
https://www.nginx.com/blog/optimizing-mqtt-deployments-in-enterprise-environments-nginx-plus/ (June 6, 2023)

When announcing the R29 release of NGINX Plus, we briefly covered its new native support for parsing MQTT messages. In this post, we’ll build on that and discuss how NGINX Plus can be configured to optimize MQTT deployments in enterprise environments.

What Is MQTT?

MQTT stands for Message Queuing Telemetry Transport. It’s a very popular, lightweight publish-subscribe messaging protocol, ideal for connecting Internet of Things (IoT) or machine-to-machine (M2M) devices and applications over the internet. MQTT is designed to operate efficiently in low-bandwidth or low-power environments, making it an ideal choice for applications with a large number of remote clients. It’s used in a variety of industries, including consumer electronics, automotive, transportation, manufacturing, and healthcare.

NGINX Plus MQTT Message Processing

NGINX Plus R29 supports MQTT 3.1.1 and MQTT 5.0. It acts as a proxy between clients and brokers, offloading tasks from core systems, simplifying scalability, and reducing compute costs. Specifically, NGINX Plus parses and rewrites portions of MQTT CONNECT messages, enabling features like:

  • MQTT broker load balancing 
  • Session persistence (reconnecting clients to the same broker) 
  • SSL/TLS termination 
  • Client certificate authentication 

MQTT message processing directives must be defined in the stream context of an NGINX configuration file and are provided by the ngx_stream_mqtt_preread_module and ngx_stream_mqtt_filter_module modules.

The preread module processes MQTT data prior to NGINX’s internal proxying, allowing load balancing and upstream routing decisions to be made based on parsed message data.

The filter module enables rewriting of the clientid, username, and password fields within received CONNECT messages. The ability to set these fields to variables and complex values expands configuration options significantly, enabling NGINX Plus to mask sensitive device information or insert data like a TLS certificate distinguished name.

MQTT Directives and Variables

Several new directives and embedded variables are now available for tuning your NGINX configuration to optimize MQTT deployments and meet your specific needs.

Preread Module Directives and Embedded Variables

  • mqtt_preread – Enables MQTT parsing, extracting the clientid and username fields from CONNECT messages sent by client devices. These values are made available via embedded variables and help hash sessions to load-balanced upstream servers (examples below).
  • $mqtt_preread_clientid – Represents the MQTT client identifier sent by the device.
  • $mqtt_preread_username – Represents the username sent by the client for authentication purposes.

Filter Module Directives

  • mqtt – Defines whether MQTT rewriting is enabled.
  • mqtt_buffers – Overrides the maximum number of MQTT processing buffers that can be allocated per connection and the size of each buffer. By default, NGINX will impose a limit of 100 buffers per connection, each 1k in length. Typically, this is optimal for performance, but may require tuning in special situations. For example, longer MQTT messages require a larger buffer size. Systems processing a larger volume of MQTT messages for a given connection within a short period of time may benefit from an increased number of buffers. In most cases, tuning buffer parameters has little bearing on underlying system performance, as NGINX constructs buffers from an internal memory pool.
  • mqtt_rewrite_buffer_size – Specifies the size of the buffer used for constructing MQTT messages. This directive has been deprecated and is obsolete since NGINX Plus R30.
  • mqtt_set_connect – Rewrites parameters of the CONNECT message sent from a client. Supported parameters include: clientid, username, and password.

MQTT Examples

Let’s explore the benefits of processing MQTT messages with NGINX Plus and the associated best practices in more detail. Note that we use ports 1883 and 8883 in the examples below. Port 1883 is the default unsecured MQTT port, while 8883 is the default SSL/TLS encrypted port.

MQTT Broker Load Balancing

The ephemeral nature of MQTT devices may cause client IPs to change unexpectedly. This can create challenges when routing device connections to the correct upstream broker. The subsequent movement of device connections from one upstream broker to another can result in expensive syncing operations between brokers, adding latency and cost.

By parsing the clientid field in an MQTT CONNECT message, NGINX can establish sticky sessions to upstream service brokers. This is achieved by using the clientid as a hash key for maintaining connections to broker services on the backend.

In this example, we proxy MQTT device data using the clientid as a token for establishing sticky sessions to three upstream brokers. We use the consistent parameter so that if an upstream server fails, its share of the traffic is evenly distributed across the remaining servers without affecting sessions that are already established on those servers.

stream {
    mqtt_preread on;

    upstream backend {
        zone tcp_mem 64k;
        hash $mqtt_preread_clientid consistent;

        server 10.0.0.7:1883; # upstream mqtt broker 1
        server 10.0.0.8:1883; # upstream mqtt broker 2
        server 10.0.0.9:1883; # upstream mqtt broker 3
    }

    server {
        listen 1883;
        proxy_pass backend;
        proxy_connect_timeout 1s;
    }
}

NGINX Plus can also parse the username field of an MQTT CONNECT message. For more details, see the ngx_stream_mqtt_preread_module specification.

SSL/TLS Termination

Encrypting device communications is key to ensuring data confidentiality and protecting against man-in-the-middle attacks. However, TLS handshaking, encryption, and decryption can be a resource burden on an MQTT broker. To solve this, NGINX Plus can offload data encryption from a broker (or a cluster of brokers), simplifying security rules and allowing brokers to focus on processing device messages. 

In this example, we show how NGINX can be used to proxy TLS-encrypted MQTT traffic from devices to a backend broker. The ssl_session_cache directive defines a 5-megabyte cache, which is enough to store approximately 20,000 SSL sessions. NGINX will attempt to reach the proxied broker for five seconds before timing out, as defined by the proxy_connect_timeout directive.

stream {
    server {
        listen 8883 ssl;
        ssl_certificate /etc/nginx/certs/tls-cert.crt;
        ssl_certificate_key /etc/nginx/certs/tls-key.key;
        ssl_session_cache shared:SSL:5m;
        proxy_pass 10.0.0.8:1883;
        proxy_connect_timeout 5s;
    }
}

Client ID Substitution

For security reasons, you may opt to not store client-identifiable information in the MQTT broker’s database. For example, a device may send a serial number or other sensitive data as part of an MQTT CONNECT message. By replacing a device’s identifier with other known static values received from a client, an alternate unique key can be established for every device attempting to reach NGINX Plus proxied brokers.

In this example, we extract a unique identifier from a device’s client SSL certificate and use it to mask its MQTT client ID. Client certificate authentication (mutual TLS) is controlled with the ssl_verify_client directive. When it is set to on, NGINX ensures that client certificates are signed by a trusted Certificate Authority (CA). The list of trusted CA certificates is defined by the ssl_client_certificate directive.

stream {
    mqtt on;

    server {
        listen 8883 ssl;
        ssl_certificate /etc/nginx/certs/tls-cert.crt;
        ssl_certificate_key /etc/nginx/certs/tls-key.key;
        ssl_client_certificate /etc/nginx/certs/client-ca.crt;
        ssl_session_cache shared:SSL:10m;
        ssl_verify_client on;
        proxy_pass 10.0.0.8:1883;
        proxy_connect_timeout 1s;

        mqtt_set_connect clientid $ssl_client_serial;
    }
}

Client Certificate as an Authentication Credential

One common approach to authenticating MQTT clients is to use data stored in a client certificate as the username. NGINX Plus can parse client certificates and rewrite the MQTT username field, offloading this task from backend brokers. In the following example, we extract the client certificate’s Subject Distinguished Name (Subject DN) and copy it to the username portion of an MQTT CONNECT message.

stream {
    mqtt on;

    server {
        listen 8883 ssl;
        ssl_certificate /etc/nginx/certs/tls-cert.crt;
        ssl_certificate_key /etc/nginx/certs/tls-key.key;
        ssl_client_certificate /etc/nginx/certs/client-ca.crt;
        ssl_session_cache shared:SSL:10m;
        ssl_verify_client on;
        proxy_pass 10.0.0.8:1883;
        proxy_connect_timeout 1s;

        mqtt_set_connect username $ssl_client_s_dn;
    }
}

For a complete specification on NGINX Plus MQTT CONNECT message rewriting, see the ngx_stream_mqtt_filter_module specification.

Get Started Today

Future developments to MQTT in NGINX Plus may include parsing of other MQTT message types, as well as deeper parsing of the CONNECT message to enable functions like:

  • Additional authentication and access control mechanisms
  • Protecting brokers by rate limiting “chatty” clients
  • Message telemetry and connection metrics

If you’re new to NGINX Plus, sign up for a free 30-day trial to get started with MQTT. We would also love to hear your feedback on the features that matter most to you. Let us know what you think in the comments.

Simplify and Accelerate Cloud Migrations with F5 NGINXaaS for Azure
https://www.nginx.com/blog/simplify-and-accelerate-cloud-migrations-with-nginxaas-for-azure/ (January 30, 2023)

F5 NGINX is excited to announce the general availability of our first as-a-Service (aaS) product: F5 NGINXaaS for Azure. We are committed to providing flexible aaS options to fulfill the evolving needs of our customers.

The rapid expansion of aaS offerings parallels the explosive growth in the popularity of cloud computing. The market has quickly warmed to the aaS concept because of some clear advantages compared to hardware and software solutions:

  • Faster feature velocity – Enhancements are added on a rolling basis, as opposed to the restrictions of a traditional scheduled cadence of product releases.
  • Consistent pricing model – Lower up‑front capital expenses – including infrastructure costs – and more predictable pricing that allows you to address usage needs more flexibly and pay only for what you need.
  • Ease of use – Fully managed offerings abstract away complexity, letting organizations focus on business value rather than the effort of standing up the service, and can be accessed by familiar tools including third‑party products like Terraform, SDKs, CLIs, and cloud vendor portals.

What Is NGINXaaS for Azure?

NGINXaaS for Azure is an Infrastructure-as-a-Service (IaaS) version of NGINX Plus that enables application developers to deliver consistent, secure, and highly available high‑performance apps in Microsoft Azure. With NGINXaaS for Azure, organizations can now deploy NGINX Plus across environments in the way that best suits their needs, whether that be as an Azure‑native solution, a hybrid approach, or strictly on premises.

Topology diagram showing F5 NGINXaaS for Azure in the Microsoft Azure ecosystem

We announced the public preview of NGINXaaS for Azure in May 2022. Customers welcomed the power it gave them to extend existing workloads and applications to Azure and the flexibility of consuming NGINX products as a service. During the public preview period, we made many enhancements in response to much‑appreciated customer feedback.

Some of the new capabilities in NGINXaaS for Azure include:

  • Content caching to speed delivery to clients while also reducing the workload on servers
  • Rate limiting for protection of upstream web and application servers
  • Metrics‑based alerts for proactive monitoring
  • Assurance that NGINXaaS for Azure deployments are running the most secure, up-to-date, and stable versions of NGINX Plus
  • Manual scaling so users can ensure that deployments match the requirements for their service
  • Availability in additional Azure regions and zones for improved performance and security of business‑critical workloads

To discover more about NGINXaaS for Azure and its capabilities, please refer to the documentation and watch the livestream (debuts March 14, 2023 at 9:00 AM PST).

Lift and Shift NGINX Plus Configurations to Azure

Lifting and shifting applications to the cloud is a standard part of most cloud adoption journeys, but it’s still complicated and time‑consuming. The most empowering feature of NGINXaaS for Azure is the ease with which you can port existing NGINX Plus configurations to Azure.

Bring-your-own-configuration (BYOC) reduces migration pains across three categories:

  • Migrate faster by using existing configurations

    Your configurations represent your organization’s tried-and-true solutions to your particular app delivery needs. Being able to port them into Azure and reliably manage them is a significant competitive advantage for your entire business. It brings a level of consistency to seemingly mundane, but immensely critical, functions like traffic management.

  • Migrate without impacting team workflows

    Migrations are generally disruptive and time‑consuming, but with NGINXaaS for Azure they don’t have to be. Porting your NGINX Plus configurations into NGINXaaS for Azure means you can migrate with minimal distraction and help team members rapidly achieve value from the service. With NGINXaaS for Azure there’s no interruption to the efficiency, performance, and reliability you’ve come to expect from NGINX.

  • Reduce time-to-value in Azure and leverage integrations

    Once your configurations are ported into Azure, there are further benefits. Native GitHub CI/CD workflows for creating and updating NGINX Plus configuration files make deployment fast and convenient. You can also leverage Microsoft’s integrations with GitHub to manage Azure credentials and secrets.

For a real‑life example of how a financial services company migrated NGINX Plus configs to Azure using a BYOC model, read Causeway Capital Management to Simplify Deployment and Save Time with F5 NGINXaaS for Azure.

Simplify Hybrid-Cloud App Delivery Tool Stack

In addition to on-premises deployments, most organizations operate in multiple clouds, which commonly come with issues relating to complexity, resilience, and security. One major cause of these issues is tool sprawl: the unnecessary duplication of tools that perform the same functions. Tool sprawl forces teams to maintain different policies for each environment, slows troubleshooting, and limits visibility across the infrastructure. One tool that is easy to standardize across an entire organization is the load balancer.

NGINXaaS for Azure’s close tie to NGINX Plus is great news for NGINX shops that want consistent app‑delivery tooling across multiple environments.

  • For customers wanting to lift and shift NGINX Plus configurations to the Azure cloud, NGINXaaS for Azure is a great choice which can be purchased on the Azure Marketplace and deployed with just a few clicks in the Azure Portal.
  • For customers requiring on‑premises deployments or more advanced, customized Azure deployments, NGINX Plus can be purchased on the Azure Marketplace or separately using a bring-your-own-license approach.

Looking Ahead

NGINXaaS for Azure is the result of an F5 and Microsoft co‑development effort – three years in the making – to provide a fully Azure‑native experience of NGINX Plus. In the spirit of the aaS model, this collaboration will continue to deliver exciting new possibilities in the coming months. Stay tuned for the latest news of feature upgrades and increased capabilities that will further help your organization securely address app delivery needs with greater flexibility and resiliency.

Learn more about NGINXaaS for Azure in the product documentation.

Want to give F5 NGINXaaS for Azure a try? Visit the Azure Marketplace or contact us to discuss your use cases.

NGINX Plus and Microsoft Azure Load Balancers
https://www.nginx.com/blog/nginx-plus-and-azure-load-balancers-on-microsoft-azure/ (June 25, 2021)


[Editor – This post has been updated to reflect the features supported by NGINX Plus and Azure load balancing services as of June 2021. It also refers to the NGINX Plus API, which replaces and deprecates the separate dynamic configuration module mentioned in the original version of the post.]

Customers using Microsoft Azure have three options for load balancing: NGINX Plus, the Azure load balancing services, or NGINX Plus in conjunction with the Azure load balancing services. This post aims to give you enough information to make a decision and also shows you how using NGINX Plus with Azure Load Balancer can give you a highly available HTTP load balancer with rich Layer 7 functionality.

Overview

Microsoft Azure gives its users two choices of a load balancer: Azure Load Balancer for basic TCP/UDP load balancing (at Layer 4, the network layer) and Azure Application Gateway for HTTP/HTTPS load balancing (at Layer 7, the application layer). While these solutions work for simple use cases, they do not provide many features that come standard with NGINX Plus.

Here is a general comparison between NGINX Plus and the Azure load‑balancing offerings:

The comparison covers four options – NGINX Plus on its own, Azure Load Balancer, Azure Application Gateway, and NGINX Plus combined with Azure Load Balancer – across these features: HTTP and HTTPS load balancing, HTTP/2 load balancing, WebSocket load balancing, TCP/UDP load balancing, load‑balancing methods, session persistence, HTTP health checks, TCP/UDP health checks, SSL/TLS termination, rate and connection limits, URL rewriting and redirecting, URL request mapping, and active‑active NGINX Plus clustering. For load‑balancing methods, session persistence, and HTTP health checks, NGINX Plus (alone or combined with Azure Load Balancer) provides advanced support while the Azure services provide only simple support; for TCP/UDP health checks, NGINX Plus provides advanced support and Azure Load Balancer simple support.

Now let’s explore some of the differences between NGINX Plus and the Azure load balancing services, their unique features, and how NGINX Plus and Azure load balancers can work together.

Comparing NGINX Plus and Azure Load Balancing Services

Load Balancing Methods

NGINX Plus offers a choice of several load‑balancing methods in addition to the default Round Robin method:

  • Least Connections – Each request is sent to the server with the lowest number of active connections.
  • Least Time – Each request is sent to the server with the lowest score, which is calculated from a weighted combination of average latency and lowest number of active connections.
  • IP Hash – Each request is sent to the server determined by the source IP address of the request.
  • Generic Hash – Each request is sent to the server determined from a user‑defined key, which can contain any combination of text and NGINX variables, for example the variables corresponding to the Source IP Address and Source Port header fields, or the URI.
  • Random – Each request is sent to a server selected at random. When the two parameter is included, NGINX Plus selects two servers at random and then chooses between them using either the Least Connections algorithm (the default) or Least Time, as configured.

All of the methods can be extended by assigning different weight values to each backend server. For details on the methods, see the NGINX Plus Admin Guide.
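To make these options concrete, here is a minimal sketch showing how a few of the methods are selected in NGINX Plus upstream blocks; the upstream names, server addresses, and weights are placeholders rather than recommended values.

# Least Time: pick the server with the lowest average response time
# (to the first header byte) and the fewest active connections.
upstream app_least_time {
    least_time header;
    server 10.0.0.10:80 weight=3;  # placeholder address; receives ~3x the requests of the others
    server 10.0.0.11:80;
    server 10.0.0.12:80;
}

# Generic Hash: key on any combination of text and variables; the
# consistent parameter enables consistent (ketama) hashing.
upstream app_hash {
    hash $remote_addr$remote_port consistent;
    server 10.0.0.13:80;
    server 10.0.0.14:80;
}

# Random with two choices: pick two servers at random, then choose
# between them by Least Time to the last byte of the response.
upstream app_random {
    random two least_time=last_byte;
    server 10.0.0.15:80;
    server 10.0.0.16:80;
}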

Azure Load Balancer offers one load balancing method, Hash, which by default uses a key based on the Source IP Address, Source Port, Destination IP Address, Destination Port, and Protocol header fields to choose a backend server.

Azure Application Gateway provides only a round‑robin method.

Session Persistence

Session persistence, also known as sticky sessions or session affinity, is needed when an application requires that all requests from a specific client continue to be sent to the same backend server because client state is not shared across backend servers.

NGINX Plus supports three advanced session‑persistence methods:

  • Sticky Cookie – NGINX Plus adds a session cookie to the first response from the upstream group for a given client. This cookie identifies the backend server that was used to process the request. The client includes this cookie in subsequent requests and NGINX Plus uses it to direct the client request to the same backend server.
  • Sticky Learn – NGINX Plus monitors requests and responses to locate session identifiers (usually cookies) and uses them to determine the server for subsequent requests in a session.
  • Sticky Route – A mapping between route values and backend servers can be configured so that NGINX Plus monitors requests for a route value and chooses the matching backend server.

NGINX Plus also offers two basic session‑persistence methods, implemented as two of the load‑balancing methods described above:

  • IP Hash – The backend server is determined by the IP address of the request.
  • Hash – The backend server is determined from a user-defined key, for example Source IP Address and Source Port, or the URI.
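As a concrete illustration of the Sticky Cookie method described above, here is a minimal sketch; the cookie name, expiry, and server addresses are placeholders.

upstream backend {
    zone backend 64k;
    server 10.0.0.21:80;   # placeholder backend addresses
    server 10.0.0.22:80;

    # NGINX Plus adds a cookie named srv_id to the first response from the
    # group; later requests that present the cookie go to the same server.
    sticky cookie srv_id expires=1h path=/;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}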

Azure Load Balancer supports the equivalent of the NGINX Plus Hash method, although the key is limited to certain combinations of the Source IP Address, Source Port, Destination IP Address, Destination Port, and Protocol header fields.

Azure Application Gateway supports the equivalent of the NGINX Plus Sticky Cookie method with the following limitations: you cannot configure the name of the cookie, its expiration date, the domain, the path, or the HttpOnly or Secure cookie attributes.

Note: When you use Azure Load Balancer, the NGINX Plus IP Hash method, or the NGINX Plus Hash method with the Source IP Address included in the key, session persistence works correctly only if the client’s IP address remains the same throughout the session. This is not always the case, as when a mobile client switches from a WiFi network to a cellular one, for example. To make sure requests continue hitting the same backend server, it is better to use one of the advanced session‑persistence methods listed above.

Health Checks

Azure Load Balancer and Azure Application Gateway support basic application health checks. You can specify the URL that the load balancer requests, and it considers the backend server healthy if it receives the expected HTTP 200 return code. You can specify the health check frequency and the timeout period before the server is considered unhealthy. With Azure Application Gateway, you can also customize the expected response code and match against the contents of the response body.

NGINX Plus extends this functionality with advanced health checks. In addition to specifying the URL to use, with NGINX Plus you can insert headers into the request and look for different response codes, and examine both the headers and body of the response.

A useful related feature in NGINX Plus is slow start. NGINX Plus slowly ramps up the load to a new or recently recovered server so that it doesn’t become overwhelmed by connections. This is useful when your backend servers require some warm‑up time and will fail if they are given their full share of traffic as soon as they show as healthy.

NGINX Plus also supports health checks to TCP and UDP servers, which allow you to specify a string to send and a string to look for in the response.
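Below is a rough sketch of such an active TCP health check in the stream context; the listening port, probe string, expected reply, and server addresses are assumptions for illustration.

stream {
    upstream tcp_backend {
        zone tcp_backend 64k;
        server 10.0.0.31:12345;   # placeholder backend addresses
        server 10.0.0.32:12345;
    }

    # A server passes the check only if it answers the probe string
    # with a response that starts with "ok".
    match tcp_ok {
        send "health\n";
        expect ~ "^ok";
    }

    server {
        listen 12345;
        proxy_pass tcp_backend;
        health_check match=tcp_ok interval=5s passes=2 fails=3;
    }
}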

Azure Load Balancer supports TCP health checks, but does not offer this level of monitoring.

SSL/TLS Termination

NGINX Plus supports SSL/TLS termination, as does Azure Application Gateway. Azure Load Balancer does not.

Connection and Rate Limits

With NGINX Plus, you can configure multiple limits to control the traffic to and from your NGINX Plus instance. These include limiting inbound connections, the connections to backend nodes, the rate of inbound requests, and the rate of data transmission from NGINX Plus to clients.
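Here is a minimal sketch of what these limits look like in NGINX configuration; the zone names, rates, and addresses are placeholders for illustration.

# Track request and connection counts per client IP address.
limit_req_zone  $binary_remote_addr zone=req_per_ip:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

upstream backend {
    server 10.0.0.41:8080 max_conns=100;   # cap connections to each backend node
    server 10.0.0.42:8080 max_conns=100;   # placeholder addresses
}

server {
    listen 80;

    location / {
        limit_req  zone=req_per_ip burst=20 nodelay;   # at most 10 r/s per IP, bursts of 20
        limit_conn conn_per_ip 5;                      # at most 5 concurrent connections per IP
        limit_rate 512k;                               # throttle responses sent to each client
        proxy_pass http://backend;
    }
}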

Azure Application Gateway and Azure Load Balancer do not support rate or connection limits. However, you can use other Azure services to configure and enable rate limiting.

Protocol Support and URL Rewriting and Redirecting

NGINX Plus, Azure Application Gateway, and Azure Load Balancer all support the following:

  • HTTP/2 – NGINX Plus has accepted HTTP/2 requests from clients since 2016. Azure added HTTP/2 support more recently.
  • WebSocket – NGINX Plus has accepted WebSocket connections from clients since 2014. Azure added WebSocket support more recently.
  • URL rewriting and request redirect – The URL of a request can be changed before it is passed to a backend server, meaning you can change request paths and file locations internally without modifying the URLs advertised to clients. You can also redirect requests, for example by changing the scheme on an HTTP request to HTTPS.
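As a sketch of the last point in NGINX configuration (the hostnames, paths, and certificate locations are placeholders):

upstream backend {
    server 10.0.0.50:8080;   # placeholder backend address
}

# Redirect all plain-HTTP requests to HTTPS.
server {
    listen 80;
    server_name www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    # Rewrite the public path /app/... to the internal path /legacy/...
    # before passing the request to the backend.
    location /app/ {
        rewrite ^/app/(.*)$ /legacy/$1 break;
        proxy_pass http://backend;
    }
}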

NGINX Plus with Azure Load Balancing Services

When used together with Azure Load Balancer and Azure Traffic Manager, NGINX Plus becomes a highly available load balancer solution with rich Layer 7 functionality.

Active-Active High Availability

By using Azure Load Balancer to load balance across NGINX Plus instances in an Availability Set, you create a highly available load balancer within a region.

Autoscaling NGINX Plus

You can set up autoscaling of NGINX Plus instances based on average CPU usage. This is possible by creating Availability Sets in the Azure Cloud Service that hosts your NGINX Plus instances. You need to take care of synchronization of NGINX Plus config files.

Autoscaling Backend Instances

You can also set up autoscaling of your backend instances based on average CPU usage. This is possible by creating Availability Sets in the Azure Cloud Service that hosts your backend instances. You need to take care of adding or removing backend instances from the NGINX Plus configuration, which is possible with the NGINX Plus API.

To automate updates to the NGINX Plus configuration (either in combination with Availability Sets or when using NGINX Plus on its own), you can integrate a service discovery system with NGINX Plus, either via the NGINX Plus API or via DNS, if the system has a DNS interface. Check out our blog posts on using NGINX Plus with popular service discovery systems.
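As a rough sketch of the NGINX Plus side of this automation (the addresses, port, and allowed network are placeholders), exposing the read‑write NGINX Plus API lets an autoscaling hook or service discovery integration add and remove backend servers without a configuration reload:

upstream backend {
    zone backend 64k;          # shared-memory zone makes the group editable at runtime
    server 10.0.0.61:80;       # placeholder backend address
}

server {
    listen 8080;

    location /api {
        api write=on;          # read-write NGINX Plus API
        allow 10.0.0.0/24;     # restrict API access to trusted hosts
        deny  all;
    }
}

With this in place, servers can be added to or removed from the backend group at runtime through the API’s upstream servers endpoint.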

Integration with Azure Traffic Manager

For a globally distributed environment you can use Azure Traffic Manager to distribute traffic from clients across many regions.

Additional Features in Azure Load Balancing Services

Azure Load Balancer and Application Gateway are managed by Azure Cloud and both provide a highly available load‑balancing solution.

A feature of Azure Load Balancer that is not available in NGINX Plus is source NAT, in which traffic outbound from backend instances has the same source IP address as the load balancer.

Azure Load Balancer provides automatic reconfiguration when using Azure Cloud’s autoscaling feature.

Summary

If your load balancing requirements are simple, the Azure load balancing offerings can provide a good solution. When the requirements get more complex, NGINX Plus is a good choice. You can use NGINX Plus either alone or in conjunction with Azure Load Balancer for high availability of NGINX Plus instances.

To try NGINX Plus on Microsoft Azure, start your free 30-day trial today or contact us to discuss your use cases.

Choosing the Right Load Balancer on Amazon: AWS Application Load Balancer vs. NGINX Plus
https://www.nginx.com/blog/aws-alb-vs-nginx-plus/ (July 21, 2020)


In August 2016, Amazon Web Services (AWS) introduced Application Load Balancer for Layer 7 load balancing of HTTP and HTTPS traffic. The new product added several features missing from AWS’s existing Layer 4 and Layer 7 load balancer, Elastic Load Balancer, which was officially renamed Classic Load Balancer.

A year later, AWS launched Network Load Balancer (NLB) for improved Layer 4 load balancing, so the set of choices for users running highly available, scalable applications on AWS now includes Classic Load Balancer, Application Load Balancer, Network Load Balancer, and self‑managed proxies such as NGINX Open Source and NGINX Plus.

In this post, we review ALB’s features and compare its pricing and features to NGINX Open Source and NGINX Plus.

Notes –

  • The information about supported features is accurate as of July 2020, but is subject to change.
  • For a direct comparison of NGINX Plus and Classic Load Balancer (formerly Elastic Load Balancer or ELB), as well as information on using them together, see our previous blog post.
  • For information on using NLB for a high‑availability NGINX Plus deployment, see our previous blog post.

Features In Application Load Balancer

ALB, like Classic Load Balancer or NLB, is tightly integrated into AWS. Amazon describes it as a Layer 7 load balancer – though it does not provide the full breadth of features, tuning, and direct control that a standalone Layer 7 reverse proxy and load balancer can offer.

ALB provides the following features that are missing from Classic Load Balancer:

  • Content‑based routing. ALB supports content‑based routing based on the request URL, Host header, and fields in the request that include standard and custom HTTP headers and methods, query parameters, and source IP address. (See “Benefits of migrating from a Classic Load Balancer” in the ALB documentation.)
  • Support for container‑based applications. ALB improves on the existing support for containers hosted on Amazon’s EC2 Container Service (ECS).
  • More metrics. You can collect metrics on a per‑microservice basis.
  • WebSocket support. ALB supports persistent TCP connections between a client and server.
  • HTTP/2 support. ALB supports HTTP/2, a superior alternative when delivering content secured by SSL/TLS.

(For a complete feature comparison of ALB and Classic Load Balancer, see “Product comparisons” in the AWS documentation.)

ALB was a significant update for AWS users who had struggled with Classic Load Balancer’s limited feature set, and it went some way towards addressing the requirements of sophisticated users who need to be able to secure, optimize, and control the traffic to their web applications. However, it still does not provide all the capabilities of dedicated reverse proxies (such as NGINX) and load balancers (such as NGINX Plus).

A Better Approach to Control Traffic on AWS

Rather than using Amazon ALB, users can deploy NGINX Open Source or NGINX Plus on AWS to control and load balance traffic. They can also deploy Classic Load Balancer or Network Load Balancer as a frontend to achieve high availability across multiple availability zones. The following comparison covers the features supported by ALB, NGINX Open Source, and NGINX Plus.

Note: The information in the following comparison is accurate as of July 2020, but is subject to change.

  • Load‑balancing methods and features – Amazon ALB: round_robin and least_outstanding_requests methods, with cookie‑based session persistence, weighted target groups, and slow start. NGINX Open Source: multiple load‑balancing methods (including Round Robin, Least Connections, Hash, IP Hash, and Random) with weighted upstream servers. NGINX Plus: same as NGINX Open Source, plus the Least Time method, more session‑persistence methods, and slow start.
  • Caching – Amazon ALB: ❌ caching in the load balancer not supported. NGINX Open Source: ✅ static file caching and dynamic (application) caching. NGINX Plus: ✅ static and dynamic caching plus advanced features.
  • Health checks – Amazon ALB: active only (identifies failed servers by checking the status code returned for asynchronous checks). NGINX Open Source: passive (identifies failed servers by checking responses to client requests). NGINX Plus: both active and passive – active checks are richer and more configurable than ALB’s.
  • High availability – Amazon ALB: instances can be deployed in multiple Availability Zones for HA, but not across regions. NGINX Open Source: active‑active HA with NLB and active‑passive HA with Elastic IP addresses. NGINX Plus: same as NGINX Open Source, plus built‑in cluster state sharing for seamless HA across all NGINX Plus instances.
  • Support for all protocols in the IP suite – Amazon ALB: ❌ HTTP and HTTPS only. NGINX Open Source: ✅ also TCP and UDP, with passive health checks. NGINX Plus: ✅ also TCP and UDP, with active and passive health checks.
  • Multiple applications per load balancer instance – supported by all three.
  • Content‑based routing – Amazon ALB: ✅ based on request URL, Host header, and request fields including standard and custom HTTP headers, methods, query parameters, and source IP address. NGINX Open Source: ✅ based on the same factors as ALB, plus cookies and dynamic calculations using the NGINX JavaScript module as an inline JavaScript engine. NGINX Plus: ✅ based on the same factors as NGINX Open Source, plus JWT claims and values in the key‑value store.
  • Containerized applications – Amazon ALB: can load balance to Amazon IDs, ECS container instances, Auto Scaling groups, and AWS Lambda functions. NGINX Open Source: requires manual configuration or configuration templates. NGINX Plus: automated configuration using DNS, including SRV records; works with leading registry services.
  • Portability – Amazon ALB: ❌ all environments (dev, test, and production) must be on AWS. NGINX Open Source and NGINX Plus: ✅ any environment can be on any deployment platform.
  • SSL/TLS – Amazon ALB: ✅ multiple SSL/TLS certificates with SNI support; ❌ validation of SSL/TLS upstreams not supported. NGINX Open Source and NGINX Plus: ✅ multiple SSL/TLS certificates with SNI support, full choice of SSL/TLS ciphers, and full validation of SSL/TLS upstreams.
  • HTTP/2 and WebSocket – supported by all three.
  • Authentication capabilities – Amazon ALB: OIDC, SAML, LDAP, and AD IdP authentication options; integrated with AWS Cognito and CloudFront. NGINX Plus: multiple authentication options.
  • Advanced capabilities – Amazon ALB: ❌ barebones API. NGINX Open Source: ✅ origin serving, prioritization, rate limiting, and more. NGINX Plus: ✅ same as NGINX Open Source, plus a RESTful API and key‑value store.
  • Logging and debugging – Amazon ALB: ✅ Amazon binary log format. NGINX Open Source: ✅ customizable log files and multiple debug interfaces. NGINX Plus: ✅ fully customizable log files and multiple debug interfaces, fully supported by NGINX.
  • Monitoring tools – Amazon ALB: ✅ integrated with Amazon CloudWatch. NGINX Open Source: ✅ NGINX Controller* and other third‑party tools. NGINX Plus: ✅ NGINX Controller and other third‑party tools, with an extended set of reported statistics.
  • Official technical support – Amazon ALB: ✅ at additional cost. NGINX Open Source: ❌ community support only. NGINX Plus: ✅ included in price and direct from NGINX.
  • Free tier – Amazon ALB: ✅ first 750 hours. NGINX Open Source: ✅ always free. NGINX Plus: ✅ 30‑day trial subscription.
* NGINX Controller is now F5 NGINX Management Suite.

Of course, you should evaluate your load balancing choice not by a feature checklist, but by assessing the capabilities you need to deliver your applications flawlessly, with high security, maximum availability, and full control.

Handling Traffic Spikes

Amazon’s Classic Load Balancer (formerly ELB) suffered from a poor response to traffic spikes. Load balancer instances were automatically sized for the current level of traffic, and it could take many minutes for ELB to respond and deploy additional capacity when spikes occurred. Users had to resort to a manual, forms‑based process to request additional resources in advance of traffic spikes (referred to as “pre‑warming”). Because ALB is based on NGINX, ALB instances can handle much more traffic, but you may still observe scaling events in response to traffic spikes. Furthermore, a traffic spike automatically results in greater consumption of Load Balancer Capacity Units (LCUs) and consequently a higher cost.

You can gain complete control over capacity and cost if you deploy and scale your load‑balancing proxies yourself. NGINX and NGINX Plus are deployed within standard Amazon instances, and our sizing guide gives an indication of the potential peak performance of instance types with different capacities. Pricing for NGINX Plus is the same for all instance sizes, so it’s cost‑effective to deploy excess capacity to handle spikes, and it’s quick to deploy more instances – no forms to complete – when more capacity is needed.

Detecting Failed Servers with Health Checks

Our testing of Amazon ALB indicates that it does not implement “passive” health checks. A server is only detected as having failed once an asynchronous test verifies that it is not returning the expected status code.

We discovered this by creating an ALB instance to load balance a cluster of instances. We configured a health check with the minimum 5-second frequency and minimum threshold of 2 failed checks, and sent a steady stream of requests to the ALB. When we stopped one of the instances, for some requests ALB returned a 502 Bad Gateway error for several seconds until the health check detected the instance was down. Passive health checks (supported by both NGINX and NGINX Plus) prevent these types of errors from being seen by end users.

ALB’s health checks can only determine the health of an instance by inspecting the HTTP status code (such as 200 OK or 404 Not Found). Such health checks are unreliable; for example, some web applications replace server‑generated errors with user‑friendly pages, and some errors are only reported in the body of a web page.

NGINX Plus supports both passive and active health checks, and the latter are more powerful than ALB’s, able to match against the body of a response as well as the status code.
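Here is a minimal sketch of such a check in NGINX Plus configuration; the health-check URI, expected body string, and server addresses are assumptions for illustration.

upstream app {
    zone app 64k;
    server 10.0.0.71:80;   # placeholder backend addresses
    server 10.0.0.72:80;
}

# A server is considered healthy only if /healthz returns 200 and the
# response body contains the string "UP".
match app_ok {
    status 200;
    body   ~ "UP";
}

server {
    listen 80;

    location / {
        proxy_pass http://app;
        health_check uri=/healthz match=app_ok interval=5s fails=2 passes=2;
    }
}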

Pricing

Finally, the biggest question you face if you deploy ALB is cost. Load balancing can be a significant part of your Amazon bill.

AWS uses a complicated algorithm to determine pricing. Unless you know precisely how many new connections, how many concurrent connections, and how much data you will manage each month – which is very hard to predict – and you can run the LCU calculation the same way that Amazon does, you’ll be dreading your Amazon bill each month.

NGINX Plus on AWS gives you complete predictability. For a fixed hourly cost plus AWS hosting charges, you get a significantly more powerful load‑balancing solution with full support.

Conclusion

NGINX Plus is a proven solution for Layer 7 load balancing, with Layer 4 load‑balancing features as well. It works well in tandem with Amazon’s own Classic Load Balancer or NLB.

We encourage the continuing and growing use of NGINX and NGINX Plus in the AWS environment, already a very popular solution. If you are not already an NGINX Plus user, start your free 30-day trial today or contact us to discuss your use cases.

NGINX Unit 1.16.0 Introduces New Yet Familiar Features
https://www.nginx.com/blog/nginx-unit-1-16-0-now-available/ (April 9, 2020)


Like many of you, the NGINX Unit team has been hunkered down at home during the COVID‑19 pandemic. Nonetheless, we’ve been able to maintain our steady release cadence, introducing versions 1.15.0 and 1.16.0 in the past couple months. Let’s take a quick look at the new features we’ve added.

Two of the features new with NGINX Unit 1.16.0 are familiar to fans of NGINX Open Source and NGINX Plus: the fallback routing option is similar to the NGINX try_files directive, and the upstreams object introduces round‑robin load balancing of requests across a dedicated group of backend servers.

The fallback Routing Option

The first new feature enables you to define an alternative routing action for cases when a static file can’t be served for some reason. You can easily imagine ways to extend this logic beyond mere file system misses, but we decided to make them the first use case we address. Therefore, our initial implementation of the fallback action pairs with the share action introduced in NGINX Unit 1.11.0. Let’s see how it’s done.

The new fallback option defines what NGINX Unit does when it can’t serve a requested file from the share directory for any reason. Consider the following action object:
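A minimal sketch of such an action object, assuming a PHP application named blog is defined elsewhere in the configuration (the paths and application name are placeholders; NGINX Unit configuration is JSON, so the assumptions are called out here rather than in comments):

{
    "share": "/data/www/",
    "fallback": {
        "pass": "applications/blog"
    }
}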

When this action is performed, NGINX Unit attempts to serve the request with a file from the /data/www/ directory. However, if (and only if) the requested file is unavailable (not found, insufficient rights, you name it), NGINX Unit performs the fallback action. Here, it’s a pass action that forwards the request to a PHP blog application, but you can configure a proxy or even another share as well (more on the latter below).

Effectively, this means that NGINX Unit serves existing static files and simultaneously forwards all other requests to the application, thus reducing the need for extra routing steps. Less configuration reduces the chance of a mistake.

Moreover, nothing prevents nesting multiple share actions to create an elaborate request handler:
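A sketch of such a nested handler, extending the object above (again, the paths and application name are placeholders):

{
    "share": "/data/www/",
    "fallback": {
        "share": "/data/cache/",
        "fallback": {
            "pass": "applications/blog"
        }
    }
}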

This snippet builds upon the previous one by adding yet another file location, /data/cache/, to the fallback chain before the request is passed to the same application as in the previous snippet. Keep in mind that in this initial implementation fallback can only accompany a share; it can’t be specified as an alternative to pass or proxy.

The logic of this config option may seem simple, but it enables NGINX Unit to single‑handedly run many powerful applications that previously required an additional software layer. For example, we’ve already updated our how‑tos for WordPress, Bugzilla, and NextCloud to make use of the new option, significantly reducing configuration overhead.

Round-Robin Load Balancing with Upstreams

The other major feature we’re introducing is the upstreams object. It resides within the config section as a peer of the listeners, applications, routes, and settings objects. In case you’re not familiar with NGINX, an upstream is an abstraction that groups several servers into a single logical entity to simplify management and monitoring. Typically, you can distribute workloads, assign different roles, and fine‑tune properties of individual servers within an upstream, yet it looks and acts like a single entity when viewed from outside.

In NGINX Unit, upstreams are configured as follows:
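Here is a minimal sketch of such a configuration, with an upstream named rr-lb; the listener address, server addresses, and weight are placeholders chosen to match the description below:

{
    "listeners": {
        "*:80": {
            "pass": "upstreams/rr-lb"
        }
    },

    "upstreams": {
        "rr-lb": {
            "servers": {
                "192.168.0.100:8080": {},
                "192.168.0.101:8080": {
                    "weight": 2
                }
            }
        }
    }
}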

Like an application or a route, an upstream can be the target of a pass action in both listeners and routes objects. The definition of each upstream includes its name (rr-lb in the sketch above) and a mandatory servers object, which contains a configuration object for each server in the named upstream, specifying the server’s IP address and port and optionally other characteristics. In the initial implementation, the only supported option is the integer‑valued weight of the server, used for load balancing.

NGINX Unit automatically distributes requests between the servers in the upstream object in a round‑robin fashion according to their weights. In the example above, the second server receives approximately twice as many requests as the first one (from which you can deduce that the default weight is 1). Again, round robin is just one of many possible load‑balancing methods and will be joined by other methods in future.

With the introduction of upstreams, NGINX Unit greatly enhances its range of functionality: originally a solid standalone building block for managing your apps, it is steadily gaining momentum as a versatile, feature‑rich component of your overall app‑delivery infrastructure. You can use it as an application endpoint, a reverse proxy, a load balancer, a static cache server, or in any other way you may come up with.

Other Developments

NGINX Unit now has a shiny new roadmap where you can find out whether your favorite feature is going to be implemented any time soon, and rate and comment on our in‑house initiatives. Feel free to open new issues in our repo on GitHub and share your ideas for improvement. Perhaps one day they’ll turn up on our roadmap as well!

In case you are wondering what happened to our containerization initiative: it’s alive and well, thank you very much. In NGINX Unit 1.14.0, we introduced the ability to change user and group settings for isolated applications when the NGINX Unit daemon runs as an unprivileged user (remember that the recommended way is to run it as root, though). Of course, this isn’t the end of it: there’s much more coming in the next few months.

What’s Next

Speaking of our plans: our extended team is working on several under-the-hood enhancements such as IPC or memory buffer management to make NGINX Unit a tad more robust. On the user‑facing side of things, our current endeavors include enhancements in load balancing, the ability to return custom HTTP response status codes during routing, and the rootfs option which confines running apps within a designated file system directory. All in all, NGINX Unit remains a busy construction site, so don’t forget to bring a hard hat!

NGINX Load Balancing Deployment Scenarios
https://www.nginx.com/blog/nginx-load-balance-deployment-models/ (January 1, 2019)

We are increasingly seeing NGINX Plus used as a load balancer or application delivery controller (ADC) in a number of deployment scenarios and this post describes some of the most common ones. We discuss why customers are using the different deployment scenarios, some things to consider when using each scenario, and some likely migration paths from one to another.

Introduction

Many companies are looking to move away from hardware load balancers and to NGINX Plus to deliver their applications. There can be many drivers for this decision: to give the application teams more control of application load balancing; to move to virtualization or the cloud, where it is not possible to use hardware appliances; to employ DevOps tools, which can’t be used with hardware load balancers; to move to an elastic, scaled‑out infrastructure, not feasible with hardware appliances; and more.

If you are deploying a load balancer for the first time, NGINX Plus might be the ideal platform for your application. If you are adding NGINX Plus to an existing environment that already has hardware load balancers in place, you can deploy NGINX Plus in parallel or in series.

Scenario 1: NGINX Plus Does All Load Balancing

The simplest deployment scenario is where NGINX Plus handles all the load balancing duties. NGINX Plus might be the first load balancer in the environment or it might be replacing a legacy hardware‑based load balancer. Clients connect directly to NGINX Plus which then acts as a reverse proxy, load balancing requests to pools of backend servers.

This scenario has the benefit of simplicity with just one platform to manage, and can be the end result of a migration process that starts with one of the other deployment scenarios discussed below.

If SSL/TLS is being used, NGINX Plus can offload SSL/TLS processing from the backend servers. This not only frees up resources on the backend servers, but centralizing SSL/TLS processing also increases SSL/TLS session reuse. Creating SSL/TLS sessions is the most CPU‑intensive part of SSL/TLS processing, so increasing session reuse can have a major positive impact on performance.

Both NGINX and NGINX Plus can be used as a cache for static and dynamic content, and NGINX Plus adds the ability to purge items from the cache, especially useful for dynamic content.

NGINX Plus offers additional ADC functions, such as application health checks, session persistence, request rate limiting, response rate limiting, connection limiting, and more.
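A condensed sketch of this scenario in NGINX Plus configuration, combining TLS offload, caching, session persistence, and an active health check, follows; all names, addresses, and certificate paths are placeholders rather than a recommended setup.

proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m max_size=1g;

upstream app_pool {
    zone app_pool 64k;
    least_conn;
    server 10.0.0.81:8080;             # placeholder backend addresses
    server 10.0.0.82:8080;
    sticky cookie srv_id expires=1h;   # session persistence
}

server {
    listen 443 ssl;                    # SSL/TLS terminated here, not on the backends
    server_name app.example.com;
    ssl_certificate     /etc/nginx/certs/app.crt;
    ssl_certificate_key /etc/nginx/certs/app.key;

    location / {
        proxy_cache app_cache;         # cache static and dynamic responses
        proxy_pass  http://app_pool;
        health_check;                  # active application health checks
    }
}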

To support high availability (HA) in this scenario requires clustering of the NGINX Plus instances. [Editor – the HA solution originally discussed here has been deprecated. For details about the HA solution introduced in NGINX Plus Release 6, see High Availability in the NGINX Plus Admin Guide.]

Scenario 2: NGINX Plus Works in Parallel with a Legacy Hardware‑Based Load Balancer

In this scenario, NGINX Plus is introduced to load balance new applications in an environment where a legacy hardware appliance continues to load balance existing applications.

This scenario can be applied in a data center where both the hardware load balancers and NGINX Plus reside, or the hardware load balancers might be in a legacy data center while NGINX Plus is deployed in a new data center or a cloud.

NGINX Plus and the hardware‑based load balancer are not connected in any way, so from an NGINX Plus perspective, it is very similar to the first scenario, where NGINX Plus is the only load balancer.

Clients connect directly to NGINX Plus, which can offload SSL/TLS termination, cache static and dynamic content, and perform other advanced ADC functions.

If high availability is important, the same solution as in the previous scenario can be used.

The usual reason for deploying NGINX in this way is that a company wants to move to a more modern software‑based platform but does not want to rip and replace all of its legacy hardware load balancers. By putting all new applications behind NGINX Plus, an enterprise can start to implement a software‑based platform and then over time migrate the legacy applications from the hardware load balancer to NGINX Plus.

Scenario 3: NGINX Plus Sits Behind a Legacy Hardware‑Based Load Balancer

As in the previous scenario, NGINX Plus is added to an environment with a legacy hardware‑based load balancer, but here it sits behind the legacy load balancer. The hardware‑based load balancer accepts client requests and load balances them to a pool of NGINX Plus instances, which load balance them across the group of actual backend servers. In this scenario NGINX Plus performs all Layer‑7 application load balancing and caching.

Because the NGINX Plus instances are being load balanced by the hardware load balancer, HA can be achieved by having the hardware load balancer do health checks on the NGINX Plus instances and stop sending traffic to instances that are down.
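One simple way to support those probes – the port and path here are arbitrary choices for illustration, and any equivalent endpoint works – is to expose a trivial status location on each NGINX Plus instance for the hardware device to poll:

server {
    listen 8080;

    # If this NGINX Plus instance is down, the probe fails and the
    # hardware load balancer stops sending it traffic
    location /lb-health {
        access_log off;
        return 200 "OK\n";
    }
}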

The Problem with Large Multi‑Tenant Hardware Load Balancers

There can be multiple reasons for deploying NGINX Plus in this way. One is corporate structure. In a multi‑tenant environment where many internal application teams share a device or set of devices, the hardware load balancers are often owned and managed by the network team. The application teams would probably like access to the load balancers to add application‑specific logic, but the complexity of true multi‑tenancy means that even sophisticated solutions still cannot provide complete isolation between one application and another. If the application teams were given free access to shared devices, one team might make configuration changes that negatively impact other teams.

To avoid these potential problems, the network team often retains sole control over the hardware load balancers. The application teams have to submit requests to make any configuration changes. In addition, because of the potential for configuration conflicts between teams, the network team is likely to limit which advanced ADC features are exposed, meaning that application teams can’t take advantage of all the functionality available on the hardware load balancer.

Why Using Multiple Proxy Layers Makes Sense

One solution to this problem is to deploy a set of smaller load balancers, such as NGINX Plus, so that each application team can have its own. Completely isolated from one another, the application teams can each take full advantage of all the features they need without risking negative consequences for other teams. It’s not cost effective to give each application team a set of hardware appliances, so this is a great use case for a software‑based load balancer like NGINX Plus.

The hardware load balancers remain in place, still owned and managed by the network team, but they no longer have to deal with complex multi‑tenant issues or application logic; their only job is to get the requests to the right NGINX Plus instances where the application logic resides, and NGINX Plus routes the requests to the right backend servers. This provides the network team with the control they need while also enabling the application teams to take full advantage of the ADC functionality.

Summary

The flexibility inherent in a software‑based load balancer makes it easy to deploy in almost any environment and scenario. The IT world is clearly moving to software‑based platforms, and with NGINX Plus you don’t need to rip and replace what you have. You can install NGINX Plus into your existing environment or a new environment as you migrate to the architecture of the future.

Further Reading

If you’re ready to get started or have any questions about how NGINX Plus can best meet your needs, start a free 30-day trial or contact us to discuss your use cases.

The post NGINX Load Balancing Deployment Scenarios appeared first on NGINX.

NGINX and the “Power of Two Choices” Load-Balancing Algorithm https://www.nginx.com/blog/nginx-power-of-two-choices-load-balancing-algorithm/ Mon, 12 Nov 2018 20:42:02 +0000 https://www.nginx.com/?p=60116 New use cases sometimes require new load‑balancing algorithms, and in NGINX Plus R16 and NGINX Open Source 1.15.1 we added a new method that is particularly suitable for distributed load balancers: an implementation of the “power of two choices” algorithm. Why Do We Need a New Load‑Balancing Algorithm? Classic load‑balancing methods such as Least Connections work very [...]

New use cases sometimes require new load‑balancing algorithms, and in NGINX Plus R16 and NGINX Open Source 1.15.1 we added a new method that is particularly suitable for distributed load balancers: an implementation of the “power of two choices” algorithm.

Why Do We Need a New Load‑Balancing Algorithm?

Classic load‑balancing methods such as Least Connections work very well when you operate a single active load balancer which maintains a complete view of the state of the load‑balanced nodes. The “power of two choices” approach is not as effective on a single load balancer, but it deftly avoids the bad‑case “herd behavior” that can occur when you scale out to a number of independent load balancers.

This scenario is not just observed when you scale out in high‑performance environments; it’s also observed in containerized environments where multiple proxies each load balance traffic to the same set of service instances.

Cluster topologies using distributed load balancers

A common instance of this scenario occurs when you use NGINX Ingress Controller for Kubernetes, with one load‑balancing instance per Kubernetes node.

The algorithm is referred to in the literature as “power of two choices”, because it was first described in Michael Mitzenmacher’s 1996 dissertation, The Power of Two Choices in Randomized Load Balancing. In NGINX and NGINX Plus, it’s implemented as a variation of the Random load‑balancing algorithm, so we also refer to it as Random with Two Choices.

How Does “Power of Two Choices” Work?

Let’s begin with what might be a familiar situation. You’ve just landed after a long international flight, and along with 400 other travelers, have walked into a busy arrivals hall.

International arrivals hall at airport
Photo: Caroline M.A Otieno – Own work, CC BY 2.0

Many airports employ guides in the arrivals hall. Their job is to direct each traveler to join one of the several queues for each immigration desk. If we think of the guides as load balancers:

  • You and your fellow travelers are requests, each hoping to be processed as quickly as possible.
  • The immigration desks are (backend) servers, each processing a backlog of requests.
  • The guides maximize efficiency by selecting the best server for each request.
  • The methods available to the guides for selecting the best server correspond to load‑balancing algorithms.

Let’s consider how well some of the possible algorithms work in a distributed load‑balancing scenario like the arrivals hall.

Round-Robin Load Balancing

Round robin is a naive approach to load balancing. In this approach, the guide selects each queue in rotation – the first traveler is directed to queue A, the next traveler to queue B, and so on. Once a traveler is directed to the last queue, the process repeats from queue A. Round Robin is the default load‑balancing algorithm used by NGINX.
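As a minimal sketch (the server addresses are illustrative), an upstream group with no load‑balancing directive uses Round Robin:

upstream immigration_desks {
    # No method is specified, so NGINX rotates through the servers in turn
    server 192.168.1.11;
    server 192.168.1.12;
    server 192.168.1.13;
}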

This approach works adequately until there’s a delay in one of the queues. Perhaps one traveler has misplaced his or her documentation, or arouses suspicion in the immigration officer.

The queue stops moving, yet the guide continues to assign travelers to that queue. The backlog gets longer and longer – that’s not going to make the impatient travelers any happier!

Least Connections Load Balancing

There’s a much better approach. The guide watches each queue, and each time a traveler arrives, he sends that traveler to the shortest queue. This method is analogous to the Least Connections load‑balancing method in NGINX, which assigns each new request to the server with the fewest outstanding (queued) requests.

Least Connections load balancing deals quite effectively with travelers who take different amounts of time to process. It seeks to balance the lengths of the queues, and avoids adding more requests to a queue that has stalled.
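In configuration terms (again with illustrative addresses), switching the same upstream group to Least Connections is a one‑line change:

upstream immigration_desks {
    least_conn;   # send each new request to the server with the fewest active connections
    server 192.168.1.11;
    server 192.168.1.12;
    server 192.168.1.13;
}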

Least Time Load Balancing

We’ve seen that different passengers take different times to process; in addition, some queues are processed faster or slower than others. For example, one immigration officer might have computer problems that mean he processes travelers more slowly; another officer might be a stickler for detail, quizzing travelers very closely. Other officers might be very experienced and able to process travelers more quickly.

What if each immigration booth has a counter above it, indicating how many travelers have been processed in, for example, the last 10 minutes? Then the guide can direct travelers to a queue based on both its length and how quickly it is being processed. That’s a more effective way to distribute load, and it’s what the Least Time load‑balancing algorithm in NGINX Plus does.

This algorithm is specific to NGINX Plus because it relies on additional data collected with NGINX Plus’s Extended Status metrics. It’s particularly effective in cloud or virtual environments where the latency to each server can vary unpredictably, so queue length alone is not sufficient to estimate the delay.
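As an NGINX Plus‑only sketch (addresses illustrative), the same upstream group configured for Least Time looks like this; header bases the decision on the time to receive the response header, while last_byte uses the time to receive the full response:

upstream immigration_desks {
    zone immigration_desks 64k;
    least_time header;   # or last_byte to measure the full response time
    server 192.168.1.11;
    server 192.168.1.12;
    server 192.168.1.13;
}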

It All Falls Apart with Multiple Guides

So far, we’ve had one guide (that is, load balancer) with a complete view of the queues and response time in the arrivals hall. That guide tries to make the best choice for each traveler based on the information he knows.

Now consider what happens if we have several guides, each directing travelers independently. The guides have independent views of the queue lengths and queue wait times – they only consider the travelers that they send to each queue.

This scenario is prone to an undesirable behavior, where all the guides notice that one queue is momentarily shorter and faster, and all send travelers to that queue. Simulations show that this “herd behavior” distributes travelers in a way that is unbalanced and unfair. In the same way, several independent load balancers can overload some upstream servers, no matter which “best choice” algorithm you use.

The “Power of Two Choices” Load-Balancing Algorithm

The solution lies in the “power of two choices” load‑balancing algorithm. Instead of making the absolute best choice using incomplete data, with “power of two choices” you pick two queues at random and choose the better option of the two, avoiding the worse choice.

“Power of two choices” is efficient to implement. You don’t have to compare all queues to choose the best option each time; instead, you only need to compare two. And, perhaps unintuitively, it works better at scale than the best‑choice algorithms. It avoids the undesired herd behavior by the simple approach of avoiding the worst queue and distributing traffic with a degree of randomness.

Using “Power of Two Choices” with NGINX and NGINX Plus

In NGINX and NGINX Plus, the “power of two choices” load‑balancing method is implemented as a variant of the Random algorithm, so we call it Random with Two Choices.

In NGINX Open Source, Random with Two Choices chooses between two randomly selected servers based on which currently has fewer active connections. This is the same selection criterion as used for the Least Connections algorithm. (It is also the default criterion in NGINX Plus, and can be configured explicitly by adding the least_conn parameter.)

upstream service1 {
    zone service1 64k;
    server 192.168.1.11;
    server 192.168.1.12;
    server 192.168.1.13;
    random two;   # pick two servers at random, then use the one with fewer active connections
}

NGINX Plus also supports the least_time parameter, which uses the same selection criterion as the Least Time algorithm. As with that algorithm, you further choose between considering either:

  • The time to receive the response header (least_time=header)
  • The time to receive the full response (least_time=last_byte), as in the following snippet. The Least Time criterion is ideal for situations where the latency to each upstream server can vary.
upstream service1 {
    zone service1 64k;
    server 192.168.1.11;
    server 192.168.1.12;
    server 192.168.1.13;
    random two least_time=last_byte; # use header or last_byte
}

Conclusion

NGINX and NGINX Plus support a range of load‑balancing methods; in this article, we did not consider the deterministic Hash and IP Hash methods.

The Least Connections (and for NGINX Plus, Least Time) methods are very effective at balancing loads when the load balancer has a complete view of the workload assigned to each node and its past performance. They are less effective when several load balancers are assigning requests, and each has an incomplete view of the workload and performance.

“Power of two choices” uses a biased random algorithm, and has been demonstrated to be effective at balancing loads when each load balancer has an incomplete or delayed view. It avoids the “herd behavior” exhibited by other algorithms that seek to make a best decision on every request.

Consider Random with Two Choices, NGINX’s implementation of “power of two choices”, for very high‑performance environments and for distributed load‑balancing scenarios. A good use case arises when using multiple Ingress controllers on Kubernetes.

The post NGINX and the “Power of Two Choices” Load-Balancing Algorithm appeared first on NGINX.

Architecting Robust Enterprise Application Network Services with NGINX and Diamanti https://www.nginx.com/blog/architecting-robust-enterprise-application-network-services-nginx-diamanti/ Thu, 08 Nov 2018 20:42:21 +0000 https://www.nginx.com/?p=60033 If you’re actively involved in architecting enterprise applications to run in production Kubernetes environments, or in deploying and managing the underlying container infrastructure, you know firsthand how containers use IT resources quite differently from non‑containerized applications, and how important it is to have an application‑aware network that can adapt at the fast pace of change [...]

If you’re actively involved in architecting enterprise applications to run in production Kubernetes environments, or in deploying and managing the underlying container infrastructure, you know firsthand how containers use IT resources quite differently from non‑containerized applications, and how important it is to have an application‑aware network that can adapt at the fast pace of change typical of containerized applications.

In this blog, we look at how the synergy between bare‑metal container infrastructure and an application‑centric load‑balancing architecture enables enterprises to deploy network services that are tailored to the needs of their containerized applications.

Recently, Diamanti – a leading provider of bare‑metal container infrastructure – announced its technology partnership with NGINX, developer of the eponymous open source load balancer. Below, we’ll offer up a look at a few common use cases for modern load balancing and application network services deployed in Kubernetes environments, with NGINX and NGINX Plus running on the Diamanti D10 Bare‑Metal Container Platform.

With Diamanti’s newly released support for multi‑zone Kubernetes clusters, the combination of NGINX and Diamanti enhances not only application delivery and scalability, but high availability as well.

This blog assumes a working knowledge of Kubernetes and an understanding of basic load‑balancing concepts.

Key Functional Requirements for Application Network Services

To meet the needs of Kubernetes‑based applications, network services need to provide the following functionality:

  • Application exposure
    • Proxy – Hide an application from the outside world
    • Routing/ingress – Route requests to different applications and offer name‑based virtual hosting
  • Performance optimization
    • Caching – Cache static and dynamic content to improve performance and decrease the load on the backend
    • Load balancing – Distribute network traffic across multiple instances of an application
    • High availability (HA) – Ensure application uptime in the event of a load balancer or site outage
  • Security and simplified application management
    • Rate limiting – Limit the traffic to an application and protect it from DDoS attacks
    • SSL/TLS termination – Secure applications by terminating client TLS connections, without any need to modify the application
    • Health checks – Check the operational status of applications and take appropriate action
  • Support for microservices
    • Central communications point for services – Enable backend services to be aware of each other
    • Dynamic scaling and service discovery – Scale and upgrade services seamlessly, in a manner that is completely transparent to users (see the config sketch after this list)
    • Low‑latency connectivity – Provide the lowest latency path between source and target microservices
    • API gateway capability – Act as an API gateway for microservices‑based applications
    • Inter‑service caching – Cache data exchanged by microservices
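As one hedged illustration of dynamic service discovery (referenced from the list above), NGINX Plus can re‑resolve a DNS name so that the upstream group tracks service instances as they scale; the resolver address and service name below are hypothetical:

resolver 10.96.0.10 valid=10s;   # hypothetical cluster DNS server

upstream orders_service {
    zone orders_service 64k;

    # NGINX Plus re-resolves the name periodically, so the server list
    # follows the service as instances are added or removed
    server orders.default.svc.cluster.local resolve;
}

server {
    listen 80;
    location / {
        proxy_pass http://orders_service;
    }
}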

Diamanti’s Bare-Metal Container Platform Eliminates Major Obstacles To Building Robust Network Services For Kubernetes

At best, container networking in Kubernetes is challenging. All of our customers who have attempted to build their own container infrastructure have told us that not only is network configuration complex, but it is extremely difficult to establish predictable performance for applications requiring it, unless they substantially overprovision their infrastructure. Quality of service (QoS) is also critical for stable functioning of multi‑zone clusters, which is easily disrupted by fluctuations in network throughput and storage I/O.

Because the Diamanti platform is purpose‑built for Kubernetes environments, it can be made operationally ready for production applications in a fraction of the time it takes to build a Kubernetes stack from legacy infrastructure. More importantly, it solves major challenges around building application network services with the following attributes and features:

  • Plug-and-play networking
    • Diamanti’s custom network processing allows for plug‑and‑play data center connectivity, and abstracts away the complexity of network constructs in Kubernetes.
    • Built‑in monitoring capabilities allow for clear visibility into a pod’s network usage.
    • Diamanti assigns routable IP addresses to each pod, enabling easy service discovery. A routable IP address means that load balancers within the network are already accessible, so exposing them requires no additional steps.
    • Support for network segmentation, enabling multi‑tenancy and isolation.
    • Support for multiple endpoints to allow higher bandwidth and cross‑segment communication.
  • QoS
    • Diamanti provides QoS for each application. This guarantees bandwidth to load‑balancer pods, ensuring that the load balancers are not bogged down by other pods co‑hosted on the same node.
    • Applications behind the load balancer can also be rate limited using pod‑level QoS.
  • Multi-zone support
    • Diamanti supports setting up a cluster across multiple zones, allowing you to distribute applications and load balancers across multiple zones for HA, faster connectivity, and special access needs.

NGINX Ingress Controller And Load Balancer Are Key Building Blocks For Application Network Services

Traditionally, applications and services are exposed to the external world through physical load balancers. The Kubernetes Ingress API was introduced in order to expose applications running in a Kubernetes cluster, and can enable software‑based Layer 7 (HTTP) load balancing through the use of a dedicated Ingress controller. A standard Kubernetes deployment does not include a native Ingress controller. Therefore, users have the option to employ any third‑party Ingress controller, such as NGINX Ingress Controller, which is widely used by NGINX Plus customers in their Kubernetes environments.

With advanced load‑balancing features using a flexible software‑based deployment model, NGINX provides an agile, cost‑effective means of managing the needs of microservices‑based applications. NGINX Ingress Controller provides enterprise‑grade delivery services for Kubernetes applications, with benefits for users of both NGINX Open Source and NGINX Plus. With NGINX Ingress Controller, you get basic load balancing, SSL/TLS termination, support for URI rewrites, and upstream SSL/TLS encryption.

Reference Architectures

There are many ways to provision the NGINX load balancer in a Kubernetes cluster running on the Diamanti platform. Here, we’ll focus on two important architectures which uniquely exemplify the joint value of Diamanti and NGINX.

Load Balancing and Service Discovery Across a Multi-Zone Cluster

Diamanti enables the distribution of Kubernetes cluster nodes across multiple zones. This configuration greatly enhances application HA as it mitigates the risks inherent to a single point of failure.

In a multi‑zone Diamanti cluster, the simplest approach is to deploy an NGINX Ingress controller in each zone. This approach has several benefits:

  • Having multiple zones establishes HA. If a load balancer at one zone fails, another can take over to serve the requests.
  • East‑west load balancing can be done within or across the zones. The cluster administrator can also define a particular zone affinity so that requests tend to stay within a zone for low latency, but go across zones in the absence of a local pod.
  • Network traffic to the individual zones’ load balancers can be distributed via an external load balancer, or via another in‑cluster NGINX Ingress controller.

The following diagram shows an example of a multi‑zone architecture.

NGINX Plus deployed on Diamanti as the Kubernetes Ingress controller across multiple zones

In this example, the Kubernetes cluster is distributed across two zones and is serving the application fruit.com. A DNS server or external load balancer can be used to distribute inbound traffic in round‑robin fashion to the NGINX Ingress controller of each zone in the cluster. Through defined Ingress rules, the NGINX Ingress controller discovers all of the frontend and backend pods of orange, blueberry, and strawberry services.

To minimize latency across zones, each zone’s Ingress controller can be configured to discover only the services within its own zone. In this example, however, Zone 1 does not have a local backend strawberry service. Therefore, Zone 1’s Ingress Controller needs to be configured to discover backend strawberry services in other zones as well. Alternatively, each zone’s load balancer can be configured to detect all pods in the cluster, across zones. In that case, load balancing can be executed based on the lowest possible latency to ensure preference for pods within the local zone. However, this approach increases the risk of a load imbalance within the cluster.

Once the Ingress controller is set up, services can be accessed via hostname (orange.fruit.com, blueberry.fruit.com, and strawberry.fruit.com). The NGINX Ingress controller for each zone load balances across all pods of the related services. Frontend pods (such as orange‑front) access the backend orange services within the local zone to avoid high network latency. However, frontend pods that do not have any backend services running within the local zone (such as strawberry‑front) need to go across zones to access the backend service.
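Under the hood this is ordinary name‑based virtual hosting. A simplified, hypothetical sketch of the kind of configuration an Ingress controller might generate for two of the fruit.com services (the pod addresses are made up) looks like this:

upstream orange_backend {
    server 172.16.1.21:8080;   # orange pods discovered in the local zone
    server 172.16.1.22:8080;
}

upstream strawberry_backend {
    server 172.16.2.31:8080;   # strawberry pods, reachable only in the other zone
}

server {
    listen 80;
    server_name orange.fruit.com;
    location / {
        proxy_pass http://orange_backend;
    }
}

server {
    listen 80;
    server_name strawberry.fruit.com;
    location / {
        proxy_pass http://strawberry_backend;
    }
}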

Load Balancing and Service Discovery in a Fabric Mesh Architecture

Of the many possible modern load‑balancing approaches, the most flexible approach is doing client‑side load balancing directly from each pod. This architecture is also referred to as a service mesh or fabric model. Here, each pod has its own Ingress controller that runs as a sidecar container, making it fully capable of performing its own client‑side load balancing. Ingress controller sidecars can be manually deployed to pods or can be set up to be automatically injected with a tool such as Istio. In addition to the per‑pod Ingress controllers, a cluster‑level Ingress controller is required in order to expose the desired services to the external world. A service mesh architecture has the following benefits:

  • Facilitates east‑west load balancing and microservices architecture
  • Each pod is aware of all the backends, which helps to minimize hops and latency
  • Enables secure SSL/TLS communication between pods without modifying applications
  • Facilitates built‑in health monitoring and cluster‑wide visibility
  • Facilitates HA for east‑west load balancing

The following diagram shows an example of a service‑mesh architecture.

NGINX Plus deployed on Diamanti as the Kubernetes Ingress controller in a service mesh

Here the cluster is configured with one NGINX Ingress controller per pod to construct a fabric, and serves the application fruit.com. A DNS server or external load balancer routes requests for fruit.com to the top‑level Ingress controller, which is the primary ingress point to the cluster. According to the Ingress rules, the top‑level Ingress controller discovers all the pods for the orange, blueberry, and strawberry services, as does each sidecar Ingress controller in the fabric.

Once the Ingress controller fabric is set up, services can be accessed via hostname (orange.fruit.com, blueberry.fruit.com, and strawberry.fruit.com). The cluster‑level NGINX Ingress controller load balances across all pods of the related services, based on hostname. Other frontend application pods running on the same cluster (orange‑front, blueberry‑front, and strawberry‑front) can access their respective backend application pods directly, via their respective sidecar Ingress controllers.

Conclusion

Robust application network services are critical for enterprise organizations seeking to improve availability, deployment, and scalability for production Kubernetes applications. For these purposes, the choice of container infrastructure and the approach to modern load balancing are the key success factors. Diamanti and NGINX pair well together to mitigate the risks of downtime, manage network traffic, and optimize overall application performance.

The post Architecting Robust Enterprise Application Network Services with NGINX and Diamanti appeared first on NGINX.

Not All Software Load Balancers Are Created Equal https://www.nginx.com/blog/not-all-software-load-balancers-are-created-equal/ Thu, 04 Oct 2018 00:40:23 +0000 https://www.nginx.com/?p=59644 According to findings by InformationWeek and Interop ITX, 50% of organizations have implemented DevOps methodologies or plan to implement them soon. But DevOps methodologies require solutions that deliver the agility and flexibility required to rapidly achieve scale and high feature velocity. In a recent blog, NGINX CEO Gus Robertson described how load balancers are a [...]

According to findings by InformationWeek and Interop ITX, 50% of organizations have implemented DevOps methodologies or plan to implement them soon. But DevOps methodologies require solutions that deliver the agility and flexibility required to rapidly achieve scale and high feature velocity.

In a recent blog, NGINX CEO Gus Robertson described how load balancers are a critical component in DevOps tooling, but only if they’re the right kind: hardware‑based load balancers are in fact roadblocks to agile development. A software approach is mandatory.

But not all software‑based load balancers are created equal. Broadly speaking, there are two varieties:

  • Software appliances that run on virtual machines (VMs), such as F5’s BIG‑IP Virtual Edition (VE) or Citrix’s ADC (formerly NetScaler) Virtual Appliance. These are typically not binaries but are packaged as a full‑fledged VM or heavyweight cloud image.
  • True software‑based load balancers, such as NGINX Open Source and NGINX Plus. These are built from the ground up as software, and like other software applications are binaries that can be installed in any environment that meets the technical specifications – bare metal, VM, container, or cloud.

Most software appliances began as integrated hardware devices, with proprietary (single‑purpose, closed) hardware, operating system, and user interface. When vendors created software versions of their load balancers, they adapted the operating system and software to run on a hypervisor (or cloud) that provided a standard, virtualized hardware instance.

On the other hand, software‑first load balancers such as NGINX are portable, lightweight applications able to run on a wide range of general‑purpose operating systems.

Why Is a True Software Load Balancer Better?

Does a true software load balancer have advantages over a software appliance? We at NGINX believe so. Let’s explore four dimensions where these two approaches differ:

  • Flexibility – Software appliance vendors typically impose artificial limits on throughput and functionality, forcing you to pay more for better performance or advanced features. NGINX can be used as a load balancer, web server, content cache, reverse proxy, service mesh for microservices, and API gateway – simultaneously and taking full advantage of the power of the underlying hardware. For instance, if you are using NGINX as a load balancer and suddenly start experiencing a DDoS attack, you can mitigate this attack with NGINX’s API gateway capability by limiting the request rate to a value typical for real users (a config sketch follows this list).

    This flexibility allows our customers to reduce complexity and save costs. For example, a leading B2C enterprise maintaining a large website used 13 disparate solutions for content delivery network (CDN), network (Layer 4) load balancer, application (Layer 7) load balancer, API gateway, web application firewall (WAF), reverse proxy, web server, application server and microservices sidecar proxy. They plan to collapse these 13 solutions into just 3 using the NGINX Application Platform.

  • Seamless integration – True software load balancers are designed to meet the needs of many different kinds of applications. They can easily be integrated into your application stack and deployed inline wherever they’re needed. Whether you need to load balance a legacy application or a modern application that uses microservices, true software load balancers interoperate seamlessly with your application code, resulting in high performance and reliability.

    In contrast, software appliances can only act as endpoints – “front doors” to your application stack. Given their large size and packaging as VMs, software appliances don’t fit in modern application environments built on the cloud, containers, and microservices. True software load balancers like NGINX and NGINX Plus can function at any and every layer of your infrastructure, from reverse proxy at the network edge to sidecar proxy handling intra‑service mesh traffic in a microservices environment.

  • Compliance – You can deploy a true software load balancer on Linux OS distributions that have been hardened and approved by your IT team. Compliance with IT policies is a lot harder with a software appliance that’s already configured with its own OS and other system software.

    Software appliances are essentially black boxes – when vulnerabilities are discovered (such as the many vulnerabilities discovered in OpenSSL in recent years), you are at the mercy of the appliance vendor to incorporate patches, test them, and issue a fix. We’ve heard reports of many weeks’ turnaround for serious vulnerabilities, and of appliances that use 10‑year‑old versions of OpenSSL for management interfaces.

  • Lightweight – Software appliances packaged as VMs have a large footprint (typically many GBs) because they include a proprietary OS and other system software components on top of the host OS. This is certainly not ideal if you have a bare metal environment or are shifting toward a containerized environment. Software appliances are not ideal for virtualized environments either – in order to ensure the load balancer is not constrained by CPU or memory, compute resources are typically over‑provisioned for the VMs hosting the software appliance, resulting in substantial costs.

    NGINX is less than 2 MB in size. It can run on supported Linux servers (bare metal, cloud, or virtual), or directly in containers on Kubernetes and other platforms. You control the compute resources needed for operating NGINX based on the needs of your environment.
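Returning to the DDoS example under Flexibility above, here is a minimal sketch of request rate limiting; the rate, zone name, location, and upstream are illustrative, and real limits should be tuned to observed traffic:

# Track clients by IP address and allow each one 10 requests per second
limit_req_zone $binary_remote_addr zone=clients:10m rate=10r/s;

server {
    listen 80;

    location /api/ {
        # Absorb short bursts, but reject sustained floods (503 by default)
        limit_req zone=clients burst=20 nodelay;
        proxy_pass http://api_backend;   # hypothetical upstream defined elsewhere
    }
}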

The limitations built into software appliances mean they cover only a small subset of the architectures used in enterprise application delivery environments. A true software load balancer encompasses all of them.

Summary

A true software load balancer is well suited for the broadest range of compute infrastructure and breadth of application types. Software appliances are limited to a traditional IT infrastructure environment supporting legacy applications. As you modernize your infrastructure and applications, true software load balancers are key to achieving your DevOps objectives. At the same time, they work well across the full breadth of your legacy compute infrastructure and existing set of applications; thus they’re the only choice that actually simplifies your architecture.

Are your software appliances keeping you from achieving your DevOps objectives? Are you finding that you even have to deploy true software load balancers to supplement your appliances? Or have you invested in true software load balancers? We’d love to hear from you in the comments below. In the meantime, get started with a free 30‑day trial of NGINX Plus and enjoy the advantages of a true software‑based load balancer.

The post Not All Software Load Balancers Are Created Equal appeared first on NGINX.
