NGINX Plus Archives – NGINX
https://www.nginx.com/blog/tag/nginx-plus/

Dynamic A/B Kubernetes Multi-Cluster Load Balancing and Security Controls with NGINX Plus
https://www.nginx.com/blog/dynamic-a-b-kubernetes-multi-cluster-load-balancing-and-security-controls-with-nginx-plus/
Thu, 15 Feb 2024
You’re a modern Platform Ops or DevOps engineer. You use a library of open source (and maybe some commercial) tools to test, deploy, and manage new apps and containers for your Dev team. You’ve chosen Kubernetes to run these containers and pods in development, test, staging, and production environments. You’ve bought into the architectures and concepts of microservices and, for the most part, it works pretty well. However, you’ve encountered a few speed bumps along this journey.

For instance, as you build and roll out new clusters, services, and applications, how do you easily integrate or migrate these new resources into production without dropping any traffic? Traditional networking appliances require reloads or reboots when implementing configuration changes to DNS records, load balancers, firewalls, and proxies. These changes cannot be made without downtime, because a “service outage” or “maintenance window” is required to update DNS, load balancer, and firewall rules. More often than not, you have to submit a dreaded service ticket and wait for another team to approve and make the changes.

Maintenance windows can drive your team into a ditch, stall application delivery, and make you declare, “There must be a better way to manage traffic!” So, let’s explore a solution that gets you back in the fast lane.

Active-Active Multi-Cluster Load Balancing

If you have multiple Kubernetes clusters, it’s ideal to route traffic to both clusters at the same time. An even better option is to perform A/B, canary, or blue-green traffic splitting and send a small percentage of your traffic as a test. To do this, you can use NGINX Plus with ngx_http_split_clients_module.

K8s with NGINX Plus diagram

The HTTP Split Clients module is part of NGINX Open Source and distributes requests across upstreams in a configured ratio, based on a key. In this use case, the clusters are the NGINX “upstreams,” so as client requests arrive, the traffic is split between the two clusters. The key can be any available NGINX client $variable. To split every request independently, use the $request_id variable, a unique number that NGINX assigns to each incoming request.

To configure the split ratios, determine which percentages you’d like to go to each cluster. In this example, we use K8s Cluster1 as a “large cluster” for production and Cluster2 as a “small cluster” for pre-production testing. If you had a small cluster for staging, you could use a 90:10 ratio and test 10% of your traffic on the small cluster to ensure everything is working before you roll out new changes to the large cluster. If that sounds too risky, you can change the ratio to 95:5. Truthfully, you can pick any ratio you’d like from 0 to 100%.

For most real-time production traffic, you likely want a 50:50 ratio where your two clusters are of equal size. But you can easily provide other ratios, based on the cluster size or other details. You can easily set the ratio to 0:100 (or 100:0) and upgrade, patch, repair, or even replace an entire cluster with no downtime. Let NGINX split_clients route the requests to the live cluster while you address issues on the other.


# NGINX Multi-Cluster Load Balancing
# HTTP Split Clients configuration for Cluster1:Cluster2 ratios
# Provides 100, 99, 90, 50, 1, 0% ratios  (add/change as needed)
# Based on
# https://www.nginx.com/blog/dynamic-a-b-testing-with-nginx-plus/
# Chris Akker – Jan 2024
#
 
split_clients $request_id $split100 {
   * cluster1-cafe;                     # All traffic to cluster1
}

split_clients $request_id $split99 {
   99% cluster1-cafe;                   # 99% cluster1, 1% cluster2
   * cluster2-cafe;
}

split_clients $request_id $split90 {
   90% cluster1-cafe;                   # 90% cluster1, 10% cluster2
   * cluster2-cafe;
}

split_clients $request_id $split50 {
   50% cluster1-cafe;                   # 50% cluster1, 50% cluster2
   * cluster2-cafe;
}

split_clients $request_id $split1 {
   1.0% cluster1-cafe;                  # 1% cluster1, 99% cluster2
   * cluster2-cafe;
}

split_clients $request_id $split0 {
   * cluster2-cafe;                     # All traffic to cluster2
}

# Choose the cluster upstream based on the ratio stored in the key-value store

map $split_level $upstream {
   100 $split100;
   99 $split99;
   90 $split90;
   50 $split50;
   1.0 $split1;
   0 $split0;
   default $split50;
}

You can add or edit the configuration above to match the ratios that you need (e.g., 90:10, 80:20, 60:40, and so on).

Note: NGINX also has a Split Clients module for TCP connections in the stream context, which can be used for non-HTTP traffic. This splits the traffic based on new TCP connections, instead of HTTP requests.
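For illustration, here is a minimal sketch of the stream-context equivalent, under the assumption of two hypothetical TCP upstreams and a listener on port 6443 (note that a key like $remote_addr pins each client IP to one cluster, so pick a key that matches how you want connections distributed):

stream {
   upstream cluster1-tcp { server 10.1.1.8:30080; }     # example worker IP:NodePort
   upstream cluster2-tcp { server 10.1.1.10:30080; }    # example worker IP:NodePort

   split_clients $remote_addr $upstream_tcp {
      90% cluster1-tcp;                 # 90% of client IPs to cluster1
      *   cluster2-tcp;                 # remainder to cluster2
   }

   server {
      listen 6443;
      proxy_pass $upstream_tcp;         # upstream chosen per new connection
   }
}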

NGINX Plus Key-Value Store

The next feature you can use is the NGINX Plus key-value store. This is a key-value object in an NGINX shared memory zone that can be used for many different data storage use cases. Here, we use it to store the split ratio value mentioned in the section above. NGINX Plus allows you to change any key-value record without reloading NGINX. This enables you to change this split value with an API call, creating the dynamic split function.

Based on our example, it looks like this:

{"cafe.example.com":90}

This KeyVal record reads:

  • The Key is the “cafe.example.com” hostname
  • The Value is “90” for the split ratio

Instead of hard-coding the split ratio in the NGINX configuration files, you can instead use the key-value memory. This eliminates the NGINX reload required to change a static split value in NGINX.

In this example, NGINX is configured to use 90:10 for the split ratio with the large Cluster1 for the 90% and the small Cluster2 for the remaining 10%. Because this is a key-value record, you can change this ratio using the NGINX Plus API dynamically with no configuration reloads! The Split Clients module will use this new ratio value as soon as you change it, on the very next request.

Create the KV record, starting with a 50/50 ratio:

Add a new record to the KeyValue store by sending an API command to NGINX Plus:

curl -iX POST -d '{"cafe.example.com":50}' http://nginxlb:9000/api/8/http/keyvals/split

Change the KV record to the 90/10 ratio:

Change the KeyVal split ratio to 90, using an HTTP PATCH method to update the KeyVal record in memory:

curl -iX PATCH -d '{"cafe.example.com":90}' http://nginxlb:9000/api/8/http/keyvals/split

Next, once the pre-production testing team verifies the new application code is ready, you deploy it to the large Cluster1 and change the ratio to 100%. This immediately sends all the traffic to Cluster1, and your new application is “live” without any disruption to traffic – no service outages, maintenance windows, reboots, reloads, or piles of tickets. It takes only one API call to change this split ratio at the time of your choosing.

Of course, moving from 90% to 100% that easily also means you can just as easily change the ratio from 100:0 to 50:50 (or even 0:100). So, you can keep a hot backup cluster or scale your clusters horizontally with new resources. At full throttle, you can even build an entirely new cluster with the latest software, hardware, and patches – deploying the application and migrating the traffic over a period of time without dropping a single connection!
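As a usage sketch, such a migration could be scripted against the same keyval endpoint shown above (the host, port, and API version follow the earlier curl examples; each ratio step must have a matching entry in the map block):

#!/usr/bin/env bash
# Step traffic away from Cluster1, pausing to watch metrics between steps.
for ratio in 99 50 0; do
   curl -iX PATCH -d "{\"cafe.example.com\":${ratio}}" \
        http://nginxlb:9000/api/8/http/keyvals/split
   sleep 60   # check dashboards and error rates before the next step
done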

Use Cases

Using the HTTP Split Clients module with the dynamic key-value store can deliver the following use cases:

  • Active-active load balancing – For load balancing to multiple clusters.
  • Active-passive load balancing – For load balancing to primary, backup, and DR clusters and applications.
  • A/B, blue-green, and canary testing – Used with new Kubernetes applications.
  • Horizontal cluster scaling – Adds more cluster resources and changes the ratio when you’re ready.
  • Hitless cluster upgrades – Ability to use one cluster while you upgrade, patch, or repair the other cluster.
  • Instant failover – If one cluster has a serious issue, you can change the ratio to use your other cluster.

Configuration Examples

Here is an example of the key-value configuration:

# Define Key Value store, backup state file, timeout, and enable sync
 
keyval_zone zone=split:1m state=/var/lib/nginx/state/split.keyval timeout=365d sync;

keyval $host $split_level zone=split;

And this is an example of the cafe.example.com application configuration:

# Define server and location blocks for cafe.example.com, with TLS

server {
   listen 443 ssl;
   server_name cafe.example.com;

   status_zone https://cafe.example.com;

   ssl_certificate /etc/ssl/nginx/cafe.example.com.crt;
   ssl_certificate_key /etc/ssl/nginx/cafe.example.com.key;

   location / {
      status_zone /;

      proxy_set_header Host $host;
      proxy_http_version 1.1;
      proxy_set_header "Connection" "";
      proxy_pass https://$upstream;    # traffic split to upstream blocks
   }
}

# Define 2 upstream blocks – one for each cluster
# Servers managed dynamically by NLK, state file backup

# Cluster1 upstreams
 
upstream cluster1-cafe {
   zone cluster1-cafe 256k;
   least_time last_byte;
   keepalive 16;
   #servers managed by NLK Controller
   state /var/lib/nginx/state/cluster1-cafe.state; 
}
 
# Cluster2 upstreams
 
upstream cluster2-cafe {
   zone cluster2-cafe 256k;
   least_time last_byte;
   keepalive 16;
   #servers managed by NLK Controller
   state /var/lib/nginx/state/cluster2-cafe.state; 
}

The upstream server IP:ports are managed by NGINX Loadbalancer for Kubernetes, a new controller that also uses the NGINX Plus API to configure NGINX Plus dynamically. Details are in the next section.

Let’s take a look at the HTTP split traffic over time with Grafana, a popular monitoring and visualization tool. You use the NGINX Prometheus Exporter (based on njs) to export all of your NGINX Plus metrics, which are then collected and graphed by Grafana. Details for configuring Prometheus and Grafana can be found below.

There are four upstream servers in the graph: two for Cluster1 and two for Cluster2. We use an HTTP load generation tool to create HTTP requests and send them to NGINX Plus.

In the three graphs below, you can see the split ratio is at 50:50 at the beginning of the graph.

LB Upstream Requests diagram

Then, the ratio changes to 10:90 at 12:56:30.

LB Upstream Requests diagram

Then it changes to 90:10 at 13:00:00.

LB Upstream Requests diagram

You can find working configurations of Prometheus and Grafana on the NGINX Loadbalancer for Kubernetes GitHub repository.

Dynamic HTTP Upstreams: NGINX Loadbalancer for Kubernetes

You can change the static NGINX Upstream configuration to dynamic cluster upstreams using the NGINX Plus API and the NGINX Loadbalancer for Kubernetes controller. This free project is a Kubernetes controller that watches NGINX Ingress Controller and automatically updates an external NGINX Plus instance configured for TCP/HTTP load balancing. It’s very straightforward in design and simple to install and operate. With this solution in place, you can implement TCP/HTTP load balancing in Kubernetes environments, ensuring new apps and services are immediately detected and available for traffic – with no reload required.

Architecture and Flow

NGINX Loadbalancer for Kubernetes sits inside a Kubernetes cluster. It is registered with Kubernetes to watch the NGINX Ingress Controller (nginx-ingress) Service. When there is a change to the Ingress controller(s), NGINX Loadbalancer for Kubernetes collects the Worker IPs and the NodePort TCP port numbers, then sends the IP:ports to NGINX Plus via the NGINX Plus API.

The NGINX upstream servers are updated with no reload required, and NGINX Plus load balances traffic to the correct upstream servers and Kubernetes NodePorts. Additional NGINX Plus instances can be added to achieve high availability.

Diagram of NGINX Loadbalancer in action

A Snapshot of NGINX Loadbalancer for Kubernetes in Action

In the screenshot below, there are two windows that demonstrate NGINX Loadbalancer for Kubernetes deployed and doing its job:

  1. Service Type – LoadBalancer for nginx-ingress
  2. External IP – Connects to the NGINX Plus servers
  3. Ports – NodePort maps to 443:30158 with matching NGINX upstream servers (as shown in the NGINX Plus real-time dashboard)
  4. Logs – Indicates NGINX Loadbalancer for Kubernetes is successfully sending data to NGINX Plus

NGINX Plus window

Note: In this example, the Kubernetes worker nodes are 10.1.1.8 and 10.1.1.10.

Adding NGINX Plus Security Features

As more and more applications running in Kubernetes are exposed to the open internet, security becomes necessary. Fortunately, NGINX Plus has enterprise-class security features that can be used to create a layered, defense-in-depth architecture.

With NGINX Plus in front of your clusters and performing the split_clients function, why not leverage that presence and add some beneficial security features? Here are a few of the NGINX Plus features that could be used to enhance security, with links and references to other documentation that can be used to configure, test, and deploy them.

Get Started Today

If you’re frustrated with networking challenges at the edge of your Kubernetes cluster, consider trying out this NGINX multi-cluster Solution. Take the NGINX Loadbalancer for Kubernetes software for a test drive and let us know what you think. The source code is open source (under the Apache 2.0 license) and all installation instructions are available on GitHub.

To provide feedback, drop us a comment in the repo or message us in the NGINX Community Slack.

Updating NGINX for the Vulnerabilities in the HTTP/3 Module
https://www.nginx.com/blog/updating-nginx-for-the-vulnerabilities-in-the-http-3-module/
Wed, 14 Feb 2024
Today, we are releasing updates to NGINX Plus, NGINX Open Source, and NGINX Open Source subscription in response to internally discovered vulnerabilities in the HTTP/3 module ngx_http_v3_module. These vulnerabilities were discovered based on two bug reports in NGINX Open Source (trac #2585 and trac #2586). Note that this module is not enabled by default and is documented as experimental.

The vulnerabilities have been registered in the Common Vulnerabilities and Exposures (CVE) database and the F5 Security Incident Response Team (F5 SIRT) has assigned scores to them using the Common Vulnerability Scoring System (CVSS v3.1) scale.

The following vulnerabilities in the HTTP/3 module apply to NGINX Plus, NGINX Open Source subscription, and NGINX Open Source.

CVE-2024-24989: The patch for this vulnerability is included in the following software versions:

  • NGINX Plus R31 P1
  • NGINX Open Source subscription R6 P1
  • NGINX Open Source mainline version 1.25.4. (The latest NGINX Open Source stable version, 1.24.0, is not affected.)

CVE-2024-24990: The patch for this vulnerability is included in the following software versions:

  • NGINX Plus R30 P2
  • NGINX Plus R31 P1
  • NGINX Open Source subscription R5 P2
  • NGINX Open Source subscription R6 P1
  • NGINX Open Source mainline version 1.25.4. (The latest NGINX Open Source stable version, 1.24.0, is not affected.)

You are impacted if you are running NGINX Plus R30 or R31, NGINX Open Source subscription packages R5 or R6, or NGINX Open Source mainline version 1.25.3 or earlier. We strongly recommend that you upgrade your NGINX software to the latest version.

For NGINX Plus upgrade instructions, see Upgrading NGINX Plus in the NGINX Plus Admin Guide. NGINX Plus customers can also contact our support team for assistance at https://my.f5.com/.
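As an illustrative sketch on Debian/Ubuntu systems (package names and steps vary by platform – follow the Admin Guide for yours), an in-place upgrade looks like this:

sudo apt update
sudo apt install --only-upgrade nginx-plus
nginx -v    # confirm the patched version is now installed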

NGINX’s Continued Commitment to Securing Users in Action
https://www.nginx.com/blog/nginx-continued-commitment-to-securing-users-in-action/
Wed, 14 Feb 2024
F5 NGINX is committed to a secure software lifecycle, including design, development, and testing optimized to find security concerns before release. While we prioritize threat modeling, secure coding, training, and testing, vulnerabilities do occasionally occur.

Last month, a member of the NGINX Open Source community reported two bugs in the HTTP/3 module that caused a crash in NGINX Open Source. We determined that a bad actor could cause a denial-of-service attack on NGINX instances by sending specially crafted HTTP/3 requests. For this reason, NGINX just announced two vulnerabilities: CVE-2024-24989 and CVE-2024-24990.

The vulnerabilities have been registered in the Common Vulnerabilities and Exposures (CVE) database, and the F5 Security Incident Response Team (F5 SIRT) has assigned them scores using the Common Vulnerability Scoring System (CVSS v3.1) scale.

Upon release, the QUIC and HTTP/3 features in NGINX were considered experimental. Historically, we did not issue CVEs for experimental features and instead would patch the relevant code and release it as part of a standard release. For commercial customers of NGINX Plus, the previous two versions would be patched and released to customers. We felt that not issuing a similar patch for NGINX Open Source would be a disservice to our community. Additionally, committing the fix to the open source branch without an advisory would have publicly exposed users to the vulnerability before a patched binary was available.

Our decision to release a patch for both NGINX Open Source and NGINX Plus is rooted in doing what is right – to deliver highly secure software for our customers and community. Furthermore, we’re making a commitment to document and release a clear policy for how future security vulnerabilities will be addressed in a timely and transparent manner.

Tutorial: Configure OpenTelemetry for Your Applications Using NGINX
https://www.nginx.com/blog/tutorial-configure-opentelemetry-for-your-applications-using-nginx/
Thu, 18 Jan 2024
If you’re looking for a tool to trace web applications and infrastructure more effectively, OpenTelemetry might be just what you need. By instrumenting your NGINX server with the existing OpenTelemetry NGINX community module, you can collect metrics, traces, and logs and gain better visibility into the health of your server. This, in turn, enables you to troubleshoot issues and optimize your web applications for better performance. However, the existing community module can also slow down your server’s response times due to the performance overhead that tracing requires. This process can also consume additional resources, increasing CPU and memory usage. Furthermore, setting up and configuring the module can be a hassle.

NGINX has recently developed a native OpenTelemetry module, ngx_otel_module, which revolutionizes the tracing of request processing performance. The module utilizes telemetry calls to monitor application requests and responses, enabling enhanced tracking capabilities. The module can be conveniently set up and configured within the NGINX configuration files, making it highly user-friendly. This new module caters to the needs of both NGINX OSS and NGINX Plus users. It supports W3C context propagation and OTLP/gRPC export protocol, rendering it a comprehensive solution for optimizing performance.

The NGINX-native OpenTelemetry module is a dynamic module that doesn’t require any additional packaging with NGINX Plus. It works alongside other NGINX Plus features, such as the API and key-value store modules, which together provide a complete solution for monitoring and optimizing the performance of your NGINX Plus instance. By using ngx_otel_module, you can gain valuable insights into your web application’s performance and take steps to improve it. We highly recommend exploring ngx_otel_module to discover how it can help you achieve better results.

Note: You can head over to our GitHub page for detailed instructions on how to install ngx_otel_module and get started.

Tutorial Overview

In this blog, you can follow a step-by-step guide on configuring OpenTelemetry in NGINX Plus and using the Jaeger tool to collect and visualize traces. OpenTelemetry is a powerful tool that offers a comprehensive view of a request’s path, including valuable information such as latency, request details, and response data. This can be incredibly useful in optimizing performance and identifying potential issues. To simplify things, we have set up the OpenTelemetry module, application, and Jaeger all in one instance, which you can see in the diagram below.

Open Telemetry Module diagram
Figure 1: NGINX OpenTelemetry architecture overview

Follow the steps in these sections to complete the tutorial:

  • Prerequisites
  • Deploy NGINX Plus and Install the OpenTelemetry Module
  • Deploy Jaeger and the echo Application
  • Configure OpenTelemetry in NGINX for Tracing
  • Test the Configuration

Prerequisites

  • A Linux/Unix environment, or any compatible environment
  • An NGINX Plus subscription
  • Basic familiarity with the Linux command line and JavaScript
  • Docker
  • Node.js 19.x or later
  • Curl

Deploy NGINX Plus and Install the OpenTelemetry Module

Selecting an appropriate environment is crucial for successfully deploying an NGINX instance. This tutorial will walk you through deploying NGINX Plus and installing the NGINX dynamic modules.

  1. Install NGINX Plus on a supported operating system.
  2. Install ngx_otel_module. Load the dynamic module by adding this directive to your NGINX configuration to activate OpenTelemetry:

     load_module modules/ngx_otel_module.so;

  3. Reload NGINX to enable the module:

     nginx -t && nginx -s reload

Deploy Jaeger and the echo Application

There are various options available to view traces. This tutorial uses Jaeger to collect and analyze OpenTelemetry data. Jaeger provides an efficient and user-friendly interface to collect and visualize tracing data. After data collection, you will deploy mendhak/http-https-echo, a simple Docker application that echoes the attributes of each request back in JSON format.

  1. Use docker-compose to deploy Jaeger and the http-echo application. You can create a docker-compose file by copying the configuration below and saving it in a directory of your choice.

    
    version: '3'

    services:
      jaeger:
        image: jaegertracing/all-in-one:1.41
        container_name: jaeger
        ports:
          - "16686:16686"
          - "4317:4317"
          - "4318:4318"
        environment:
          COLLECTOR_OTLP_ENABLED: true

      http-echo:
        image: mendhak/http-https-echo
        environment:
          - HTTP_PORT=8888
          - HTTPS_PORT=9999
        ports:
          - "4500:8888"
          - "8443:9999"
    
  2. To install the Jaeger all-in-one tracing and http-echo application, run this command:

     docker-compose up -d

  3. Run the docker ps -a command to verify that both containers are running:

     $ docker ps -a
     CONTAINER ID   IMAGE                           COMMAND                  CREATED        STATUS        PORTS                                                                                NAMES
     5cb7763439f8   jaegertracing/all-in-one:1.41   "/go/bin/all-in-one-…"   30 hours ago   Up 30 hours   5775/udp, 5778/tcp, 14250/tcp, 0.0.0.0:4317-4318->4317-4318/tcp, :::4317-4318->4317-4318/tcp, 0.0.0.0:16686->16686/tcp, :::16686->16686/tcp, 6831-6832/udp, 14268/tcp   jaeger
     e55d9c00a158   mendhak/http-https-echo         "docker-entrypoint.s…"   11 days ago    Up 30 hours   8080/tcp, 8443/tcp, 0.0.0.0:8080->8888/tcp, :::8080->8888/tcp, 0.0.0.0:8443->9999/tcp, :::8443->9999/tcp   ubuntu-http-echo-1

     You can now access Jaeger by entering http://localhost:16686 in your browser. Note that you might not see any trace data right away, as traces are currently being sent to the console. But don’t worry! We can quickly resolve this by exporting the traces in the OpenTelemetry Protocol (OTLP) format. You’ll learn how to do this in the next section, when we configure NGINX to send the traces to Jaeger.

Configure OpenTelemetry in NGINX for Tracing

This section will show you step-by-step how to set up the OpenTelemetry directive in NGINX Plus using a key-value store. This powerful configuration enables precise monitoring and analysis of traffic, allowing you to optimize your application’s performance. By the end of this section, you will have a solid understanding of utilizing the NGINX OpenTelemetry module to track your application’s performance.

Setting up and configuring telemetry collection is a breeze with NGINX configuration files. With ngx_otel_module, users can access a robust, protocol-aware tracing tool that can help to quickly identify and resolve issues in applications. This module is a valuable addition to your application development and management toolset and will help you enhance the performance of your applications. To learn more about other OpenTelemetry sample configurations, please refer to the ngx_otel_module documentation.

OpenTelemetry Directives and Variables

NGINX has new directives that can help you achieve an even more optimized OpenTelemetry deployment, tailored to your specific needs. These directives were designed to enhance your application’s performance and make it more efficient than ever.

Module Directives:

  1. otel_exporter – Sets the parameters for exporting OpenTelemetry data, including the endpoint, interval, batch size, and batch count. These parameters are crucial for the successful export of data and must be defined accurately.
  2. otel_service_name – Sets the service name attribute for your OpenTelemetry resource to improve organization and tracking.
  3. otel_trace – Enables or disables OpenTelemetry tracing. Tracing can be toggled by specifying a variable, which offers flexibility in managing your tracing settings.
  4. otel_span_name – Sets the name of the OpenTelemetry span. By default, it is the location name for a request; the name is customizable and can include variables as required.

Configuration Examples

Here are examples of ways you can configure OpenTelemetry in NGINX using the NGINX Plus key-value store. The NGINX Plus key-value store module offers a valuable use case that enables dynamic configuration of OpenTelemetry span and other OpenTelemetry attributes, thereby streamlining the process of tracing and debugging.

This is an example of dynamically enabling OpenTelemetry tracing by using a key-value store:


http {
    keyval_zone zone=name:64k;                     # the zone must be defined before it is referenced
    keyval "otel.trace" $trace_switch zone=name;

    server {
        location / {
            otel_trace $trace_switch;
            otel_trace_context inject;
            proxy_pass http://backend;
        }

        location /api {
            api write=on;
        }
    }
}

Tracing can then be toggled at runtime by updating the otel.trace key through the NGINX Plus API (see the sketch after the snippet below). Conversely, when you want to lock the switch down, here is an example of disabling write access to the API so the key-value store can no longer be changed dynamically:

location /api {
    api write=off;
}
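For completeness, here is a hedged sketch of flipping the tracing switch while write access is still enabled; it assumes the zone=name key-value zone above, plus the listen port (4000) and API version (6) used in the curl example later in this post:

# Enable tracing dynamically by creating the key-value record
curl -X POST -d '{"otel.trace":"on"}' http://localhost:4000/api/6/http/keyvals/name

# Disable tracing again by updating the record in memory
curl -X PATCH -d '{"otel.trace":"off"}' http://localhost:4000/api/6/http/keyvals/name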

Here is an example NGINX OpenTelemetry span attribute configuration:


user  nginx;
worker_processes  auto;
load_module modules/ngx_otel_module.so;
error_log /var/log/nginx/error.log debug;
pid   /var/run/nginx.pid;


events {
    worker_connections  1024;
}

http {
    keyval "otel.span.attr" $trace_attr zone=demo;
    keyval_zone zone=demo:64k  state=/var/lib/nginx/state/demo.keyval;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    include       mime.types;
    default_type  application/json;
    upstream echo {
        server localhost:4500;
        zone echo 64k;
    }
    otel_service_name nginx;

    otel_exporter {
        endpoint localhost:4317;
    }

    server {
        listen 4000;
        otel_trace on;
        otel_span_name otel;

        location /city {
            proxy_set_header "Connection" "";
            proxy_set_header Host $host;
            otel_span_attr demo $trace_attr;
            otel_trace_context inject;
            proxy_pass http://echo;
        }

        location /api {
            api write=on;
        }

        location = /dashboard.html {
            root /usr/share/nginx/html;
        }
    }

}

To save the configuration and reload NGINX, run this command:

nginx -s reload

Lastly, here is how to add a span attribute through the NGINX Plus API:


curl -X POST -d '{"otel.span.attr": "<span attribute name>"}' http://localhost:4000/api/6/http/keyvals/<zone name>

Test the Configuration

Now, you can test your configuration by following the steps below.

  1. To generate the trace data, start by opening your terminal window. Next, type in this command to create the data:

    $ curl -i localhost:4000/city

    The output will look like this:

                          
     HTTP/1.1 200 OK
     Server: nginx/1.25.3
     Date: Wed, 29 Nov 2023 20:25:04 GMT
     Content-Type: application/json; charset=utf-8
     Content-Length: 483
     Connection: keep-alive
     X-Powered-By: Express
     ETag: W/"1e3-2FytbGLEVpb4LkS9Xt+KkoKVW2I"

     {
       "path": "/city",
       "headers": {
         "host": "localhost",
         "connection": "close",
         "user-agent": "curl/7.81.0",
         "accept": "*/*",
         "traceparent": "00-66ddaa021b1e36b938b0a05fc31cab4a-182d5a6805fef596-00"
       },
       "method": "GET",
       "body": "",
       "fresh": false,
       "hostname": "localhost",
       "ip": "::ffff:172.18.0.1",
       "ips": [],
       "protocol": "http",
       "query": {},
       "subdomains": [],
       "xhr": false,
       "os": {
         "hostname": "e55d9c00a158"
       },
       "connection": {}
     }
  2. Now ensure that the OTLP exporter is functioning correctly and that you can access the trace. Open a browser and go to the Jaeger UI at http://localhost:16686. Once the page loads, click on the Search button in the title bar. From the drop-down menu in the Service field, select the service that starts with nginx. Then select the operation named otel from the Operation drop-down menu. To make it easier to identify any issues, click on the Find Traces button to visualize the trace.

     Jaeger dashboard
     Figure 2: Jaeger dashboard

  3. To access a more detailed and comprehensive analysis of a specific trace, click on one of the individual traces available. This will provide you with valuable insights into the trace you have selected. In the trace below, you can review both the OpenTelemetry directive span attribute and the non-directive parts of the trace, allowing you to better understand the data at hand.

     Analysis of the OpenTelemetry trace
     Figure 3: Detailed analysis of the OpenTelemetry trace

     Under Tags you can see the following attributes:

     • demo – OTel – the OpenTelemetry span attribute name
     • http.status_code – 200 – indicates a successful request
     • otel.library.name – nginx – the OpenTelemetry service name

Conclusion

NGINX now has built-in support for OpenTelemetry, a significant development for tracing requests and responses in complex application environments. This feature streamlines the process and ensures seamless integration, making it much easier for developers to monitor and optimize their applications.

Although the OpenTracing module that was introduced in NGINX Plus R18 is now deprecated and will be removed starting from NGINX Plus R34, it will still be available in all NGINX Plus releases until then. However, it’s recommended to use the OpenTelemetry module, which was introduced in NGINX Plus R29.

If you’re new to NGINX Plus, you can start your 30-day free trial today or contact us to discuss your use cases.

QUIC+HTTP/3 Support for OpenSSL with NGINX
https://www.nginx.com/blog/quic-http3-support-openssl-nginx/
Wed, 13 Sep 2023
Developers usually want to build applications and infrastructure using released, official, and supported libraries. Even with HTTP/3, there is a strong need for a convenient library that supports QUIC and doesn’t increase the maintenance costs or operational complexity in the production infrastructure.

For many QUIC+HTTP/3 users, that default cryptographic library is OpenSSL. Installed on most Linux-based operating systems by default, OpenSSL is the number one Transport Layer Security (TLS) library and is used by the majority of network applications.

The Problem: Incompatibility Between OpenSSL and QUIC+HTTP/3

Even with such wide usage, OpenSSL does not provide the TLS API required for QUIC support. Instead, the OpenSSL Management Committee decided to implement a complete QUIC stack on its own. This endeavor is a considerable effort planned for OpenSSL v3.4 but, according to the OpenSSL roadmap, it is unlikely to happen before the end of 2024. Furthermore, the initial Minimum Viable Product of the OpenSSL implementation won’t contain the QUIC API implementation, so there is no clear path for users to get HTTP/3 support with OpenSSL.

Options for QUIC TLS Support

In this situation, there are two options for users looking for QUIC TLS support for their HTTP/3 needs:

  • OpenSSL QUIC implementation – As mentioned above, OpenSSL is currently working on implementing a complete QUIC stack on its own. This development will encapsulate all QUIC functionality within the implementation, making it much easier for HTTP/3 users to use the OpenSSL TLS API without worrying about QUIC-specific functionality.
  • Libraries supporting the BoringSSL QUIC API – Various SSL libraries like BoringSSL, quicTLS, and LibreSSL (all of which started as forks of OpenSSL) now provide QUIC TLS functionality by implementing BoringSSL QUIC API. However, these libraries aren’t as widely adopted as OpenSSL. This option also requires building the SSL library from source and installing it on every server that needs QUIC+HTTP/3 support, which might not be a feasible option for everyone. That said, this is currently the only option for users wanting to use HTTP/3 because the OpenSSL QUIC TLS implementation is not ready yet.

A New Solution: The OpenSSL Compatibility Layer

At NGINX, we felt inspired by these challenges and created the OpenSSL Compatibility Layer to simplify QUIC+HTTP/3 deployments that use OpenSSL and help avoid complexities associated with maintaining a separate SSL library in production environments.

Available with NGINX Open Source mainline since version 1.25.0 and NGINX Plus R30, the OpenSSL Compatibility Layer allows NGINX to run QUIC+HTTP/3 on top of OpenSSL without needing to patch or rebuild it. This removes the dependency of compiling and deploying third-party TLS libraries to get QUIC support. Since users don’t need to use third-party libraries, it also alleviates the dependency on schedules and roadmaps of those libraries, making it a comparatively easier solution to deploy in production.
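To make that concrete, here is a minimal sketch of an HTTP/3 server block that relies on the compatibility layer with stock OpenSSL (directives follow the NGINX QUIC documentation; certificate paths are placeholders):

server {
    # QUIC (UDP) and TCP listeners on the same port, as recommended
    listen 443 quic reuseport;
    listen 443 ssl;

    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        # advertise HTTP/3 so browsers can switch to the QUIC port
        add_header Alt-Svc 'h3=":443"; ma=86400';
    }
}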

How the OpenSSL Compatibility Layer Works

The OpenSSL Compatibility Layer implements these steps:

  • Converts a QUIC handshake to a TLS 1.3 handshake that is supported by OpenSSL.
  • Passes the TLS handshake messages in and out of OpenSSL.
  • Gets the encryption keys for handshake and application encryption levels out of OpenSSL.
  • Passes the QUIC transport parameters in and out of OpenSSL.

Based on the amount of OpenSSL adoption today and knowing its status with official QUIC+HTTP/3 support, we believe an easy and scalable option to enable QUIC is a step in the right direction. It will also promote HTTP/3 adoption and allow for valuable feedback. Most importantly, we trust that the OpenSSL Compatibility Layer will help us provide a more robust and scalable solution for our enterprise users and the entire NGINX community.

Note: While we are making sure NGINX users have an easy and scalable option with the availability of the OpenSSL Compatibility Layer, users still have options to use third-party libraries like BoringSSL, quicTLS, or LibreSSL with NGINX. To decide which one is the right path for you, consider what approach best meets your requirements and how comfortable you are with compiling and managing libraries as dependencies.

A Note on 0-RTT

0-RTT is a feature in QUIC that allows a client to send application data before the TLS handshake is complete. 0-RTT functionality is made possible by reusing negotiated parameters from a previous connection. It is enabled by the client remembering critical parameters and providing the server with a TLS session ticket that allows the server to recover the same information.

While this feature is an important part of QUIC, it is not yet supported in the OpenSSL Compatibility Layer. If you have specific use cases that need 0-RTT, we welcome your feedback to inform our roadmap.

Learn More about NGINX with QUIC+HTTP/3 and OpenSSL

You can begin using NGINX’s OpenSSL Compatibility Layer today with NGINX Open Source or by starting a 30-day free trial of NGINX Plus. We hope you find it useful and welcome your feedback.

More information about NGINX with QUIC+HTTP/3 and OpenSSL is available in the resources below.

Using 1Password CLI to Securely Build NGINX Plus Containers
https://www.nginx.com/blog/using-1password-cli-to-securely-build-nginx-plus-containers/
Tue, 29 Aug 2023
If you’re a regular user of F5 NGINX Plus, it’s likely that you’re building containers to try out new features or functionality. And when building NGINX Plus containers, you often end up storing sensitive information like the NGINX repository certificate and key on your local file system. While it’s straightforward to add sensitive files to a repository’s .gitignore file, that process is neither ideal nor secure – in fact, there are many examples where engineers accidentally commit sensitive information to a repository.

A better method is to use a secrets management solution. Personally, I’m a longtime fan of 1Password and recently discovered their CLI tool. This tool makes it easier for developers and platform engineers to interact with secrets in their day-to-day workflow.

In this blog post, we outline how to use 1Password CLI to securely build an NGINX Plus container. This example assumes you have an NGINX Plus subscription, a 1Password subscription with the CLI tool installed, access to an environment with a shell (Bash or Zsh), and Docker installed.

Store Secrets in 1Password

The first step is to store your secrets in 1Password, which supports multiple secret types like API credentials, files, notes, and passwords. In this NGINX Plus use case, we leverage 1Password’s secure file feature.

You can obtain your NGINX repository certificate and key from the MyF5 portal. Follow the 1Password documentation to create a secure document for both the NGINX repository certificate and key. Once you have created the two secure documents, follow the steps to collect the 1Password secret reference.
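As a sketch, the two documents can also be created from the CLI itself (the titles and vault name here are assumptions and must match the secret references you use below):

# Store the certificate and key as 1Password secure documents
op document create nginx-repo.crt --title "nginx-repo-crt" --vault Work
op document create nginx-repo.key --title "nginx-repo-key" --vault Work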

Note: At the time of this writing, 1Password does not support multiple files on the same record.

Build the NGINX Plus Container

Now it’s time to build the NGINX Plus container that leverages your secure files and their secret reference Uniform Resource Identifiers (URIs). This step uses the example Dockerfile from the NGINX Plus Admin Guide.

Prepare the docker build Process

After saving the Dockerfile to a new directory, prepare the docker build process. To pass your 1Password secrets into the docker build, first store each secret reference URI in an environment variable. Then, open a new Bash terminal in the directory where you saved your Dockerfile.

Enter these commands into the Bash terminal:

export NGINX_CRT="op://Work/nginx-repo-crt/nginx-repo.crt"
export NGINX_KEY="op://Work/nginx-repo-key/nginx-repo.key"

Replace Secret Reference URIs

The op run command enables your 1Password CLI to replace secret reference URIs in environment variables with the secret’s value. You can leverage this in your docker build command to pass the NGINX repository certificate and key into the build container.

To finish building your container, run the following commands in the same terminal used in the previous step:

op run -- docker build --no-cache --secret id=nginx-key,env=NGINX_KEY --secret id=nginx-crt,env=NGINX_CRT -t nginxplus --load .

In this command, op run executes the docker build command and detects two environment variable references (NGINX_CRT and NGINX_KEY) with the 1Password secret reference URIs. The op command replaces the URI with the secret’s actual value.
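If the build succeeds, you can sanity-check the image with standard Docker commands (the nginxplus tag comes from the build command above):

docker images nginxplus                     # confirm the image exists
docker run --rm nginxplus nginx -v          # print the NGINX version inside the container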

Get Started Today

By following the simple steps and using 1Password CLI, you can build NGINX Plus containers against the NGINX Plus repository without storing the certificate and key on your local file system – creating an environment for better security.

If you’re new to NGINX Plus, you can start your 30-day free trial today or contact us to discuss your use cases.

Automate TCP Load Balancing to On-Premises Kubernetes Services with NGINX
https://www.nginx.com/blog/automate-tcp-load-balancing-to-on-premises-kubernetes-services-with-nginx/
Tue, 22 Aug 2023
You are a modern app developer. You use a collection of open source and maybe some commercial tools to write, test, deploy, and manage new apps and containers. You’ve chosen Kubernetes to run these containers and pods in development, test, staging, and production environments. You’ve bought into the architectures and concepts of microservices, the Cloud Native Computing Foundation, and other modern industry standards.

On this journey, you’ve discovered that Kubernetes is indeed powerful. But you’ve probably also been surprised at how difficult, inflexible, and frustrating it can be. Implementing and coordinating changes and updates to routers, firewalls, load balancers and other network devices can become overwhelming – especially in your own data center! It’s enough to bring a developer to tears.

How you handle these challenges has a lot to do with where and how you run Kubernetes (as a managed service or on premises). This article addresses TCP load balancing, a key area where deployment choices impact ease of use.

TCP Load Balancing with Managed Kubernetes (a.k.a. the Easy Option)

If you use a managed service like a public cloud provider for Kubernetes, much of that tedious networking stuff is handled for you. With just one command (kubectl apply -f loadbalancer.yaml), the Service type LoadBalancer gives you a public IP address, DNS record, and TCP load balancer; a minimal manifest for that command is sketched below. For example, you could configure Amazon Elastic Load Balancer to distribute traffic to pods containing NGINX Ingress Controller and, using this command, have no worries when the backends change. It’s so easy, we bet you take it for granted!
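Here is a minimal, illustrative loadbalancer.yaml for that one-command case (the names, namespace, and selector labels are assumptions that must match your NGINX Ingress Controller deployment):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: LoadBalancer                  # the cloud provider provisions the external LB
  selector:
    app: nginx-ingress                # must match the Ingress Controller pods
  ports:
  - name: https
    port: 443
    targetPort: 443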

TCP Load Balancing with On-Premises Kubernetes (a.k.a. the Hard Option)

With on-premises clusters, it’s a totally different scenario. You or your networking peers must provide the networking pieces. You might wonder, “Why is getting users to my Kubernetes apps so difficult?” The answer is simple but a bit shocking: The Service type LoadBalancer, the front door to your cluster, doesn’t actually exist.

To expose your apps and Services outside the cluster, your network team probably requires tickets, approvals, procedures, and perhaps even security reviews – all before they reconfigure their equipment. Or you might need to do everything yourself, slowing the pace of application delivery to a crawl. Even worse, you dare not make changes to any Kubernetes Services, for if the NodePort changes, the traffic could get blocked! And we all know how much users like getting 500 errors. Your boss probably likes it even less.

A Better Solution for On-Premises TCP Load Balancing: NGINX Loadbalancer for Kubernetes

You can turn the “hard option” into the “easy option” with our new project: NGINX Loadbalancer for Kubernetes. This free project is a Kubernetes controller that watches NGINX Ingress Controller and automatically updates an external NGINX Plus instance configured for load balancing. Being very straightforward in design, it’s simple to install and operate. With this solution in place, you can implement TCP load balancing in on-premises environments, ensuring new apps and services are immediately detected and available for traffic – with no need to get hands on.

Architecture and Flow

NGINX Loadbalancer for Kubernetes sits inside a Kubernetes cluster. It is registered with Kubernetes to watch the nginx-ingress Service (NGINX Ingress Controller). When there is a change to the backends, NGINX Loadbalancer for Kubernetes collects the Worker IPs and the NodePort TCP port numbers, then sends the IP:ports to NGINX Plus via the NGINX Plus API. The NGINX upstream servers are updated with no reload required, and NGINX Plus load balances traffic to the correct upstream servers and Kubernetes NodePorts. Additional NGINX Plus instances can be added to achieve high availability.

Diagram of NGINX Loadbalancer in action

A Snapshot of NGINX Loadbalancer for Kubernetes in Action

In the screenshot below, there are two windows that demonstrate NGINX Loadbalancer for Kubernetes deployed and doing its job:

  1. Service Type – LoadBalancer (for nginx-ingress)
  2. External IP – Connects to the NGINX Plus servers
  3. Ports – NodePort maps to 443:30158 with matching NGINX upstream servers (as shown in the NGINX Plus real-time dashboard)
  4. Logs – Indicates NGINX Loadbalancer for Kubernetes is successfully sending data to NGINX Plus

Note: In this example, the Kubernetes worker nodes are 10.1.1.8 and 10.1.1.10.

A screenshot of NGINX Loadbalancer for Kubernetes in Action

Get Started Today

If you’re frustrated with networking challenges at the edge of your Kubernetes cluster, take the project for a spin and let us know what you think. The source code for NGINX Loadbalancer for Kubernetes is open source (under the Apache 2.0 license) with all installation instructions available on GitHub.  

To provide feedback, drop us a comment in the repo or message us in the NGINX Community Slack.

The post Automate TCP Load Balancing to On-Premises Kubernetes Services with NGINX appeared first on NGINX.

]]>
Announcing NGINX Plus R30
https://www.nginx.com/blog/nginx-plus-r30-released/
Tue, 15 Aug 2023
We’re happy to announce the availability of NGINX Plus Release 30 (R30). Based on NGINX Open Source, NGINX Plus is the only all-in-one software web server, load balancer, reverse proxy, content cache, and API gateway.

New and enhanced features in NGINX Plus R30 include:

  • Native support for QUIC+HTTP/3 – NGINX Plus now has official support for HTTP/3. The implementation does not depend on third-party libraries to provide the missing OpenSSL TLS functionality required to deliver HTTP/3 support over the QUIC protocol. It uses an OpenSSL Compatibility Layer developed by the NGINX team to circumvent the challenges with QUIC TLS interfaces that are not supported by OpenSSL.
  • Per-worker connection telemetry – Monitoring connections at a per-worker level is now supported. This enables users to fine-tune NGINX performance by regulating the number of worker processes and effectively distributing connections amongst workers for optimal performance.
  • Diagnostic package – The NGINX diagnostic package collects all data required for troubleshooting issues in a single compressed file. This improves communication between NGINX Plus users and F5 Support, increasing efficiency and reducing the turnaround time for issue resolution.

Rounding out the release are new features and bug fixes inherited from NGINX Open Source and updates to the NGINX JavaScript module.

Important Changes in Behavior

Note: If you are upgrading from a release other than NGINX Plus R29, be sure to check the Important Changes in Behavior section in previous announcement blogs for all releases between your current version and this one.

Deprecation of listen … http2 directive

The listen … http2 directive has been deprecated in NGINX 1.25.1. An NGINX configuration check using nginx -t gives a warning to that effect:

nginx -t
nginx: [warn] the "listen ... http2" directive is deprecated, use the "http2" directive instead in /etc/nginx/nginx.conf:15
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

All existing users of this directive are strongly advised to upgrade NGINX and use the http2 directive, which enables HTTP/2 on a per-server basis.

Change this:

listen 443 ssl http2;

To this:

listen 443 ssl;
http2 on;

Nonavailability of GeoIP2 Module on Amazon Linux 2

Previous versions of NGINX Plus used the “libmaxminddb” library from the Amazon Linux 2 EPEL repository to build the GeoIP2 module. The EPEL repository no longer provides this library, nor is it accessible natively from the Amazon Linux 2 distribution. Therefore, the module is no longer available in NGINX Plus R30 as there is no feasible way to build it for Amazon Linux 2.

Changes to MQTT Directives

The mqtt_rewrite_buffer_size directive, which is used for specifying the size of buffer to construct MQTT messages, has been superseded by the mqtt_buffers directive. The new directive allows for specifying the number of buffers that can be allocated per connection, along with specifying the size of each buffer.
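As an illustrative sketch (the values and addresses are placeholders, not recommendations), the new directive is used in the stream context like this:

stream {
    upstream mqtt_backend {
        server 10.0.0.7:1883;            # placeholder broker address
    }

    server {
        listen 1883;
        mqtt on;
        mqtt_buffers 16 4k;              # up to 16 buffers of 4k each per connection
        proxy_pass mqtt_backend;
    }
}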

Updated API Version

The version number of the NGINX Plus API has been updated from 8 to 9 to reflect the addition of the per-worker metrics described in Per-Worker Connection Telemetry. Previous version numbers still work, but the output doesn’t include metrics added in later API versions.

Changes to Platform Support

New operating systems supported:

  • Debian 12
  • Alpine 3.18

Older operating systems removed:

  • Alpine 3.14, which reached end-of-life (EOL) on May 1, 2023
  • Ubuntu 18.04, which reached EOL on April 26, 2023

Older operating systems deprecated and scheduled for removal in NGINX Plus R31:

  • Alpine 3.15, which will reach EOL in November 2023

New Features in Detail

Native Support for QUIC+HTTP/3

HTTP/3 over QUIC has been a highly anticipated feature requested by many of our enterprise customers, and we are delighted to officially introduce it in NGINX Plus R30. This is a new technology and implementation that we will continue to focus on in future releases. We advise NGINX Plus users to first try it out in a non-production environment and share any valuable feedback with us.

NGINX Plus relies on OpenSSL for secure communication and cryptographic functionality, making use of the SSL/TLS libraries that ship with operating systems. However, because QUIC’s TLS interfaces are not supported by OpenSSL at the time of this release, third-party libraries are needed to provide for the missing TLS functionality required by HTTP/3.

To address this concern, the NGINX team developed an OpenSSL Compatibility Layer, removing the need to build and ship third-party TLS libraries like quictls, BoringSSL, and LibreSSL. This helps manage the end-to-end QUIC+HTTP/3 experience in NGINX without the burden of a custom TLS implementation nor the dependency on schedules and roadmaps of third-party libraries. We plan to enhance the OpenSSL Compatibility Layer in future releases with more features and options, such as support for 0-RTT.

Here is the QUIC+HTTP/3 configuration:

http {
    log_format quic '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" "$http3"';

    access_log logs/access.log quic;

    server {
        # for better compatibility it's recommended
        # to use the same port for quic and https
        listen 8443 quic reuseport;
        listen 8443 ssl;

        ssl_certificate     certs/example.com.crt;
        ssl_certificate_key certs/example.com.key;

        location / {
            # required for browsers to direct them into quic port
            add_header Alt-Svc 'h3=":8443"; ma=86400';
        }
    }
}
The QUIC+HTTP/3 support in NGINX Plus R30 is available as a single binary – unlike the experimental HTTP/3 support introduced in NGINX Plus R29, which was delivered as a separate binary. This improvement makes it easier to deploy the functionality in your environment.

Note: With NGINX Plus R30, we’re ending support and updates for the standalone QUIC binary and plan to remove it as a download option later this year.

Per-Worker Connection Telemetry

NGINX Plus users are now able to monitor total connections per worker process, making it easier to tune the worker_connections directive appropriately. This improvement gives you better visibility into how connections are distributed among workers and helps you assess whether your NGINX deployment is sized correctly.

The per-worker connection metrics are available over the NGINX Plus REST API. To retrieve metrics for all workers, use the …/api/9/workers endpoint.

To retrieve per-worker connection metrics for an individual worker, use the .../api/9/workers/<worker id> endpoint. Worker IDs are 0-based.
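
If the NGINX Plus API is not already enabled in your configuration, a minimal sketch like the following exposes it (the listen port and location path here are assumptions; restrict access appropriately in production):

server {
    listen 8080;

    location /api {
        # read-only access is sufficient for retrieving worker metrics
        api write=off;
    }
}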

Here is a sample response:

[
      {
          "id": 0,
          "pid": 2346,
          "connections": {
              "accepted": 1,
              "dropped": 0,
              "active": 1,
              "idle": 0
          },
          "http": {
              "requests": {
                  "total": 15,
                  "current": 1
              }
          }
      },
      {
          "id": 1,
          "pid": 1234,
          "connections": {
              "accepted": 3,
              "dropped": 0,
              "active": 1,
              "idle": 0
          },
          "http": {
              "requests": {
                  "total": 15,
                  "current": 1
              }
          }
      },
    ...
]

The per-worker connection metrics are available in the NGINX Plus Live Activity Monitoring Dashboard, as shown below. Access a live demo of this feature at demo.nginx.com.

NGINX Plus Live Activity Monitoring Dashboard showing per-worker connection and request metrics

The dashboard shows the information below about NGINX Plus connections and requests.

Connections:

  • Accepted connections per worker
  • Active connections per worker
  • Idle connections per worker
  • Dropped connections per worker

Requests:

  • Current requests per worker
  • Total requests per worker
  • Requests/sec per worker

Diagnostic Package

To reduce turnaround time for issue resolution, the diagnostic package streamlines the process of collecting the data required to troubleshoot issues in your NGINX environment. The diagnostic package also helps avoid discrepancies and delays associated with the manual requesting and collecting of information needed to troubleshoot issues, making the interaction between NGINX Plus customers and F5 Support more efficient.

The diagnostic package collects:

  • NGINX information – NGINX Plus version, configs, process information, third-party modules, logs, and API stats and endpoints
  • System information – Host commands (ps, lsof, vmstat, etc.)
  • Service information – systemd, etc.
  • NGINX Agent – Logs and configs (if present)
  • NGINX App Protect – Logs and configs (if present)
  • Support package log – Log containing a list of all files collected

Our goal with the addition of the diagnostic package is to be transparent with users about what commands the script within the package runs and what data is being collected. Refer to the NGINX Plus Diagnostic Package page for more information.

Note: The diagnostic package is being announced as part of the NGINX Plus R30 release; however, it is not actually release dependent. Going forward, we plan to update it based on feedback from you and F5 Support, with the intent of improving the troubleshooting data collection process.

Other Enhancements in NGINX Plus R30

MQTT Optimizations

With memory consumption improvements made to the Message Queuing Telemetry Transport (MQTT) filter module, there is now a 4-5x increase in throughput.

The directive mqtt_rewrite_buffer_size has been removed. In its place, the new directive mqtt_buffers <num> <size> specifies how many buffers the module may allocate per connection, along with the size of each buffer. The defaults are 100 buffers of 1024 bytes each, equivalent to mqtt_buffers 100 1k.
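
As a minimal sketch, the defaults are equivalent to setting the directive explicitly in a stream proxy (the listener port and broker address are assumptions):

stream {
    server {
        listen 1883;
        mqtt on;              # enable MQTT message processing
        mqtt_buffers 100 1k;  # explicit defaults: 100 buffers of 1k each, per connection
        proxy_pass 10.0.0.8:1883;
    }
}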

DNS Reload Optimizations

NGINX Plus now preserves DNS name expiry times for dynamically resolved upstream hosts across reloads, removing the need for re-resolution on configuration reload. Before this update, DNS resolutions were triggered for all upstreams. With this update, NGINX preserves DNS resolutions and expiry times for all upstreams and triggers DNS resolutions only for new or changed upstreams on reload.

This optimization is most impactful for NGINX environments containing a large number of upstream hosts; if your configuration includes 100 or more, the improvement will be most evident.
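
The optimization applies to upstream servers that NGINX resolves dynamically, as in this minimal sketch (the hostname and resolver address are assumptions):

http {
    resolver 10.0.0.2 valid=30s;           # DNS server used for re-resolution

    upstream backend {
        zone backend 64k;                  # shared memory zone, required for resolve
        server api.example.com resolve;    # resolutions and expiry times now survive reloads
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}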

Changes Inherited from NGINX Open Source

NGINX Plus R30 is based on NGINX Open Source 1.25.1 and inherits functional changes, features, and bug fixes made since NGINX Plus R29 was released (in NGINX 1.25.0 and 1.25.1).

Changes

  • HTTP/2 server push support has been removed. HTTP/2 server push saw minimal adoption and could only be used in very limited use cases. (Per data shared at IETF 102, it was used in just 0.04% of sessions. Per RFC 9113, it was “difficult to use effectively.”) HTTP/2 server push was disabled in Chrome version 106. As part of this change, the http2_push, http2_push_preload, and http2_max_concurrent_pushes directives have been made obsolete.
  • The deprecated ssl directive is no longer supported. The ssl directive was deprecated in NGINX 1.15.0 and replaced by the ssl parameter of the listen directive. The deprecated ssl directive has now been removed.
  • As mentioned above, the listen … http2 directive is deprecated. Users are advised to use the http2 directive instead.
    • For SSL connections with OpenSSL v1.0.2h or higher, if the HTTP/2 protocol is enabled in the virtual server chosen by Server Name Indication (SNI), it is automatically selected by the Application Layer Protocol Negotiation (ALPN) callback.
    • For older versions of OpenSSL, the HTTP/2 protocol is enabled based on the default virtual server configuration.
    • For plain TCP connections, HTTP/2 is now auto-detected by the HTTP/2 preface if it is enabled in the default virtual server. If the preface does not match, HTTP/0.9-1.1 is assumed.
  • Support is added for HTTP/2 over Cleartext TCP (h2c) and HTTP/1.1 on the same listening socket. In the previous implementation, if a user configured an h2c listening socket (e.g., listen port_num http2), only HTTP/2 connections could be created. An h2 listening socket supports both HTTP/1.1 and HTTP/2 with protocol negotiation via ALPN, but HTTP/1.1 clients failed on an h2c socket, preventing the use of HTTP Upgrade as a means of negotiating the protocol. This change allows simultaneous support of HTTP/1.1 and HTTP/2 on a plain TCP socket when the http2 directive is enabled (see the sketch after this list).
  • Avoidance of possible buffer overrun with some $sent_http_* is enabled. A defect in the logic for evaluating multi-header $sent_http_ variables led to potential buffer overruns when certain elements were cleared but remained in the linked list. This issue manifested when third-party modules were used to override multi-header values. The update introduces refined boundary checks, ensuring safer handling and evaluation of these variables.
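
As a minimal sketch of the h2c change described above (the port and response body are assumptions), a single cleartext listener can now serve both HTTP/1.1 and HTTP/2:

server {
    listen 8080;   # plain TCP socket, no TLS
    http2 on;      # HTTP/2 clients are detected by the connection preface;
                   # all other clients are served as HTTP/1.x

    location / {
        return 200 "protocol: $server_protocol\n";
    }
}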

Features

  • Full HTTP/3 support is added. NGINX 1.25.0 mainline version introduced support for HTTP/3, and this support has been merged into NGINX Plus R30. The NGINX Plus R30 implementation has the following changes when compared to the experimental packages delivered in NGINX Plus R29:
    • Removed quic_mtu directive
    • Removed http3 parameter of listen directive
    • Removed QUIC support from the stream module
    • Removed HTTP/3 server push
    • Fixed building the OpenSSL Compatibility Layer with OpenSSL 3.2+

Bug Fix

  • Fixed segfault if a regular expression (regex) studies list allocation fails.

For the full list of new changes, features, bug fixes, and workarounds inherited from recent releases, see the NGINX CHANGES file.

Changes to the NGINX JavaScript Module

NGINX Plus R30 incorporates changes from the NGINX JavaScript (njs) module version 0.8.0.

Features

  • Introduced global NGINX properties:
    ngx.build, ngx.conf_file_path, ngx.error_log_path, ngx.prefix, ngx.version, ngx.version_number, and ngx.worker_id.
  • Introduced the js_shared_dict_zone directive for http and stream that allows declaring a dictionary shared between worker processes (see the configuration sketch after this list).
  • Added ES13-compliant Array methods: Array.from(), Array.prototype.toSorted(), Array.prototype.toSpliced(), Array.prototype.toReversed().
  • Added ES13-compliant TypedArray methods: %TypedArray%.prototype.toSorted(), %TypedArray%.prototype.toSpliced(), %TypedArray%.prototype.toReversed().
  • Added CryptoKey properties in WebCrypto API. These properties were added: algorithm, extractable, type, usages.
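
As a configuration-level sketch of the js_shared_dict_zone feature mentioned above (the zone name and size are assumptions), a dictionary declared in the http context becomes reachable from njs code in every worker process as ngx.shared.kv:

load_module modules/ngx_http_js_module.so;   # njs is loaded as a dynamic module

http {
    js_shared_dict_zone zone=kv:1m;   # 1 MB key-value store shared across workers
}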

Changes

  • Removed special treatment of forbidden headers in the Fetch API introduced in 0.7.10.
  • Removed r.requestBody() from the http module, which was deprecated in version 0.5.0. The r.requestBuffer or r.requestText property should be used instead.
  • Removed r.responseBody() from the http module which was deprecated in version 0.5.0. The r.responseBuffer or r.responseText property should be used instead.
  • r.internalRedirect() now throws an exception when called while filtering in the http module.
  • Native methods now receive a retval argument. This change breaks compatibility with C extensions for njs, which must be modified accordingly.
  • Non-compliant deprecated String methods were removed. The following methods were removed: String.bytesFrom(), String.prototype.fromBytes(), String.prototype.fromUTF8(), String.prototype.toBytes(), String.prototype.toUTF8(), String.prototype.toString(encoding).
  • Removed support for building with GNU readline.

Bug Fixes

  • Fixed r.status setter when filtering in http module.
  • Fixed setting of Location header in http module.
  • Fixed retval of crypto.getRandomValues().
  • Fixed evaluation of computed property names with function expressions.
  • Fixed implicit name for a function expression declared in arrays.
  • Fixed parsing of for-in loops.
  • Fixed Date.parse() with ISO-8601 format and UTC time offset.

For a comprehensive list of all the features, changes, and bug fixes, see the njs Changes log.

Upgrade or Try NGINX Plus

If you’re running NGINX Plus, we strongly encourage you to upgrade to NGINX Plus R30 as soon as possible. In addition to all the great new features, you’ll pick up several additional fixes and improvements, and staying up to date makes it easier for NGINX to assist you if you need to raise a support ticket.

If you haven’t tried NGINX Plus, we encourage you to check it out. You can use it for security, load balancing, and API gateway use cases, or as a fully supported web server with enhanced monitoring and management APIs. Get started today with a free 30-day trial.

Announcing the Open Source Subscription by F5 NGINX https://www.nginx.com/blog/announcing-open-source-subscription-f5-nginx/ Wed, 14 Jun 2023 15:01:00 +0000

As a reader of the NGINX blog, you’ve likely already gathered that NGINX Open Source is pretty popular. But it isn’t just because it’s free (though that’s nice, too!) – NGINX Open Source is so popular because it’s known for being stable, lightweight, and the developer’s Swiss Army Knife™.

Tweet screenshot: "Ok world. What say you? Favorite webserver? @nginx , Apache or are you using @caddyserver ?" and the response "Nothing compares to nginx. Used it yesterday to emergency fix a problem by reverse proxying in a handful of lines of config. Swiss army knife of hosting software."

Whether you need a web server, reverse proxy, API gateway, Ingress controller, or cache, NGINX (which is lightweight enough to be installed from a floppy disk) has your back. But there’s one thing NGINX Open Source users have told us is missing: Enterprise support. So, that (and more) is what we’re excited to introduce with the new Open Source Subscription!

What Is the Open Source Subscription?

The Open Source Subscription is a new bundle that includes:

  • Enterprise support for NGINX Open Source
  • Access to NGINX Plus at no added cost
  • NGINX Management Suite Instance Manager

Enterprise Support for NGINX Open Source

NGINX Open Source has a reputation for reliability and the community provides fantastic support, but sometimes more is necessary. With the Open Source Subscription, F5 adds enterprise support to NGINX Open Source, including:

  • SLA options of business hours or 24/7
  • Security patches and bug fixes
  • Security notifications
  • Debugging and error correction
  • Clarification of documentation discrepancies

Next, let’s dive into some of the benefits of having enterprise support.

Timely Patches and Fixes

A common challenge with any open source software (OSS) is the time it can take to address Common Vulnerabilities and Exposures (CVEs) and bugs. In fact, we’ve seen forks of NGINX Open Source take weeks, or even months, to patch. For example, on October 19, 2022, we announced fixes to CVE-2022-41741 and CVE-2022-41742, but the corresponding Ubuntu and Debian patches weren’t made available until November 15, 2022.

As a customer of the Open Source Subscription, you’ll get immediate access to patches and fixes, proactive notifications of CVEs, and more, including:

  • Security patches in the latest mainline and stable releases
  • Critical bug fixes in the latest mainline release
  • Non-critical bug fixes in the latest or a future mainline release

Regulatory Compliance

An increasing number of companies and governments are concerned about software supply chain issues, with many adhering to the practice of building a software bill of materials (SBOM). As the SBOM concept matures, regulators are starting to require patching "on a reasonably justified regular cycle", with timely patches for serious vulnerabilities found outside of the normal patch cycle.

With the Open Source Subscription, you can ensure that your NGINX Open Source instances meet your organization’s OSS requirements by demonstrating due diligence, traceability, and compliance with relevant regulations, especially with regard to security.

Confidentiality

Getting good support requires sharing configuration files. However, if you’re sharing configs with a community member or in forums, then you’re exposing your organization to security vulnerabilities (or even breaches). Just one simple piece of NGINX code shared on Stack Overflow could offer bad actors insight into how to exploit your apps or architecture.

The Open Source Subscription grants you direct access to F5’s team of security experts, so you can be assured that your configs stay confidential. To learn more, see the NGINX Open Source Support Policy.

Note: The Open Source Subscription includes support for Linux packages of NGINX Open Source stable and mainline versions obtained directly from NGINX. We are exploring how we might be able to support packages customized and distributed by other vendors, so tell us in the comments which distros are important to you!

Enterprise Features Via Automatic Access to NGINX Plus

With the Open Source Subscription, you get access to NGINX Plus at no added cost. The subscription lets you choose when to use NGINX Open Source or NGINX Plus based on your business needs.

NGINX Open Source is perfect for many app delivery use cases, and is particularly outstanding for web serving, content caching, and basic traffic management. And while you can extend NGINX Open Source for other use cases, this can result in stability and latency issues. For example, it’s common to use Lua scripts to detect endpoint changes (where the Lua handler chooses which upstream service to route requests to, thus eliminating the need to reload the NGINX configuration). However, Lua must continuously check for changes, so it ends up consuming resources which, in turn, increases the processing time of incoming requests. In addition to causing timeouts, this also results in complexity and higher resource costs.
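
NGINX Plus, by contrast, supports dynamic reconfiguration of upstream groups natively through its REST API, with no reloads and no Lua polling. Here is a minimal sketch (the addresses, port, and zone size are assumptions):

http {
    upstream backend {
        zone backend 64k;    # shared memory zone makes the group editable at runtime
        server 10.0.0.10:80;
    }

    server {
        listen 8080;

        location /api {
            # write access enables adding and removing servers via
            # .../http/upstreams/backend/servers without a reload
            api write=on;
        }

        location / {
            proxy_pass http://backend;
        }
    }
}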

NGINX Plus can handle advanced use cases and provides out-of-the-box capabilities for load balancing, API gateway, Ingress controller, and more. Many customers choose NGINX Plus for business-critical apps and APIs that have stringent requirements related to uptime, availability, security, and identity.

Maintain Uptime and Availability at Scale

Uptime and availability are crucial to mission-critical apps and APIs because your customers (both internal and external) are directly impacted by any problems that arise when scaling up.

NGINX Plus gives you the tools to keep these apps and APIs up and available as demand grows.

Improve Security and Identity Management

By building non-functional requirements into your traffic management strategy, you can offload those requirements from your apps. This reduces errors and frees up developers to work on core requirements.

With NGINX Plus, you can enhance security by:

  • Using JWT authentication, OpenID Connect (OIDC), and SAML to centralize authentication and authorization at the load balancer, API gateway, or Ingress controller (see the sketch after this list)
  • Enforcing end-to-end encryption and certificate management with SSL/TLS offloading and SSL termination
  • Enabling FIPS 140-2 for the processing of all SSL/TLS and HTTP/2 traffic
  • Implementing PCI DSS best practices for protecting consumers’ credit card numbers and other personal data
  • Adding NGINX App Protect for Layer 7 WAF and denial-of-service (DoS) protection
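
For example, here is a minimal sketch of JWT validation offloaded to NGINX Plus (the paths, realm, and upstream address are assumptions):

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;

    location /products/ {
        auth_jwt "products API";               # realm returned in WWW-Authenticate on a 401
        auth_jwt_key_file /etc/nginx/api.jwk;  # JSON Web Key set used to validate signatures
        proxy_pass http://10.0.0.10:8080;
    }
}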

Fleet Management with Instance Manager

Administration of an NGINX fleet at scale can be difficult. With NGINX Open Source, you might have hundreds of instances (maybe even thousands!) at your organization, which can introduce a lot of complexity and risk related to CVEs, configuration issues, and expired certificates. That’s why the Open Source Subscription includes NGINX Management Suite Instance Manager, which enables you to centrally inventory all of your NGINX Open Source, NGINX Plus, and NGINX App Protect WAF instances so you can configure, secure, and monitor your NGINX fleet with ease.

Diagram showing how NGINX Instance Manager manages your fleet of NGINX Open Source, Plus, and App Protect WAF

Understand Your NGINX Estate

With Instance Manager you can get an accurate count of your instances in any environment, including Kubernetes. Instance Manager allows you to:

  • Inventory instances and discover software versions with potential CVE exposures
  • Learn about configuration problems and resolve them with a built-in editor that leverages best practice recommendations
  • Visualize protection insights, analyze possible threats, and identify opportunities for tuning your WAF policies with Security Monitoring

Manage Certificates

Expired certificates have become a notorious cause of breaches. Use Instance Manager to ensure secure communication between NGINX instances and their clients. With Instance Manager, you can track, manage, and deploy SSL/TLS certificates on all of your instances (including finding and updating expiring certificates) and rotate encryption keys regularly (or whenever a key has been compromised).

Simplify Visibility

The amount of data you can get from NGINX instances can be staggering. To help you get the most out of that data and your third-party tools, Instance Manager provides events and metrics data, helping you collect valuable NGINX metrics and forward them to commonly used monitoring, visibility, and alerting tools via API. In addition, you get unique, curated insights into the protection of your apps and APIs, such as when NGINX App Protect is added.

Get Started with the Open Source Subscription

If you’re interested in getting started with the new Open Source Subscription, contact us today to discuss your use cases.


Optimizing MQTT Deployments in Enterprise Environments with NGINX Plus https://www.nginx.com/blog/optimizing-mqtt-deployments-in-enterprise-environments-nginx-plus/ Tue, 06 Jun 2023 15:01:38 +0000

When announcing the R29 release of NGINX Plus, we briefly covered its new native support for parsing MQTT messages. In this post, we’ll build on that and discuss how NGINX Plus can be configured to optimize MQTT deployments in enterprise environments.

What Is MQTT?

MQTT stands for Message Queuing Telemetry Transport. It’s a very popular, lightweight publish-subscribe messaging protocol, ideal for connecting Internet of Things (IoT) or machine-to-machine (M2M) devices and applications over the internet. MQTT is designed to operate efficiently in low-bandwidth or low-power environments, making it an ideal choice for applications with a large number of remote clients. It’s used in a variety of industries, including consumer electronics, automotive, transportation, manufacturing, and healthcare.

NGINX Plus MQTT Message Processing

NGINX Plus R29 supports MQTT 3.1.1 and MQTT 5.0. It acts as a proxy between clients and brokers, offloading tasks from core systems, simplifying scalability, and reducing compute costs. Specifically, NGINX Plus parses and rewrites portions of MQTT CONNECT messages, enabling features like:

  • MQTT broker load balancing 
  • Session persistence (reconnecting clients to the same broker) 
  • SSL/TLS termination 
  • Client certificate authentication 

MQTT message processing directives must be defined in the stream context of an NGINX configuration file and are provided by the ngx_stream_mqtt_preread_module and ngx_stream_mqtt_filter_module modules.

The preread module processes MQTT data prior to NGINX’s internal proxying, allowing load balancing and upstream routing decisions to be made based on parsed message data.

The filter module enables rewriting of the clientid, username, and password fields within received CONNECT messages. The ability to set these fields to variables and complex values expands configuration options significantly, enabling NGINX Plus to mask sensitive device information or insert data like a TLS certificate distinguished name.

MQTT Directives and Variables

Several new directives and embedded variables are now available for tuning your NGINX configuration to optimize MQTT deployments and meet your specific needs.

Preread Module Directives and Embedded Variables

  • mqtt_preread – Enables MQTT parsing, extracting the clientid and username fields from CONNECT messages sent by client devices. These values are made available via embedded variables and help hash sessions to load balanced upstream servers (examples below).
  • $mqtt_preread_clientid – Represents the MQTT client identifier sent by the device.
  • $mqtt_preread_username – Represents the username sent by the client for authentication purposes.

Filter Module Directives

  • mqtt – Defines whether MQTT rewriting is enabled.
  • mqtt_buffers – Overrides the maximum number of MQTT processing buffers that can be allocated per connection and the size of each buffer. By default, NGINX will impose a limit of 100 buffers per connection, each 1k in length. Typically, this is optimal for performance, but may require tuning in special situations. For example, longer MQTT messages require a larger buffer size. Systems processing a larger volume of MQTT messages for a given connection within a short period of time may benefit from an increased number of buffers. In most cases, tuning buffer parameters has little bearing on underlying system performance, as NGINX constructs buffers from an internal memory pool.
  • mqtt_rewrite_buffer_size – Specifies the size of the buffer used for constructing MQTT messages. This directive has been deprecated and is obsolete since NGINX Plus R30.
  • mqtt_set_connect – Rewrites parameters of the CONNECT message sent from a client. Supported parameters include: clientid, username, and password.

MQTT Examples

Let’s explore the benefits of processing MQTT messages with NGINX Plus and the associated best practices in more detail. Note that we use ports 1883 and 8883 in the examples below. Port 1883 is the default unsecured MQTT port, while 8883 is the default SSL/TLS encrypted port.

MQTT Broker Load Balancing

The ephemeral nature of MQTT devices may cause client IPs to change unexpectedly. This can create challenges when routing device connections to the correct upstream broker. The subsequent movement of device connections from one upstream broker to another can result in expensive syncing operations between brokers, adding latency and cost.

By parsing the clientid field in an MQTT CONNECT message, NGINX can establish sticky sessions to upstream service brokers. This is achieved by using the clientid as a hash key for maintaining connections to broker services on the backend.

In this example, we proxy MQTT device data using the clientid as a token for establishing sticky sessions to three upstream brokers. We use the consistent parameter so that if an upstream server fails, its share of the traffic is evenly distributed across the remaining servers without affecting sessions that are already established on those servers.

stream {
    mqtt_preread on;

    upstream backend {
        zone tcp_mem 64k;
        hash $mqtt_preread_clientid consistent;

        server 10.0.0.7:1883; # upstream mqtt broker 1
        server 10.0.0.8:1883; # upstream mqtt broker 2
        server 10.0.0.9:1883; # upstream mqtt broker 3
    }

    server {
        listen 1883;
        proxy_pass backend;
        proxy_connect_timeout 1s;
    }
}

NGINX Plus can also parse the username field of an MQTT CONNECT message. For more details, see the ngx_stream_mqtt_preread_module specification.

SSL/TLS Termination

Encrypting device communications is key to ensuring data confidentiality and protecting against man-in-the-middle attacks. However, TLS handshaking, encryption, and decryption can be a resource burden on an MQTT broker. To solve this, NGINX Plus can offload data encryption from a broker (or a cluster of brokers), simplifying security rules and allowing brokers to focus on processing device messages. 

In this example, we show how NGINX can be used to proxy TLS-encrypted MQTT traffic from devices to a backend broker. The ssl_session_cache directive defines a 5-megabyte cache, which is enough to store approximately 20,000 SSL sessions. NGINX will attempt to reach the proxied broker for five seconds before timing out, as defined by the proxy_connect_timeout directive.

stream {
    server {
        listen 8883 ssl;
        ssl_certificate /etc/nginx/certs/tls-cert.crt;
        ssl_certificate_key /etc/nginx/certs/tls-key.key;
        ssl_session_cache shared:SSL:5m;
        proxy_pass 10.0.0.8:1883;
        proxy_connect_timeout 5s;
    }
}

Client ID Substitution

For security reasons, you may opt to not store client-identifiable information in the MQTT broker’s database. For example, a device may send a serial number or other sensitive data as part of an MQTT CONNECT message. By replacing a device’s identifier with other known static values received from a client, an alternate unique key can be established for every device attempting to reach NGINX Plus proxied brokers.

In this example, we extract a unique identifier from a device’s client SSL certificate and use it to mask its MQTT client ID. Client certificate authentication (mutual TLS) is controlled with the ssl_verify_client directive. When set to on, NGINX ensures that client certificates are signed by a trusted Certificate Authority (CA). The list of trusted CA certificates is defined by the ssl_client_certificate directive.

stream {
    mqtt on;

    server {
        listen 8883 ssl;
        ssl_certificate /etc/nginx/certs/tls-cert.crt;
        ssl_certificate_key /etc/nginx/certs/tls-key.key;
        ssl_client_certificate /etc/nginx/certs/client-ca.crt;
        ssl_session_cache shared:SSL:10m;
        ssl_verify_client on;
        proxy_pass 10.0.0.8:1883;
        proxy_connect_timeout 1s;

        mqtt_set_connect clientid $ssl_client_serial;
    }
}

Client Certificate as an Authentication Credential

One common approach to authenticating MQTT clients is to use data stored in a client certificate as the username. NGINX Plus can parse client certificates and rewrite the MQTT username field, offloading this task from backend brokers. In the following example, we extract the client certificate’s Subject Distinguished Name (Subject DN) and copy it to the username portion of an MQTT CONNECT message.

stream {
    mqtt on;

    server {
        listen 8883 ssl;
        ssl_certificate /etc/nginx/certs/tls-cert.crt;
        ssl_certificate_key /etc/nginx/certs/tls-key.key;
        ssl_client_certificate /etc/nginx/certs/client-ca.crt;
        ssl_session_cache shared:SSL:10m;
        ssl_verify_client on;
        proxy_pass 10.0.0.8:1883;
        proxy_connect_timeout 1s;

        mqtt_set_connect username $ssl_client_s_dn;
    }
}

For a complete specification on NGINX Plus MQTT CONNECT message rewriting, see the ngx_stream_mqtt_filter_module specification.

Get Started Today

Future developments to MQTT in NGINX Plus may include parsing of other MQTT message types, as well as deeper parsing of the CONNECT message to enable functions like:

  • Additional authentication and access control mechanisms
  • Protecting brokers by rate limiting “chatty” clients
  • Message telemetry and connection metrics

If you’re new to NGINX Plus, sign up for a free 30-day trial to get started with MQTT. We would also love to hear your feedback on the features that matter most to you. Let us know what you think in the comments.
