Events Archives - NGINX

Building the Next NGINX Experience, Designed for the Reality of Modern Apps
As the VP of Product at NGINX, I speak frequently with customers and users. Whether you’re a Platform Ops team, Kubernetes architect, application developer, CISO, CIO, or CTO – I’ve talked to someone like you. In our conversations, you gave me your honest thoughts about NGINX, including our products, pricing and licensing models, highlighting both our strengths and weaknesses.

The core thing we learned is that our “NGINX is the center of the universe” approach does not serve our users well. We had been building products that aimed to make NGINX the “platform” – the unified management plane for everything related to application deployment. We knew that some of our previous products geared toward that goal had seen only light use and adoption. You told us that NGINX is a mission-critical component of your existing platform, homegrown or otherwise, but that NGINX is not the platform itself. Therefore, we needed to integrate better with the rest of your components and make it easier to deploy, manage, and secure our products, with (and this is important) transparent pricing and consumption models. And to make it all possible via API, of course.

The underlying message was straightforward: make it easier for you to integrate NGINX into your workflows, existing toolchains, and processes in an unopinionated manner. We heard you. In 2024, we will take a much more flexible, simple, repeatable, and scalable approach to use-case configuration and management for the data plane and security.

That desire makes complete sense. Your world has changed and continues to change! You have transitioned through various stages, moving from cloud to hybrid to multi-cloud and multi-cloud-hybrid setups. There have also been shifts from VMs to Kubernetes, and from APIs to microservices and serverless. Many of you have shifted left, and that has added complexity. More teams have more tools that require more management, observability, and robust security – all powering apps that must be able to scale out in minutes, not hours, days, or weeks. And the latest accelerant, artificial intelligence (AI), puts significant pressure on legacy application and infrastructure architectures.

What We Plan to Address in Upcoming NGINX Product Releases

While the bones of NGINX products have always been rock solid, battle-tested, and performant, the way our users could consume, manage, and observe all aspects of NGINX didn’t keep up with the times. We are moving quickly to remedy that with a new product launch and a slew of new capabilities. We will announce more at F5’s AppWorld 2024 conference, happening February 6 through 8. Here are the specific pain points we plan to address in upcoming product releases.

Pain Point #1: Modern Apps Are Challenging to Manage Due to the Diversity of Deployment Environments

Today, CIOs and CTOs can pick from a wide variety of application deployment modalities. This is a blessing because it enables far more choice in terms of performance, capabilities, and resilience. It’s also a curse because diversity leads to complexity and sprawl. For example, managing applications running in AWS requires different configurations, tools, and tribal knowledge than managing applications in Azure Cloud.

While containers have standardized large swathes of application deployment, everything below containers (and the traffic going in and out of them) remains differentiated. As the de facto container orchestration platform, Kubernetes was supposed to clean that up. But anyone who has deployed on Amazon EKS, Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE) can tell you – they’re not at all alike.

You have told us that managing NGINX products across this huge diversity of environments requires significant operational resources and leads to waste. And, frankly, pricing models based on annual licenses collapse in dynamic environments where you might launch an app in a serverless environment, scale it up in a Kubernetes cluster, and maintain a small internal deployment in the cloud for development purposes.

Pain Point #2: Apps Running in Many Environments and Spanning License Types Are Challenging to Secure

The complexity of diverse environments can make it difficult to discover and monitor where modern apps are deployed and then apply the right security measures. Maybe you deployed NGINX Plus as your global load balancer and NGINX Open Source for various microservices, with each running in different clouds or on top of different types of applications, and each with different requirements for privacy, data protection, and traffic management.

Each permutation adds a new security twist. There is no standard, comprehensive solution, and that injects operational complexity and the potential for configuration errors. Admittedly, we’ve added to that complexity by making it confusing which types of security can be applied to which NGINX solutions.

We understand. Customers need a single way to secure all applications that leverage NGINX. This unified security solution must cover the vast majority of use cases and deploy the same tools, dashboards, and operational processes across all cloud, on-prem, serverless, and other environments. We also recognize the importance of moving toward a more intelligent security approach, leveraging the collective intelligence of the NGINX community and the unprecedented view of global traffic that we are fortunate to have.

Pain Point #3: Managing the Cost of Modern Apps Is Complex and Results in Waste

In a shift-left world, every organization wants to empower developers and practitioners to do their jobs better, without filing a ticket or sending a Slack message. The reality has been different. Some marginal abstraction of complexity has been achieved with Kubernetes, serverless, and other mechanisms for managing distributed applications and applications spanning on-prem, cloud, and multi-cloud environments. But this progress has largely been confined inside the container and application. It has not translated well to the layers around applications – networking, security, and observability – nor to CI/CD.

I have hinted at these issues in the previous pain points, but the bottom line is this: complexity carries significant costs in hours and toil, compromised security, and reduced resilience. Maintaining increasingly complex systems is fundamentally challenging and resource intensive. Pricing and license complexity adds another unhappy layer. NGINX has never been a “true-up” company that sticks it to users when they mistakenly overconsume.

But in a world of SaaS, APIs, and microservices, you want to pay as you go and not pay by the year, nor by the seat or site license. You want an easy-to-understand pricing model based on consumption, for all NGINX products and services, across your entire technology infrastructure and application portfolio. You also want a way to incorporate support and security for any open source modules that your teams run, paying for just the bits that you want.

This will require some shifts in how NGINX packages and prices products. The ultimate solution must deliver simplicity, transparency, and pay-for-what-you-consume pricing, just like any other SaaS. We hear you, and we have something great in store that will address all three of the pain points above.

Join Us at App World 2024

We will be talking about these exciting updates at AppWorld 2024 and will be rolling out pieces of the solution as part of our longer-term plan and roadmap over the next twelve months.

Join me on this journey and tune in to AppWorld for a full breakdown of what’s in store. Early bird pricing is available through January 21. Please check out the AppWorld 2024 registration page for further details. You’re also invited to join NGINX leaders and other members of the community on the night of February 6 at the San Jose F5 office for an evening of looking ahead to the future of NGINX, connecting with the community, and indulging in the classics: pizza and swag! See the event page for registration and details.

We hope to see you next month in San Jose!

Join F5 NGINX at AppWorld 2024
Happy new year! With every turn of the annual calendar comes a sense of renewal. This year, we’re feeling especially invigorated because 2024 is already shaping up to be a monumental year for F5 – and for F5 NGINX in particular. To kick it off, we want to invite you to join us for AppWorld 2024 at the McEnery Convention Center in San Jose, CA from February 6 through 8.

What Is AppWorld?

AppWorld is the reinvention of F5’s flagship event, formerly known as Agility. It’s hard to believe, but it has been more than five years since we last hosted an event in person, and so much has changed in that timespan. While we carried on with Agility as a virtual event in the interim, we recognize that there is no virtual substitute for the deep learning and rich experience that comes from a dynamic in-person gathering.

Attendees of AppWorld 2024 will have more than sixty sessions to choose from, and you can tailor an agenda that interests you and fits the needs of your organization. We will offer:

  • F5 leadership keynote sessions featuring our customers and partners
  • Breakout sessions and labs on specific topics of technical learning
  • Ample opportunities to connect and socialize with fellow attendees and F5 employees

Join us at the event to discover what is possible with F5 and NGINX’s suite of solutions. F5 is the only company that can secure every app and API everywhere, and AppWorld will provide the opportunity for you to learn firsthand about F5’s plans and how NGINX complements them.

Exciting Announcement for NGINX

What we are most excited about is that NGINX will announce the biggest product leap in our history at AppWorld. This major announcement will focus on a significant reshaping of our product portfolio that will provide a simple, straightforward, and uniform path for scaling modern applications within the complex rigors of the modern enterprise. NGINX General Manager Shawn Wormke will unveil this exciting news during his keynote address on February 7.

More Reasons to Attend AppWorld 2024

Beyond this big announcement, there are plenty of other great reasons to attend AppWorld.

AppWorld features four topical breakout tracks that encapsulate F5’s broad portfolio, including one exclusively devoted to NGINX. The NGINX track will feature sessions designed to help you and your organization drive innovation and recognize value from NGINX products and the broader F5 portfolio.

NGINX has long been the world’s most popular web server as well as a preferred tool for organizations looking to deploy and secure web apps and APIs in any environment. We hope you will take this unique opportunity to learn what’s new and what’s next, as well as how we plan to transform NGINX into a solution that is even more powerful and easy to use for developers, architects, and network and security professionals.

Register Today

Early bird pricing is available through January 21. Please check out the AppWorld 2024 registration page for further details. You’re also invited to join NGINX leaders and other members of the community on the night of February 6 at the San Jose F5 office for an evening of looking ahead to the future of NGINX, connecting with the community, and indulging in the classics: pizza and swag! See the event page for registration and details. We hope to see you next month in San Jose!

NGINX Tutorial: How to Use GitHub Actions to Automate Microservices Canary Deployments

This post is one of four tutorials that help you put into practice concepts from Microservices March 2023: Start Delivering Microservices:

  • How to Deploy and Configure Microservices
  • How to Securely Manage Secrets in Containers
  • How to Use GitHub Actions to Automate Microservices Canary Releases (this post)
  • How to Use OpenTelemetry Tracing to Understand Your Microservices

Automating deployments is critical to the success of most projects. However, it’s not enough to just deploy your code. You also need to ensure downtime is limited (or eliminated), and you need the ability to roll back quickly in the event of a failure. Combining canary deployment and blue‑green deployment is a common approach to ensuring new code is viable. This strategy includes two steps:

  • Step 1: Canary deployment to test in isolation – Deploy your code to an isolated node, server, or container outside your environment and test to ensure the code works as intended.
  • Step 2: Blue‑green deployment to test in production – Assuming the code works in the canary deployment, port the code to newly created servers (or nodes or containers) in your production environment alongside the servers for the current version. Then redirect a portion of production traffic to the new version to test whether it continues to work well under higher load. Most often, you start by directing a small percentage (10%, say) to the new version and incrementally increase it until the new version receives all traffic. The size of the increments depends on how confident you are that the new version can handle traffic; you can even switch over completely to the new version in a single step.
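
To make step 2 concrete, here is a minimal sketch of incremental traffic shifting using the Azure CLI command this tutorial relies on later (az containerapp ingress traffic set); the app, resource group, and revision names are hypothetical placeholders:

    # Send 10% of traffic to a new revision while the old one keeps 90%
    # (list your real revision names with "az containerapp revision list")
    az containerapp ingress traffic set \
        --name my-container-app \
        --resource-group my-container-app-rg \
        --revision-weight my-container-app--old=90 my-container-app--new=10

    # Once the new revision proves healthy, cut over completely
    az containerapp ingress traffic set \
        --name my-container-app \
        --resource-group my-container-app-rg \
        --revision-weight my-container-app--new=100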

If you’re unfamiliar with the different use cases for distributing traffic between different versions of an app or website (traffic splitting), read How to Improve Resilience in Kubernetes with Advanced Traffic Management on our blog to gain a conceptual understanding of blue‑green deployments, canary releases, A/B testing, rate limiting, circuit breaking, and more. While the blog is specific to Kubernetes, the concepts are broadly applicable to microservices apps.

Tutorial Overview

In this tutorial, we show how to automate the first step of a canary blue‑green deployment using GitHub Actions. In the four challenges of the tutorial, you use Microsoft Azure Container Apps to deploy a new version of your application, then use Azure Traffic Manager to shift traffic from the old environment to the new one.

Note: While this tutorial uses Azure Container Apps, the concepts and techniques can be applied to any cloud‑based host.

Prerequisites and Setup

Prerequisites

If you want to do this tutorial in your own environment, you need:

  • An Azure account. We recommend that you use an account that is not linked to your organization, because you might have problems with permissions when using an organizational account.
  • The Azure CLI.
  • The GitHub CLI, if you want to use it instead of (or in addition to) the browser‑based GitHub GUI.

Set Up

Create and configure the necessary base resources. Fork and clone the repository for the tutorial, log in to the Azure CLI, and install the extension for Azure Container Apps.

  1. In your home directory, create the microservices-march directory. (You can also use a different directory name and adapt the instructions accordingly.)

    Note: Throughout the tutorial the prompt on the Linux command line is omitted, to make it easier to copy and paste the commands into your terminal.

    mkdir ~/microservices-march
    cd ~/microservices-march
  2. Fork and clone the Microservices March platform repository to your personal GitHub account, using either the GitHub CLI or GUI.

    • If using the GitHub GUI:

      1. Click Fork on the upper right corner of the window and select your personal GitHub account on the Owner menu.

        (Screenshot: forking the tutorial repository in the GitHub GUI)

      2. Clone the repository locally, substituting your account name for <your_GitHub_account>:

        git clone https://github.com/<your_GitHub_account>/platform.git
        cd platform
    • If using the GitHub CLI, run:

      gh repo fork microservices-march/platform --clone
  3. Log in to the Azure CLI. Follow the prompts to log in using a browser:

    az login
    [
      {
        "cloudName": "AzureCloud",
        "homeTenantId": "cfd11e0f-1435-450s-bdr1-ffab73b4er8e",
        "id": "60efapl2-38ad-41w8-g49a-0ecc6723l97c",
        "isDefault": true,
        "managedByTenants": [],
        "name": "Azure subscription 1",
        "state": "Enabled",
        "tenantId": "cfda3e0f-14g5-4e05-bfb1-ffab73b4fsde",
        "user": {
          "name": "user@example.com",
          "type": "user"
        }
      }
    ]
  4. Install the containerapp extension:

    az extension add --name containerapp --upgrade
    The installed extension 'containerapp' is in preview.

Challenge 1: Create and Deploy an NGINX Container App

In this initial challenge, you create an NGINX Azure Container App as the initial version of the application used as the baseline for the canary blue‑green deployment. Azure Container Apps is a Microsoft Azure service you use to easily execute application code packaged in a container in a production‑ready container environment.

  1. Create an Azure resource group for the container app:

    az group create --name my-container-app-rg --location westus
    {
      "id": "/subscriptions/0efafl2-38ad-41w8-g49a-0ecc6723l97c/resourceGroups/my-container-app-rg",
      "location: "westus",
      "managedBy": null,
      "name": "my-container-app-rg",
      "properties": {
        "provisioningState": "Succeeded"
      },
      "tags": null,
      "type": "Microsoft.Resources/resourceGroups"
    }
  2. Deploy the container to Azure Container Apps (this step may take a while):

    az containerapp up \
        --resource-group my-container-app-rg \
        --name my-container-app \
        --source ./ingress \
        --ingress external \
        --target-port 80 \
        --location westus
    ... 
    - image:
        registry: cac085021b77acr.azurecr.io
        repository: my-container-app
        tag: "20230315212413756768"
        digest: sha256:90a9fc67c409e244195ec0ea190ce3b84195ae725392e8451...
      runtime-dependency:
        registry: registry.hub.docker.com
        repository: library/nginx
        tag: "1.23"
        digest: sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce...
      git: {} 
    Run ID: cf1 was successful after 27s
    Creating Containerapp my-container-app in resource group my-container-app-rg
    Adding registry password as a secret with name "ca2ffbce7810acrazurecrio-cac085021b77acr" 
    Container app created. Access your app at https://my-container-app.delightfulmoss-eb6d59d5.westus.azurecontainerapps.io/
    ...
  3. In the output in Step 2, find the name of the Azure Container Registry (ACR) and the URL of the container app you’ve created. You will substitute the values from your output (which will differ from the sample output in Step 2) for the indicated variables in commands throughout the tutorial (a CLI shortcut for looking up the URL appears after these steps):

    • Name of the ACR – In the image.registry key, the character string before .azurecr.io. In the sample output in Step 2, it is cac085021b77acr.

      Substitute this character string for <ACR_name> in subsequent commands.

    • URL of the container app – The URL on the line that begins Container app created. In the sample output in Step 2, it is https://my-container-app.delightfulmoss-eb6d59d5.westus.azurecontainerapps.io/.

      Substitute this URL for <ACR_URL> in subsequent commands.

  4. Enable revisions for the container app, as required for a blue‑green deployment:

    az containerapp revision set-mode \
        --name my-container-app \
        --resource-group my-container-app-rg \
        --mode multiple
    "Multiple"
  5. (Optional) Test that the deployment is working by querying the /health endpoint in the container:

    curl <ACR_URL>/health
    OK
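
As an aside, if you prefer not to copy the container app URL out of the az containerapp up output by hand, you can also look it up afterward with the CLI. This is a sketch based on the standard Container Apps ingress properties; the ACR name itself still comes from the image.registry value in the output of Step 2:

    # Print the container app's public FQDN (prefix it with https:// to get <ACR_URL>)
    az containerapp show \
        --name my-container-app \
        --resource-group my-container-app-rg \
        --query properties.configuration.ingress.fqdn \
        --output tsv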

Challenge 2: Set Up Permissions for Automating Azure Container App Deployments

In this challenge, you obtain the JSON token that enables you to automate Azure container app deployments.

You start by obtaining the ID for the Azure Container Registry (ACR), and then the principal ID for your Azure managed identity. You then assign the built‑in Azure role for ACR to the managed identity, and configure the container app to use the managed identity. Finally you obtain the JSON credentials for the managed identity, which will be used by GitHub Actions to authenticate to Azure.

While this set of steps may seem tedious, you only need to perform them once when creating a new application and you can fully script the process. The tutorial has you perform the steps manually to become familiar with them.

Note: This process for creating credentials for deployment is specific to Azure.
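
For reference, here is a minimal sketch of how the six steps below could be scripted end to end once you are comfortable with them. It assumes the same resource names used in this tutorial and that you substitute your own <ACR_name>:

    # Sketch: one-time credential setup for GitHub Actions deployments (assumes the Azure CLI is logged in)
    ACR_NAME=<ACR_name>        # the ACR name recorded in Challenge 1
    APP=my-container-app
    RG=my-container-app-rg

    # Assign a system-assigned managed identity and capture its principal ID
    PRINCIPAL_ID=$(az containerapp identity assign -n $APP -g $RG --system-assigned --query principalId -o tsv)

    # Grant the identity pull access to the registry
    ACR_ID=$(az acr show --name $ACR_NAME --query id -o tsv)
    az role assignment create --assignee $PRINCIPAL_ID --role AcrPull --scope $ACR_ID

    # Configure the container app to pull images with that identity
    az containerapp registry set -n $APP -g $RG --server $ACR_NAME.azurecr.io --identity system

    # Create the JSON credentials that the GitHub Action will use
    SUBSCRIPTION_ID=$(az account show --query id -o tsv)
    az ad sp create-for-rbac --name my-container-app-rbac --role contributor \
        --scopes /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RG --sdk-auth --output json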

  1. Assign a system-assigned managed identity to the container app and look up its principal ID. The principal ID appears in the PrincipalId column of the output (which is divided across two lines for legibility). You’ll substitute this value for <managed_identity_principal_ID> in Step 3:

    az containerapp identity assign \
        --name my-container-app \
        --resource-group my-container-app-rg \
        --system-assigned \
        --output table
    PrincipalId                          ...
    ------------------------------------ ...
    39f8434b-12d6-4735-81d8-ba0apo14579f ...

    ... TenantId
    ... ------------------------------------
    ... cfda3e0f-14g5-4e05-bfb1-ffab73b4fsde
  2. Look up the resource ID of the ACR, replacing <ACR_name> with the name you recorded in Step 3 of Challenge 1. You’ll substitute this value for <ACR_resource_ID> in the next step:

    az acr show --name <ACR_name> --query id --output tsv
    /subscriptions/60efafl2-38ad-41w8-g49a-0ecc6723l97c/resourceGroups/my-container-app-rg/providers/Microsoft.ContainerRegistry/registries/cac085021b77acr
  3. Assign the built‑in Azure role for ACR to the container app’s managed identity, replacing <managed_identity_principal_ID> with the principal ID obtained in Step 1 and <ACR_resource_ID> with the resource ID obtained in Step 2:

    az role assignment create \
        --assignee <managed_identity_principal_ID> \
        --role AcrPull \
        --scope <ACR_resource_ID>
    {
      "condition": null,
      "conditionVersion": null,
      "createdBy": null,
      "createdOn": "2023-03-15T20:28:40.831224+00:00",
      "delegatedManagedIdentityResourceId": null,
      "description": null,
      "id": "/subscriptions/0efafl2-38ad-41w8-g49a-0ecc6723l97c/resourceGroups/my-container-app-rg/providers/Microsoft.ContainerRegistry/registries/cac085021b77acr/providers/Microsoft.Authorization/roleAssignments/f0773943-8769-44c6-a820-ed16007ff249",
      "name": "f0773943-8769-44c6-a820-ed16007ff249",
      "principalId": "39f8ee4b-6fd6-47b5-89d8-ba0a4314579f",
      "principalType": "ServicePrincipal",
      "resourceGroup": "my-container-app-rg",
      "roleDefinitionId": "/subscriptions/60e32142-384b-43r8-9329-0ecc67dca94c/providers/Microsoft.Authorization/roleDefinitions/7fd21dda-4fd3-4610-a1ca-43po272d538d",
      "scope": "/subscriptions/ 0efafl2-38ad-41w8-g49a-0ecc6723l97c/resourceGroups/my-container-app-rg/providers/Microsoft.ContainerRegistry/registries/cac085021b77acr",
      "type": "Microsoft.Authorization/roleAssignments",
      "updatedBy": "d4e122d6-5e64-4bg1-9cld-2aceeb0oi24d",
      "updatedOn": "2023-03-15T20:28:41.127243+00:00"
    }
  4. Configure the container app to use the managed identity when pulling images from ACR, replacing <ACR_name> with the ACR name you recorded in Step 3 of Challenge 1 (and also used in Step 2 above):

    az containerapp registry set \
        --name my-container-app \
        --resource-group my-container-app-rg \
        --server <ACR_name>.azurecr.io \
        --identity system
    [
      {
        "identity": "system",
        "passwordSecretRef": "",
        "server": "cac085021b77acr.azurecr.io",
        "username": ""
      }
    ]
  5. Look up your Azure subscription ID.

    az account show --query id --output tsv
    0efafl2-38ad-41w8-g49a-0ecc6723l97c
  6. Create a JSON token which contains the credentials to be used by the GitHub Action, replacing <subscription_ID> with your Azure subscription ID. Save the output to paste in as the value of the secret named AZURE_CREDENTIALS in Add Secrets to Your GitHub Repository. You can safely ignore the warning about --sdk-auth being deprecated; it’s a known issue:

    az ad sp create-for-rbac \
        --name my-container-app-rbac \
        --role contributor \
        --scopes /subscriptions/<subscription_ID>/resourceGroups/my-container-app-rg \
        --sdk-auth \
        --output json
    Option '--sdk-auth' has been deprecated and will be removed in a future release.
    ...
    {
      "clientId": "0732444d-23e6-47fb-8c2c-74bddfc7d2er",
      "clientSecret": "qg88Q~KJaND4JTWRPOLWgCY1ZmZwN5MK3N.wwcOe",
      "subscriptionId": "0efafl2-38ad-41w8-g49a-0ecc6723l97c",
      "tenantId": "cfda3e0f-14g5-4e05-bfb1-ffab73b4fsde",
      "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
      "resourceManagerEndpointUrl": "https://management.azure.com/",
      "activeDirectoryGraphResourceId": "https://graph.windows.net/",
      "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
      "galleryEndpointUrl": "https://gallery.azure.com/",
      "managementEndpointUrl": "https://management.core.windows.net/"
    }

Challenge 3: Create a Canary Blue-Green Deployment GitHub Action

In this challenge, you add secrets to your GitHub repo (used to manage sensitive data in your GitHub Action workflows), create an Action workflow file, and execute the Action workflow.

For a detailed introduction to secrets management, see the second tutorial for Microservices March 2023, How to Securely Manage Secrets in Containers, on our blog.

Add Secrets to Your GitHub Repository

To deploy a new version of the application, you need to create a series of secrets in the GitHub repository you forked in Set Up. The secrets are the JSON credentials for the managed identity created in Challenge 2, and some sensitive deployment‑specific parameters necessary to deploy new versions of the NGINX image to Azure. In the next section you’ll use these secrets in a GitHub Action to automate the canary blue‑green deployment.

  • If using the GitHub GUI:

    1. Navigate to your forked GitHub repository.
    2. Select Settings > Secrets and variables > Actions.
    3. Click New repository secret.
    4. Type the following values in the indicated fields:

      • Name – AZURE_CREDENTIALS
      • Secret – The JSON credentials from Step 6 of Challenge 2
    5. Click Add secret.
    6. Repeat Steps 3–5 three times to create the secrets listed in the table. Type the values from the Secret Name and Secret Value columns into the GUI’s Name and Secret fields respectively. For the third secret, replace <ACR_name> with the ACR name you recorded in Step 3 of Challenge 1.

      Secret Name          Secret Value
      CONTAINER_APP_NAME   my-container-app
      RESOURCE_GROUP       my-container-app-rg
      ACR_NAME             <ACR_name>
    7. Proceed to Create a GitHub Action Workflow File.
  • If using the GitHub CLI:

    1. Create a temporary file in your home directory:

      touch ~/creds.json
    2. Using your preferred text editor, open creds.json and copy in the JSON credentials you created in Step 6 of Challenge 2.
    3. Create the secret:

      gh secret set AZURE_CREDENTIALS --repo <your_GitHub_account>/platform < ~/creds.json
    4. Delete creds.json:

      rm ~/creds.json
    5. Repeat this command to create three more secrets:

      gh secret set <secret_name> --repo <your_GitHub_account>/platform

      For each repetition, replace <secret_name> with one of the values in the Secret Name column in the table. At the prompt, paste the associated value from the Secret Value column. For the third secret, replace <ACR_name> with the ACR name you recorded in Step 3 of Challenge 1. (To set these secrets non-interactively instead, see the sketch after this list.)

      Secret Name          Secret Value
      CONTAINER_APP_NAME   my-container-app
      RESOURCE_GROUP       my-container-app-rg
      ACR_NAME             <ACR_name>
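
If you prefer to skip the interactive prompt in step 5, the three non-JSON secrets can also be set inline with the GitHub CLI’s --body flag; this is a sketch with <your_GitHub_account> and <ACR_name> substituted as before:

    gh secret set CONTAINER_APP_NAME --repo <your_GitHub_account>/platform --body "my-container-app"
    gh secret set RESOURCE_GROUP --repo <your_GitHub_account>/platform --body "my-container-app-rg"
    gh secret set ACR_NAME --repo <your_GitHub_account>/platform --body "<ACR_name>"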

Create a GitHub Action Workflow File

With the managed identity and secrets in place, you can create a workflow file for a GitHub Action that automates the canary blue‑green deployment.

Note: Workflow files are defined in YAML format, where whitespace is significant. Be sure to preserve the indentation shown in the steps below.

  1. Create a file for the Action workflow.

    • If using the GitHub GUI:

      1. Navigate to your GitHub repository.
      2. Select Actions > New workflow > Skip this and set up a workflow yourself.
    • If using the GitHub CLI, create the .github/workflows directory and create a new file called main.yml:

      mkdir -p .github/workflows
      touch .github/workflows/main.yml
  2. Using your preferred text editor, add the text of the workflow to main.yml. The easiest method is to copy in the text that appears in Full Workflow File. Alternatively, you can build the file manually by adding the set of snippets annotated in this step.

    Note: Workflow files are defined in YAML format, where whitespace is significant. If you copy in the snippets, be sure to preserve the indentation (and to be extra sure, compare your file to Full Workflow File).

    • Define the workflow’s name:

      name: Deploy to Azure
    • Configure the workflow to run when a push or pull request is made to the main branch:

      on:
        push:
          branches:
            - main
        pull_request:
          branches:
            - main
    • In the jobs section, define the build-deploy job, which checks out the code, logs in to Azure, and deploys the application to Azure Container Apps:

      jobs:
        build-deploy:
          runs-on: ubuntu-22.04
          steps:
            - name: Check out the codebase
              uses: actions/checkout@v3
      
            - name: Log in to Azure
              uses: azure/login@v1
              with:
                creds: ${{ secrets.AZURE_CREDENTIALS }}
      
            - name: Build and deploy Container App
              run: |
                # Add the containerapp extension manually
                az extension add --name containerapp --upgrade
                # Use Azure CLI to deploy update
                az containerapp up -n ${{ secrets.CONTAINER_APP_NAME }} \
                  -g ${{ secrets.RESOURCE_GROUP }} \
                  --source ${{ github.workspace }}/ingress \
                  --registry-server ${{ secrets.ACR_NAME }}.azurecr.io
    • Define the test-deployment job, which obtains the staging URL of the newly deployed revision and uses a GitHub Action to ping the API endpoint /health to ensure the new revision is responding. If the health check succeeds, the Azure Traffic Manager on the container app is updated to point all traffic at the newly deployed container.

      Note: Be sure to indent the test-deployment key at the same level as the build-deploy key you defined in the previous bullet:

        test-deployment:
          needs: build-deploy
          runs-on: ubuntu-22.04
          steps:
            - name: Log in to Azure
              uses: azure/login@v1
              with:
                creds: ${{ secrets.AZURE_CREDENTIALS }}
      
            - name: Get new container name
              run: |
                # Add the containerapp extension manually
                az extension add --name containerapp --upgrade
      
                # Get the last deployed revision name
                REVISION_NAME=`az containerapp revision list -n ${{ secrets.CONTAINER_APP_NAME }} -g ${{ secrets.RESOURCE_GROUP }} --query "[].name" -o tsv | tail -1`
                # Get the last deployed revision's fqdn
                REVISION_FQDN=`az containerapp revision show -n ${{ secrets.CONTAINER_APP_NAME }} -g ${{ secrets.RESOURCE_GROUP }} --revision "$REVISION_NAME" --query properties.fqdn -o tsv`
                # Store values in env vars
                echo "REVISION_NAME=$REVISION_NAME" >> $GITHUB_ENV
                echo "REVISION_FQDN=$REVISION_FQDN" >> $GITHUB_ENV
      
            - name: Test deployment
              id: test-deployment
              uses: jtalk/url-health-check-action@v3 # Marketplace action to touch the endpoint
              with:
                url: "https://${{ env.REVISION_FQDN }}/health" # Staging endpoint
      
            - name: Deploy succeeded
              run: |
                echo "Deployment succeeded! Enabling new revision"
                az containerapp ingress traffic set -n ${{ secrets.CONTAINER_APP_NAME }} -g ${{ secrets.RESOURCE_GROUP }} --revision-weight "${{ env.REVISION_NAME }}=100"

Full Workflow File

This is the complete text for the Action workflow file.

name: Deploy to Azure
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  build-deploy:
    runs-on: ubuntu-22.04
    steps:
      - name: Check out the codebase
        uses: actions/checkout@v3

      - name: Log in to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
     
      - name: Build and deploy Container App
        run: |
          # Add the containerapp extension manually
          az extension add --name containerapp --upgrade
       
          # Use Azure CLI to deploy update
          az containerapp up -n ${{ secrets.CONTAINER_APP_NAME }} \
            -g ${{ secrets.RESOURCE_GROUP }} \
            --source ${{ github.workspace }}/ingress \
            --registry-server ${{ secrets.ACR_NAME }}.azurecr.io
  test-deployment:
    needs: build-deploy
    runs-on: ubuntu-22.04
    steps:
      - name: Log in to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: Get new container name
        run: |
          # Install the containerapp extension for the Azure CLI
          az extension add --name containerapp --upgrade
          # Get the last deployed revision name
          REVISION_NAME=`az containerapp revision list -n ${{ secrets.CONTAINER_APP_NAME }} -g ${{ secrets.RESOURCE_GROUP }} --query "[].name" -o tsv | tail -1`
          # Get the last deployed revision's fqdn
          REVISION_FQDN=`az containerapp revision show -n ${{ secrets.CONTAINER_APP_NAME }} -g ${{ secrets.RESOURCE_GROUP }} --revision "$REVISION_NAME" --query properties.fqdn -o tsv`
          # Store values in env vars
          echo "REVISION_NAME=$REVISION_NAME" >> $GITHUB_ENV
          echo "REVISION_FQDN=$REVISION_FQDN" >> $GITHUB_ENV

      - name: Test deployment
        id: test-deployment
        uses: jtalk/url-health-check-action@v3 # Marketplace action to touch the endpoint
        with:
          url: "https://${{ env.REVISION_FQDN }}/health" # Staging endpoint

      - name: Deploy succeeded
        run: |
          echo "Deployment succeeded! Enabling new revision"
          az containerapp ingress traffic set -n ${{ secrets.CONTAINER_APP_NAME }} -g ${{ secrets.RESOURCE_GROUP }} --revision-weight "${{ env.REVISION_NAME }}=100"

Execute the Action Workflow

  • If using the GitHub GUI:

    1. Click Start commit, add a commit message if you wish, and in the dialog box select Commit new file. The new workflow file is merged into the main branch and begins executing.
    2. Click Actions to monitor the progress of the workflow.
  • If using the GitHub CLI:

    1. Add main.yml to the Git staging area:

      git add .github/workflows/main.yml
    2. Commit the file:

      git commit -m "feat: create GitHub Actions workflow"
    3. Push your changes to GitHub:

      git push
    4. Monitor the progress of the workflow:

      gh workflow view main.yml --repo <your_GitHub_account>/platform

Challenge 4: Test the GitHub Actions Workflow

In this challenge, you test the workflow. You first simulate a successful update to your Ingress load balancer and confirm the application has been updated. You then simulate an unsuccessful update (which leads to an internal server error) and confirm that the published application remains unchanged.

Make a Successful Update

Create a successful update and watch the workflow succeed.

  • If using the GitHub GUI:

    1. Select Code > ingress > default.conf.template.
    2. Open default.conf.template for editing by selecting the pencil icon with the tooltip Edit this file.
    3. In the location /health block near the end of the file, change the return directive as indicated:

      location /health {
          access_log off;
          return 200 "Successful Update!\n";
      }
    4. In the dialog box, select Create a new branch for this commit and start a pull request and then Propose changes.
    5. Click Create pull request to access the pull request template.
    6. Click Create pull request again to create the pull request.
    7. Click Actions to monitor the progress of the workflow.
    8. When the workflow completes, navigate to your container app at the <ACR_URL>/health endpoint, where the <ACR_URL> is the URL you noted in Step 3 of Challenge 1. Notice the Successful Update! message.
    9. You can confirm the message by starting a terminal session and sending a health‑check request to the app, again replacing <ACR_URL> with the value you recorded in Step 3 of Challenge 1:

      curl <ACR_URL>/health
      Successful Update!
    10. Proceed to Make an Unsuccessful Update.
  • If using the GitHub CLI:

    1. Create a new branch called patch-1:

      git checkout -b patch-1
    2. In your preferred text editor open ingress/default.conf.template and in the location /health block near the end of the file, change the return directive as indicated:

      location /health {
          access_log off;
          return 200 "Successful Update!\n";
      }
    3. Add default.conf.template to the Git staging area:

      git add ingress/default.conf.template
    4. Commit the file:

      git commit -m "feat: update NGINX ingress"
    5. Push your changes to GitHub:

      git push --set-upstream origin patch-1
    6. Create a pull request (PR):

      gh pr create --head patch-1 --fill --repo <your_GitHub_account>/platform
    7. Monitor the progress of the workflow:

      gh workflow view main.yml --repo <your_GitHub_account>/platform
    8. When the workflow completes, send a health‑check request to the app, replacing <ACR_URL> with the value you recorded in Step 3 of Challenge 1:

      curl <ACR_URL>/health
      Successful Update!

Make an Unsuccessful Update

Now create an unsuccessful update and watch the workflow fail. This basically involves repeating the steps in Make a Successful Update but with a different value for the return directive.

  • If using the GitHub GUI:

    1. Select Code > ingress > default.conf.template.
    2. In the upper left, select main and then the name of the branch which ends with patch-1, which you created in the previous section.
    3. Open default.conf.template for editing by selecting the pencil icon with the tooltip Edit this file.
    4. Change the return directive as indicated:

      location /health {
          access_log off;
          return 500 "Unsuccessful Update!\n";
      }
    5. Select Commit directly to the branch ending in patch-1 and then Commit changes.
    6. Select Actions to monitor the progress of the workflow. Notice the workflow executes again when files in the PR are updated.
    7. When the workflow completes, navigate to your container app at the <ACR_URL>/health endpoint, where the <ACR_URL> is the URL you recorded in Step 3 of Challenge 1.

      Notice the message is Successful Update! (the same as after the previous, successful update). Though that may seem paradoxical, it in fact confirms that this update failed – the update attempt resulted in status 500 (meaning Internal Server Error) and did not get applied.

    8. You can confirm the message by starting a terminal session and sending a health‑check request to the app, again replacing <ACR_URL> with the value you recorded in Step 3 of Challenge 1:

      curl <ACR_URL>/health
      Successful Update!
  • If using the GitHub CLI:

    1. Check out the patch-1 branch you created in the previous section:

      git checkout patch-1
    2. In your preferred text editor open ingress/default.conf.template and again change the return directive as indicated:

      location /health {
          access_log off;
          return 500 "Unsuccessful Update!\n";
      }
    3. Add default.conf.template to the Git staging area:

      git add ingress/default.conf.template
    4. Commit the file:

      git commit -m "feat: update NGINX ingress again"
    5. Push your changes to GitHub:

      git push
    6. Monitor the progress of the workflow:

      gh workflow view main.yml --repo <your_GitHub_account>/platform
    7. When the workflow completes, send a health‑check request to the app, replacing <ACR_URL> with the value you recorded in Step 3 of Challenge 1:

      curl <ACR_URL>/health
      Successful Update!

      It may seem paradoxical that the message is Successful Update! (the same as after the previous, successful update), but in fact it confirms that this update failed – the update attempt resulted in status 500 (meaning Internal Server Error) and did not get applied.
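
If you want to confirm from the CLI which revision is actually receiving traffic, one way (a sketch using the related az subcommand for inspecting ingress traffic) is:

    # Show how ingress traffic is currently split across revisions
    az containerapp ingress traffic show \
        --name my-container-app \
        --resource-group my-container-app-rg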

Resource Cleanup

You probably want to remove the Azure resources you deployed in the tutorial to avoid any potential charges down the line:

az group delete -n my-container-app-rg -y

You can also delete the fork you created if you wish.

  • If using the GitHub GUI:

    1. Click Settings.
    2. Scroll down to the bottom of the page.
    3. Click Delete this repository.
    4. Type <your_GitHub_account>/platform and select I understand the consequences, delete this repository.
  • If using the GitHub CLI:

    gh repo delete <your_GitHub_account>/platform --yes

Next Steps

Congratulations! You’ve learned how to use GitHub Actions to perform a canary blue‑green deployment of a microservices app. Check out the articles in the GitHub docs to continue exploring and growing your knowledge of DevOps.

If you’re ready to try out the second step of a canary deployment (testing in production) then check out the tutorial from Microservices March 2022, Improve Uptime and Resilience with a Canary Deployment on our blog. It uses NGINX Service Mesh to gradually transition to a new app version. Even if your deployments aren’t yet complex enough to need a service mesh, or you’re not using Kubernetes, the principles still apply to simpler deployments only using an Ingress controller or load balancer.

To continue your microservices education, check out Microservices March 2023. In Unit 3, Accelerate Microservices Deployments with Automation, you’ll learn more about automating deployments.


NGINX Tutorial: How to Securely Manage Secrets in Containers
This post is one of four tutorials that help you put into practice concepts from Microservices March 2023: Start Delivering Microservices:

  • How to Deploy and Configure Microservices
  • How to Securely Manage Secrets in Containers (this post)
  • How to Use GitHub Actions to Automate Microservices Canary Releases
  • How to Use OpenTelemetry Tracing to Understand Your Microservices

Many of your microservices need secrets to operate securely. Examples of secrets include the private key for an SSL/TLS certificate, an API key to authenticate to another service, or an SSH key for remote login. Proper secrets management requires strictly limiting where secrets are used to only the places they need to be, and preventing them from being accessed except when needed. But this practice is often skipped in the rush of application development. The result? Improper secrets management is a common cause of information leakage and exploits.

Tutorial Overview

In this tutorial, we show how to safely distribute and use a JSON Web Token (JWT), which a client container uses to access a service. In the four challenges in this tutorial, you experiment with four different methods for managing secrets, to learn not only how to manage secrets correctly in your containers but also which methods are inadequate.

Although this tutorial uses a JWT as a sample secret, the techniques apply to anything you need to keep secret in containers, such as database credentials, SSL private keys, and other API keys.

The tutorial leverages two main software components:

  • API server – A container running NGINX Open Source and some basic NGINX JavaScript code that extracts a claim from the JWT and returns a value from one of the claims or, if no claim is present, an error message
  • API client – A container running very simple Python code that simply makes a GET request to the API server

Watch this video for a demo of the tutorial in action.

The easiest way to do this tutorial is to register for Microservices March and use the browser‑based lab that’s provided. This post provides instructions for running the tutorial in your own environment.

Prerequisites and Set Up

Prerequisites

To complete the tutorial in your own environment, you need:

Notes:

  • The tutorial makes use of a test server listening on port 80. If you’re already using port 80, use the -p flag to map a different host port when you start the test server with the docker run command, and then include the :<port_number> suffix on localhost in the curl commands (see the example after these notes).
  • Throughout the tutorial the prompt on the Linux command line is omitted, to make it easier to cut and paste the commands into your terminal. The tilde (~) represents your home directory.
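
For example, if port 80 is already taken on your machine and you choose 8080 instead, the docker run and curl commands later in this tutorial would become (a sketch; use whatever port is free):

    docker run -d -p 8080:80 apiserver
    curl -X GET http://localhost:8080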

Set Up

In this section you clone the tutorial repo, start the authentication server, and send test requests with and without a token.

Clone the Tutorial Repo

  1. In your home directory, create the microservices-march directory and clone the GitHub repository into it. (You can also use a different directory name and adapt the instructions accordingly.) The repo includes config files and separate versions of the API client application that use different methods to obtain secrets.

    mkdir ~/microservices-march
    cd ~/microservices-march
    git clone https://github.com/microservices-march/auth.git
  2. Display the secret. It’s a signed JWT, commonly used to authenticate API clients to servers.

    cat ~/microservices-march/auth/apiclient/token1.jwt
    "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2Nz UyMDA4MTMsImlzcyI6ImFwaUtleTEiLCJhdWQiOiJhcGlTZXJ2aWNlIiwic3ViIjoiYXBpS2V5MSJ9._6L_Ff29p9AWHLLZ-jEZdihy-H1glooSq_z162VKghA"

While there are a few ways to use this token for authentication, in this tutorial the API client app passes it to the authentication server using the OAuth 2.0 Bearer Token Authorization framework. That involves prefixing the JWT with Authorization: Bearer as in this example:

"Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2NzUyMDA4MTMsImlzcyI6ImFwaUtleTEiLCJhdWQiOiJhcGlTZXJ2aWNlIiwic3ViIjoiYXBpS2V5MSJ9._6L_Ff29p9AWHLLZ-jEZdihy-H1glooSq_z162VKghA"

Build and Start the Authentication Server

  1. Change to the authentication server directory:

    cd ~/microservices-march/auth/apiserver
  2. Build the Docker image for the authentication server (note the final period):

    docker build -t apiserver .
  3. Start the authentication server and confirm that it’s running (the output is spread over multiple lines for legibility):

    docker run -d -p 80:80 apiserver
    docker ps
    CONTAINER ID   IMAGE       COMMAND                  ...
    2b001f77c5cb   apiserver   "nginx -g 'daemon of..." ...  
    
    
        ... CREATED         STATUS          ...                                    
        ... 26 seconds ago  Up 26 seconds   ... 
    
    
        ... PORTS                                      ...
        ... 0.0.0.0:80->80/tcp, :::80->80/tcp, 443/tcp ...
    
    
        ... NAMES
        ... relaxed_proskuriakova

Test the Authentication Server

  1. Verify that the authentication server rejects a request that doesn’t include the JWT, returning 401 Authorization Required:

    curl -X GET http://localhost
    <html>
    <head><title>401 Authorization Required</title></head>
    <body>
    <center><h1>401 Authorization Required</h1></center>
    <hr><center>nginx/1.23.3</center>
    </body>
    </html>
  2. Provide the JWT using the Authorization header. The 200 OK return code indicates the API client app authenticated successfully.

    curl -i -X GET -H "Authorization: Bearer `cat $HOME/microservices-march/auth/apiclient/token1.jwt`" http://localhost
    HTTP/1.1 200 OK
    Server: nginx/1.23.2
    Date: Day, DD Mon YYYY hh:mm:ss TZ
    Content-Type: text/html
    Content-Length: 64
    Last-Modified: Day, DD Mon YYYY hh:mm:ss TZ
    Connection: keep-alive
    ETag: "63dc0fcd-40"
    X-MESSAGE: Success apiKey1
    Accept-Ranges: bytes
    
    
    { "response": "success", "authorized": true, "value": "999" }

Challenge 1: Hardcode Secrets in Your App (Not!)

Before you begin this challenge, let’s be clear: hardcoding secrets into your app is a terrible idea! You’ll see how anyone with access to the container image can easily find and extract hardcoded credentials.

In this challenge, you copy the code for the API client app into the build directory, build and run the app, and extract the secret.

Copy the API Client App

The app_versions subdirectory of the apiclient directory contains different versions of the simple API client app for the four challenges, each slightly more secure than the previous one (see Tutorial Overview for more information).

  1. Change to the API client directory:

    cd ~/microservices-march/auth/apiclient
  2. Copy the app for this challenge – the one with a hardcoded secret – to the working directory:

    cp ./app_versions/very_bad_hard_code.py ./app.py
  3. Take a look at the app:

    cat app.py
    import urllib.request
    import urllib.error
    
    jwt = "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2NzUyMDA4MTMsImlzcyI6ImFwaUtleTEiLCJhdWQiOiJhcGlTZXJ2aWNlIiwic3ViIjoiYXBpS2V5MSJ9._6L_Ff29p9AWHLLZ-jEZdihy-H1glooSq_z162VKghA"
    authstring = "Bearer " + jwt
    req = urllib.request.Request("http://host.docker.internal")
    req.add_header("Authorization", authstring)
    try:
        with urllib.request.urlopen(req) as response:
            the_page = response.read()
            message = response.getheader("X-MESSAGE")
            print("200  " + message)
    except urllib.error.URLError as e:
        print(str(e.code) + " s " + e.msg)

    The code simply makes a request to a local host and prints out either a success message or failure code.

    The request adds the Authorization header on this line:

    req.add_header("Authorization", authstring)

    Do you notice anything else? Perhaps a hardcoded JWT? We will get to that in a minute. First let’s build and run the app.

Build and Run the API Client App

We’re using the docker compose command along with a Docker Compose YAML file – this makes it a little easier to understand what’s going on.

(Notice that in Step 2 of the previous section you copied the Python file for the API client app that’s specific to Challenge 1 (very_bad_hard_code.py) to app.py. You’ll also do this in the other three challenges. Using app.py each time simplifies logistics because you don’t need to change the Dockerfile. It does mean that you need to include the --build argument on the docker compose command to force a rebuild of the container each time.)

The docker compose command builds the container, starts the application, makes a single API request, and then shuts down the container, while displaying the results of the API call on the console.

The 200 Success code on the second-to-last line of the output indicates that authentication succeeded. The apiKey1 value is further confirmation, because it shows the auth server was able to decode the claim of that name in the JWT:

docker compose -f docker-compose.hardcode.yml up --build
...
apiclient-apiclient-1  | 200  Success apiKey1
apiclient-apiclient-1 exited with code 0

So hardcoded credentials worked correctly for our API client app – not surprising. But is it secure? Maybe so, since the container runs this script just once before it exits and doesn’t have a shell?

In fact – no, not secure at all.

Retrieve the Secret from the Container Image

Hardcoding credentials leaves them open to inspection by anyone who can access the container image, because extracting the filesystem of a container is a trivial exercise.

  1. Create the extract directory and change to it:

    mkdir extract
    cd extract
  2. List basic information about the container images. The --format flag makes the output more readable (and the output is spread across two lines here for the same reason):

    docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
    CONTAINER ID   NAMES                   IMAGE       ...
    11b73106fdf8   apiclient-apiclient-1   apiclient   ...
    ad9bdc05b07c   exciting_clarke         apiserver   ...
    
    
        ... CREATED          STATUS
        ... 6 minutes ago    Exited (0) 4 minutes ago
        ... 43 minutes ago   Up 43 minutes
  3. Export the filesystem of the most recent apiclient container as a .tar file. For <container_ID>, substitute the value from the CONTAINER ID field in the output above (11b73106fdf8 in this tutorial):

    docker export -o api.tar <container_ID>

    It takes a few seconds to create the api.tar archive, which includes the container’s entire file system. One approach to finding secrets is to extract the whole archive and parse it (a sketch of that approach appears after these steps), but as it turns out there is a shortcut for finding what’s likely to be interesting – displaying the container’s history with the docker history command. (This shortcut is especially handy because it also works for containers that you find on Docker Hub or another container registry and thus might not have the Dockerfile, but only the container image.)

  4. Display the history of the container:

    docker history apiclient
    IMAGE         CREATED        ...
    9396dde2aad0  8 minutes ago  ...                    
    <missing>     8 minutes ago  ...   
    <missing>     28 minutes ago ...  
                   
        ... CREATED BY                          SIZE ... 
        ... CMD ["python" "./app.py"]           622B ...   
        ... COPY ./app.py ./app.py # buildkit   0B   ... 
        ... WORKDIR /usr/app/src                0B   ...   
                 
        ... COMMENT
        ... buildkit.dockerfile.v0
        ... buildkit.dockerfile.v0
        ... buildkit.dockerfile.v0

    The lines of output are in reverse chronological order. They show that the working directory was set to /usr/app/src, then the file of Python code for the app was copied in and run. It doesn’t take a great detective to deduce that the core codebase of this container is in /usr/app/src/app.py, and as such that’s a likely location for credentials.

  5. Armed with that knowledge, extract just that file:

    tar --extract --file=api.tar usr/app/src/app.py
  6. Display the file’s contents and, just like that, we have gained access to the “secure” JWT:

    cat usr/app/src/app.py
    ...
    jwt="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2NzUyMDA4MTMsImlzcyI6ImFwaUtleTEiLCJhdWQiOiJhcGlTZXJ2aWNlIiwic3ViIjoiYXBpS2V5MSJ9._6L_Ff29p9AWHLLZ-jEZdihy-H1glooSq_z162VKghA"
    ...
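
To see why this leak matters, remember that a JWT’s payload is only base64url‑encoded, not encrypted, so anyone holding the stolen token can read its claims without knowing the signing key. Here’s a minimal sketch of doing that from the extract directory (the grep pattern and variable names are just for illustration):

# Pull the token out of the extracted app.py, isolate its payload
# (the second dot-separated segment), pad it, and decode it.
JWT=$(grep -o 'eyJ[A-Za-z0-9_.-]*' usr/app/src/app.py)
PAYLOAD=$(echo "$JWT" | cut -d '.' -f2)
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d; echo
# {"iat":1675200813,"iss":"apiKey1","aud":"apiService","sub":"apiKey1"}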

Challenge 2: Pass Secrets as Environment Variables (Again, No!)

If you completed Unit 1 of Microservices March 2023 (Apply the Twelve‑Factor App to Microservices Architectures), you’re familiar with using environment variables to pass configuration data to containers. If you missed it, never fear – it’s available on demand after you register.

In this challenge, you pass secrets as environment variables. As with the method in Challenge 1, we don’t recommend this one! It’s not as bad as hardcoding secrets, but as you’ll see it has some weaknesses.

There are four ways to pass environment variables to a container:

  • Use the ENV statement in a Dockerfile to do variable substitution (set the variable for all images built). For example:

    ENV PORT $PORT
  • Use the ‑e flag on the docker run command. For example:

    docker run -e PASSWORD=123 mycontainer
  • Use the environment key in a Docker Compose YAML file.
  • Use a .env file containing the variables (see the sketch after this list)
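
As a quick illustration of that last option, here’s a hedged sketch that keeps the variable in a file and passes the whole file to the container with docker run’s --env-file flag (the client.env filename is made up for this example; the tutorial itself sticks with Docker Compose):

# Store the variable in a file rather than typing it on the command line.
cat > client.env <<'EOF'
JWT=<paste-your-token-here>
EOF

# --add-host mirrors the extra_hosts setting in the Compose files used here.
docker run --rm --add-host=host.docker.internal:host-gateway \
  --env-file ./client.env apiclient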

In this challenge, you use an environment variable to set the JWT and examine the container to see if the JWT is exposed.

Pass an Environment Variable

  1. Change back to the API client directory:

    cd ~/microservices-march/auth/apiclient
  2. Copy the app for this challenge – the one that uses environment variables – to the working directory, overwriting the app.py file from Challenge 1:

    cp ./app_versions/medium_environment_variables.py ./app.py
  3. Take a look at the app. In the relevant lines of output, the secret (JWT) is read as an environment variable in the local container:

    cat app.py
    ...
    jwt = ""
    if "JWT" in os.environ:
        jwt = "Bearer " + os.environ.get("JWT")
    ...
  4. As explained above, there’s a choice of ways to get the environment variable into the container. For consistency, we’re sticking with Docker Compose. Display the contents of the Docker Compose YAML file, which uses the environment key to set the JWT environment variable:

    cat docker-compose.env.yml
    ---
    version: "3.9"
    services:
      apiclient:
        build: .
        image: apiclient
        extra_hosts:
          - "host.docker.internal:host-gateway"
        environment:
          - JWT
  5. Run the app without setting the environment variable. The 401 Unauthorized code on the second-to-last line of the output confirms that authentication failed because the API client app didn’t pass the JWT:

    docker compose -f docker-compose.env.yml up --build
    ...
    apiclient-apiclient-1  | 401  Unauthorized
    apiclient-apiclient-1 exited with code 0
  6. For simplicity, set the environment variable locally. It’s fine to do that at this point in the tutorial, since it’s not the security issue of concern right now:

    export JWT=`cat token1.jwt`
  7. Run the container again. Now the test succeeds, with the same message as in Challenge 1:

    docker compose -f docker-compose.env.yml up --build
    ... 
    apiclient-apiclient-1  | 200  Success apiKey1
    apiclient-apiclient-1 exited with code 0

So at least now the base image doesn’t contain the secret and we can pass it at run time, which is safer. But there is still a problem.

Examine the Container

  1. Display information about the container images to get the container ID for the API client app (the output is spread across two lines for legibility):

    docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
    CONTAINER ID   NAMES                   IMAGE      ...
    6b20c75830df   apiclient-apiclient-1   apiclient  ...
    ad9bdc05b07c   exciting_clarke         apiserver  ...
    
    
        ... CREATED             STATUS
        ... 6 minutes ago       Exited (0) 6 minutes ago
        ... About an hour ago   Up About an hour
  2. Inspect the container for the API client app. For <container_ID>, substitute the value from the CONTAINER ID field in the output above (here 6b20c75830df).

    The docker inspect command lets you inspect all launched containers, whether they are currently running or not. And that’s the problem – even though the container is not running, the output exposes the JWT in the Env array, insecurely saved in the container config.

    docker inspect <container_ID>
    ...
    "Env": [
      "JWT=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6InNpZ24ifQ.eyJpYXQiOjE2NzUyMDA...",
      "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "LANG=C.UTF-8",
      "GPG_KEY=A035C8C19219BA821ECEA86B64E628F8D684696D",
      "PYTHON_VERSION=3.11.2",
      "PYTHON_PIP_VERSION=22.3.1",
      "PYTHON_SETUPTOOLS_VERSION=65.5.1",
      "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/1a96dc5acd0303c4700e026...",
      "PYTHON_GET_PIP_SHA256=d1d09b0f9e745610657a528689ba3ea44a73bd19c60f4c954271b790c..."
    ]
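
As an aside, you don’t even need to scan the full inspect output: the same Go-template --format syntax used with docker ps above can pull out just the environment block. A hedged sketch, using the same <container_ID> placeholder as before:

docker inspect --format '{{json .Config.Env}}' <container_ID>
# ["JWT=eyJhbGciOi...","PATH=/usr/local/bin:...", ...]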

Challenge 3: Use Local Secrets

By now you’ve learned that neither hardcoding secrets nor passing them as environment variables is as safe as you (or your security team) need it to be.

To improve security, you can try using local Docker secrets to store sensitive information. Again, this isn’t the gold‑standard method, but it’s good to understand how it works. Even if you don’t use Docker in production, the important takeaway is how you can make it difficult to extract the secret from a container.

In Docker, secrets are exposed to a container via the file system mount /run/secrets/ where there’s a separate file containing the value of each secret.

In this challenge you pass a locally stored secret to the container using Docker Compose, then verify that the secret isn’t visible in the container when this method is used.

Pass a Locally Stored Secret to the Container

  1. As you might expect by now, you start by changing to the apiclient directory:

    cd ~/microservices-march/auth/apiclient
  2. Copy the app for this challenge – the one that uses secrets from within a container – to the working directory, overwriting the app.py file from Challenge 2:

    cp ./app_versions/better_secrets.py ./app.py
  3. Take a look at the Python code, which reads the JWT value from the /run/secrets/jot file. (And yes, we should probably be checking that the file only has one line. Maybe in Microservices March 2024?)

    cat app.py
    ...
    jotfile = "/run/secrets/jot"
    jwt = ""
    if os.path.isfile(jotfile):
        with open(jotfile) as jwtfile:
            for line in jwtfile:
                jwt = "Bearer " + line
    ...

    OK, so how are we going to create this secret? The answer is in the docker-compose.secrets.yml file.

  4. Take a look at the Docker Compose file, where the secret file is defined in the secrets section and then referenced by the apiclient service:

    cat docker-compose.secrets.yml
    ---
    version: "3.9"
    secrets:
      jot:
        file: token1.jwt
    services:
      apiclient:
        build: .
        extra_hosts:
          - "host.docker.internal:host-gateway"
        secrets:
          - jot

Verify the Secret Isn’t Visible in the Container

  1. Run the app. Because we’ve made the JWT accessible within the container, authentication succeeds with the now‑familiar message:

    docker compose -f docker-compose.secrets.yml up --build
    ...
    apiclient-apiclient-1  | 200 Success apiKey1
    apiclient-apiclient-1 exited with code 0
  2. Display information about the container images, noting the container ID for the API client app (for sample output, see Step 1 in Examine the Container from Challenge 2):

    docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
  3. Inspect the container for the API client app. For <container_ID>, substitute the value from the CONTAINER ID field in the output from the previous step. Unlike the output in Step 2 of Examine the Container, there is no JWT= line at the start of the Env section:

    docker inspect <container_ID>
    "Env": [
      "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
      "LANG=C.UTF-8",
      "GPG_KEY=A035C8C19219BA821ECEA86B64E628F8D684696D",
      "PYTHON_VERSION=3.11.2",
      "PYTHON_PIP_VERSION=22.3.1",
      "PYTHON_SETUPTOOLS_VERSION=65.5.1",
      "PYTHON_GET_PIP_URL=https://github.com/pypa/get-pip/raw/1a96dc5acd0303c4700e026...",
      "PYTHON_GET_PIP_SHA256=d1d09b0f9e745610657a528689ba3ea44a73bd19c60f4c954271b790c..."
    ]

    So far, so good, but our secret is in the container filesystem at /run/secrets/jot. Maybe we can extract it from there using the same method as in Retrieve the Secret from the Container Image from Challenge 1.

  4. Change to the extract directory (which you created during Challenge 1) and export the container into a tar archive:

    cd extract
    docker export -o api2.tar <container_ID>
  5. Look for the secrets file in the tar file:

    tar tvf api2.tar | grep jot
    -rwxr-xr-x  0 0      0           0 Mon DD hh:mm run/secrets/jot

    Uh oh, the file with the JWT in it is visible. Didn’t we say embedding secrets in the container was “secure”? Are things just as bad as in Challenge 1?

  6. Let’s see – extract the secrets file from the tar file and look at its contents:

    tar --extract --file=api2.tar run/secrets/jot
    cat run/secrets/jot

    Good news! There’s no output from the cat command, meaning the run/secrets/jot file extracted from the container filesystem is empty – no secret to see in there! Even though a secrets artifact appears in the container, Docker is smart enough not to include the sensitive data itself in the exported container filesystem.

That said, even though this container configuration is secure, it has one shortcoming. It depends on the existence of a file called token1.jwt in the local filesystem when you run the container. If you rename the file, an attempt to restart the container fails. (You can try this yourself by renaming [not deleting!] token1.jwt and running the docker compose command from Step 1 again.)
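
If you want to try that experiment, here’s a minimal sketch (run it from the apiclient directory):

# Temporarily hide the secret's source file, watch the startup fail,
# then restore the file.
mv token1.jwt token1.jwt.hidden
docker compose -f docker-compose.secrets.yml up --build
mv token1.jwt.hidden token1.jwt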

So we are halfway there: the container uses secrets in a way that protects them from easy compromise, but the secret is still unprotected on the host. You don’t want secrets stored unencrypted in a plain text file. It’s time to bring in a secrets management tool.

Challenge 4: Use a Secrets Manager

A secrets manager helps you manage, retrieve, and rotate secrets throughout their lifecycles. There are a lot of secrets managers to choose from and they all fulfill a similar purpose:

  • Store secrets securely
  • Control access
  • Distribute them at run time
  • Enable secret rotation

You have a number of options for which secrets management tool to use.

For simplicity, this challenge uses Docker Swarm, but the principles are the same for many secrets managers.

In this challenge, you create a secret in Docker, copy over the secret and API client code, deploy the container, see if you can extract the secret, and rotate the secret.

Configure a Docker Secret

  1. As is tradition by now, change to the apiclient directory:

    cd ~/microservices-march/auth/apiclient
  2. Initialize Docker Swarm:

    docker swarm init
    Swarm initialized: current node (t0o4eix09qpxf4ma1rrs9omrm) is now a manager.
    ...
  3. Create a secret called jot based on the JWT stored in token1.jwt:

    docker secret create jot ./token1.jwt
    qe26h73nhb35bak5fr5east27
  4. Display information about the secret. Notice that the secret value (the JWT) is not itself displayed:

    docker secret inspect jot
    [
      {
        "ID": "qe26h73nhb35bak5fr5east27",
        "Version": {
          "Index": 11
        },
        "CreatedAt": "YYYY-MM-DDThh:mm:ss.msZ",
        "UpdatedAt": "YYYY-MM-DDThh:mm:ss.msZ",
        "Spec": {
          "Name": "jot",
          "Labels": {}
        }
      }
    ]

Use a Docker Secret

Using the Docker secret in the API client application code is exactly the same as using a locally created secret – you can read it from the /run/secrets/ filesystem. All you need to do is change how the secret is defined in your Docker Compose YAML file.

  1. Take a look at the Docker Compose YAML file. Notice the value true in the external field, indicating we are using a Docker Swarm secret:

    cat docker-compose.secretmgr.yml
    ---
    version: "3.9"
    secrets:
      jot:
        external: true
    services:
      apiclient:
        build: .
        image: apiclient
        extra_hosts:
          - "host.docker.internal:host-gateway"
        secrets:
          - jot

    So, we can expect this Compose file to work with our existing API client application code. Well, almost. While Docker Swarm (or any other container orchestration platform) brings a lot of extra value, it also introduces some additional complexity.

    Since docker compose does not work with external secrets, we’re going to have to use some Docker Swarm commands, docker stack deploy in particular. docker stack deploy doesn’t display the container’s console output, so we write the results to the container’s log and inspect the log afterward.

    To make things easier, we also use a while True loop to keep the container running.

  2. Copy the app for this challenge – the one that uses a secrets manager – to the working directory, overwriting the app.py file from Challenge 3. Displaying the contents of app.py, we see that the code is nearly identical to the code for Challenge 3. The only difference is the addition of the while True loop:

    cp ./app_versions/best_secretmgr.py ./app.py
    cat ./app.py
    ...
    while True:
        time.sleep(5)
        try:
            with urllib.request.urlopen(req) as response:
                the_page = response.read()
                message = response.getheader("X-MESSAGE")
                print("200 " + message, file=sys.stderr)
        except urllib.error.URLError as e:
            print(str(e.code) + " " + e.msg, file=sys.stderr)

Deploy the Container and Check the Logs

  1. Build the container (in previous challenges Docker Compose took care of this):

    docker build -t apiclient .
  2. Deploy the container:

    docker stack deploy --compose-file docker-compose.secretmgr.yml secretstack
    Creating network secretstack_default
    Creating service secretstack_apiclient
  3. List the running containers, noting the container ID for secretstack_apiclient (as before, the output is spread across multiple lines for readability).

    docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
    CONTAINER ID  ...  
    20d0c83a8b86  ... 
    ad9bdc05b07c  ... 
    
        ... NAMES                                             ...  
        ... secretstack_apiclient.1.0e9s4mag5tadvxs6op6lk8vmo ...  
        ... exciting_clarke                                   ...                                 
    
        ... IMAGE              CREATED          STATUS
        ... apiclient:latest   31 seconds ago   Up 30 seconds
        ... apiserver          2 hours ago      Up 2 hours
  4. Display the Docker log file; for <container_ID>, substitute the value from the CONTAINER ID field in the output from the previous step (here, 20d0c83a8b86). The log file shows a series of success messages, because we added the while True loop to the application code. Press Ctrl+c to exit the command.

    docker logs -f <container_ID>
    200 Success apiKey1
    200 Success apiKey1
    200 Success apiKey1
    200 Success apiKey1
    200 Success apiKey1
    200 Success apiKey1
    ...
    ^c

Try to Access the Secret

We know that no sensitive environment variables are set (but you can always check with the docker inspect command as in Step 2 of Examine the Container in Challenge 2).

From Challenge 3 we also know that the /run/secrets/jot file appears empty in an exported container filesystem, but you can check for yourself (for <container_ID>, substitute the value from Step 3 of Deploy the Container and Check the Logs):

cd extract
docker export -o api3.tar <container_ID>
tar --extract --file=api3.tar run/secrets/jot
cat run/secrets/jot

Success! You can’t get the secret from the container, nor read it directly from the Docker secret.

Rotate the Secret

Of course, with the right privileges we can create a service and configure it to read the secret into the log or set it as an environment variable. In addition, you might have noticed that communication between our API client and server is unencrypted (plain text).
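
For instance, here’s a hedged sketch of how someone with rights to create Swarm services could expose the secret (the leak service name is made up for this example):

# Attach the existing jot secret to a throwaway service and print it,
# which puts the JWT in the service log for anyone who can read logs.
docker service create --name leak --secret jot \
  alpine sh -c 'cat /run/secrets/jot; sleep 3600'
docker service logs leak
docker service rm leak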

So leakage of secrets is still possible with almost any secrets management system. One way to limit the possibility of resulting damage is to rotate (replace) secrets regularly.

With Docker Swarm, you can only delete and then re‑create secrets (Kubernetes allows dynamic update of secrets). You also can’t delete secrets attached to running services.

  1. List the running services:

    docker service ls
    ID             NAME                    MODE         ... 
    sl4mvv48vgjz   secretstack_apiclient   replicated   ... 
    
    
        ... REPLICAS   IMAGE              PORTS
        ... 1/1        apiclient:latest
  2. Delete the secretstack_apiclient service.

    docker service rm secretstack_apiclient
  3. Delete the secret and re‑create it with a new token:

    docker secret rm jot
    docker secret create jot ./token2.jwt
  4. Re‑create the service:

    docker stack deploy --compose-file docker-compose.secretmgr.yml secretstack
  5. Look up the container ID for apiclient (for sample output, see Step 3 in Deploy the Container and Check the Logs):

    docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
  6. Display the Docker log file, which shows a series of success messages. For <container_ID>, substitute the value from the CONTAINER ID field in the output from the previous step. Press Ctrl+c to exit the command.

    docker logs -f <container_ID>
    200 Success apiKey2
    200 Success apiKey2
    200 Success apiKey2
    200 Success apiKey2
    ...
    ^c

See the change from apiKey1 to apiKey2? You’ve rotated the secret.

In this tutorial, the API server still accepts both JWTs, but in a production environment you can deprecate older JWTs by requiring certain values for claims in the JWT or checking the expiration dates of JWTs.

Note also that if you’re using a secrets system that allows your secret to be updated, your code needs to reread the secret frequently so as to pick up new secret values.

Clean Up

To clean up the objects you created in this tutorial:

  1. Delete the secretstack_apiclient service.

    docker service rm secretstack_apiclient
  2. Delete the secret.

    docker secret rm jot
  3. Leave the swarm (assuming you created a swarm just for this tutorial).

    docker swarm leave --force
  4. Kill the running apiserver container.

    docker ps -a | grep "apiserver" | awk '{print $1}' | xargs docker kill
  5. Delete unwanted containers by listing and then deleting them.

    docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}"
    docker rm <container_ID>
  6. Delete any unwanted container images by listing and deleting them.

    docker image list   
    docker image rm <image_ID>

Next Steps

You can use this blog to implement the tutorial in your own environment or try it out in our browser‑based lab (register here). To learn more about secrets management for microservices, follow along with the other activities in Unit 2: Microservices Secrets Management 101.

To learn more about production‑grade JWT authentication with NGINX Plus, check out our documentation and read Authenticating API Clients with JWT and NGINX Plus on our blog.


The post NGINX Tutorial: How to Securely Manage Secrets in Containers appeared first on NGINX.

]]>
NGINX Tutorial: How to Deploy and Configure Microservices https://www.nginx.com/blog/nginx-tutorial-deploy-configure-microservices/ Tue, 07 Mar 2023 16:16:31 +0000 https://www.nginx.com/?p=71316 This post is one of four tutorials that help you put into practice concepts from Microservices March 2023: Start Delivering Microservices: How to Deploy and Configure Microservices (this post) How to Securely Manage Secrets in Containers How to Use GitHub Actions to Automate Microservices Canary Releases How to Use OpenTelemetry Tracing to Understand Your Microservices All apps [...]

Read More...

The post NGINX Tutorial: How to Deploy and Configure Microservices appeared first on NGINX.

]]>
This post is one of four tutorials that help you put into practice concepts from Microservices March 2023: Start Delivering Microservices:

All apps require configuration, but the considerations when configuring a microservice may not be the same as for a monolithic app. We can look to Factor 3 (Store config in the environment) of the twelve‑factor app for guidance applicable to both types of apps, but that guidance can be adapted for microservices apps. In particular, we can adapt the way we define the service configuration, provide the configuration to a service, and make a service available as a configuration value for other services that may depend on it.

For a conceptual understanding of how to adapt Factor 3 for microservices – specifically the best practices for configuration files, databases, and service discovery – read Best Practices for Configuring Microservices Apps on our blog. This post is a great way to put that knowledge into practice.

Note: Our intention in this tutorial is to illustrate some core concepts, not to show the right way to deploy microservices in production. While it uses a real “microservices” architecture, there are some important caveats:

  • The tutorial does not use a container orchestration framework such as Kubernetes or Nomad. This is so that you can learn about microservices concepts without getting bogged down in the specifics of a certain framework. The patterns introduced here are portable to a system running one of these frameworks.
  • The services are optimized for ease of understanding rather than software engineering rigor. The point is to look at a service’s role in the system and its patterns of communication, not the specifics of the code. For more information, see the README files of the individual services.

Tutorial Overview

This tutorial illustrates how Factor 3 concepts apply to microservices apps. In four challenges, you’ll explore some common microservices configuration patterns and deploy and configure a service using those patterns:

  • In Challenge 1 and Challenge 2 you explore the first pattern, which concerns where you locate the configuration for a microservices app. There are three typical locations:

    • The application code
    • The deployment script for the application
    • Outside sources accessed by the deployment script
  • In Challenge 3 you set up two more patterns: exposing the app to the outside world with NGINX as a reverse proxy and enabling service discovery using Consul.
  • In Challenge 4 you implement the final pattern: using an instance of your microservice as a “job runner” that performs a one-off action different from its usual function (in this case emulating a database migration).

The tutorial uses four technologies:

  • messenger – A simple chat API with message storage capabilities, created for this tutorial
  • NGINX Open Source – An entry point to the messenger service and the wider system at large
  • Consul – A dynamic service registry and key‑value store
  • RabbitMQ – A popular open source message broker that enables services to communicate asynchronously

Topology diagram showing NGINX Open Source and Consul template running together in a container. Consul template communicates with the Consul client. NGINX Open Source is a reverse proxy for the messenger service, which stores data in messenger_db and communicates with RabbitMQ.

Watch this video to get an overview of the tutorial. The steps do not exactly match this post, but the video helps in understanding the concepts.

Prerequisites and Set Up

Prerequisites

To complete the tutorial in your own environment, you need:

  • A Linux/Unix‑compatible environment
  • Basic familiarity with the Linux command line, JavaScript, and bash (but all code and commands are provided and explained, so you can still succeed with limited knowledge)
  • Docker and Docker Compose
  • Node.js 19.x or later

    • We tested version 19.x, but expect that newer versions of Node.js also work.
    • For detailed information about installing Node.js, see the README in the messenger service repository. You can also install asdf to get exactly the same Node.js version used in the containers.
  • curl (already installed on most systems)
  • The four technologies listed in Tutorial Overview: messenger (you’ll download it in the next section), NGINX Open Source, Consul, and RabbitMQ.

Set Up

  1. Start a terminal session (subsequent instructions will refer to this as the app terminal).
  2. In your home directory, create the microservices-march directory and clone the GitHub repositories for this tutorial into it. (You can also use a different directory name and adapt the instructions accordingly.)

    Note: Throughout the tutorial the prompt on the Linux command line is omitted, to make it easier to copy and paste the commands into your terminal. The tilde (~) represents your home directory.

    mkdir ~/microservices-march
    cd ~/microservices-march
    git clone https://github.com/microservices-march/platform.git --branch mm23-twelve-factor-start
    git clone https://github.com/microservices-march/messenger.git --branch mm23-twelve-factor-start
  3. Change to the platform repository and start Docker Compose:

    cd platform
    docker compose up -d --build

    This starts both RabbitMQ and Consul, which will be used in subsequent challenges.

    • The -d flag instructs Docker Compose to detach from the containers when they have started (otherwise the containers will remain attached to your terminal).
    • The --build flag instructs Docker Compose to rebuild all images on launch. This ensures that the images you are running stay updated through any potential changes to files.
  4. Change to the messenger repository and start Docker Compose:

    cd ../messenger
    docker compose up -d --build

    This starts the PostgreSQL database for the messenger service, which we’ll refer to as the messenger-database for the remainder of the tutorial.

Challenge 1: Define Application-Level Microservices Configuration

In this challenge you set up configuration in the first of the three locations we’ll look at in the tutorial: the application level. (Challenge 2 illustrates the second and third locations, deployment scripts and outside sources.)

The twelve‑factor app specifically excludes application‑level configuration, because such configuration doesn’t need to change between different deployment environments (which the twelve‑factor app calls deploys). Nonetheless, we cover all three types for completeness – the way you deal with each category as you develop, build, and deploy a service is different.

The messenger service is written in Node.js, with the entrypoint in app/index.mjs in the messenger repo. This line of the file:

app.use(express.json());

is an example of application‑level configuration. It configures the Express framework to deserialize request bodies that are of type application/json into JavaScript objects.

This logic is tightly coupled to your application code and isn’t what the twelve‑factor app considers “configuration”. However, in software everything depends on your situation, doesn’t it?

In the next two sections, you modify this line to implement two examples of application‑level configuration.

Example 1

In this example, you set the maximum size of a request body accepted by the messenger service. This size limit is set by the limit argument to the express.json function, as discussed in the Express API documentation. Here you add the limit argument to the configuration of the Express framework’s JSON middleware discussed above.

  1. In your preferred text editor, open app/index.mjs and replace:

    app.use(express.json())

    with:

    app.use(express.json({ limit: "20b" }));
  2. In the app terminal (the one you used in Set Up), change to the app directory and start the messenger service:

    cd app
    npm install
    node index.mjs
    messenger_service listening on port 4000
  3. Start a second, separate terminal session (which subsequent instructions call the client terminal) and send a POST request to the messenger service. The error message indicates that the request made it through – the body is under the 20-byte limit set in Step 1 – but that the content of the JSON payload is incorrect:

    curl -d '{ "text": "hello" }' -H "Content-Type: application/json" -X POST http://localhost:4000/conversations
    ...
    { "error": "Conversation must have 2 unique users" }
  4. Send a slightly longer message body (again in the client terminal). There’s much more output than in Step 3, including an error message that indicates this time the request body exceeds 20 bytes:

    curl -d '{ "text": "hello, world" }' -H "Content-Type: application/json" -X POST http://localhost:4000/conversations
    ...
    "PayloadTooLargeError: request entity too large"

Example 2

This example uses convict, a library that lets you define an entire configuration “schema” in a single file. It also illustrates two guidelines from Factor 3 of the twelve‑factor app:

  • Store configuration in environment variables – You modify the app so that the maximum body size is set using an environment variable (JSON_BODY_LIMIT) instead of being hardcoded in the app code.
  • Clearly define your service configuration – This is an adaptation of Factor 3 for microservices. If you’re unfamiliar with this concept, we recommend that you take a moment to read about it in Best Practices for Configuring Microservices Apps on our blog.

The example also sets up some “plumbing” you’ll take advantage of in Challenge 2: the messenger deployment script you’ll create in that challenge sets the JSON_BODY_LIMIT environment variable which you insert into the app code here, as an illustration of configuration specified in a deployment script.

  1. Open the convict configuration file, app/config/config.mjs, and add the following as a new key after the amqpport key:

    jsonBodyLimit: {
      doc: `The max size (with unit included) that will be parsed by the
            JSON middleware. Unit parsing is done by the
            https://www.npmjs.com/package/bytes library.
            ex: "100kb"`,
      format: String,
      default: null,
      env: "JSON_BODY_LIMIT",
    },

    The convict library takes care of handling the JSON_BODY_LIMIT environment variable, which you use to set the maximum body size on the command line in Step 3 below. Specifically, the library:

    • Pulls the value from the correct environment variable
    • Checks the variable’s type (String)
    • Enables access to it in the application under the jsonBodyLimit key
  2. In app/index.mjs replace:

    app.use(express.json({ limit: "20b" }));

    with

    app.use(express.json({ limit: config.get("jsonBodyLimit") }));
  3. In the app terminal (where you started the messenger service in Step 2 of Example 1), press Ctrl+c to stop the service. Then start it again, using the JSON_BODY_LIMIT environment variable to set the maximum body size to 27 bytes:

    ^c
    JSON_BODY_LIMIT=27b node index.mjs

    This is an example of modifying the configuration method when doing so makes sense for your use case – you’ve switched from hardcoding a value (in this case a size limit) in the app code to setting it with an environment variable, as recommended by the twelve-factor app.

    As mentioned above, in Challenge 2 the use of the JSON_BODY_LIMIT environment variable will become an example of the second location for configuration, when you use the messenger service’s deployment script to set the environment variable rather than setting it on the command line.

  4. In the client terminal, repeat the curl command from Step 4 of Example 1 (with the larger request body). Because you’ve now increased the size limit to 27 bytes, the request body no longer exceeds the limit and you get the error message that indicates the request was processed, but that the content of the JSON payload is incorrect:

    curl -d '{ "text": "hello, world" }' -H "Content-Type: application/json" -X POST http://localhost:4000/conversations
    { "error": "Conversation must have 2 unique users" }

    You can close the client terminal if you wish. You’ll issue all commands in the rest of the tutorial in the app terminal.

  5. In the app terminal, press Ctrl+c to stop the messenger service (you stopped and restarted the service in this terminal in Step 3 above).

    ^c
  6. Stop the messenger-database by running this command at the root of the messenger repo. You can safely ignore the error message shown, because the network is still in use by the infrastructure elements defined in the platform repository:

    docker compose down
    ...failed to remove network mm_2023....

Challenge 2: Create Deployment Scripts for a Service

Configuration should be strictly separated from code (otherwise how can it vary between deploys?)
– From Factor 3 of the twelve‑factor app

At first glance, you might interpret this as saying “do not check configuration in to source control”. In this challenge, you implement a common pattern for microservices environments that may seem to break this rule, but in reality respects the rule while providing valuable process improvements that are critical for microservices environments.

In this challenge, you create deployment scripts to mimic the functionality of infrastructure-as-code and deployment manifests which provide configuration to a microservice, modify the scripts to use external sources of configuration, set a secret, and then run the scripts to deploy services and their infrastructure.

You create the deployment scripts in a newly created infrastructure directory in the messenger repo. A directory called infrastructure (or some variation of that name) is a common pattern in modern microservice architectures, used to store artifacts related to deploying a service and its supporting infrastructure.

The benefits of this pattern include:

  • It assigns ownership of the service deployment and the deployment of service‑specific infrastructure (such as databases) to the team that owns the service.
  • The team can ensure changes to any of these elements go through its development process (code review, CI, etc.).
  • The team can easily make changes to how the service and its supporting infrastructure are deployed without depending on outside teams doing work for them.

As mentioned previously, our intention for the tutorial is not to show how to set up a real system, and the scripts you deploy in this challenge do not resemble a real production system. Rather, they illustrate some of the core concepts and problems that tool‑specific configuration solves when you deploy microservices‑related infrastructure, while keeping the scripts as tool‑agnostic as possible.

Create Initial Deployment Scripts

  1. In the app terminal, create an infrastructure directory at the root of the messenger repo and create files to contain the deployment scripts for the messenger service and the messenger-database. Depending on your environment, you might need to prefix the chmod commands with sudo:

    mkdir infrastructure
    cd infrastructure
    touch messenger-deploy.sh
    chmod +x messenger-deploy.sh
    touch messenger-db-deploy.sh
    chmod +x messenger-db-deploy.sh
  2. In your preferred text editor, open messenger-deploy.sh and add the following to create an initial deployment script for the messenger service:

    #!/bin/bash
    set -e
    
    JSON_BODY_LIMIT=20b
    
    docker run \
      --rm \
      -e JSON_BODY_LIMIT="${JSON_BODY_LIMIT}" \
      messenger

This script isn’t complete at this point, but it illustrates a couple concepts:

  • It assigns a value to environment variables by including that configuration directly in the deployment script.
  • It uses the -e flag on the docker run command to inject environment variables into the container at runtime.

It may seem redundant to set the value of environment variables this way, but it means that – no matter how complex this deployment script becomes – you can take a quick look at the very top of the script and understand how configuration data is being provided to the deployment.

Additionally, although a real deployment script may not explicitly invoke the docker run command, this sample script is meant to convey the core problems being solved by something like a Kubernetes manifest. When using a container orchestration system like Kubernetes, a deployment starts a container and the application configuration derived from your Kubernetes configuration files is made available to that container. Thus, we can consider this sample deployment file to be a minimal version of a deployment script that plays the same role as framework‑specific deployment files like Kubernetes manifests.

In a real development environment, you might check this file into source control and put it through code review. This gives the rest of your team an opportunity to comment on your settings and thus helps avoid incidents where misconfigured values lead to unexpected behavior. For example, in this screenshot a team member is rightly pointing out that a limit of 20 bytes for incoming JSON request bodies (set with JSON_BODY_LIMIT) is too low.

Screenshot of code review saying a limit of 20 bytes for the message body is too small

Modify Deployment Scripts to Query Configuration Values from External Sources

In this part of the challenge, you set up the third location for a microservice’s configuration: an external source that is queried at deployment time. Dynamically registering values and fetching them from an outside source at deployment time is a much better practice than hardcoding values, which must be updated constantly and can cause failures. For a discussion, see Best Practices for Configuring Microservices Apps on our blog.

At this point two infrastructure components are running in the background to provide auxiliary services required by the messenger service:

  1. RabbitMQ, owned by the Platform team in a real deployment (started in Step 3 of Set Up)
  2. The messenger-database, owned by your team in a real deployment (started in Step 4 of Set Up)

The convict schema for the messenger service in app/config/config.mjs defines the required environment variables corresponding to these pieces of external configuration. In this section you set up these two components to provide configuration by setting the values of the variables in a commonly accessible location so that they can be queried by the messenger service when it deploys.

The required connection information for RabbitMQ and the messenger-database is registered in the Consul Key/Value (KV) store, which is a common location accessible to all services as they are deployed. The Consul KV store is not a standard place to store this type of data, but this tutorial uses it for simplicity’s sake.
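
If the Consul KV HTTP API is new to you, here’s a quick illustration of the two calls that the scripts below rely on – a PUT writes a key, and a GET with ?raw=true returns just the stored value (example-key is a throwaway name used only for this sketch):

curl -X PUT -d 'hello' http://localhost:8500/v1/kv/example-key
curl http://localhost:8500/v1/kv/example-key?raw=true
# hello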

  1. Replace the contents of infrastructure/messenger-deploy.sh (created in Step 2 of the previous section) with the following:

    #!/bin/bash
    set -e
    
    # This configuration requires a new commit to change
    NODE_ENV=production
    PORT=4000
    JSON_BODY_LIMIT=100kb
    
    # Postgres database configuration by pulling information from 
    # the system
    POSTGRES_USER=$(curl -X GET http://localhost:8500/v1/kv/messenger-db-application-user?raw=true)
    PGPORT=$(curl -X GET http://localhost:8500/v1/kv/messenger-db-port?raw=true)
    PGHOST=$(curl -X GET http://localhost:8500/v1/kv/messenger-db-host?raw=true)
    
    # RabbitMQ configuration by pulling from the system
    AMQPHOST=$(curl -X GET http://localhost:8500/v1/kv/amqp-host?raw=true)
    AMQPPORT=$(curl -X GET http://localhost:8500/v1/kv/amqp-port?raw=true)
    
    docker run \
      --rm \
      -e NODE_ENV="${NODE_ENV}" \
      -e PORT="${PORT}" \
      -e JSON_BODY_LIMIT="${JSON_BODY_LIMIT}" \
      -e PGUSER="${POSTGRES_USER}" \
      -e PGPORT="${PGPORT}" \
      -e PGHOST="${PGHOST}" \
      -e AMQPPORT="${AMQPPORT}" \
      -e AMQPHOST="${AMQPHOST}" \
      messenger

    This script exemplifies two types of configuration:

    • Configuration specified directly in the deployment script – It sets the deployment environment (NODE_ENV) and port (PORT), and changes JSON_BODY_LIMIT to 100 KB, a more realistic value than 20 bytes.
    • Configuration queried from external sources – It fetches the values of the POSTGRES_USER, PGPORT, PGHOST, AMQPHOST, and AMQPPORT environment variables from the Consul KV store. You set the values of the environment variables in the Consul KV store in the following two steps.
  2. Open messenger-db-deploy.sh and add the following to create an initial deployment script for the messenger-database:

    #!/bin/bash
    set -e
    
    PORT=5432
    POSTGRES_USER=postgres
    
    docker run \
      -d \
      --rm \
      --name messenger-db \
      -v db-data:/var/lib/postgresql/data/pgdata \
      -e POSTGRES_USER="${POSTGRES_USER}" \
      -e POSTGRES_PASSWORD="${POSTGRES_PASSWORD}" \
      -e PGPORT="${PORT}" \
      -e PGDATA=/var/lib/postgresql/data/pgdata \
      --network mm_2023 \
      postgres:15.1
    
    # Register details about the database with Consul
    curl -X PUT http://localhost:8500/v1/kv/messenger-db-port \
      -H "Content-Type: application/json" \
      -d "${PORT}"
    
    curl -X PUT http://localhost:8500/v1/kv/messenger-db-host \
      -H "Content-Type: application/json" \
      -d 'messenger-db' # This matches the "--name" flag above
                        # (the hostname)
    
    curl -X PUT http://localhost:8500/v1/kv/messenger-db-application-user \
      -H "Content-Type: application/json" \
      -d "${POSTGRES_USER}"

    In addition to defining configuration that can be queried by the messenger service at deployment time, the script illustrates the same two concepts as the initial script for the messenger service from Create Initial Deployment Scripts:

    • It specifies certain configuration directly in the deployment script, in this case to tell the PostgreSQL database the port on which to run and the username of the default user.
    • It runs Docker with the -e flag to inject environment variables into the container at runtime. It also sets the name of the running container to messenger-db, which becomes the hostname of the database in the Docker network created when you launched the platform services in Step 3 of Set Up.
  3. In a real deployment, it’s usually the Platform team (or similar) that handles the deployment and maintenance of a service like RabbitMQ in the platform repo, like you do for the messenger-database in the messenger repo. The Platform team then makes sure that the location of that infrastructure is discoverable by services that depend on it. For the purposes of the tutorial, set the RabbitMQ values yourself:

    curl -X PUT --silent --output /dev/null --show-error --fail \
      -H "Content-Type: application/json" \
      -d "rabbitmq" \
      http://localhost:8500/v1/kv/amqp-host
    
    curl -X PUT --silent --output /dev/null --show-error --fail \
      -H "Content-Type: application/json" \
      -d "5672" \
      http://localhost:8500/v1/kv/amqp-port

    (You might wonder why amqp is used to define RabbitMQ variables – it’s because AMQP is the protocol used by RabbitMQ.)

Set a Secret in the Deployment Scripts

There is only one (critical) piece of data missing in the deployment scripts for the messenger service – the password for the messenger-database!

Note: Secrets management is not the focus of this tutorial, so for simplicity the secret is defined in deployment files. Never do this in an actual environment – development, test, or production – it creates a huge security risk.

To learn about proper secrets management, check out Unit 2, Microservices Secrets Management 101 of Microservices March 2023. (Spoiler: a secrets management tool is the only truly secure method for storing secrets).

  1. Replace the contents of infrastructure/messenger-db-deploy.sh with the following to store the password secret for the messenger-database in the Consul KV store:

    #!/bin/bash
    set -e
    
    PORT=5432
    POSTGRES_USER=postgres
    # NOTE: Never do this in a real-world deployment. Store passwords
    # only in an encrypted secrets store.
    POSTGRES_PASSWORD=postgres
    
    docker run \
      --rm \
      --name messenger-db-primary \
      -d \
      -v db-data:/var/lib/postgresql/data/pgdata \
      -e POSTGRES_USER="${POSTGRES_USER}" \
      -e POSTGRES_PASSWORD="${POSTGRES_PASSWORD}" \
      -e PGPORT="${PORT}" \
      -e PGDATA=/var/lib/postgresql/data/pgdata \
      --network mm_2023 \
      postgres:15.1
    
    echo "Register key messenger-db-port\n"
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-port \
      -H "Content-Type: application/json" \
      -d "${PORT}"
    
    echo "Register key messenger-db-host\n"
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-host \
      -H "Content-Type: application/json" \
      -d 'messenger-db-primary' # This matches the "--name" flag above
                                # which for our setup means the hostname
    
    echo "Register key messenger-db-application-user\n"
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-application-user \
      -H "Content-Type: application/json" \
      -d "${POSTGRES_USER}"
    
    echo "Register key messenger-db-password-never-do-this\n"
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-password-never-do-this \
      -H "Content-Type: application/json" \
      -d "${POSTGRES_PASSWORD}"
    
    printf "\nDone registering postgres details with Consul\n"
  2. Replace the contents of infrastructure/messenger-deploy.sh with the following to fetch the messenger-database password secret from the Consul KV store:

    #!/bin/bash
    set -e
    
    # This configuration requires a new commit to change
    NODE_ENV=production
    PORT=4000
    JSON_BODY_LIMIT=100kb
    
    # Postgres database configuration by pulling information from 
    # the system
    POSTGRES_USER=$(curl -X GET http://localhost:8500/v1/kv/messenger-db-application-user?raw=true)
    PGPORT=$(curl -X GET http://localhost:8500/v1/kv/messenger-db-port?raw=true)
    PGHOST=$(curl -X GET http://localhost:8500/v1/kv/messenger-db-host?raw=true)
    # NOTE: Never do this in a real-world deployment. Store passwords
    # only in an encrypted secrets store.
    PGPASSWORD=$(curl -X GET http://localhost:8500/v1/kv/messenger-db-password-never-do-this?raw=true)
    
    # RabbitMQ configuration by pulling from the system
    AMQPHOST=$(curl -X GET http://localhost:8500/v1/kv/amqp-host?raw=true)
    AMQPPORT=$(curl -X GET http://localhost:8500/v1/kv/amqp-port?raw=true)
    
    docker run \
      --rm \
      -d \
      -e NODE_ENV="${NODE_ENV}" \
      -e PORT="${PORT}" \
      -e JSON_BODY_LIMIT="${JSON_BODY_LIMIT}" \
      -e PGUSER="${POSTGRES_USER}" \
      -e PGPORT="${PGPORT}" \
      -e PGHOST="${PGHOST}" \
      -e PGPASSWORD="${PGPASSWORD}" \
      -e AMQPPORT="${AMQPPORT}" \
      -e AMQPHOST="${AMQPHOST}" \
      --network mm_2023 \
      messenger

Run the Deployment Scripts

  1. Change to the app directory in the messenger repo and build the Docker image for the messenger service:

    cd ../app
    docker build -t messenger .
  2. Verify that only the containers that belong to the platform service are running:

    docker ps --format '{{.Names}}'
    consul-server
    consul-client
    rabbitmq
  3. Change to the root of the messenger repository and deploy the messenger-database and the messenger service:

    cd ..
    ./infrastructure/messenger-db-deploy.sh
    ./infrastructure/messenger-deploy.sh

    The messenger-db-deploy.sh script starts the messenger-database and registers the appropriate information with the system (which in this case is the Consul KV store).

    The messenger-deploy.sh script then starts the application and pulls the configuration registered by messenger-db-deploy.sh from the system (again, the Consul KV store).

    Hint: If a container fails to start, remove the -d flag from the docker run command (the -d \ line) in the deployment script and run the script again. The container then starts in the foreground, which means its logs appear in the terminal and might identify the problem. When you resolve the problem, restore the -d \ line so that the actual container runs in the background.

  4. Send a simple health‑check request to the application to verify that deployment succeeded:

    curl localhost:4000/health
    curl: (7) Failed to connect to localhost port 4000 after 11 ms: Connection refused

    Oops, failure! As it turns out, you are still missing one critical piece of configuration and the messenger service is not exposed to the wider system. It’s running happily inside the mm_2023 network, but that network is only accessible from within Docker.

  5. Stop the running container in preparation for creating a new image in the next challenge:

    docker rm $(docker stop $(docker ps -a -q --filter ancestor=messenger --format="{{.ID}}"))

Challenge 3: Expose a Service to the Outside World

In a production environment, you don’t generally expose services directly. Instead, you follow a common microservices pattern and place a reverse proxy service in front of your main service.

In this challenge, you expose the messenger service to the outside world by setting up service discovery: the registration of new service information and dynamic updating of that information as accessed by other services. To do this, you use these technologies:

  • Consul, a dynamic service registry, and Consul template, a tool for dynamically updating a file based on Consul data
  • NGINX Open Source, as a reverse proxy and load balancer that exposes a single entry point for your messenger service, which is composed of multiple individual instances of the application running in containers

To learn more about service discovery, see Making a Service Available as Configuration in Best Practices for Configuring Microservices Apps on our blog.

Set Up Consul

The app/consul/index.mjs file in the messenger repo contains all the code necessary to register the messenger service with Consul at startup and deregister it at graceful shutdown. It exposes one function, register, which registers any newly deployed service with Consul’s service registry.

  1. In your preferred text editor, open app/index.mjs and add the following snippet after the other import statements, to import the register function from app/consul/index.mjs:

    import { register as registerConsul } from "./consul/index.mjs";

    Then modify the SERVER START section at the end of the script as shown, to call registerConsul() after the application has started:

    /* =================
      SERVER START
    ================== */
    app.listen(port, async () => {
      console.log(`messenger_service listening on port ${port}`);
      registerConsul();
    });
    
    export default app;
  2. Open the convict schema in app/config/config.mjs and add the following configuration values after the jsonBodyLimit key you added in Step 1 of Example 2.

      consulServiceName: {
        doc: "The name by which the service is registered in Consul. If not specified, the service is not registered",
        format: "*",
        default: null,
        env: "CONSUL_SERVICE_NAME",
      },
      consulHost: {
        doc: "The host where the Consul client runs",
        format: String,
        default: "consul-client",
        env: "CONSUL_HOST",
      },
      consulPort: {
        doc: "The port for the Consul client",
        format: "port",
        default: 8500,
        env: "CONSUL_PORT",
      },

    This configures the name under which a new service is registered and defines the hostname and port for the Consul client. In the next step you modify the deployment script for the messenger service to include this new Consul connection and service registration information.

  3. Open infrastructure/messenger-deploy.sh and replace its contents with the following to include in the messenger service configuration the Consul connection and service registration information you set in the previous step:

    #!/bin/bash
    set -e
    
    # This configuration requires a new commit to change
    NODE_ENV=production
    PORT=4000
    JSON_BODY_LIMIT=100kb
    
    CONSUL_SERVICE_NAME="messenger"
    
    # Consul host and port must be provided to each deployment script
    # since we cannot query Consul until we know them
    CONSUL_HOST="${CONSUL_HOST}"
    CONSUL_PORT="${CONSUL_PORT}"
    
    # Postgres database configuration by pulling information from 
    # the system
    POSTGRES_USER=$(curl -X GET "http://localhost:8500/v1/kv/messenger-db-application-user?raw=true")
    PGPORT=$(curl -X GET "http://localhost:8500/v1/kv/messenger-db-port?raw=true")
    PGHOST=$(curl -X GET "http://localhost:8500/v1/kv/messenger-db-host?raw=true")
    # NOTE: Never do this in a real-world deployment. Store passwords
    # only in an encrypted secrets store.
    PGPASSWORD=$(curl -X GET "http://localhost:8500/v1/kv/messenger-db-password-never-do-this?raw=true")
    
    # RabbitMQ configuration by pulling from the system
    AMQPHOST=$(curl -X GET "http://localhost:8500/v1/kv/amqp-host?raw=true")
    AMQPPORT=$(curl -X GET "http://localhost:8500/v1/kv/amqp-port?raw=true")
    
    docker run \
      --rm \
      -d \
      -e NODE_ENV="${NODE_ENV}" \
      -e PORT="${PORT}" \
      -e JSON_BODY_LIMIT="${JSON_BODY_LIMIT}" \
      -e PGUSER="${POSTGRES_USER}" \
      -e PGPORT="${PGPORT}" \
      -e PGHOST="${PGHOST}" \
      -e PGPASSWORD="${PGPASSWORD}" \
      -e AMQPPORT="${AMQPPORT}" \
      -e AMQPHOST="${AMQPHOST}" \
      -e CONSUL_HOST="${CONSUL_HOST}" \
      -e CONSUL_PORT="${CONSUL_PORT}" \
      -e CONSUL_SERVICE_NAME="${CONSUL_SERVICE_NAME}" \
      --network mm_2023 \
      messenger

    The main things to note are:

    • The CONSUL_SERVICE_NAME environment variable tells the messenger service instance what name to use as it registers itself with Consul.
    • The CONSUL_HOST and CONSUL_PORT environment variables are for the Consul client running at the location where the deployment script runs.

    Note: In a real‑world deployment, this is an example of configuration which must be agreed upon between teams – the team responsible for Consul must provide the CONSUL_HOST and CONSUL_PORT environment variables in all environments since a service cannot query Consul without this connection information.

  4. In the app terminal, change to the app directory, stop any running instances of the messenger service, and rebuild the Docker image to bake in the new service registration code:

    cd app
    docker rm $(docker stop $(docker ps -a -q --filter ancestor=messenger --format="{{.ID}}"))
    docker build -t messenger .
  5. Navigate to http://localhost:8500 in a browser to see the Consul UI in action (though nothing interesting is happening yet).

  6. At the root of the messenger repository, run the deployment script to start an instance of the messenger service:

    CONSUL_HOST=consul-client CONSUL_PORT=8500 ./infrastructure/messenger-deploy.sh
  7. In the Consul UI in the browser, click Services in the header bar to verify that a single messenger service is running.

    Consul UI Services tab with one instance each of consul and messenger services

  8. Run the deployment script a few more times to start more instances of the messenger service. Verify in the Consul UI that they’re running.

    CONSUL_HOST=consul-client CONSUL_PORT=8500 ./infrastructure/messenger-deploy.sh

    Consul UI 'messenger' tab with five instances of the service
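
    You can also list the registered messenger instances from the command line by querying the Consul catalog API directly (the addresses in the output below are illustrative):

    curl -s http://localhost:8500/v1/catalog/service/messenger | jq '.[] | "\(.ServiceAddress):\(.ServicePort)"'
    "172.18.0.5:4000"
    "172.18.0.6:4000"
    "172.18.0.7:4000"
    "172.18.0.8:4000"
    "172.18.0.9:4000"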

Set Up NGINX

The next step is to add NGINX Open Source as a reverse proxy and load balancer to route incoming traffic to all the running messenger instances.

  1. In the app terminal, change directory to the root of the messenger repo, then create a directory called load-balancer and three files inside it:

    mkdir load-balancer
    cd load-balancer
    touch nginx.ctmpl
    touch consul-template-config.hcl
    touch Dockerfile

    The Dockerfile defines the container where NGINX and Consul template run. Consul template uses the other two files to dynamically update the NGINX upstream group whenever the set of messenger service instances registered with Consul changes (instances come up or go down).

  2. Open the nginx.ctmpl file created in Step 1 and add the following NGINX configuration snippet, which Consul template uses to dynamically update the NGINX upstream group:

    upstream messenger_service {
        {{- range service "messenger" }}
        server {{ .Address }}:{{ .Port }};
        {{- end }}
    }
    
    server {
        listen 8085;
        server_name localhost;
    
        location / {
            proxy_pass http://messenger_service;
            add_header Upstream-Host $upstream_addr;
        }
    }

    This snippet adds the IP address and port number of each messenger service instance as registered with Consul to the NGINX messenger_service upstream group. NGINX proxies incoming requests to the dynamically defined set of upstream service instances.

  3. Open the consul-template-config.hcl file created in Step 1 and add the following config:

    consul {
      address = "consul-client:8500"
    
      retry {
        enabled  = true
        attempts = 12
        backoff  = "250ms"
      }
    }
    template {
      source      = "/usr/templates/nginx.ctmpl"
      destination = "/etc/nginx/conf.d/default.conf"
      perms       = 0600
      command     = "if [ -e /var/run/nginx.pid ]; then nginx -s reload; else nginx; fi"
    }

    This config for Consul template tells it to re‑render the source template (the NGINX configuration snippet created in the previous step), place it at the specified destination, and finally run the specified command (which tells NGINX to reload its configuration).

    In practice this means that a new default.conf file is created every time a service instance is registered, updated, or deregistered in Consul. NGINX then reloads its configuration with no downtime, ensuring NGINX has an up-to-date, healthy set of servers (messenger service instances) to which it can send traffic.
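
    If you want to preview what Consul template will render before building anything, you can optionally run it in dry-run mode. This is purely a sanity check and assumes the consul-template binary is installed on your host and that Consul is reachable at localhost:8500 (as in this tutorial); it prints the rendered configuration to stdout instead of writing the destination file or running the reload command:

    # Optional preview of the rendered NGINX config (run from the load-balancer directory)
    consul-template \
      -consul-addr "localhost:8500" \
      -template "nginx.ctmpl:/tmp/default.conf.preview" \
      -dry -once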

  4. Open the Dockerfile created in Step 1 and add the following contents, which define the image for the NGINX service. (You don’t need to understand the Dockerfile for the purposes of this tutorial, but the code is documented in‑line for your convenience.)

    FROM nginx:1.23.1
    
    ARG CONSUL_TEMPLATE_VERSION=0.30.0
    
    # Set an environment variable for the location of the Consul
    # cluster. By default, it tries to resolve to consul-client:8500
    # which is the behavior if Consul is running as a container in the 
    # same host and reachable from this NGINX container under the
    # hostname consul-client. But this environment variable can also be
    # overridden as the container starts if we want to resolve to
    # another address.
    
    ENV CONSUL_URL consul-client:8500
    
    # Download the specified version of Consul template
    ADD https://releases.hashicorp.com/consul-template/${CONSUL_TEMPLATE_VERSION}/consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip /tmp
    
    RUN apt-get update \
      && apt-get install -y --no-install-recommends dumb-init unzip \
      && unzip /tmp/consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip -d /usr/local/bin \
      && rm -rf /tmp/consul-template_${CONSUL_TEMPLATE_VERSION}_linux_amd64.zip
    
    COPY consul-template-config.hcl ./consul-template-config.hcl
    COPY nginx.ctmpl /usr/templates/nginx.ctmpl
    
    EXPOSE 8085
    
    STOPSIGNAL SIGQUIT
    
    CMD ["dumb-init", "consul-template", "-config=consul-template-config.hcl"]
  5. Build a Docker image:

    docker build -t messenger-lb .
  6. Change to the root of the messenger directory and create a file named messenger-load-balancer-deploy.sh as the deployment script for the NGINX service (just like the scripts for the other services you have deployed throughout the tutorial). Depending on your environment, you might need to prefix the chmod command with sudo:

    cd ..
    touch infrastructure/messenger-load-balancer-deploy.sh
    chmod +x infrastructure/messenger-load-balancer-deploy.sh
  7. Open messenger-load-balancer-deploy.sh and add the following contents:

    #!/bin/bash
    set -e
    
    # Consul host and port are included in each host since we
    # cannot query Consul until we know them
    CONSUL_HOST="${CONSUL_HOST}"
    CONSUL_PORT="${CONSUL_PORT}"
    
    docker run \
      --rm \
      -d \
      --name messenger-lb \
      -e CONSUL_URL="${CONSUL_HOST}:${CONSUL_PORT}"  \
      -p 8085:8085 \
      --network mm_2023 \
      messenger-lb
  8. Now that you have everything in place, deploy the NGINX service:

    CONSUL_HOST=consul-client CONSUL_PORT=8500 ./infrastructure/messenger-load-balancer-deploy.sh
  9. See if you can access the messenger service externally:

    curl -X GET http://localhost:8085/health
    OK

    It works! NGINX is now load balancing across all of the messenger service instances you created. You can tell because the Upstream-Host response header (added in nginx.ctmpl) reports the same messenger service addresses as the ones shown in the Consul UI in Step 8 of the previous section, as the check below illustrates.
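
    To watch the load balancing in action, repeat the request a few times and extract the Upstream-Host header; the address should rotate through the registered instances (the values below are illustrative):

    for i in 1 2 3; do curl -s -i http://localhost:8085/health | grep Upstream-Host; done
    Upstream-Host: 172.18.0.5:4000
    Upstream-Host: 172.18.0.6:4000
    Upstream-Host: 172.18.0.7:4000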

Challenge 4: Migrate a Database Using a Service as a Job Runner

Large applications often make use of “job runners” with small worker processes that can be used to do one‑off tasks like modifying data (examples are Sidekiq and Celery). These tools often require additional supporting infrastructure such as Redis or RabbitMQ. In this case, you use the messenger service itself as a “job runner” to run one‑off tasks. This makes sense because it’s already small, is fully capable of interacting with the database and the other pieces of infrastructure on which it depends, and runs completely separately from the application that is serving traffic.

There are three benefits to doing this:

  1. The job runner (including the scripts it runs) goes through exactly the same checks and review process as the production service.
  2. Configuration values such as database users can easily be changed to make the production deployment more secure. For example, you can run the production service with a “low privilege” user that can only write and query from existing tables. You can configure a different service instance to make changes to the database structure as a higher‑privileged user able to create and remove tables.
  3. Some teams run jobs from instances that are also handling production service traffic. This is dangerous because issues with a job can impact the other functions the application in that container is performing. Avoiding situations like that is why we’re doing microservices in the first place, isn’t it?

In this challenge you explore how an artifact can be modified to fill a new role: you change some database configuration values, migrate the messenger database to use the new values, and then test the service in action.

Migrate the messenger Database

For a real‑world production deployment, you might create two distinct users with different permissions: an “application user” and a “migrator user”. For simplicity’s sake, in this example you use the default user as the application user and create a migrator user with superuser privileges. In a real situation, it’s worth spending more time deciding which specific minimal permissions are needed by each user based on its role.
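
As a purely illustrative sketch of what that separation could look like (the user name, password, and grants below are hypothetical and not used elsewhere in this tutorial, so you don’t run this as part of the tutorial), the application user might get data access only while DDL rights stay with the migrator:

    # Hypothetical least-privilege application user: data access only, no DDL
    echo "CREATE USER messenger_app WITH PASSWORD 'app_password';
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO messenger_app;" \
      | docker exec -i messenger-db-primary psql -U postgres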

  1. In the app terminal, create a new PostgreSQL user with superuser privileges:

    echo "CREATE USER messenger_migrator WITH SUPERUSER PASSWORD 'migrator_password';" | docker exec -i messenger-db-primary psql -U postgres
  2. Open the database deployment script (infrastructure/messenger-db-deploy.sh) and replace its contents to add the new user’s credentials.

    Note: Let’s take the time to reiterate – for a real‑world deployment, NEVER put secrets like database credentials in a deployment script or anywhere other than a secrets management tool. For details, see Unit 2: Microservices Secrets Management 101 of Microservices March 2023.

    #!/bin/bash
    set -e
    
    PORT=5432
    POSTGRES_USER=postgres
    # NOTE: Never do this in a real-world deployment. Store passwords
    # only in an encrypted secrets store.
    # Because we’re focusing on other concepts in this tutorial, we
    # set the password this way here for convenience.
    POSTGRES_PASSWORD=postgres
    
    # Migration user
    POSTGRES_MIGRATOR_USER=messenger_migrator
    # NOTE: As above, never do this in a real deployment.
    POSTGRES_MIGRATOR_PASSWORD=migrator_password
    
    docker run \
      --rm \
      --name messenger-db-primary \
      -d \
      -v db-data:/var/lib/postgresql/data/pgdata \
      -e POSTGRES_USER="${POSTGRES_USER}" \
      -e POSTGRES_PASSWORD="${POSTGRES_PASSWORD}" \
      -e PGPORT="${PORT}" \
      -e PGDATA=/var/lib/postgresql/data/pgdata \
      --network mm_2023 \
      postgres:15.1
    
    echo "Register key messenger-db-port\n"
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-port \
      -H "Content-Type: application/json" \
      -d "${PORT}"
    
    echo "Register key messenger-db-host\n"
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-host \
      -H "Content-Type: application/json" \
      -d 'messenger-db-primary' # This matches the "--name" flag above
                                # which for our setup means the hostname
    
    echo "Register key messenger-db-application-user\n"
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-application-user \
      -H "Content-Type: application/json" \
      -d "${POSTGRES_USER}"
    
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-password-never-do-this \
      -H "Content-Type: application/json" \
      -d "${POSTGRES_PASSWORD}"
    
    echo "Register key messenger-db-application-user\n"
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-migrator-user \
      -H "Content-Type: application/json" \
      -d "${POSTGRES_MIGRATOR_USER}"
    
    curl -X PUT --silent --output /dev/null --show-error --fail http://localhost:8500/v1/kv/messenger-db-migrator-password-never-do-this \
      -H "Content-Type: application/json" \
      -d "${POSTGRES_MIGRATOR_PASSWORD}"
    
    printf "\nDone registering postgres details with Consul\n"

    This change just adds the migrator user to the set of users that is set in Consul after the database deploys.

  3. Create a new file in the infrastructure directory called messenger-db-migrator-deploy.sh (again, you might need to prefix the chmod command with sudo):

    touch infrastructure/messenger-db-migrator-deploy.sh
    chmod +x infrastructure/messenger-db-migrator-deploy.sh
  4. Open messenger-db-migrator-deploy.sh and add the following:

    #!/bin/bash
    set -e
    
    # This configuration requires a new commit to change
    NODE_ENV=production
    PORT=4000
    JSON_BODY_LIMIT=100kb
    
    CONSUL_SERVICE_NAME="messenger-migrator"
    
    # Consul host and port are included in each host since we
    # cannot query Consul until we know them
    CONSUL_HOST="${CONSUL_HOST}"
    CONSUL_PORT="${CONSUL_PORT}"
    
    # Get the migrator user name and password
    POSTGRES_USER=$(curl -X GET "http://localhost:8500/v1/kv/messenger-db-migrator-user?raw=true")
    PGPORT=$(curl -X GET "http://localhost:8500/v1/kv/messenger-db-port?raw=true")
    PGHOST=$(curl -X GET http://localhost:8500/v1/kv/messenger-db-host?raw=true)
    # NOTE: Never do this in a real-world deployment. Store passwords
    # only in an encrypted secrets store.
    PGPASSWORD=$(curl -X GET "http://localhost:8500/v1/kv/messenger-db-migrator-password-never-do-this?raw=true")
    
    # RabbitMQ configuration by pulling from the system
    AMQPHOST=$(curl -X GET "http://localhost:8500/v1/kv/amqp-host?raw=true")
    AMQPPORT=$(curl -X GET "http://localhost:8500/v1/kv/amqp-port?raw=true")
    
    docker run \
      --rm \
      -d \
      --name messenger-migrator \
      -e NODE_ENV="${NODE_ENV}" \
      -e PORT="${PORT}" \
      -e JSON_BODY_LIMIT="${JSON_BODY_LIMIT}" \
      -e PGUSER="${POSTGRES_USER}" \
      -e PGPORT="${PGPORT}" \
      -e PGHOST="${PGHOST}" \
      -e PGPASSWORD="${PGPASSWORD}" \
      -e AMQPPORT="${AMQPPORT}" \
      -e AMQPHOST="${AMQPHOST}" \
      -e CONSUL_HOST="${CONSUL_HOST}" \
      -e CONSUL_PORT="${CONSUL_PORT}" \
      -e CONSUL_SERVICE_NAME="${CONSUL_SERVICE_NAME}" \
      --network mm_2023 \
      messenger

    This script is quite similar to the infrastructure/messenger-deploy.sh script in its final form, which you created in Step 3 of Set Up Consul. The main difference is that the CONSUL_SERVICE_NAME is messenger-migrator instead of messenger, and the PGUSER corresponds to the “migrator” superuser you created in Step 1 above.

    It’s important that the CONSUL_SERVICE_NAME is messenger-migrator. If it were set to messenger, NGINX would automatically put this service into rotation to receive API calls, even though it’s not meant to handle any traffic.

    The script deploys a short‑lived instance in the role of migrator. This prevents any issues with the migration from affecting the serving of traffic by the main messenger service instances.

  5. Redeploy the PostgreSQL database. Because you are using bash scripts in this tutorial, you need to stop and restart the database service. In a production application, you typically just run an infrastructure‑as‑code script instead, to add only the elements that have changed.

    docker stop messenger-db-primary
    CONSUL_HOST=consul-client CONSUL_PORT=8500 ./infrastructure/messenger-db-deploy.sh
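
    Optionally, confirm that the migrator user’s key is now present in the Consul KV store by reading it back the same way the deployment scripts do:

    curl -s "http://localhost:8500/v1/kv/messenger-db-migrator-user?raw=true"
    messenger_migrator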
  6. Deploy the PostgreSQL database migrator service:

    CONSUL_HOST=consul-client CONSUL_PORT=8500 ./infrastructure/messenger-db-migrator-deploy.sh
  7. Verify that the instance is running as expected:

    docker ps --format "{{.Names}}"
    ...
    messenger-migrator

    You can also verify in the Consul UI that the database migrator service has correctly registered itself with Consul as messenger-migrator (again, it doesn’t register under the messenger name because it doesn’t handle traffic):

    Consul UI Services tab with one instance each of consul and messenger-migrator services, and four instances of messenger service
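
    You can also confirm the separate registrations from the command line by listing all service names known to Consul (the exact output depends on your setup, but the service names are what matters here):

    curl -s http://localhost:8500/v1/catalog/services | jq
    {
      "consul": [],
      "messenger": [],
      "messenger-migrator": []
    }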

  8. Now for the final step – run the database migration scripts! These scripts don’t resemble real‑world database migration scripts, but they do use the messenger-migrator service to run database‑specific tasks. Once the database has been migrated, stop the messenger-migrator service:

    docker exec -i -e PGDATABASE=postgres -e CREATE_DB_NAME=messenger messenger-migrator node scripts/create-db.mjs
    docker exec -i messenger-migrator node scripts/create-schema.mjs
    docker exec -i messenger-migrator node scripts/create-seed-data.mjs
    docker stop messenger-migrator

Test the messenger Service in Action

Now that you have migrated the messenger database to its final format, the messenger service is finally ready for you to watch in action! To do this, you run some basic curl queries against the NGINX service (you configured NGINX as the system’s entry point in Set Up NGINX).

Some of the following commands use the jq command‑line utility to format the JSON output. You can install it if necessary or omit it from the commands if you wish.
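
If jq isn’t already available, it’s packaged under that name by the common package managers; on Debian or Ubuntu, for example:

    sudo apt-get install -y jq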

  1. Create a conversation:

    curl -d '{"participant_ids": [1, 2]}' -H "Content-Type: application/json" -X POST 'http://localhost:8085/conversations'
    {
      "conversation": { "id": "1", "inserted_at": "YYYY-MM-DDT06:41:59.000Z" }
    }
  2. Send a message to the conversation from a user with ID 1:

    curl -d '{"content": "This is the first message"}' -H "User-Id: 1" -H "Content-Type: application/json" -X POST 'http://localhost:8085/conversations/1/messages' | jq
    {
      "message": {
        "id": "1",
        "content": "This is the first message",
        "index": 1,
        "user_id": 1,
        "username": "James Blanderphone",
        "conversation_id": 1,
        "inserted_at": "YYYY-MM-DDT06:42:15.000Z"
      }
    }
  3. Reply with a message from a different user (with ID 2):

    curl -d '{"content": "This is the second message"}' -H "User-Id: 2" -H "Content-Type: application/json" -X POST 'http://localhost:8085/conversations/1/messages' | jq
    {
      "message": {
        "id": "2",
        "content": "This is the second message",
        "index": 2,
        "user_id": 2,
        "username": "Normalavian Ropetoter",
        "conversation_id": 1,
        "inserted_at": "YYYY-MM-DDT06:42:25.000Z"
      }
    }
  4. Fetch the messages:

    curl -X GET 'http://localhost:8085/conversations/1/messages' | jq
    {
      "messages": [
        {
          "id": "1",
          "content": "This is the first message",
          "user_id": "1",
          "channel_id": "1",
          "index": "1",
          "inserted_at": "YYYY-MM-DDT06:42:15.000Z",
          "username": "James Blanderphone"
        },
        {
          "id": "2",
          "content": "This is the second message",
          "user_id": "2",
          "channel_id": "1",
          "index": "2",
          "inserted_at": "YYYY-MM-DDT06:42:25.000Z",
          "username": "Normalavian Ropetoter"
        }
      ]
    }

Clean Up

You have created a significant number of containers and images throughout this tutorial! Use the following commands to remove any Docker containers and images you don’t want to keep.

  • To remove any running Docker containers:

    docker rm $(docker stop $(docker ps -a -q --filter ancestor=messenger --format="{{.ID}}"))
    docker rm $(docker stop messenger-db-primary)
    docker rm $(docker stop messenger-lb)
  • To remove the platform services:

    # From the platform repository
    docker compose down
  • To remove all Docker images used throughout the tutorial:

    docker rmi messenger
    docker rmi messenger-lb
    docker rmi postgres:15.1
    docker rmi hashicorp/consul:1.14.4
    docker rmi rabbitmq:3.11.4-management-alpine
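
  • Optionally, to remove the named volume that the database deployment script created for PostgreSQL data (this deletes the messenger data):

    docker volume rm db-data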

Next Steps

You might be thinking “This seems like a lot of work to set up something simple” – and you’d be right! Moving into a microservices‑focused architecture requires meticulousness around how you structure and configure services. Despite all the complexity, you made some solid progress:

  • You set up a microservices‑focused configuration that is easily understandable by other teams.
  • You set up the microservices system to be somewhat flexible both in terms of scaling and usage of the various services involved.

To continue your microservices education, check out Microservices March 2023. Unit 2, Microservices Secrets Management 101, provides an in‑depth but user‑friendly overview of secrets management in microservices environments.

Banner reading 'Microservices March 2023: Sign Up for Free, Register Today'

The post NGINX Tutorial: How to Deploy and Configure Microservices appeared first on NGINX.

]]>
Microservices March 2023: Start Delivering Microservices https://www.nginx.com/blog/microservices-march-2023-start-delivering-microservices/ Wed, 01 Feb 2023 20:25:04 +0000 https://www.nginx.com/?p=71040 What Is Microservices March? It’s a month‑long, free educational program designed to up‑level your knowledge of various microservices topics. (If you missed last year’s program, you can catch all the content on demand.) What Will I Learn? Microservices have transformed modern software by enabling developers to build a complex application out of small, communicating components [...]

Read More...

The post Microservices March 2023: Start Delivering Microservices appeared first on NGINX.

]]>
Banner reading 'Welcome to Microservices March: Start Delivering Microservices'

What Is Microservices March?

It’s a month‑long, free educational program designed to up‑level your knowledge of various microservices topics. (If you missed last year’s program, you can catch all the content on demand.)

What Will I Learn?

Microservices have transformed modern software by enabling developers to build a complex application out of small, communicating components that each perform a specific function. Even if your organization hasn’t transitioned into using microservices architectures, there’s a good chance it will in the future.

For Microservices March 2023, we’re addressing some of the key fundamentals of delivering microservices. Each unit includes a one-hour webinar that provides a high‑level overview of the topic, followed by a hands‑on lab where you can run through common scenarios using technologies related to delivering microservices. The entire curriculum is just eight hours of activities. You can complete it all or just pick the parts that interest you most!

New in 2023: We’re offering the opportunity to obtain a badge to show your network (and manager) what you learned. Learn more.

This year’s curriculum includes four units.

Unit 1: Apply the Twelve-Factor App to Microservices Architectures

  • Speakers – Javier Evans and Jason Schmidt of NGINX
  • Webinar overview – Learn which Twelve‑Factor App elements present hidden surprises and how you can avoid pain in your transition to microservices.
  • Lab overview – Learn about the different types of configuration and informational data, and how to apply them to a service.

Unit 2: Microservices Secrets Management 101

  • Speaker – Robert Haynes of NGINX
  • Webinar overview – Learn how to reduce information leakage and exploits through proper secrets management, including secrets storage, rotation, and distribution.
  • Lab overview – Use a mix of Linux tools and secrets management systems to safely distribute and use JSON Web Token (JWT) authentication.

Unit 3: Accelerate Microservices Deployments with Automation

  • Speaker – Christopher Harrison of GitHub
  • Webinar overview – Learn how to use GitHub Actions to streamline and automate your processes, manage security, and quickly recover from failures.
  • Lab overview – Use GitHub Actions to build and deploy Docker images and roll back automatically if a deployment fails.

Unit 4: Manage Microservices Chaos and Complexity with Observability

  • Speaker – Dave McAllister of NGINX
  • Webinar overview – Learn about the three principal classes of observability data, the importance of infrastructure and app alignment, and ways to start analyzing deep data.
  • Lab overview – Use the official OpenTelemetry libraries to set up manual and automatic instrumentation of your applications, and Jaeger to set up data generation and collection, tailor data to your needs, and visualize the data.

Who Should Participate?

This program is 101-focused. You’ll benefit most if you’re transitioning into a company already using microservices, are currently deciding on an architecture, or are about to start your own conversion to microservices.

  • DevOps and engineering leaders – Learn the common issues faced by every organization implementing microservices, and strategies to address them in a way that’s right for you and your teams.
  • Platform and IT Ops – Learn the challenges faced by other groups using the system and understand what you need to consider as you guide them to success.
  • Site reliability engineers – While microservices help teams execute more independently, they also introduce a large number of additional points of failure. Learn about the common areas of failure so that you can be one step ahead of potential incidents and be a valuable educator for other roles.

How Do I Join Microservices March 2023?

It’s easy! Sign up for free to get access to the program.

Banner reading 'Microservices March 2023: Sign Up for Free, Register Today'

We love hearing about what you’re interested in and how we can make your Microservices March experience valuable and fun. If you have questions or suggestions, please feel free to leave them in the comments section below or connect with us on NGINX Community Slack. Stay tuned for more updates and we can’t wait to “see” you in March!

The post Microservices March 2023: Start Delivering Microservices appeared first on NGINX.

]]>
5 Videos from Sprint 2022 that Reeled Us In https://www.nginx.com/blog/5-videos-from-sprint-2022-that-reeled-us-in/ Tue, 10 Jan 2023 15:35:43 +0000 https://www.nginx.com/?p=70927 The oceanic theme at this year’s virtual Sprint conference made for smooth sailing – all the way back to our open source roots. The water was indeed fine, but the ship would never have left the dock without all of the great presentations from our open source team and community. That’s why we want to highlight [...]

Read More...

The post 5 Videos from Sprint 2022 that Reeled Us In appeared first on NGINX.

]]>
The oceanic theme at this year’s virtual Sprint conference made for smooth sailing – all the way back to our open source roots. The water was indeed fine, but the ship would never have left the dock without all of the great presentations from our open source team and community. That’s why we want to highlight some of our favorite videos, from discussions around innovative OSS projects to demos involving writing poetry with code. Here, we’ve picked five of our favorites to tide you over until next year’s Sprint.

In addition to the videos below, you can find all the talks, demos, and fireside chats from NGINX Sprint 2022 in the NGINX Sprint 2022 YouTube playlist.

Best Practices for Getting Started with NGINX Open Source

NGINX Open Source is the world’s most popular web server, but also much more – you can configure it as a reverse proxy, load balancer, API gateway, and cache. In this breakout session, Alessandro Fael Garcia, Senior Solutions Engineer for NGINX, simplifies the journey for anyone who’s just getting started with NGINX. Learn multiple best practices, including using the official NGINX repo to install NGINX, the importance of knowing key NGINX commands, and how small adjustments to NGINX directives can improve performance.

For more resources on installing NGINX Open Source, check out our blog and webinar Back to Basics: Installing NGINX Open Source and NGINX Plus.

How to Get Started with OpenTelemetry

In cloud‑native architectures, observability is critical for providing insight into application performance. OpenTelemetry has taken observability to the next level with the concept of distributed tracing. As one of the most active projects at the Cloud Native Computing Foundation (CNCF), OpenTelemetry is quickly becoming the standard for instrumentation and collection of observability data. If you can’t already tell, we believe it’s one of the top open source projects to keep on your radar.

In this session, Steve Flanders, Director of Engineering at Splunk, covers the fundamentals of OpenTelemetry and demos how you can start integrating it into your modern app infrastructure.

To learn how NGINX is using OpenTelemetry, read Integrating OpenTelemetry into the Modern Apps Reference Architecture – A Progress Report on our blog.

The Future of Kubernetes Connectivity

Kubernetes has become the de facto standard for container management and orchestration. But as organizations deploy Kubernetes in production across multi‑cloud environments, complex challenges often emerge. Processes need to be streamlined so teams can manage connectivity to Kubernetes services from cloud to edge. In this Sprint session, Brian Ehlert, Director of Product Management for NGINX, discusses the history of Kubernetes networking, current trends, and the potential future for providing client access to applications in a shared, multi‑tenant Kubernetes environment.

At NGINX, we recognize the importance of Kubernetes connectivity, which is why we developed a Secure Kubernetes Connectivity solution. With NGINX’s Secure Kubernetes Connectivity you can scale, observe, govern, and secure your Kubernetes apps in production while reducing complexity.

Fun Ways to Script NGINX Using the NGINX Javascript Module

Feeling overwhelmed by all the open source offerings and possibilities? Take a break! In this entertaining talk, Javier Evans, Solutions Engineer for NGINX, guides you through some fun ways to script NGINX Open Source using the NGINX JavaScript module (njs). You’ll learn how to generate QR codes, implement weather‑based authentication (WBA) to compose a unique poem based on the location’s current weather, and more. Creativity abounds and Javier holds nothing back.

New features and improvements are regularly added to njs to make it easier for teams to work and share njs code. Recent updates help make your NGINX config even more modular and reusable.

A Journey Through NGINX and Open Source with Kelsey Hightower

We were beyond excited to have renowned developer advocate Kelsey Hightower join us at Sprint for a deep dive into NGINX Unit, our open source, universal web app server. In a discussion with Rob Whiteley, NGINX Product Group VP and GM, Kelsey emphasizes how one of his primary goals when working with technology is to save time. Using a basic application inside a single container as his demo environment, Kelsey shows how NGINX Unit enables you to write less code. And while Kelsey’s impressed by NGINX Unit, he also offers feedback on how it can improve. We appreciate that, as we are committed to refining and enhancing our open source offerings.

Watch On Demand

Again, thank you for diving into the open source waters with us this year at Sprint! It was a blast and we loved seeing all of your comments, insight, and photos from the virtual booth.

Reminder: You can find all these videos, and more insightful talks, in the NGINX Sprint 2022 YouTube playlist and watch them on demand for free.

The post 5 Videos from Sprint 2022 that Reeled Us In appeared first on NGINX.

]]>
The Future of NGINX: Getting Back to Our Open Source Roots https://www.nginx.com/blog/future-of-nginx-getting-back-to-our-open-source-roots/ Tue, 23 Aug 2022 15:30:25 +0000 https://www.nginx.com/?p=70273 Time flies when you’re having fun. So it’s hard to believe that NGINX is now 18 years old. Looking back, the community and company have accomplished a lot together. We recently hit a huge milestone – as of this writing 55.6% of all websites are powered by NGINX (either by our own software or by products built [...]

Read More...

The post The Future of NGINX: Getting Back to Our Open Source Roots appeared first on NGINX.

]]>
Time flies when you’re having fun. So it’s hard to believe that NGINX is now 18 years old. Looking back, the community and company have accomplished a lot together. We recently hit a huge milestone – as of this writing 55.6% of all websites are powered by NGINX (either by our own software or by products built atop NGINX). We are also the number one web server by market share. We are very proud of that and grateful that you, the NGINX community, have given us this resounding vote of confidence.

We also recognize, more and more, that open source software continues to change the world. A larger and larger percentage of applications are built using open source code. From Bloomberg terminals and news to the Washington Post to Slack to Airbnb to Instagram and Spotify, thousands of the world’s most recognizable brands and properties rely on NGINX Open Source to power their websites. In my own life – between Zoom for work meetings and Netflix at night – I probably spend 80% of my day using applications built atop NGINX.

Image reading "Open Source Software Changed the World" with logos of prominent open source projects

NGINX is only one element in the success story of open source. We would not be able to build the digital world – and increasingly, to control and manage the physical world – without all the amazing open source projects, from Kubernetes and containers to Python and PyTorch, from WordPress to Postgres to Node.js. Open source has changed the way we work. There are more than 73 million developers on GitHub who have collectively merged more than 170 million pull requests (PRs). A huge percentage of those PRs have been on code repositories with open source licenses.

We are thrilled that NGINX has played such a fundamental role in the rise and success of open source – and we intend to both keep it going and pay it forward. At the same time, we need to reflect on our open source work and adapt to the ongoing evolution of the movement. Business models for companies profiting from open source have become controversial at times. This is why NGINX has always tried to be super clear about what is open source and what is commercial. Above all, this meant never, ever trying to charge for functionality or capabilities that we had included in the open source versions of our software.

Open Source is Evolving Fast. NGINX Is Evolving, Too.

We now realize that we need to think hard about our commitment to open source, provide more value and capabilities in our open source products, and, yes, up our game in the commercial realm as well. We can’t simply keep charging for the same things as in the past, because the world has changed – some features included only in our commercial products are now table stakes for open source developers. We also know that open source security is top of mind for developers. For that reason, our open source projects need to be just as secure as our commercial products.

We also have to acknowledge reality. Internally, we have had a habit of saying that open source was not really production‑ready because it lacked features or scalability. The world has been proving us wrong on that count for some time now: many thousands of organizations are running NGINX open source software in production environments. And that’s a good thing, because it shows how much they believe in our open source versions. We can build on that.

In fact, we are doing that constantly with our core products. To those who say that the original NGINX family of products has grown long of tooth, I say you have not been watching us closely:

  • For the core NGINX Open Source software, we continue to add new features and functionality and to support more operating system platforms. Two critical capabilities for security and scalability of web applications and traffic, HTTP/3 and QUIC, are coming in the next version we ship.
  • A quiet but incredibly innovative corner of the NGINX universe is NGINX JavaScript (njs), which enables developers to integrate JavaScript code into the event‑processing model of the NGINX HTTP and TCP/UDP (Stream) modules and extend NGINX configuration syntax to implement sophisticated capabilities. Our users have done some pretty amazing things, everything from innovative cache purging and header manipulations to support for more advanced protocols like MQTTv5.
  • Our universal web application server, NGINX Unit, was conceived by the original author of NGINX Open Source, Igor Sysoev, and it continues to evolve. Unit occupies an important place in our vision for modern applications and a modern application stack that goes well beyond our primary focus on the data plane and security. As we develop Unit, we are rethinking how applications should be architected for the evolving Web, with more capabilities that are cloud‑native and designed for distributed and highly dynamic apps.

The Modern Apps Reference Architecture

We want to continue experimenting and pushing forward on ways to help our core developer constituency more efficiently and easily deploy modern applications. Last year at Sprint 2.0 we announced the NGINX Modern Apps Reference Architecture (MARA), and I am happy to say it recently went into general availability as version 1.0.0. MARA is a curated and opinionated stack of tools, including Kubernetes, that we have wired together to make it easy to deploy infrastructure and application architecture as code. With a few clicks, you can configure and deploy a MARA reference architecture that is integrated with everything you need to create a production‑grade, cloud‑native environment – security, logging, networking, application server, configuration and YAML management, and more.

Diagram showing topology of the NGINX Modern Apps Reference Architecture

MARA is a modular architecture by design. You can choose your own adventure and assemble from the existing modules a customized reference architecture that can actually run applications. The community has supported our idea and we have partnered with a number of innovative technology companies on MARA. Sumo Logic has added their logging chops to MARA and Pulumi has contributed modules for automation and workflow orchestration. Our hope is that, with MARA, any developer can get a full Kubernetes environment up and running in a matter of minutes, complete with all the supporting pieces, secure, and ready for app deployment. This is just one example of how I think we can all put our collective energy into advancing a big initiative in the industry.

The Future of NGINX: Modernize, Optimize, Extend

Each year at NGINX Sprint, our virtual user conference, we make new commitments for the coming year. This year is no different. Our promises for the next twelve months can be captured in three words: modernize, optimize, and extend. We intend to make sure these are not just business buzzwords; we have substantial programs for each one and we want you to hold us to our promises.

Promise #1: Modernize Our Approach, Presence, and Community Management

Obviously, we are rapidly modernizing our code and introducing new products and projects. But modernization is not just about code – it encompasses code management, transparency around decision making, and how we show up in the community. While historically the NGINX Open Source code base has run on the Mercurial version control system, we recognize that the open source world now lives on GitHub. Going forward, all NGINX projects will be born and hosted on GitHub because that’s where the developer and open source communities work.

We also are going to modernize how we govern and manage NGINX projects. We pledge to be more open to contributions, more transparent in our stewardship, and more approachable to the community. We will follow all expected conventions for modern open source work and will be rebuilding our GitHub presence, adding Codes of Conduct to all our projects, and paying close attention to community feedback. As part of this commitment to modernize, we are adding an NGINX Community channel on Slack. We will staff the channel with our own experts to answer your questions. And you, the community, will be there to help each other, as well – in the messaging tool you already use for your day jobs.

Promise #2: Optimize Our Developer Experience

Developers are our primary users. They build and create the applications that have made us who we are. Our tenet has always been that NGINX is easy to use. And that’s basically true – NGINX does not take days to install, spin up, and configure. That said, we can do better. We can accelerate the “time to value” that developers experience on our products by making the learning curve shorter and the configuration process easier. By “value” I mean deploying code that does something truly valuable, in production, full stop. We are going to revamp our developer experience by streamlining the installation experience, improving our documentation, and adding coverage and heft to our community forums.

We are also going to release a new SaaS offering that natively integrates with NGINX Open Source and will help you make it useful and valuable in seconds. There will be no registration, no gate, no paywall. This SaaS will be free to use, forever.

In addition, we recognize that many critical features which developers now view as table stakes are on the wrong side of the paywall for NGINX Open Source and NGINX Plus. For example, DNS service discovery is essential for modern apps. Our promise is to make those critical features free by adding them to NGINX Open Source. We haven’t yet decided on all of the features to move and we want your input. Tell us how to optimize your experience as developers. We are listening.

Promise #3: Extend the Power and Capabilities of NGINX

As popular as NGINX is today, we know we need to keep improving if we want to be just as relevant ten years from now. Our ambitious goal is this: we want to create a full stack of NGINX applications and supporting capabilities for managing and operating modern applications at scale.

To date, NGINX has mostly been used as a Layer 7 data plane. But developers have to put up a lot of scaffolding around NGINX to make it work. You have to wire up automation and CI/CD capabilities, set up proper logging, add authentication and certificate management, and more. We want to make a much better extension of NGINX where every major requirement to test and deploy an app is satisfied by one or more high‑quality open source components that seamlessly integrate with NGINX. In short, we want to provide value at every layer of the stack and make it free. For example, if you are using NGINX Open Source or NGINX Plus as an API gateway, we want to make sure you have everything you need to manage and scale that use case – API import, service discovery, firewall, policy rules and security – all available as high‑quality open source options.

To summarize, our dream is to build an ecosystem around NGINX that extends into every facet of application management and deployment. MARA is the first step in building that ecosystem and we want to continue to attract partners. My goal is to see, by the end of 2022, an entire pre‑wired app launch and run in minutes in an NGINX environment, instrumented with a full complement of capabilities – distributed tracing, logging, autoscaling, security, CI/CD hooks – that are all ready to do their jobs.

Introducing Kubernetes API Gateway, a Brand New Amplify, and NGINX Agent

We are committed to all this. And here are three down payments on my three promises.

  1. Earlier this year we launched NGINX Kubernetes Gateway, based on the Kubernetes Gateway API specification. This modernizes our product family and keeps us in line with the ongoing evolution of cloud native. The NGINX Kubernetes Gateway is also something of an olive branch we’re extending to the community. We realize it complicated matters when we created both a commercial and an open source Ingress controller for Kubernetes, both different from the community Ingress solution (also built on NGINX). The range of choices confused the community and put us in a bad position.

    It’s pretty clear that the Gateway API is going to take the place of the Ingress controller in the Kubernetes architecture. So we are changing our approach and will make the NGINX Kubernetes Gateway – which will be offered only as an open source product – the focal point of our Kubernetes networking efforts (in lockstep with the evolving standard). It will both integrate and extend into other NGINX products and optimize the developer experience on Kubernetes.

  2. A few years back, we launched NGINX Amplify, a monitoring and telemetry SaaS offering for NGINX fleets. We didn’t really publicize it much. But thousands of developers found it and are still using it today. Amplify was and remains free. As part of our modernization pledge, we are adding a raft of new capabilities to Amplify. We aim to make it your trusted co‑pilot for standing up, watching over, and managing NGINX products at scale in real time. Amplify will not only monitor your NGINX instances but will help you configure, apply scripts to, and troubleshoot NGINX deployments.
  3. We are launching NGINX Agent, a lightweight app that you deploy alongside NGINX Open Source instances. It will include features previously offered only in commercial products, for example the dynamic configuration API. With NGINX Agent, you’ll be able to use NGINX Open Source in many more use cases and with far greater flexibility. It will also include far more granular controls which you can use to extend your applications and infrastructure. Agent helps you make smarter decisions about managing, deploying, and configuring NGINX. We’re working hard on NGINX Agent – keep an eye out for a blog coming in the next couple months to announce its availability!

Looking Ahead

In a year, I hope you ask me about these promises. If I can’t report real progress on all three, then hold me to it, please. And please understand – we are engaged and ready to talk with all of you. You are our best product roadmap. Please take our annual survey. Join NGINX Community Slack and tell us what you think. Comment and file PRs on the projects at our GitHub repo.

It’s going to be a great year, the best ever. We look forward to hearing more from you and please count on hearing more from us. Help us help you.

The post The Future of NGINX: Getting Back to Our Open Source Roots appeared first on NGINX.

]]>
Getting Your Time Back with NGINX Unit https://www.nginx.com/blog/getting-your-time-back-with-nginx-unit/ Thu, 18 Aug 2022 17:34:43 +0000 https://www.nginx.com/?p=70260 NGINX Sprint, our annual (and yes, virtual!) event is almost here. This year’s oceanic theme is Deep Dive into the World of NGINX, and August 23 through 25 NGINX sails home to its open source roots. Our destination? To show you how NGINX open source innovations help bring our vision for modern applications to life, [...]

Read More...

The post Getting Your Time Back with NGINX Unit appeared first on NGINX.

]]>
NGINX Sprint, our annual (and yes, virtual!) event is almost here. This year’s oceanic theme is Deep Dive into the World of NGINX, and August 23 through 25 NGINX sails home to its open source roots. Our destination? To show you how NGINX open source innovations help bring our vision for modern applications to life, in collaboration with CNCF contributors and projects like Grafana Labs and OpenTelemetry.

The modern application landscape is expansive, and in thought‑provoking demos and talks our Sprint experts will make sure you don’t get lost at sea. For example, renowned developer advocate and Principal Engineer at Google, Kelsey Hightower, takes the helm at the end of Day 2 to show Sprint attendees how to use NGINX Unit – our open source, universal web application server, reverse proxy, and static file server – to its full, time‑saving potential.

A Journey Through NGINX and Open Source with Kelsey Hightower airs August 24 at 9:10 AM PDT on the NGINX Sprint platform.

 

We really hope you’ll register and attend Sprint (it’s completely free), so we’re not going to give too much away about Kelsey’s session. But read on for a sneak peek.

NGINX Unit Lets You Write Less Code

When it comes to application servers, Kelsey’s #1 goal is to save time. The less code you have to write to deploy your app, the better. In his Sprint demo, Kelsey keeps it simple. With a basic application inside a single container, he shows just how much time you can get back when deploying NGINX Unit as a web application server.

Kelsey gets NGINX Unit to run on Cloud Run, serving multiple Go applications and static files from a single container image – and many things happen simultaneously in the background. For example, he doesn’t write any code for logging but logs still emerge for free, with NGINX Unit as the web application server doing the heavy lifting.

Kelsey steers away from complexity and discusses how NGINX Unit checks all the boxes by:

  • Running multiple binaries in the background
  • Proxying using a lower‑level protocol
  • Sending data to the application
  • Returning the response to the requester

Kelsey also offers critiques on ways that NGINX Unit can improve. We appreciate that, because our aim is to continuously refine and enhance our open source offerings.

Our Community Comes First

When F5 acquired NGINX, we made a promise to stay committed to open source. This included increasing our investments in developing NGINX open source projects. Today, we’re still committed to keeping that promise.

In a chat with NGINX Product Group VP and GM Rob Whiteley during the session, Kelsey admits he was initially skeptical about NGINX keeping our word about open source. However, once he played around with NGINX Unit, he saw NGINX does in fact innovate – rather than copying and pasting what’s already out there – while staying continuously aware of the patterns open source communities crave.

Just as we value Kelsey’s opinion, we also want your thoughts and feedback. NGINX is committed both to listening to our community and to investing in a better world. For every post‑event survey filled out during Sprint, we will donate to The Ocean Cleanup, a non‑profit organization developing advanced technologies to rid the oceans of plastic – their goal is to remove 90% over time!

Dive into NGINX Open Source

There’s still time to gear up for NGINX Sprint, which takes place August 23–25 and is packed with educational discussions, demos, and self‑paced labs. Dive on in and register today for free – the water’s fine!

The post Getting Your Time Back with NGINX Unit appeared first on NGINX.

]]>
Dive In and Register for Sprint 2022 https://www.nginx.com/blog/dive-in-register-sprint-2022/ Tue, 02 Aug 2022 17:16:19 +0000 https://www.nginx.com/?p=69891 Grab your wetsuit, dive computer, and depth gauge, because we’re back with NGINX Sprint and we’re diving deep into the world of NGINX open source. NGINX Sprint is virtual again this year and taking place August 23–25, 2022. Register today to save your free seat now and get ready for three days of fun and [...]

Read More...

The post Dive In and Register for Sprint 2022 appeared first on NGINX.

]]>
Grab your wetsuit, dive computer, and depth gauge, because we’re back with NGINX Sprint and we’re diving deep into the world of NGINX open source.

NGINX Sprint is virtual again this year and taking place August 23–25, 2022.

Banner reading _F5 NGINX Sprint, Deep Dive into the World of NGINX, August 23-25_

Register today to save your free seat now and get ready for three days of fun and interaction with developers, technology professionals, and fellow NGINX community members as we come together to explore new and improved ways to build the next generation of modern applications.

“Why Should I Attend?”

Your feedback from Sprint 2.0 in 2021 told us you want more focus on open source, and Sprint 3.0 is dedicated to NGINX open source technologies. Come together with developers, technologists, community leaders, and tech luminaries to network, share ideas and innovations, and explore our specially curated selection of keynotes, demos, labs, and in‑depth trainings.

“What Can I Expect?”

Learn from leading experts who will guide you through complex open source environments, in talks and self‑paced labs. Improve your skills in the in‑depth technical how‑to sessions, and then practice them in hands‑on virtual labs.

Catch compelling demos that showcase innovations in NGINX open source and partner ecosystem integrations. See how you can use NGINX tools in conjunction with other open source tools to build, deliver, and monitor modern applications.

Get a preview of NGINX’s vision for building and scaling modern applications, and how we’re bringing that to life with open source tools. NGINX is focused on building high‑performance, lightweight, open source tools that you can easily deploy in any environment.

Discover the latest news on NGINX launches, announcements, and upcoming releases. From HTTP/3 and NGINX JavaScript to NGINX Unit and Kubernetes Gateway API advances, you’ll discover new capabilities and what’s coming next in our roadmaps.

We built the agenda with your busy schedule in mind and each day of NGINX Sprint ends by early afternoon. Choose the sessions that make sense for you and your team, whether that means technical topics or hands‑on opportunities.

Wetsuits Are Optional

You’re one click away from saving your free seat at NGINX Sprint on August 23–25. Don’t worry that registering will trigger a tsunami of emails. We’ll collect only what we need to send calendar reminders for the sessions you selected, and to save your spot for the self‑paced labs you’ve chosen.

We can’t wait to “sea” you in August.

Banner reading _F5 NGINX Sprint, Deep Dive into the World of NGINX, August 23-25_

The post Dive In and Register for Sprint 2022 appeared first on NGINX.

]]>