Opinion Archives - NGINX
https://www.nginx.com/category/opinion/

The Ingress Controller: Touchstone for Securing AI/ML Apps in Kubernetes
https://www.nginx.com/blog/the-ingress-controller-touchstone-for-securing-ai-ml-apps-in-kubernetes/
Wed, 28 Feb 2024

One of the key advantages of running artificial intelligence (AI) and machine learning (ML) workloads in Kubernetes is having a central point of control for all incoming requests through the Ingress Controller. This versatile component serves as a load balancer and API gateway, providing a solid foundation for securing AI/ML applications in a Kubernetes environment.

As a unified tool, the Ingress Controller is a convenient touchpoint for applying security and performance measures, monitoring activity, and mandating compliance. More specifically, securing AI/ML applications at the Ingress Controller in a Kubernetes environment offers several strategic advantages that we explore in this blog.

Diagram of Ingress Controller ecosystem

Centralized Security and Compliance Control

Because the Ingress Controller acts as the gateway to your Kubernetes cluster, it gives MLOps and platform engineering teams a centralized point for enforcing security policies. This reduces the complexity of configuring security settings on a per-pod or per-service basis. By centralizing security controls at the Ingress level, you simplify the compliance process and make it easier to manage and monitor compliance status.
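To make this concrete, terminating TLS once at the Ingress is one example of a control enforced centrally at the edge rather than on every pod or service. Below is a minimal sketch using the standard Kubernetes Ingress resource; the hostname, namespace, service, and secret names are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ai-inference-ingress          # hypothetical name
  namespace: ml-apps                  # hypothetical namespace
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - inference.example.com
      secretName: inference-tls       # TLS certificate managed once, at the edge
  rules:
    - host: inference.example.com
      http:
        paths:
          - path: /predict
            pathType: Prefix
            backend:
              service:
                name: model-serving   # hypothetical backend microservice
                port:
                  number: 8080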

Consolidated Authentication and Authorization

The Ingress Controller is also the logical location to implement and enforce authentication and authorization for access to all your AI/ML applications. By adding strong certificate authority management, the Ingress Controller becomes the linchpin of zero trust (ZT) architectures for Kubernetes. ZT is crucial for ensuring continuous security and compliance of sensitive AI applications running on highly valuable proprietary data.
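As an illustration, the NGINX Plus-based NGINX Ingress Controller expresses authentication through a Policy custom resource that is then attached to a VirtualServer. The sketch below assumes those CRDs as documented (field names can vary by release) and uses hypothetical hosts, secrets, and service names:

# A JWT authentication policy (sketch).
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: jwt-auth
spec:
  jwt:
    realm: ML Inference API
    secret: jwt-secret                # Kubernetes Secret holding the JSON Web Key set
---
# Attach the policy so every request to the host is validated before routing.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: inference
spec:
  host: inference.example.com
  policies:
    - name: jwt-auth
  upstreams:
    - name: model-serving
      service: model-serving          # hypothetical backend Service
      port: 8080
  routes:
    - path: /
      action:
        pass: model-serving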

Rate Limiting and Access Control

The Ingress Controller is an ideal place to enforce rate limiting, protecting your applications from abuse such as DDoS attacks or excessive API calls, which is crucial for public-facing AI/ML APIs. With the rise of novel AI threats like model theft and data leakage, rate limiting and access control become even more important in protecting against brute force attacks. They also help prevent adversaries from abusing business logic or jailbreaking guardrails to extract training data or model weights.
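For example, a per-client rate limit can be declared once and enforced at the edge for every route that references it. This sketch assumes the rateLimit fields of the NGINX Ingress Controller Policy resource as documented (names and defaults may differ by version):

# Limit each client IP to 10 requests per second before traffic reaches the backend pods (sketch).
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: inference-rate-limit
spec:
  rateLimit:
    rate: 10r/s
    key: "${binary_remote_addr}"      # rate-limit per client IP
    zoneSize: 10M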

Web Application Firewall (WAF) Integration

Many Ingress Controllers support integration with WAFs, which are table stakes for protecting exposed applications and services. WAFs provide an additional layer of security against common web vulnerabilities and attacks like the OWASP Top 10. Even more crucial, when properly tuned, WAFs protect against more targeted attacks aimed at AI/ML applications. A key consideration for AI/ML apps, where latency and performance are crucial, is the potential overhead introduced by a WAF. To be effective for AI/ML apps, the WAF must also be tightly integrated into the Ingress Controller's monitoring and observability dashboards and alerting structures. Ideally, the WAF and Ingress Controller share a common data plane.
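When the WAF does share a data plane with the Ingress Controller (for example, NGINX App Protect WAF embedded in NGINX Ingress Controller), enabling it can be as simple as referencing a WAF policy alongside the other policies. The sketch below illustrates that pattern; the waf fields and the referenced App Protect policy are assumptions based on the documented Policy resource and may differ by version:

# Enable the embedded WAF for routes that reference this policy (sketch).
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: inference-waf
spec:
  waf:
    enable: true
    apPolicy: "default/ml-api-protection"   # hypothetical NGINX App Protect policy reference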

Conclusion: Including the Ingress Controller Early in Planning for AI/ML Architectures

Because the Ingress Controller occupies such an important place in Kubernetes application deployment for AI/ML apps, it is best to include its capabilities as part of architecting AI/ML applications. This can alleviate duplication of functionality and can lead to a better decision on an Ingress Controller that will scale and grow with your AI/ML application needs. For MLOps teams, the Ingress Controller becomes a central control point for many of their critical platform and ops capabilities, with security among the top priorities.

Get Started with NGINX

NGINX offers a comprehensive set of tools and building blocks to meet your needs and enhance security, scalability, and observability of your Kubernetes platform.

You can get started today by requesting a free 30-day trial of Connectivity Stack for Kubernetes.

The Mission-Critical Patient-Care Use Case That Became a Kubernetes Odyssey
https://www.nginx.com/blog/mission-critical-patient-care-use-case-became-kubernetes-odyssey/
Wed, 17 May 2023

Downtime can lead to serious consequences.

These words are truer for companies in the medical technology field than in most other industries – in their case, the "serious consequences" can literally include death. We recently had the chance to dissect the tech stack of a company that’s seeking to transform medical record keeping from pen-and-paper to secure digital data that is accessible anytime and anywhere in the world. These data range from patient information to care directives, biological markers, medical analytics, historical records, and everything else shared between healthcare teams.

From the outset, the company has sought to address a seemingly simple question: “How can we help care workers easily record data in real time?” As the company has grown, however, the need to scale and make data constantly available has made solving that challenge increasingly complex. Here we describe how the company’s tech journey has led them to adopt Kubernetes and NGINX Ingress Controller.

Tech Stack at a Glance

Here’s a look at where NGINX fits into their architecture:

Diagram of how NGINX fits into their architecture

The Problem with Paper

Capturing patient status and care information at regular intervals is a core duty for healthcare personnel. Traditionally, they have recorded patient information on paper or, more recently, on a laptop or tablet. This approach has several serious downsides:

  • Healthcare workers may interact with dozens of patients per day, so it’s usually not practical to write detailed notes while providing care. As a result, workers end up writing their notes at the end of their shift, when mental and physical fatigue make it tempting to record only generic comments.
  • The workers must also depend on their memory of details about patient behavior. Inaccuracies might mask patterns that, if documented correctly and consistently over time, could facilitate diagnosis of larger health issues.
  • Paper records can’t easily be shared among departments within a single facility, let alone with other entities like EMTs, emergency room staff, and insurance companies. The situation isn’t much better with laptops or tablets if they’re not connected to a central data store or the cloud.

To address these challenges, the company created a simplified data recording system that provides shortcuts for accessing patient information and recording common events like dispensing medication. This ease of access and use makes it possible to record patient interactions in real time as they happen.

All data is stored in cloud systems maintained by the company, and the app integrates with other electronic medical records systems to provide a comprehensive longitudinal view of resident behaviors. This helps caregivers provide better continuity of care, creates a secure historical record, and can be easily shared with other healthcare software systems.

Physicians and other specialists also use the platform when admitting or otherwise engaging with patients. There’s a record of preferences and personal needs that travels with the patient to any facility. These records can be used to help patients feel comfortable in a new setting, which improves outcomes like recovery time.

There are strict legal requirements about how long companies must store patient data. The company’s developers have built the software to offer extremely high availability with uptime SLAs that are much better than those of generic cloud applications. Keeping an ambulance waiting because a patient’s file won’t load isn’t an option.

The Voyage from the Garage to the Cloud to Kubernetes

Like many startups, the company initially saved money by running the first proof-of-concept application on a server in a co-founder’s home. Once it became clear the idea had legs, the company moved its infrastructure to the cloud rather than manage hardware in a data center. Being a Microsoft shop, they chose Azure. The initial architecture ran applications on traditional virtual machines (VMs) in Azure App Service, a managed application delivery service that runs Microsoft’s IIS web server. For data storage and retrieval, the company opted to use Microsoft’s SQL Server running in a VM as a managed application.

After several years running in the cloud, the company was growing quickly and experiencing scaling pains. It needed to scale essentially without limit, and horizontally rather than vertically, because vertical scaling is slow and expensive with VMs. This requirement led rather naturally to containerization and Kubernetes as a possible solution. A further point in favor of containerization was that the company’s developers need to ship updates to the application and infrastructure frequently, without risking outages. With patient notes being constantly added across multiple time zones, there is no natural downtime to push changes to production without the risk of customers immediately being affected by glitches.

A logical starting point for the company was Microsoft’s managed Kubernetes offering, Azure Kubernetes Services (AKS). The team researched Kubernetes best practices and realized they needed an Ingress controller running in front of their Kubernetes clusters to effectively manage traffic and applications running in nodes and pods on AKS.

Traffic Routing Must Be Flexible Yet Precise

The team tested AKS’s default Ingress controller, but found its traffic-routing features simply could not deliver updates to the company’s customers in the required manner. When it comes to patient care, there’s no room for ambiguity or conflicting information – it’s unacceptable for one care worker to see an orange flag and another a red flag for the same event, for example. Hence, all users in a given organization must use the same version of the app. This presents a big challenge when it comes to upgrades. There’s no natural time to transition a customer to a new version, so the company needed a way to use rules at the server and network level to route different customers to different app versions.

To achieve this, the company runs the same backend platform for all users in an organization and does not offer multi-tenancy with segmentation at the infrastructure layer within the organization. With Kubernetes, it is possible to split traffic using virtual network routes and browser cookies along with detailed traffic rules. However, the company’s technical team found that AKS’s default Ingress controller can split traffic only on a percentage basis, not with rules that operate at the level of a customer organization or individual user, as required.

In its basic configuration, the NGINX Ingress Controller based on NGINX Open Source has the same limitation, so the company decided to pivot to the more advanced NGINX Ingress Controller based on NGINX Plus, an enterprise-grade product that supports granular traffic control. Recommendations for NGINX Ingress Controller from Microsoft and the Kubernetes community, based on its high level of flexibility and control, helped solidify the choice. The configuration better supports the company’s need for pod management (as opposed to classic traffic management), ensuring that pods are running in the appropriate zones and traffic is routed to those services. Sometimes traffic is routed internally, but in most use cases it is routed back out through NGINX Ingress Controller for observability reasons.
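The rule-based routing described above can be expressed declaratively. The following simplified sketch uses the VirtualServer resource from NGINX Ingress Controller to pin an organization to a specific app version via a cookie instead of a percentage split; the hostname, services, and cookie are hypothetical, and field names may vary by version:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: patient-records
spec:
  host: records.example.com
  upstreams:
    - name: app-v1
      service: records-v1                   # hypothetical current version
      port: 80
    - name: app-v2
      service: records-v2                   # hypothetical new version
      port: 80
  routes:
    - path: /
      matches:
        - conditions:
            - cookie: org_release_channel   # hypothetical cookie set at login
              value: "v2"
          action:
            pass: app-v2
      action:
        pass: app-v1                        # everyone else stays on v1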

Here Be Dragons: Monitoring, Observability and Application Performance

With NGINX Ingress Controller, the technical team has complete control over the developer and end user experience. Once users log in and establish a session, they can immediately be routed to a new version or reverted to an older one. Patches can be pushed simultaneously and nearly instantaneously to all users in an organization. The software isn’t reliant on DNS propagation or updates to networking across the cloud platform.

NGINX Ingress Controller also meets the company’s requirement for granular and continuous monitoring. Application performance is extremely important in healthcare. Latency or downtime can hamper successful clinical care, especially in life-or-death situations. After the move to Kubernetes, customers started reporting downtime that the company hadn’t noticed. The company soon discovered the source of the problem: Azure App Service relies on sampled data. Sampling is fine for averages and broad trends, but it completely misses things like rejected requests and missing resources. Nor does it show the usage spikes that commonly occur every half hour as caregivers check in and log patient data. The company was getting only an incomplete picture of latency, error sources, bad requests, and unavailable services.

The problems didn’t stop there. By default, Azure App Service preserves stored data for only a month – far short of the dozens of years mandated by law in many countries. Expanding the data store for longer retention was prohibitively expensive. In addition, the Azure solution cannot see inside the Kubernetes networking stack. NGINX Ingress Controller can monitor both infrastructure and application parameters as it handles Layer 4 and Layer 7 traffic.

For performance monitoring and observability, the company chose a Prometheus time-series database attached to a Grafana visualization engine and dashboard. Integration with Prometheus and Grafana is pre-baked into the NGINX data and control plane; the technical team had to make only a small configuration change to direct all metrics to the Prometheus and Grafana servers. The information is also routed into a Grafana Loki logging database to make it easier to analyze logs and give the software team more control over the data over time.

This configuration also future-proofs against incidents requiring extremely frequent and high-volume data sampling for troubleshooting and fixing bugs. Addressing these types of incidents might be costly with the application monitoring systems provided by most large cloud companies, but the cost and overhead of Prometheus, Grafana, and Loki in this use case are minimal. All three are stable open source products which generally require little more than patching after initial tuning.
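For reference, wiring NGINX Ingress Controller metrics into a stack like this typically amounts to little more than a Prometheus scrape job. The sketch below assumes the controller exposes Prometheus metrics on its default exporter port (9113) and uses hypothetical namespace and service names:

# Minimal Prometheus scrape configuration for NGINX Ingress Controller metrics (sketch).
scrape_configs:
  - job_name: nginx-ingress
    static_configs:
      - targets: ['nginx-ingress.nginx-ingress.svc.cluster.local:9113']   # assumed metrics endpoint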

Stay the Course: A Focus on High Availability and Security

The company has always had a dual focus, on security to protect one of the most sensitive types of data there is, and on high availability to ensure the app is available whenever it’s needed. In the shift to Kubernetes, they made a few changes to augment both capacities.

For the highest availability, the technical team deploys an active-active, multi-zone, and multi-geo distributed infrastructure design for complete redundancy with no single point of failure. The team maintains N+2 active-active infrastructure with dual Kubernetes clusters in two different geographies. Within each geography, the software spans multiple data centers to reduce downtime risk, providing coverage in case of any failures at any layer in the infrastructure. Affinity and anti-affinity rules can instantly reroute users and traffic to up-and-running pods to prevent service interruptions. 
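Anti-affinity of this kind is expressed directly in the pod spec. Here is a minimal sketch using the standard Kubernetes API (the labels and image are hypothetical) that asks the scheduler to spread replicas of the same service across availability zones:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: records-api
spec:
  replicas: 4
  selector:
    matchLabels:
      app: records-api
  template:
    metadata:
      labels:
        app: records-api
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: records-api
                topologyKey: topology.kubernetes.io/zone   # avoid co-locating replicas in one zone
      containers:
        - name: api
          image: registry.example.com/records-api:1.0      # hypothetical image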

For security, the team deploys a web application firewall (WAF) to guard against bad requests and malicious actors. Protection against the OWASP Top 10 is table stakes provided by most WAFs. As they created the app, the team researched a number of WAFs including the native Azure WAF and ModSecurity. In the end, the team chose NGINX App Protect with its inline WAF and distributed denial-of-service (DDoS) protection.

A big advantage of NGINX App Protect is its colocation with NGINX Ingress Controller, which both eliminates a redundant hop and reduces latency. Other WAFs must be placed outside the Kubernetes environment, contributing to latency and cost. Even minuscule delays (say, 1 millisecond extra per request) add up quickly over time.

Surprise Side Quest: No Downtime for Developers

Having completed the transition to AKS for most of its application and networking infrastructure, the company has also realized significant improvements to its developer experience (DevEx). Developers now almost always spot problems before customers notice any issues themselves. Since the switch, the volume of support calls about errors is down about 80%!

The company’s security and application-performance teams have a detailed Grafana dashboard and unified alerting, eliminating the need to check multiple systems or implement triggers for warning texts and calls coming from different processes. The development and DevOps teams can now ship code and infrastructure updates daily, or even multiple times per day, and use extremely granular blue-green patterns. Formerly, they were shipping updates once or twice per week and having to time them for low-usage windows, a stressful proposition. Now, code is shipped when ready and the developers can monitor the impact directly by observing application behavior.

The results are positive all around – an increase in software development velocity, improvement in developer morale, and more lives saved.

Microservices and Kubernetes: A Gardening Guide to Software Ecosystems
https://www.nginx.com/blog/microservices-and-kubernetes-a-gardening-guide-to-software-ecosystems/
Thu, 30 Mar 2023

Like gardening, software development requires an understanding of how different elements work together to create a beautiful, productive ecosystem. That was the catalyst for our choice of a gardening theme for this year’s Microservices March program. As is the case with any garden, one problematic weed keeps creeping into the conversation: what is the difference between microservices and Kubernetes?

Microservices: The Plants in Your Garden

Plants are the main elements of a garden’s ecosystem. They are the living organisms that comprise your garden and contribute to its beauty and functionality. Similarly, microservices are the main building blocks of modern software applications. They’re the individual services that perform specific functions and work together to create a larger, more complex application.

Like plants, microservices vary in size, shape, and purpose. Some are small and specialized, tasked with one operation. Others are larger, more generalized in their operations, or composed of a cluster of smaller components. For example, a strawberry plant is small and only generates strawberries. This is akin to a microservice that only processes credit card transactions (small and specialized). A more generalized plant like cilantro produces not only the leafy greens used in salsa and guacamole, but also the coriander seed. This kind of plant is akin to a service that handles user authentication (a larger and more general process).

Containers: The Pots for Your Microservices

Most plants need to be planted in soil so they can draw nutrients and moisture – leave them bareroot and they’ll die. While it’s perfectly acceptable to put them straight into the ground, many gardeners opt for container gardening with pots because it lets them customize the soil, better defend against pests, and move plants around the garden (or even inside for the winter). Similarly, you have multiple options for where to put microservices, with the most common aptly being in a “container”. A containerized microservice includes everything you need to run the app, and it’s easily portable across runtime environments.

Kubernetes: The Gardening Tools

So we’ve got our microservices (plants) in our containers (pots) but how do we deploy, maintain, and scale them? With tools! Enter Kubernetes. Kubernetes provides many features and add‑ons that make it easier to manage and maintain microservices, and it’s considered the de facto tool for container orchestration. For example, Kubernetes provides automated load balancing, service discovery, and self‑healing capabilities (to take some of the maintenance and monitoring work off the shoulders of your Devs). And when everyone wants your microservices (plants) to produce more? Kubernetes supports automatic scaling and rolling updates to keep your microservice applications running smoothly.
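As a small illustration of that automatic scaling, a single HorizontalPodAutoscaler manifest (a sketch with hypothetical names, using the standard Kubernetes autoscaling/v2 API) is all it takes to let a microservice grow and shrink with demand:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout                 # hypothetical microservice Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU passes 70%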

Note: Kubernetes alternatives, such as Docker Swarm and HashiCorp Nomad, offer much the same functionality.

Microservices Without Kubernetes: The 'Wild Garden' Option

We’ve established the case for using Kubernetes to manage microservices and prevent applications from growing wild and out of control. But Kubernetes and containers are not always necessary. Depending on the complexity of your application, you might instead choose to use a traditional virtual (or even physical) machine platform, which can still be deployed or provisioned using automation tools. These tools don’t provide the same level of automation and scalability as Kubernetes, but you might be okay with that tradeoff for less complicated or non‑critical apps. Another option is to use serverless computing platforms (such as AWS Lambda) that enable you to run your microservices code in response to events without worrying about the underlying infrastructure.

Not Everything in Kubernetes Is Microservices

Savvy gardeners know that companion planting – for example, marigolds alongside tomatoes – makes better use of resources and contributes to a healthier garden. Similarly, Kubernetes provides a wide range of features and add‑ons that make it good for more than microservices management. Kubernetes can be used for non‑microservices apps (such as machine learning models or monoliths), for instance. We see a lot of customers using Kubernetes to manage microservices and other applications, with some even deploying virtual machines (VMs) inside Kubernetes! This can be a great option for organizations that are transitioning legacy monolithic apps to a microservices‑oriented architecture.

Getting to the Root of the Matter

We hope you take two things away from this post (aside from a hankering to visit your local nursery):

  • Not everything deployed in Kubernetes is a microservice. The features and add‑ons Kubernetes provides make it a useful tool in a variety of ecosystems.
  • Microservices don’t always need to be managed with Kubernetes. Some applications flourish with alternative tools and a “wild garden” approach.

Check out the following resources for more details about the fundamentals of microservices:

The Benefits of an API-First Approach to Building Microservices
https://www.nginx.com/blog/benefits-of-api-first-approach-to-building-microservices/
Thu, 19 Jan 2023

APIs are the connective tissue of cloud‑native applications – the means by which an application’s component microservices communicate. As applications grow and scale, so does the number of microservices and APIs. While this is an unavoidable outcome in most cases, it creates significant challenges for the Platform Ops teams responsible for ensuring the reliability, scalability, and security of modern applications. We call this problem API sprawl and wrote about it in a previous blog post.

As a first attempt to solve API sprawl, an organization might try to use a top‑down approach by implementing tools for automated API discovery and remediation. While this is effective in the near term, it often imposes an undue burden on the teams responsible for building and operating APIs and microservices. They either have to rework existing microservices and APIs to address security and compliance issues or go through an arduous review process to obtain the required approvals. This is why many large software organizations adopt a decentralized approach that uses adaptive governance to give developers the autonomy they need.

Rather than putting in last‑minute safeguards, a bottom‑up approach to the problem is more effective over the long term. The teams building and operating APIs for different microservices and applications are the first to be involved, and the effort often begins with adopting an API‑first approach to software development across the organization.

What Is API-First?

APIs have been around for decades. But they are no longer simply “application programming interfaces”. At their heart, APIs are developer interfaces. Like any user interface, APIs need planning, design, and testing. API‑first is about acknowledging and prioritizing the importance of connectivity and simplicity across all the teams operating and using APIs. It prioritizes communication, reusability, and functionality for API consumers, who are almost always developers.

There are many paths to API‑first, but a design‑led approach to software development is the end goal for most companies embarking on an API‑first journey. In practice, this approach means APIs are completely defined before implementation. Work begins with designing and documenting how the API will function. The team relies on the resulting artifact, often referred to as the API contract, to inform how they implement the application’s functionality.

Explore design techniques to support an API‑first approach to software development that is both durable and flexible in Chapter 1 of the eBook Mastering API Architecture from O’Reilly, compliments of NGINX.

The Value of API-First for Organizations

An API‑first strategy is often ideal for microservices architectures because it ensures application ecosystems begin life as modular and reusable systems. Adopting an API‑first software development model provides significant benefits for both developers and organizations, including:

  • Increased developer productivity – Development teams can work in parallel, able to update backend applications without impacting the teams working on other microservices which depend on the applications’ APIs. Collaboration is often easier across the API lifecycle since every team can refer to the established API contract.
  • Enhanced developer experience – API‑first design prioritizes the developer experience by ensuring that an API is logical and well‑documented. This creates a seamless experience for developers when they interact with an API. Learn why it’s so important for Platform Ops teams to take the API developer experience into consideration.
  • Consistent governance and security – Cloud and platform architects can organize the API ecosystem in a consistent way by incorporating security and governance rules during the API design phase. This avoids the costly reviews required when issues are discovered later in the software process.
  • Improved software quality – Designing APIs first ensures security and compliance requirements are met early in the development process, well before the API is ready to be deployed to production. With less need to fix security flaws in production, your operations, quality, and security engineering teams have more time to work directly with the development teams to ensure quality and security standards are met in the design phase.
  • Faster time to market – With fewer dependencies and a consistent framework for interservice communication, different teams can build and improve their services much more efficiently. A consistent, machine‑readable API specification is one tool that can help developers and Platform Ops teams to work better together.

Overall, adopting an API‑first approach can help a company build a more flexible, scalable, and secure microservices architecture.

How Adopting a Common API Specification Can Help

In the typical enterprise microservice and API landscape, there are more components in play than a Platform Ops team can keep track of day to day. Embracing and adopting a standard, machine‑readable API specification helps teams understand, monitor, and make decisions about the APIs currently operating in their environments.

Adopting a common API specification can also help improve collaboration with stakeholders during the API design phase. By producing an API contract and formalizing it into a standard specification, you can ensure that all stakeholders are on the same page about how an API will work. It also makes it easier to share reusable definitions and capabilities across teams.

Today there are three common API specifications, each supporting most types of APIs:

  • OpenAPI – JSON or YAML descriptions of all web APIs and webhooks
  • AsyncAPI – JSON or YAML descriptions of event‑driven APIs
  • JSON Schema – JSON descriptions of the schema objects used for APIs

REST APIs make up the bulk of APIs in production today and the OpenAPI Specification is the standard way to write an API definition for a REST API. It provides a machine‑readable contract that describes how a given API functions. The OpenAPI Specification is widely supported by a variety of API management and API gateway tools, including NGINX. The rest of this blog will focus on how you can use the OpenAPI Specification to accomplish a few important use cases.

The OpenAPI Specification is an open source format for defining APIs in either JSON or YAML. You can include a wide range of API characteristics, as illustrated by the following simple API example. Here a simple HTTP GET request returns a list of items on an imaginary grocery list.

openapi: 3.0.0
info:
  version: 1.0.0
  title: Grocery List API
  description: An example API to illustrate the OpenAPI Specification

servers:
  - url: https://api.example.io/v1

paths:
  /list:
    get:
      description: Returns a list of stuff on your grocery list             
      responses:
        '200':
          description: Successfully returned a list
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    item_name:
                      type: string

Definitions that follow the OpenAPI Specification are both human‑ and machine‑readable. This means there is a single source of truth that documents how each API functions, which is especially important in organizations with many teams building and operating APIs. Of course, to manage, govern, and secure APIs at scale you need to make sure that the rest of the tools in your API platform – API gateways, developer portals, and advanced security – also support the OpenAPI Specification.

Dive deeper into how to design REST APIs using the OpenAPI Specification in Chapter 1 of Mastering API Architecture.

Benefits of Adopting a Common API Specification

Using a common API specification, such as the OpenAPI Specification, has several benefits:

  • Improved interoperability – A common, machine‑readable specification means different systems and clients can consume and use the API contract. This makes it easier for Platform Ops teams to integrate, manage, and monitor complex architectures.
  • Consistent documentation – The API contract is documented in a standard format, including the endpoints, request and response formats, and other relevant details. Many systems can use the contract to generate comprehensive documentation, providing clarity and making it easier for developers to understand how to use the API.
  • Better testing – API specifications can be used to automatically generate and run tests, which can help ensure the API implementation adheres to the contract and is working as expected. This can help identify issues with an API before it is published to production.
  • Improved security – Advanced security tools can use the OpenAPI Specification to analyze API traffic and user behavior. They can apply positive security by verifying that API requests comply with the methods, endpoints, and parameters supported by the API endpoint. Non‑conforming traffic is blocked by default, reducing the number of calls your microservices have to process.
  • Easier evolution – API specifications can help facilitate the evolution of the API contract and application itself over time by providing a clear and standard way to document and communicate changes in both machine‑ and human‑readable formats. When coupled with proper versioning practices, this helps minimize the impacts of API changes on API consumers and ensures that an API remains backward compatible.

Overall, using a common API specification can help to improve the interoperability, documentation, testing, security, and gradual evolution of an API.

How NGINX Supports API-First Software Development

NGINX provides a set of lightweight, cloud‑native tools that make it easy to operate, monitor, govern, and secure APIs at scale. For example, API Connectivity Manager, part of F5 NGINX Management Suite, provides a single management plane for your API operations. With it you can configure and manage API gateways and developer portals. As an API‑first tool itself, every function is accessible via REST API, making CI/CD automation and integration easy for DevOps teams.

Using the OpenAPI Specification, you can use NGINX products to:

Diagram showing how API Connectivity Manager leverages an OpenAPI Specification for three uses: publishing the API to an API gateway, publishing documentation at the developer portal, and setting security policies on a WAF
Use the OpenAPI Specification to publish an API to the API gateway and documentation to the developer portal, and to set security policies for the WAF via CI/CD pipelines or the user interface

Publish APIs to the API Gateway

API Connectivity Manager uses the OpenAPI Specification to streamline API publication and management. API developers can publish APIs to the API gateway using either the NGINX Management Suite user interface or the fully declarative REST API. APIs are added to the gateway as API proxies, which contain all the ingress, backend, and routing configurations the API gateway needs to direct incoming API requests to the backend microservice. You can use the REST API to deploy and manage APIs as code by creating simple CI/CD automation scripts with tools like Ansible.

For complete instructions on using the OpenAPI Specification to publish an API, see the API Connectivity Manager documentation.

Generate API Documentation for the Developer Portal

Maintaining documentation is often a headache for API teams. But out-of-date documentation on developer portals is also a major symptom of API sprawl. API Connectivity Manager uses the OpenAPI Specification to automatically generate documentation and publish it to the developer portal, saving API developers time and ensuring API consumers can always find what they need. You can upload OpenAPI Specification files directly via the API Connectivity Manager user interface or REST API.

For complete instructions on publishing API documentation to the developer portal, see the API Connectivity Manager documentation.

Apply Positive Security to Protect API Endpoints

You can also use the OpenAPI Specification to verify that API requests to the NGINX Plus API gateway comply with what an API supports. By applying positive security (a security model that defines what is allowed and blocks everything else), you can prevent malicious requests from probing your backend services for potential vulnerabilities.
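To see what that enforcement works from, the grocery-list definition shown earlier can be tightened so that only the declared method and parameters are valid; under a positive security model, anything outside this contract is rejected. The fragment below is a sketch extending that hypothetical API:

paths:
  /list:
    get:
      parameters:
        - name: limit                  # the only accepted query parameter
          in: query
          required: false
          schema:
            type: integer
            minimum: 1
            maximum: 100
      responses:
        '200':
          description: Successfully returned a list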

At the time of writing, you can’t use API Connectivity Manager to configure NGINX App Protect WAF; this functionality will be available later in 2023. You can, however, use Instance Manager (another NGINX Management Suite module) and the OpenAPI Specification to write custom policies for your WAF. For additional information, see the documentation for NGINX App Protect WAF and Instance Manager.

Learn more about API security and threat modeling, and how to apply authentication and authorization at the API gateway in Chapter 7 of Mastering API Architecture.

Summary

An API‑first approach to building microservices and applications can benefit your organization in many ways. Aligning teams around the OpenAPI Specification (or another common API specification that is both human‑ and machine‑readable) helps enable collaboration, communication, and operations across teams.

Modern applications operate in complex, cloud‑native environments. Adopting tools that enable an API‑first approach to operating APIs is a critical step towards realizing your API‑first strategy. With NGINX you can use the OpenAPI Specification to manage your APIs at scale across distributed teams and environments.

Start a 30‑day free trial of NGINX Management Suite, which includes access to API Connectivity Manager, NGINX Plus as an API gateway, and NGINX App Protect to secure your APIs.

Which 12 Metrics to Monitor for a Successful API Strategy
https://www.nginx.com/blog/which-12-metrics-to-monitor-for-a-successful-api-strategy/
Tue, 17 Jan 2023

As companies adopt API‑first design practices to build modern applications, measuring the operational performance and value of those APIs becomes a top priority. Establishing a framework that clearly defines and connects API metrics with key performance indicators (KPIs) is one of the most important steps to ensure a successful API strategy.

Typically, KPIs are tied to specific goals. They have a defined time frame and are aligned to the outcomes that your API strategy needs to deliver. API metrics, in contrast, are significant data points. Not every metric is a KPI, but every KPI begins as a metric.

So, how do you start? First, you need to be clear – at the outset – about the goal of your API strategy and then choose the metrics that align with that goal. Remember that each team needs to measure and track different metrics depending on what is important to it and what is essential for the business.

Broadly, there are three overarching types of API metrics that companies can track – operational, adoption, and product metrics – and each type answers a different question.

Imagine these overarching metrics as a pyramid. At the bottom, operational metrics measure the tactical performance of individual APIs and the infrastructure supporting them. At the top, product metrics measure the business value created by your APIs. The two are connected by adoption metrics, which track the growth of the API program with end users (developers). Generally, product metrics and adoption metrics align to the business outcomes you need to measure, while operational metrics align with the technical standards you need to maintain.

In this post we break down 12 specific metrics that are critical to measure, discuss how they enable infrastructure and application teams, and explain the ways the metrics relate to KPIs.

Operational Metrics

When you are just getting started, operational metrics are usually the first thing to measure. They are tactical and provide insights into how APIs are functioning. Operational metrics are not usually KPIs themselves. Instead, they help you measure the quality and performance of the software your teams are building. They can provide early indicators of emerging problems, or help you drill down and discover issues that might be impacting your critical KPIs.

The operational metrics you track will vary by team and responsibility.

Infrastructure Teams

Platform Ops is the team responsible for maintaining, connecting, and securing the collection of infrastructure and technologies used by different teams to deliver applications. For API programs, this often includes API gateways and API developer portals.

Key metrics for infrastructure teams like Platform Ops include:

  1. Uptime – Even as one of the most basic metrics, uptime is the gold standard for measuring the availability of a service. This is often tied to a service level agreement (SLA).
  2. CPU and memory usage – Tracking resource utilization at the API gateway is critical to identifying when you might need to scale out your instances. It also acts as an early warning when something is starting to break or usage is spiking due to errors.
  3. Total pass and error rates – Measuring how often APIs trigger HTTP error (non‑200) status codes helps you understand how error‑prone your APIs may be. This aggregate measure provides information to help judge the overall quality of the APIs your teams are putting into production.

Application Teams

Application teams, made up of API developers and service owners, are responsible for building and operating individual services or applications. These could be used as part of a larger product, to integrate with a partner, or when delivering APIs as a service to developers.

The following metrics are important for application teams to measure:

  1. Requests per minute – This performance metric measures the number of requests your API is handling. While it varies over time, you typically want to manage it downward to ensure the best experience for API users.
  2. Average and maximum latency – Tracking the average time it takes your API to receive a request and return a response is crucial. A single slow API can negatively impact the user experience and therefore negatively impact the business.
  3. Errors per minute – Like everything else, no API is perfect. Failures are a matter of when, not if. You need to monitor errors and have a planned course of action in place for fixing them before they suddenly start to tick up (a sample alerting rule follows this list).
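As referenced above, operational metrics are most useful when they trigger action automatically. The sketch below is a hypothetical Prometheus alerting rule for an error-rate threshold; the metric name is an assumption, so substitute whatever your API gateway, exporter, or APM tool actually exposes:

groups:
  - name: api-operational-metrics
    rules:
      - alert: HighApiErrorRate
        expr: sum(rate(http_requests_total{status=~"5.."}[5m])) > 5   # assumed metric name and threshold
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "API 5xx error rate has stayed above the threshold for 10 minutes"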

Dig into API operations and learn which KPIs and metrics are critical from a business perspective in chapters 3–5 of the eBook Mastering API Architecture from O’Reilly, compliments of NGINX.

Adoption Metrics

For an API‑first business, it’s essential to look beyond engineering metrics and understand how developers are interacting with your APIs. You also need to measure and monitor the API developer experience to ensure developers are adopting and getting value from your APIs.

A few examples of adoption metrics include:

  1. Unique API consumers – Often time‑bound into monthly users, this metric measures how many developers are adopting and using your APIs. Ideally this metric grows over time as more developers integrate your API into their applications.
  2. API usage growth – This metric also measures API adoption and is often the preferred metric for doing so. Ideally, API traffic grows monthly as the number of applications and developers using them also increases.
  3. Time to first call – This metric measures how long it takes a developer to create an account, generate API credentials, and run the first API call. Enabling developers to get up and running as fast as possible is a high priority, making this metric the most important for measuring the overall API developer experience.

Note: We recommend that at least one of your KPIs seeks to measure API adoption. This helps calculate the overall growth of your API program. For example, you might set a KPI to increase the number of developers who have created an ongoing integration or app using your API.

Product Metrics

API product metrics play a major role in understanding the value of an API. Although only a small subset of APIs may directly contribute to revenue, every API needs to provide value to the business.

Key product metrics to measure include:

  1. Direct and indirect revenue – These metrics target the different ways APIs contribute to revenue. While some APIs are directly monetized, others support integrations with business partners or are third‑party integrations valued by customers. As with the adoption rate for your APIs, tracking indirect revenue helps developers build revenue‑generating apps for partners.
  2. Applications per API – APIs need to be reusable. This metric measures how many applications integrate with an API to see which APIs provide the most value.
  3. Number of partners – APIs often enable business relationships. Tracking the number of partner API integrations helps drive adoption and demonstrate value to other business units.

Note: These product metrics align closely with business impact, and you might choose to turn some into KPIs depending on your business goals. For example, if the business goal of your API strategy is to reach more customers through third‑party providers, you want to track both the number of partners using your APIs and the indirect revenue generated through those integrations.

Explore the API product lifecycle – including business and operational KPIs – in more depth in chapters 3–5 of Mastering API Architecture.

Conclusion

Aligning API metrics and business KPIs is one of the principal ways to make data‑driven decisions and ensure your API strategy delivers the value your organization requires. And not only that – gaining visibility into your APIs can also empower infrastructure and application teams to measure the operational metrics that matter most to them.

At NGINX, we provide visibility into dozens of important API metrics. You can view real‑time and historical metrics, and easily export them to your preferred application performance monitoring (APM) or data analytics solution.

Get started today with a 30‑day free trial of NGINX Management Suite, which includes access to API Connectivity Manager, NGINX Plus as an API gateway, and NGINX App Protect to secure your APIs.

Why Managing WAFs at Scale Requires Centralized Visibility and Configuration Management
https://www.nginx.com/blog/why-managing-wafs-at-scale-requires-centralized-visibility-and-configuration-management/
Wed, 11 Jan 2023

In F5’s The State of Application Strategy in 2022 report, 90% of IT decision makers reported that their organizations manage between 200 and 1,000 apps, up 31% from five years ago. In another survey by Enterprise Strategy Group about how Modern App Security Trends Drive WAAP Adoption (May 2022, available courtesy of F5), the majority of IT decision makers said application security has become more difficult over the past 2 years, with 72% using a WAF to protect their web applications. As organizations continue their digital transformation and web applications continue to proliferate, so too does the need for increased WAF protection. But as with most tools, the more WAFs you have, the harder they are to manage consistently and effectively.

The challenges of managing WAFs at scale include:

  • Lack of adequate visibility into application‑layer attack vectors and vulnerabilities, especially given the considerable number of them
  • Balancing WAF configurations between overly permissive and overly protective; it’s time‑consuming to fix the resulting false positives or negatives, especially manually and at scale
  • Ensuring consistent application policy management at high volumes, which is required to successfully identify suspicious code and injection attempts
  • Potential longtail costs – some extremely damaging – of failure to maintain even a single WAF in your fleet, including monetary loss, damage to reputation and brand, loss of loyal customers, and penalties for regulatory noncompliance
  • The need to support and update WAF configuration over time

WAF management at scale means both security and application teams are involved in setup and maintenance. To effectively manage WAFs – and secure applications properly – they need proper tooling that combines holistic visibility into attacks and WAF performance with the ability to edit and publish configurations on a global scale. In this blog, we explore the benefits of centralized security visualization and configuration management for your WAF fleet.

Actionable Security Insights at Scale with Centralized WAF Visibility

To easily manage WAFs at scale and gain the insight needed to make informed decisions, you need a management plane that offers visibility across your WAF fleet from a single pane of glass. You can view information about top violations and attacks, false positives and negatives, apps under attack, and bad actors. You can discover how to tune your security policies based on attack graphs – including geo‑locations – and drill down into WAF event logs.

How NGINX Can Help: F5 NGINX Management Suite Security Monitoring

We are happy to announce the general availability of the Security Monitoring module in F5 NGINX Management Suite, the unified traffic management and security solution for your NGINX fleet which we introduced in August 2022. Security Monitoring is a visualization tool for F5 NGINX App Protect WAF that’s easy to use out of the box. It not only reduces the need for third‑party tools, but also delivers unique, curated insights into the protection of your apps and APIs. Your security, development, and Platform Ops teams gain the ability to analyze threats, view protection insights, and identify areas for policy tuning – making it easier for them to detect problems and quickly remediate issues.

NMS Security Monitoring dashboard showing web attacks, bot attacks, threat intelligence, attack requests and top attack geolocations
Figure 1: The Security Monitoring main dashboard provides security teams overview visibility of all web attacks, bot attacks, threat intelligence, attack requests, and top attack geolocations, plus tabs for further detailed threat analysis and quick remediation of issues.

With the Security Monitoring module, you can:

  • Use dashboards to quickly see top violations, bot attacks, signatures, attacked instances, CVEs, and threat campaigns triggered per app or in aggregate. Filter across various security log parameters for more detailed analysis.
  • Make tuning decisions with insights into signature‑triggered events, including information about accuracy, level of risk, and what part of the request payload triggered signatures for enforcement.
  • Discover top attack actors (client IP addresses), geolocation vectors, and attack targets (URLs) per app or in aggregate.
  • See WAF events with details about requests and violations, searchable by request identifiers and other metrics logged by NGINX App Protect WAF.

Configuration Management for Your Entire NGINX App Protect WAF Fleet

While awareness and visibility are vital to identifying app attacks and vulnerabilities, they’re of little value if you can’t also act on the insights you gain by implementing WAF policies that detect and mitigate attacks automatically. The real value of a WAF is defined by the speed and ease with which you can create, deploy, and modify policies across your fleet of WAFs. Manual updates require vast amounts of time and accurate recordkeeping, leaving you more susceptible to attacks and vulnerabilities. And third‑party tools – while potentially effective – add unnecessary complexity.

A centralized management plane enables configuration management with the ability to update security policies and push them to one, several, or all your WAFs with a single press of a button. This method has two clear benefits:

  • You can quickly deploy and scale policy updates in response to current threats across your total WAF environment.
  • Your security team has the ability to control the protection of all the apps and APIs your developers are building.

How NGINX Can Help: F5 NGINX Management Suite Instance Manager – Configuration Management

You can now manage NGINX App Protect WAF at scale with the Instance Manager module in NGINX Management Suite. This enhancement gives you a centralized interface for creating, modifying, and publishing policies, attack signatures, and threat campaigns for NGINX App Protect WAF, resulting in more responsive protection against threats and handling of traffic surges.

NMS Instance Manager showing policies selection for a publication to a WAF instance group.
Figure 2: Instance Manager enables security teams to create, modify, and publish policies to one, several, or an entire fleet of NGINX App Protect WAF instances. This image shows policies being selected for publication to a WAF instance group.

With the Instance Manager module, you can:

  • Define configuration objects in a single location and push them out to the NGINX App Protect WAF instances of your choosing. The objects include security policies and deployments of attack signature updates and threat campaign packages.
  • Choose a graphical user interface (GUI) or REST API for configuration management. With the API, you can deploy configuration objects in your CI/CD pipeline.
  • See which policies and versions are deployed on different instances.
  • Use a JSON visual editor to create, view, and edit NGINX App Protect WAF policies, with the option to deploy instantly.
  • Compile NGINX App Protect WAF policies before deployment, to decrease the time required for updates on WAF instances.
  • View WAF logs and metrics through NGINX Management Suite Security Monitoring.

Take Control of Your WAF Security with NGINX Management Suite

To learn more, visit NGINX Management Suite and Instance Manager on our website or check out our documentation.

Ready to try NGINX Management Suite for managing your WAFs? Request your free 30-day trial.

The post Why Managing WAFs at Scale Requires Centralized Visibility and Configuration Management appeared first on NGINX.

NGINX Chats with ChatGPT and Gets Mixed Results from the Hottest Bot du Jour https://www.nginx.com/blog/nginx-chats-with-chatgpt-gets-mixed-results/ Thu, 15 Dec 2022 16:00:50 +0000 https://www.nginx.com/?p=70851 In the short time since its debut, ChatGPT has taken the technology world by storm. Created by AI research and deployment company OpenAI, ChatGPT is a browser‑based chatbot that uses OpenAI’s natural language models to learn how to interact with humans in what OpenAI calls a “conversational way”. It can do amazing things, like write [...]

Read More...

The post NGINX Chats with ChatGPT and Gets Mixed Results from the Hottest Bot du Jour appeared first on NGINX.

In the short time since its debut, ChatGPT has taken the technology world by storm. Created by AI research and deployment company OpenAI, ChatGPT is a browser‑based chatbot that uses OpenAI’s natural language models to learn how to interact with humans in what OpenAI calls a “conversational way”. It can do amazing things, like write songs and even Shakespearean sonnets. Programmers have been blown away by the detail of some responses ChatGPT provided to basic questions.

But some clouds have appeared. The Q&A site StackOverflow has temporarily banned responses generated by ChatGPT because too many were incorrect. And some users reported responses that were clearly incorrect or simply made up for oddball and even mainstream questions.

We are huge fans of new ways to help our users and community learn quickly and answer questions. We were curious to see whether ChatGPT provides quality answers for NGINX‑related questions and gave it a quick test drive.

Hey ChatGPT…

We asked ChatGPT four questions. For each, we show ChatGPT’s answer and assess its accuracy.

Question 1

Question: What is NGINX?

Answer: NGINX is a web server that can be used to serve websites and applications. It is known for its high performance and stability, making it a popular choice for many websites and applications. NGINX can also be used as a reverse proxy, load balancer, and HTTP cache. It is open source software, and is widely used on the Linux operating system.

Accuracy: Basically accurate. It does fail to mention API gateway as an additional common use case.

Question 2

Question: Does NGINX Open Source have a WAF?

Answer: Yes, the open source version of NGINX includes a web application firewall (WAF) module. The NGINX WAF is based on the ModSecurity open source WAF, which provides a flexible and powerful rules engine for protecting your web applications from common vulnerabilities such as SQL injection and cross‑site scripting attacks. The NGINX WAF can be configured to provide both positive and negative security models, allowing you to specify exactly which requests should be allowed and which should be blocked. However, keep in mind that the open source version of NGINX does not include support for the WAF module, so if you need help setting it up or using it, you will need to rely on community support or consult with an expert.

Accuracy: A mixed bag. The NGINX Open Source software available at nginx.org does not include a WAF, either as source code, in the precompiled binaries, or as a prebuilt dynamic module. However, it is possible to build a WAF for NGINX Open Source based on the open source ModSecurity WAF, in the form of a dynamic module built from the libmodsecurity library provided by Trustwave’s SpiderLabs. We provide instructions on our blog. Without these details, however, a reader trying to track down the WAF based on this answer might struggle to locate the needed resources and is likely to end up confused and frustrated.
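
For readers who do pursue the dynamic‑module route, the eventual wiring in nginx.conf is short. The sketch below is ours, not ChatGPT’s; it assumes you have already built the ModSecurity‑nginx connector module and a rules file as covered in our blog instructions, and the file paths shown are illustrative.

# Load the dynamic module built from libmodsecurity and the
# ModSecurity-nginx connector (path is relative to the NGINX prefix)
load_module modules/ngx_http_modsecurity_module.so;

events {}

http {
    server {
        listen 80;

        # Enable ModSecurity and point it at a rules file
        # (illustrative path; adjust to your installation)
        modsecurity on;
        modsecurity_rules_file /etc/nginx/modsec/main.conf;

        location / {
            # Proxy to a hypothetical backend app on the same host
            proxy_pass http://127.0.0.1:8080;
        }
    }
}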

On the other hand, the description of the ModSecurity WAF is informative, and it is true that as open source software the ModSecurity WAF dynamic module is not supported. (For completeness, we’ll mention that F5 NGINX did have a supported WAF based on ModSecurity for NGINX Plus, but it is no longer available for sale and will reach end of life in 2024. Instead, we now offer NGINX App Protect WAF to our paid subscribers as an add‑on to NGINX Plus and other products.)

Question 3

Question: Tell me how to configure NGINX

Answer: To configure NGINX, you will need to edit its configuration file, which is typically called nginx.conf and is located in the conf directory inside the NGINX installation directory.

The configuration file is written in the form of directives, which are instructions that tell NGINX how to behave. Each directive is specified on a new line, and typically takes the form directive_name directive_value;.

Here is an example configuration file that shows some common directives:

[Editor – We put this part of the answer in the monospace font and adjusted line lengths for easier reading.]

# This is a comment

# The number of worker processes to run.
# It is usually best to set this to the number of CPU cores on the server
worker_processes 4;

# The directory where the log files will be stored
error_log /var/log/nginx/error.log;

# The default server, which will be used if no other server matches 
# the request
server {
    # The listen directive specifies the port that the server will 
    # listen on
    listen 80;

    # The root directive specifies the root directory for the server
    root /var/www/html;

    # The server_name directive specifies the server's hostname
    server_name example.com;

    # The location directive specifies a URL pattern and associated 
    # handling rules
    location / {
        # This directive specifies that requests for the root URL 
        # should be handled by the index.html file in the root directory
        index index.html;
    }
}

You can find more information about the various directives that are available in the NGINX documentation, which you can access at https://nginx.org/en/docs/.

Accuracy: Well, it’s pretty good as far as it goes (the most dubious statement is that nginx.conf is located in the conf directory inside the NGINX installation directory, which applies at best only to certain OS vendor distributions). The descriptions of directives in the sample configuration are correct, but the overall structure isn’t complete, particularly in that it doesn’t enclose the server block in the http{} context as required. The sample configuration is also so sketchy that it’s not really useful. But given the wide variety of use cases which NGINX supports, it’s probably not reasonable to expect a complete answer.
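
To make the structural point concrete, here is a minimal reworking of ChatGPT’s example (our own sketch, not part of ChatGPT’s answer) with the server block wrapped in the required http{} context and an events{} block added, which NGINX also requires even when it is empty:

worker_processes 4;

error_log /var/log/nginx/error.log;

# NGINX requires an events block, even an empty one
events {}

# The server block must live inside the http context
http {
    server {
        listen 80;
        root /var/www/html;
        server_name example.com;

        location / {
            index index.html;
        }
    }
}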

However, this answer itself raises an interesting question – what sources is ChatGPT using for its answers? Neither the text nor the configuration in this answer is taken directly from documentation on nginx.org or nginx.com, and a Google search for several phrases in the answer yields no matches from any source. ChatGPT includes a link to official NGINX documentation, which prompts the question of why the answer itself isn’t derived from that source.

Question 4

Question: What are the Top 10 NGINX Configuration Mistakes?

Answer: Could not provide an answer.

Accuracy: This was a softball question based on the titles of a blog post and corresponding webinar, both of which are very similar to the question. The fact that ChatGPT could not answer at all underlines that it’s not to be confused with Google, which has a much broader index and found the blog. In fact, Google not only returned the blog as the first result – it provided the first seven items from the summary list in the intro of the blog itself.

Screenshot of Google search results for 'Top 10 NGINX Configuration Mistakes'

Conclusion

ChatGPT is a super powerful and super fun technology. It does a fantastic job answering some questions in a format that is more conversational than other bots. In some instances, it appears to do a good job providing detailed responses to programming questions. But its coverage remains spotty and your mileage will certainly vary with regard to NGINX information. Naturally, the best info on NGINX is at nginx.org and docs.nginx.com. But stay tuned as we continue to play with ChatGPT and see how it develops.

What’s your experience with ChatGPT? Join the conversation on the NGINX Community Slack to discuss with other NGINX power users how they are using this and other OpenAI tools.

The post NGINX Chats with ChatGPT and Gets Mixed Results from the Hottest Bot du Jour appeared first on NGINX.

2022 NGINX State of App and API Delivery Report https://www.nginx.com/blog/2022-nginx-state-of-app-api-delivery-report/ Tue, 13 Dec 2022 16:44:19 +0000 https://www.nginx.com/?p=70833 December is a natural time for reflection and introspection. As the year draws to a close, many organizations – including NGINX – are thinking about lessons learned over the past 12 months. Like us, you might be asking questions like: What insights can our data provide? What did we learn? What will we do differently and where should [...]

Read More...

The post 2022 NGINX State of App and API Delivery Report appeared first on NGINX.

Banner reading '2022 NGINX State of App and API Delivery Report'
December is a natural time for reflection and introspection. As the year draws to a close, many organizations – including NGINX – are thinking about lessons learned over the past 12 months. Like us, you might be asking questions like:

  • What insights can our data provide?
  • What did we learn?
  • What will we do differently and where should we keep powering forward?

At NGINX, our retrospective includes analyzing the input and feedback that our community shares with us in our annual survey. In 2022, the survey both yielded surprises and confirmed trends we’d been picking up throughout the year. In this blog, we surface key insights and share the 2022 NGINX State of App and API Delivery Report.

2022 Insights

Insight #1: Security (still) isn’t everybody’s job…and that’s ok.

As is typical in most surveys, we asked respondents to select from a list of job roles so we can understand who is completing the survey and detect interesting trends. We then used “job role” as a filter for a question about the extent to which respondents are responsible for security in their organizations. A mere 15% of respondents overall say they have nothing to do with security, with the expected slight variations across job roles (for example, data scientists are less likely than Platform Ops engineers to deal with security).

However, it gets more interesting when we segmented by size of organization: 44% of employees at large enterprises said they have nothing to do with security. This may sound alarming, but it doesn’t indicate that security isn’t important at large enterprises. Instead, the data tells us that large enterprises are more likely to have dedicated security teams.

Insight #2: Hardware isn’t dead and hybrid cloud is here to stay.

Not that long ago it was predicted that organizations delivering modern apps would shift their investments from on‑premises hardware to cloud deployments. We observed the rise of microservices architectures and containerization technologies (including Kubernetes) reflected in our annual surveys, and like others, thought that someday this might result in the end of hardware.

But the growing usage of clouds and modern app technologies only tells part of the story. Our 2022 survey data reinforces what we’re hearing from customers: most organizations use hybrid architectures. In fact, respondents indicated that usage of both on‑premises hardware and public cloud increases in parallel with workload size. The reason behind this trend is simple: cloud can be expensive and hard to secure. When an app doesn’t need the flexibility and agility offered by a cloud architecture, it makes sense to choose a traditional architecture located on premises, where it’s generally easier to secure.

Insight #3: System Administrators are not an endangered species.

The title System Administrator – or SysAdmin – is often associated with hardware, and there have been murmurs for years that the role was becoming obsolete. Given that hardware isn’t going away, it’s logical to think that SysAdmin jobs are also “safe”, but the survey data tells us an even more interesting and reassuring story. Of the 562 people who said they do SysAdmin work:

  • 57% also claimed a Development team role
  • 43% also claimed a Platform Operations role
  • 38% also claimed a Leadership role
  • 31% also claimed a SecOps role
  • 17% also claimed a Data Science role

So what does this mean? When we add a filter by company size, we discover respondents from small companies (under 50 employees) are more likely to hold a SysAdmin role and unsurprisingly, are responsible for multiple job functions. Our conclusion is that if you’re a SysAdmin looking to broaden your skill set for modern app development, a startup is a great place to leverage what you already know and potentially transition towards a new career. #StartupLife

Insight #4: Organizational constraints are more prevalent than tool/technology limitations.

Each year we seek to learn about the barriers facing people working on app and API delivery projects. This year’s survey uncovered a list of problems that can be divided into two categories: work challenges and tool challenges.

While 17% reported they aren’t facing any challenges (great news!), the remaining 83% are encountering problems. The most common challenge is a lack of technical skills, and within that group, many told us they’re also facing a steep learning curve for tools along with a lack of staffing or resources. These work challenges typically relate to an organization’s practices around skills development, processes, and tool sprawl.

2022 NGINX State of App and API Delivery Report

The four insights we’ve shared are just a few of the many we gleaned from your survey responses. Check out the full report.

Thank You!

A huge thank you to everyone who participated in our survey!

Did you miss the survey or have more feedback? We still want to hear from you! Just drop us a note in the blog comments, DM us on social media (we’re on LinkedIn, Twitter, and Facebook), or join NGINX Community Slack.

You can view the findings from past surveys here:

The post 2022 NGINX State of App and API Delivery Report appeared first on NGINX.

Why You Need an API Developer Portal for API Discovery https://www.nginx.com/blog/why-you-need-api-developer-portal-for-api-discovery/ Tue, 06 Dec 2022 16:00:51 +0000 https://www.nginx.com/?p=70811 Enterprises increasingly rely on APIs to connect applications and data across business lines, integrate with partners, and deliver customer experiences. According to TechRadar, today the average enterprise is leveraging a total of 15,564 APIs, up 201% year-on-year. As the number of APIs continues to grow, the complexity of managing your API portfolio increases. It gets harder to [...]

Read More...

The post Why You Need an API Developer Portal for API Discovery appeared first on NGINX.

Enterprises increasingly rely on APIs to connect applications and data across business lines, integrate with partners, and deliver customer experiences. According to TechRadar, today the average enterprise is leveraging a total of 15,564 APIs, up 201% year-on-year.

As the number of APIs continues to grow, the complexity of managing your API portfolio increases. It gets harder to discover and track what APIs are available and where they are located, as well as find documentation about how to use them. Without a holistic API strategy in place, APIs can proliferate faster than your Platform Ops teams can manage them. We call this problem API sprawl and in a previous post we explained why it’s such a significant threat. In this post we explore in detail how you can fight API sprawl by setting up an API developer portal with help from NGINX.

Build an Inventory of Your APIs

Ultimately, APIs can’t be useful until they are used – which means API consumers need a way to find them. Without the proper systems in place, API sprawl makes it difficult for developers to find the APIs they need for their applications. At best, lists of APIs are kept by different lines of business and knowledge is shared across teams through informal networks of engineers.

One of the first steps toward fighting API sprawl is creating a single source of truth for your APIs. That process starts with building an inventory of your APIs. An accurate inventory is a challenge, though – it’s a constantly moving target as new APIs are introduced and old ones are deprecated. You also need to find any “shadow APIs” across your environments – APIs that have been forgotten over time, were improperly deprecated, or were built outside your standard processes.

Unmanaged APIs are one of the most insidious symptoms of API sprawl, with both obvious security implications and hidden costs. Without an accurate inventory of available APIs, your API teams must spend time hunting down documentation. There’s significant risk of wasteful duplicated effort as various teams build similar functionality. And changes to a given API can lead to costly cascades of rework or even outages without proper version control.

Techniques like automated API discovery can help you identify and treat the symptom of unmanaged APIs. But to solve the problem, you need to eliminate the root causes: broken processes and lack of ownership. In practice, integrating API inventory and documentation into your CI/CD pipelines is the only approach that ensures visibility across your API portfolio in the long term. Instead of having to manually track every API as it comes online, you only need to identify and remediate exceptions.

Streamline API Discovery with an API Developer Portal

Streamlining API discovery is one area where an API developer portal can help. It provides a central location for API consumers to discover APIs, read documentation, and try out APIs before integrating them into their applications. Your API developer portal can also serve as the central API catalog, complete with ownership and contact info, so everyone knows who is responsible for maintaining APIs for different services.

A core component of our API reference architecture, an effective API developer portal enables a few key use cases:

  • Streamline API discovery – Publish your APIs in an accessible location so developers can easily find and use your APIs in their projects
  • Provide clear, up-to-date documentation – Ensure developers always have access to the most up-to-date documentation about how an API functions
  • Ensure proper versioning – Introduce new versions of an API without creating outages for downstream applications, with support for versioning
  • Generate API credentials – Streamline the onboarding process so developers can sign in and generate credentials to use for API access
  • Try out APIs – Enable developers to try out APIs on the portal before they integrate them into their projects

As part of your API strategy, you need to figure out how to maintain your API developer portal. You need an automated, low‑touch approach that seamlessly integrates publishing, versioning, and documenting APIs without creating more work for your API teams.

Create a Single Source of Truth for Your APIs with NGINX

To enable seamless API discovery, you need to create a single source of truth where developers can find your APIs, learn how to use them, and onboard them into their projects. That means you’ll need a developer portal – and up-to-date documentation.

API Connectivity Manager, part of F5 NGINX Management Suite, helps you integrate publication, versioning, and documentation of APIs directly into your development workflows, so your API developer portal is never out of date. In addition to making it easy to create API developer portals to host your APIs and documentation, API Connectivity Manager lets you add custom pages and completely customize the developer portal to match your branding.

Let’s look at how API Connectivity Manager helps you address some specific use cases. Refer to the API Connectivity Manager documentation for detailed instructions about setting up a developer portal cluster and publishing a developer portal.

Automatically Generate API Documentation

There is often a wide gulf between the level of quality and detail your API consumers expect from documentation and what your busy API developers can realistically deliver with limited time and resources. Many homegrown documentation tools fail to integrate with the development lifecycle or other engineering systems. This doesn’t have to be the case.

How NGINX can help: API Connectivity Manager uses the OpenAPI Specification to publish APIs to the API gateway and automatically generate the accompanying documentation on the developer portal, saving API developers time and ensuring API consumers can always find what they need. You can upload OpenAPI Specification files directly via the API Connectivity Manager user interface, or by sending a call via the REST API. This makes it easy to automate the documentation process via your CI/CD pipeline.
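
If you haven’t worked with the format before, an OpenAPI Specification file is simply a YAML (or JSON) description of your API. The minimal sketch below is purely illustrative – the title, path, and schema are invented for demonstration rather than taken from the product:

openapi: 3.0.3
info:
  title: Sentence Generator API      # illustrative service name
  version: 1.0.0                     # the version surfaced on the portal
  description: Returns randomly generated sentences.
paths:
  /sentences:
    get:
      summary: Generate a sentence
      responses:
        '200':
          description: A generated sentence
          content:
            application/json:
              schema:
                type: object
                properties:
                  sentence:
                    type: string

A file along these lines is what API Connectivity Manager consumes to generate the accompanying reference documentation described above.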

To publish documentation in API Connectivity Manager, click Services in the left navigation column to open the Services tab. Click the name of your Workspace or create a new one.

Once you are in the Workspace, click API Docs below the box that has the name and description of your Workspace (example-api in the screenshot). Simply click the Add API Doc button to upload your OpenAPI Specification file. Click the Save button to publish the documentation to the developer portal.

Screenshot of window for adding API documentation in API Connectivity Manager
Figure 1: Creating documentation by uploading an OpenAPI Specification file to
API Connectivity Manager

Ensure Proper Versioning

Version changes must always be handled with care, and this is especially true in microservices environments where many services might be interacting with a single API. Without a careful approach to introducing new versions and retiring old ones, a single breaking change can lead to a cascading outage across dozens of microservices.

How NGINX can help: Using OpenAPI Specification files with API Connectivity Manager enables easy version control for your APIs. In addition to setting the version number, you can provide documentation for each version and manage its status (latest, active, retired, or deprecated).

To publish a new version of an API, click Services in the left navigation column. Click the name of your Workspace in the table, and then click the name of your Environment on the page that opens. Next, click the + Add Proxy button. From here you can upload the OpenAPI Specification, set the base path and version to create the URI (for example, /api/v2/), and input other important metadata. Click the Publish button to save and publish your API proxy.

The original version of the API remains available alongside your new version. This gives your users time to gradually migrate their applications or services to the most recent version. When you are ready, you can fully deprecate the original version of your API. Figure 2 shows two versions of the Sentence Generator API published and in production.

Screenshot of Services Workspace in API Connectivity Manager with two API versions
Figure 2: Managing active API versions within API Connectivity Manager

Generate API Credentials

To drive adoption of your APIs, you need to make the onboarding process as simple as possible for your API consumers. Once users find the APIs they need, they require a way to securely sign into the developer portal and generate credentials. These credentials grant them access to the functionality of your API. Most often you’ll want to implement a self‑managed workflow so users can sign up on their own.

How NGINX can help: API Connectivity Manager supports self‑managed API workflows on the developer portal so users can generate their own resource credentials for accessing APIs. Resource credentials can be managed on the portal using API keys or HTTP Basic authentication. You can also enable single sign‑on (SSO) on the developer portal to secure access and allow authenticated API consumers to manage resource credentials.

To quickly enable SSO on the developer portal, click Infrastructure in the left navigation column. Click the name of your Workspace in the table (in Figure 3, it’s team-sentence).

Screenshot of Infrastructure Workspaces tab in API Connectivity Manager
Figure 3: List of Workspaces on the Infrastructure tab

In the table on the Workspace page, click the name of the Environment you want to configure (in Figure 4, it’s prod).

Screenshot of list of Environments on Infrastructure Workspaces tab in API Connectivity Manager
Figure 4: List of Environments in a Workspace

In the Developer Portal Clusters section, click the icon in the Actions column for the developer portal you are working on and select Edit Advanced Config from the drop‑down menu. In Figure 5, the single Developer Portal Cluster is devportal-cluster.

Screenshot of how to edit a Developer Portal Cluster to define a policy in API Connectivity Manager
Figure 5: Selecting Edit Advanced Config option for a Developer Portal Cluster

Next, click Global Policies in the left navigation column. Configure the OpenID Connect Relying Party policy by clicking on the icon in the Actions column of its row and selecting Edit Policy from the drop‑down menu. For more information, see the API Connectivity Manager documentation.

Screenshot of how to activate a policy for single sign-on in API Connectivity Manager
Figure 6: Configuring the OpenID Connect Relying Party global policy to enable single sign‑on

Try Out APIs on the Developer Portal

One way you might measure the success of your API strategy is to track the “time to first API call” metric, which reveals how long it takes a developer to send a basic request with your API.

We’ve established that clear, concise documentation is essential as the first entry point for your API, where your users get a basic understanding of how to work with an API. Usually, developers must then write new code to integrate the API into their application before they can test API requests. You can help developers get started much faster by providing a way to directly interact with an API on the developer portal using real data – effectively making their first API call without writing a single line of code for their application!

How NGINX can help: Once you enable SSO for your API Connectivity Manager developer portals, API consumers can use the API Explorer to try out API calls on your documentation pages. They can use it to explore the API’s endpoints, parameters, responses, and data models, and to test API calls directly from their browsers.

Figure 7 shows the API Explorer in action – in this case, trying out the Sentence Generator API. The user selects the appropriate credentials, creates the request, and receives a response with actual data from the API.

Screenshot of testing an API in a developer portal with the API Connectivity Manager API Explorer tool
Figure 7: Testing an API on the developer portal

Summary

APIs are crucial to your organization. And the first step towards governing and securing your APIs starts with taking an inventory of every API, wherever it is. But API discovery is only part of the solution – you need to build API inventory, documentation, and versioning into your development and engineering lifecycle to address the root causes of API sprawl.

Start a 30-day free trial of NGINX Management Suite, which includes access to API Connectivity Manager, NGINX Plus as an API gateway, and NGINX App Protect to secure your APIs.

The post Why You Need an API Developer Portal for API Discovery appeared first on NGINX.

Shifting Security Left with F5 NGINX App Protect on Amazon EKS https://www.nginx.com/blog/shifting-security-left-f5-nginx-app-protect-amazon-eks/ Tue, 22 Nov 2022 16:00:17 +0000 https://www.nginx.com/?p=70738 According to The State of Application Strategy in 2022 report from F5, digital transformation in the enterprise continues to accelerate globally. Most enterprises deploy between 200 and 1,000 apps spanning across multiple cloud zones, with today’s apps moving from monolithic to modern distributed architectures. Kubernetes first hit the tech scene for mainstream use in 2016, a mere six years [...]

Read More...

The post Shifting Security Left with F5 NGINX App Protect on Amazon EKS appeared first on NGINX.

According to The State of Application Strategy in 2022 report from F5, digital transformation in the enterprise continues to accelerate globally. Most enterprises deploy between 200 and 1,000 apps spanning across multiple cloud zones, with today’s apps moving from monolithic to modern distributed architectures.

Kubernetes first hit the tech scene for mainstream use in 2016, a mere six years ago. Yet today more than 75% of organizations worldwide run containerized applications in production, up 30% from 2019. One critical issue in Kubernetes environments, including Amazon Elastic Kubernetes Service (EKS), is security. All too often security is “bolted on” at the end of the app development process, and sometimes not even until after a containerized application is already up and running.

The current wave of digital transformation, accelerated by the COVID‑19 pandemic, has forced many businesses to take a more holistic approach to security and consider a “shift left” strategy. Shifting security left means introducing security measures early into the software development lifecycle (SDLC) and using security tools and controls at every stage of the CI/CD pipeline for applications, containers, microservices, and APIs. It represents a move to a new paradigm called DevSecOps, where security is added to DevOps processes and integrates into the rapid release cycles typical of modern software app development and delivery.

DevSecOps represents a significant cultural shift. Security and DevOps teams work with a common purpose: to bring high‑quality products to market quickly and securely. Developers no longer feel stymied at every turn by security procedures that stop their workflow. Security teams no longer find themselves fixing the same problems repeatedly. This makes it possible for the organization to maintain a strong security posture, catching and preventing vulnerabilities, misconfigurations, and violations of compliance or policy as they occur.

Shifting security left and automating security as code protects your Amazon EKS environment from the outset. Learning how to become production‑ready at scale is a big part of building a Kubernetes foundation. Proper governance of Amazon EKS helps drive efficiency, transparency, and accountability across the business while also controlling cost. Strong governance and security guardrails create a framework for better visibility and control of your clusters. Without them, your organization is exposed to greater risk of security breaches and the accompanying longtail costs associated with damage to revenue and reputation.

To find out more about what to consider when moving to a security‑first strategy, take a look at this recent report from O’Reilly, Shifting Left for Application Security.

Automating Security for Amazon EKS with GitOps

Automation is an important enabler for DevSecOps, helping to maintain consistency even at a rapid pace of development and deployment. Like infrastructure as code, automating with a security-as-code approach entails using declarative policies to maintain the desired security state.

GitOps is an operational framework that facilitates automation to support and simplify application delivery and cluster management. The main idea of GitOps is to have a Git repository that stores declarative policies for Kubernetes objects and the applications running on Kubernetes, defined as code. An automated process then reconciles the production environment with the stored state descriptions, completing the GitOps paradigm.

The repository acts as the source of truth for security policies, which are then referenced by declarative, configuration‑as‑code descriptions as part of the CI/CD pipeline. As an example, NGINX maintains a GitHub repository with an Ansible role for F5 NGINX App Protect, which we hope is useful for teams looking to shift security left.
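
As a rough sketch of what consuming such a role from a Git‑stored playbook can look like (the role name and inventory group below are assumptions to check against the repository’s README rather than a verbatim excerpt from it):

# playbook.yml – a minimal sketch for applying an NGINX App Protect role
# to hosts in a 'waf' inventory group. The role name is an assumption;
# consult the GitHub repository mentioned above for the actual name
# and its configurable variables.
- hosts: waf
  become: true
  roles:
    - nginxinc.nginx_app_protect

Keeping a playbook like this in the same repository as your policy files is what allows the automated process described next to reconcile hosts against the declared state.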

With such a repo, all it takes to deploy a new application or update an existing one is to update the repo. The automated process manages everything else, including applying configurations and making sure that updates are successful. This ensures that everything happens in the version control system for developers and is synchronized to enforce security on business‑critical applications.

When running on Amazon EKS, GitOps makes security seamless and robust, while virtually eliminating human errors and keeping track of all versioning changes that are applied over time.

Diagram showing how to shift left using security as code with NGINX App Protect WAF and DoS, Jenkins, and Ansible
Figure 1: NGINX App Protect helps you shift security left with security as code at all phases of your software development lifecycle

NGINX App Protect and NGINX Ingress Controller Protect Your Apps and APIs in Amazon EKS

A robust design for Kubernetes security policy must accommodate the needs of both SecOps and DevOps and include provisions for adapting as the environment scales. Kubernetes clusters can be shared in many ways. For example, a cluster might have multiple applications running in it and sharing its resources, while in another case there are multiple instances of one application, each for a different end user or group. This implies that security boundaries are not always sharply defined and there is a need for flexible and fine‑grained security policies.

The overall security design must be flexible enough to accommodate exceptions, must integrate easily into the CI/CD pipeline, and must support multi‑tenancy. In the context of Kubernetes, a tenant is a logical grouping of Kubernetes objects and applications that are associated with a specific business unit, team, use case, or environment. Multi‑tenancy, then, means multiple tenants securely sharing the same cluster, with boundaries between tenants enforced based on technical security requirements that are tightly connected to business needs.

An easy way to implement low‑latency, high‑performance security on Amazon EKS is by embedding the NGINX App Protect WAF and DoS modules with NGINX Ingress Controller. None of our competitors provides this type of inline solution. Using one product with synchronized technology provides several advantages, including reduced compute time, costs, and tool sprawl. Here are some additional benefits.

  • Securing the application perimeter – In a well‑architected Kubernetes deployment, NGINX Ingress Controller is the only point of entry for data‑plane traffic flowing to services running within Kubernetes, making it an ideal location for a WAF and DoS protection.
  • Consolidating the data plane – Embedding the WAF within NGINX Ingress Controller eliminates the need for a separate WAF device. This reduces complexity, cost, and the number of points of failure.
  • Consolidating the control plane – WAF and DoS configuration can be managed with the Kubernetes API, making it significantly easier to automate CI/CD processes. NGINX Ingress Controller configuration complies with Kubernetes role‑based access control (RBAC) practices, so you can securely delegate the WAF and DoS configurations to a dedicated DevSecOps team.

The configuration objects for NGINX App Protect WAF and DoS are consistent across both NGINX Ingress Controller and NGINX Plus. A master configuration can easily be translated and deployed to either platform, making it even easier to manage WAF configuration as code and deploy it to any application environment.

To build NGINX App Protect WAF and DoS into NGINX Ingress Controller, you must have subscriptions for both NGINX Plus and NGINX App Protect WAF or DoS. A few simple steps are all it takes to build the integrated NGINX Ingress Controller image (Docker container). After deploying the image (manually or with Helm charts, for example), you can manage security policies and configuration using the familiar Kubernetes API.
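
As an illustration of the deployment step, the Helm values below show the kind of switches involved when using the official nginx-ingress chart with a privately hosted image that includes the App Protect modules. The parameter names and image reference are assumptions to verify against the chart version you use:

# values.yaml (sketch) – enable NGINX App Protect WAF and DoS in the
# NGINX Ingress Controller Helm chart. Parameter names and the image
# reference are assumptions; verify them against your chart version.
controller:
  nginxplus: true
  image:
    repository: private-registry.example.com/nginx-plus-ingress   # your custom-built image
    tag: "3.0.2"                                                   # illustrative tag
  appprotect:
    enable: true
  appprotectdos:
    enable: true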

Diagram showing topology for deploying NGINX App Protect WAF and DoS on NGINX Ingress Controller in Amazon EKS
Figure 2: NGINX App Protect WAF and DoS on NGINX Ingress Controller routes app and API traffic to pods and microservices running in Amazon EKS

The NGINX Ingress Controller based on NGINX Plus provides granular control and management of authentication, RBAC‑based authorization, and external interactions with pods. When the client is using HTTPS, NGINX Ingress Controller can terminate TLS and decrypt traffic to apply Layer 7 routing and enforce security.

NGINX App Protect WAF and NGINX App Protect DoS can then be deployed to enforce security policies to protect against point attacks at Layer 7 as a lightweight software security solution. NGINX App Protect WAF secures Kubernetes apps against OWASP Top 10 attacks, and provides advanced signatures and threat protection, bot defense, and Dataguard protection against exploitation of personally identifiable information (PII). NGINX App Protect DoS provides an additional line of defense at Layers 4 and 7 to mitigate sophisticated application‑layer DoS attacks with user behavior analysis and app health checks to protect against attacks that include Slow POST, Slowloris, flood attacks, and Challenge Collapsar.

Such security measures protect both REST APIs and applications accessed using web browsers. API security is also enforced at the Ingress level following the north‑south traffic flow.

NGINX Ingress Controller with NGINX App Protect WAF and DoS can secure Amazon EKS traffic on a per‑request basis rather than per‑service: this is a more useful view of Layer 7 traffic and a far better way to enforce SLAs and north‑south WAF security.

Diagram showing NGINX Ingress Controller with NGINX App Protect WAF and DoS routing north-south traffic to nodes in Amazon EKS
Figure 3: NGINX Ingress Controller with NGINX App Protect WAF and DoS routes north-south traffic to nodes in Amazon EKS

The latest High‑Performance Web Application Firewall Testing report from GigaOm shows how NGINX App Protect WAF consistently delivers strong app and API security while maintaining high performance and low latency, outperforming the other three WAFs tested – AWS WAF, Azure WAF, and Cloudflare WAF – at all tested attack rates.

As an example, Figure 4 shows the results of a test where the WAF had to handle 500 requests per second (RPS), with 95% (475 RPS) of requests valid and 5% of requests (25 RPS) “bad” (simulating script injection). At the 99th percentile, latency for NGINX App Protect WAF was 10x less than AWS WAF, 60x less than Cloudflare WAF, and 120x less than Azure WAF.

Graph showing latency at 475 RPS with 5% bad traffic at various percentiles for 4 WAFs: NGINX App Protect WAF, AWS WAF, Azure WAF, and Cloudflare WAF
Figure 4: Latency for 475 RPS with 5% bad traffic

Figure 5 shows the highest throughput each WAF achieved at 100% success (no 5xx or 429 errors) with less than 30 milliseconds latency for each request. NGINX App Protect WAF handled 19,000 RPS versus Cloudflare WAF at 14,000 RPS, AWS WAF at 6,000 RPS, and Azure WAF at only 2,000 RPS.

Graph showing maximum throughput at 100% success rate: 19,000 RPS for NGINX App Protect WAF; 14,000 RPS for Cloudflare WAF; 6,000 RPS for AWS WAF; 2,000 RPS for Azure WAF
Figure 5: Maximum throughput at 100% success rate

How to Deploy NGINX App Protect and NGINX Ingress Controller on Amazon EKS

NGINX App Protect WAF and DoS leverage an app‑centric security approach with fully declarative configurations and security policies, making it easy to integrate security into your CI/CD pipeline for the application lifecycle on Amazon EKS.

NGINX Ingress Controller provides several custom resource definitions (CRDs) to manage every aspect of web application security and to support a shared responsibility and multi‑tenant model. CRD manifests can be applied following the namespace grouping used by the organization, to support ownership by more than one operations group.

When publishing an application on Amazon EKS, you can build in security by leveraging the automation pipeline already in use and layering the WAF security policy on top.

Additionally, with NGINX App Protect on NGINX Ingress Controller you can configure resource usage thresholds for both CPU and memory utilization, to keep NGINX App Protect from starving other processes. This is particularly important in multi‑tenant environments such as Kubernetes, which rely on resource sharing and can potentially suffer from the ‘noisy neighbor’ problem.
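
One place such thresholds can be expressed is the NGINX Ingress Controller ConfigMap. The keys and value formats below are an illustrative sketch of the controller’s App Protect settings and should be confirmed against the documentation for your release:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-ingress
data:
  # Sketch only: key names and value formats are assumptions to verify
  # against your NGINX Ingress Controller documentation
  app-protect-cpu-thresholds: "high=60 low=20"
  app-protect-physical-memory-util-thresholds: "high=80 low=60"
  app-protect-failure-mode-action: "pass"   # fail open if thresholds are exceeded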

Configuring Logging with NGINX CRDs

The logs for NGINX App Protect and NGINX Ingress Controller are separate by design, to reflect how security teams usually operate independently of DevOps and application owners. You can send NGINX App Protect logs to any syslog destination that is reachable from the Kubernetes pods by setting the app-protect-security-log-destination annotation to the cluster IP address of the syslog pod. Additionally, you can use the APLogConf resource to specify which NGINX App Protect logs you care about, and by implication which logs are pushed to the syslog pod. NGINX Ingress Controller logs are forwarded to the local standard output, as for all Kubernetes containers.

This sample APLogConf resource specifies that all requests are logged (not only malicious ones) and sets the maximum message and request sizes that can be logged.

apiVersion: appprotect.f5.com/v1beta1 
kind: APLogConf 
metadata: 
 name: logconf 
 namespace: dvwa 
spec: 
 content: 
   format: default 
   max_message_size: 64k 
   max_request_size: any 
 filter: 
   request_type: all

Defining a WAF Policy with NGINX CRDs

The APPolicy Policy object is a CRD that defines a WAF security policy with signature sets and security rules based on a declarative approach. This approach applies to both NGINX App Protect WAF and DoS, while the following example focuses on WAF. Policy definitions are usually stored on the organization’s source of truth as part of the SecOps catalog.

apiVersion: appprotect.f5.com/v1beta1 
kind: APPolicy 
metadata: 
  name: sample-policy
spec: 
  policy: 
    name: sample-policy 
    template: 
      name: POLICY_TEMPLATE_NGINX_BASE 
    applicationLanguage: utf-8 
    enforcementMode: blocking 
    signature-sets: 
    - name: Command Execution Signatures 
      alarm: true 
      block: true
[...]

Once the security policy manifest has been applied on the Amazon EKS cluster, create an APLogConf object called log-violations to define the type and format of entries written to the log when a request violates a WAF policy:

apiVersion: appprotect.f5.com/v1beta1 
kind: APLogConf 
metadata: 
  name: log-violations
spec: 
  content: 
    format: default 
    max_message_size: 64k 
    max_request_size: any 
  filter: 
    request_type: illegal

The waf-policy Policy object then references sample-policy for NGINX App Protect WAF to enforce on incoming traffic when the application is exposed by NGINX Ingress Controller. It references log-violations to define the format of log entries sent to the syslog server specified in the logDest field.

apiVersion: k8s.nginx.org/v1 
kind: Policy 
metadata: 
  name: waf-policy 
spec: 
  waf: 
    enable: true 
    apPolicy: "default/sample-policy" 
    securityLog: 
      enable: true 
      apLogConf: "default/log-violations" 
      logDest: "syslog:server=10.105.238.128:5144"

Deployment is complete when DevOps publishes a VirtualServer object that configures NGINX Ingress Controller to expose the application on Amazon EKS:

apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: eshop-vs
spec:
  host: eshop.lab.local
  policies:
  - name: default/waf-policy
  upstreams:
  - name: eshop-upstream
    service: eshop-service
    port: 80
  routes:
  - path: /
    action:
      pass: eshop-upstream

The VirtualServer object makes it easy to publish and secure containerized apps running on Amazon EKS while upholding the shared responsibility model, where SecOps provides a comprehensive catalog of security policies and DevOps relies on it to shift security left from day one. This enables organizations to transition to a DevSecOps strategy.

Conclusion

For companies with legacy apps and security solutions built up over years, shifting security left on Amazon EKS is likely a gradual process. But reframing security as code that is managed and maintained by the security team and consumed by DevOps helps deliver services faster and make them production ready.

To secure north‑south traffic in Amazon EKS, you can leverage NGINX Ingress Controller embedded with NGINX App Protect WAF to protect against point attacks at Layer 7 and NGINX App Protect DoS for DoS mitigation at Layers 4 and 7.

To try NGINX Ingress Controller with NGINX App Protect WAF, start a free 30-day trial on the AWS Marketplace or contact us to discuss your use cases.

To discover how you can prevent security breaches and protect your Kubernetes apps at scale using NGINX Ingress Controller and NGINX App Protect WAF and DoS on Amazon EKS, please download our eBook, Add Security to Your Amazon EKS with F5 NGINX.

To learn more about how NGINX App Protect WAF outperforms the native WAFs for AWS, Azure, and Cloudflare, download the High-Performance Web Application Firewall Testing report from GigaOm and register for the webinar on December 6 where GigaOm analyst Jake Dolezal reviews the results.

The post Shifting Security Left with F5 NGINX App Protect on Amazon EKS appeared first on NGINX.
