Web Server Load Balancing with NGINX Plus

In the context of computer security, the perimeter is a conceptual line that establishes a “zone of trust” for applications and other infrastructure components inside it.

In traditional infrastructure environments using a castle-and-moat approach, the perimeter separated the intranet (internal network) from the extranet or Internet. The intranet was presumed safe, and threat was assumed to come only from outside it. Security posture was rather static and consisted of establishing fortification around the intranet with packet inspection and access control measures.

Over time, however, external attackers have found ways to circumvent security controls and compromise components in the intranet, or craft external attacks and make it look like their requests originate from within the perimeter – since the intranet was considered safe. This has motivated a change to a Zero Trust security model, where all entities (internal and external) are continually assessed before trust is established. The intranet is no longer assumed safe.

Digital transformation and new app architectures like microservices have also introduced additional security challenges. Many corporate apps are hosted in public clouds, or distributed across cloud and on‑premises environments, meaning the security infrastructure protecting apps is no longer entirely under the control of a local administrator. As a result, many organizations now establish the perimeter around individual apps (or small groups of apps with direct structural dependencies among them). In this graphic the green dotted line represents the perimeter.

Regardless of the architectural model, there’s a “gatekeeper” that sits on the perimeter to inspect incoming traffic and enforce security policies that protect the apps inside it. We refer to this gatekeeper as the edge. In the following illustration of three common deployment patterns, the edge is represented by the red box enclosing the word PROTECT.

In containerized architectures, such as the Kubernetes framework, the same concepts apply. An Ingress controller acts as the edge for an entire Kubernetes cluster, managing access from external clients and routing requests to the Kubernetes services in the cluster. As shown in the following diagram, however, security policies can be enforced at a more granular level within the cluster as well – per‑Pod and per‑Service:

  • With per‑Pod protection (depicted on the left), the Pod defines the perimeter containing an app or app component in one or more containers.
  • With per‑Service protection (depicted on the right), a Service exposes the instances of an app deployment running in one or more Pods. The perimeter is established around the Pods behind the Service.
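As a rough sketch of the per‑Service pattern, the following NGINX configuration shows an Ingress‑controller‑style edge that applies a distinct NGINX App Protect policy for each Kubernetes Service behind it. The hostnames, Service DNS names, and policy file paths are illustrative assumptions, not part of the original configuration:

```nginx
load_module modules/ngx_http_app_protect_module.so;

http {
    # Enable NGINX App Protect at the edge with a default fallback policy
    app_protect_enable on;
    app_protect_policy_file "/etc/nginx/NginxDefaultPolicy.json";

    server {
        listen 80;
        server_name cluster.example.com;  # hypothetical external hostname

        # Per-Service protection: each location fronts one Kubernetes Service
        location /svc-a/ {
            app_protect_policy_file "/etc/nginx/policy/policy_svc_a.json";
            # In-cluster DNS name of the Service (hypothetical)
            proxy_pass http://svc-a.default.svc.cluster.local:8080/;
        }

        location /svc-b/ {
            app_protect_policy_file "/etc/nginx/policy/policy_svc_b.json";
            proxy_pass http://svc-b.default.svc.cluster.local:8080/;
        }
    }
}
```

Because `app_protect_policy_file` is set per `location`, each Service's perimeter gets its own policy while the default policy in the `http` context covers anything not matched.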

Securing the Perimeter with NGINX App Protect

NGINX App Protect is a modern application security solution built on F5’s market‑leading web application firewall (WAF) technology. With NGINX App Protect, you can enforce security for your apps with agility by protecting the perimeter, regardless of the deployment environment or app architecture: on‑premises, cloud, hybrid, microservices‑based, or containerized. For more information, see Introducing NGINX App Protect: Advanced F5 Application Security for NGINX Plus.

The main advantage of implementing security with NGINX App Protect at the edge is that threats are handled before they cross the perimeter: traffic inspection and access control occur at the edge, warding off attacks before they reach the apps. As the last hop before the apps, the edge is also where you get the clearest view of the type and number of threats directed against them.

The following snippet configures NGINX App Protect to secure three apps (app1, app2, and app3) that are accessed separately within a perimeter:

load_module modules/ngx_http_app_protect_module.so; 
error_log /var/log/nginx/error.log debug;

http {
    # Enable NGINX App Protect in 'http' context
    app_protect_enable on;  

    # Enable remote logging        
    app_protect_security_log_enable on; 

    # Default JSON security policy
    app_protect_policy_file "/etc/nginx/NginxDefaultPolicy.json";  

    # Set remote logging options (in referenced file) and log server IP address/port
    app_protect_security_log "/etc/nginx/log-default.json" syslog:server=127.0.0.1:515;

    server { 
        listen 80;
        server_name app1.com;
        app_protect_policy_file "/etc/nginx/NginxApp1Policy.json"; # JSON policy for app1

        location / {
            proxy_pass http://www.app1.com:8080$request_uri;
        }
    }

    server {
        listen 80;
        server_name app2.com;
        app_protect_policy_file "/etc/nginx/NginxApp2Policy.json"; # JSON policy for app2

        location / {
            proxy_pass http://www.app2.com:8080$request_uri;
        }
    }

    server {
        listen 80;
        server_name app3.com;
        app_protect_policy_file "/etc/nginx/NginxApp3Policy.json"; # JSON policy for app3

        location / {
            proxy_pass http://www.app3.com:8080$request_uri;
        }
    }
}

The following snippet configures NGINX App Protect to secure app1, app2, and app3, which are presented as components of a single application (app.com) within a perimeter:

load_module modules/ngx_http_app_protect_module.so; 
error_log /var/log/nginx/error.log debug;

http {
    server {
        listen      80;
        server_name app.com;

        # Enable NGINX App Protect in the 'server' context
        app_protect_enable on;

        # Enable remote logging        
        app_protect_security_log_enable on; 

        # Default JSON security policy
        app_protect_policy_file "/etc/nginx/NginxDefaultPolicy.json";  

        # Set remote logging options (in referenced file) and log server IP address/port 
        app_protect_security_log "/etc/nginx/log-default.json" 
                                 syslog:server=10.1.20.6:5144;

        location / {
            # Main JSON policy file
            app_protect_policy_file "/etc/nginx/policy/policy_main.json";
            proxy_pass http://app.com$request_uri;
        }

        location /app1 {
            # JSON policy file for app1
            app_protect_policy_file "/etc/nginx/policy/policy_app1.json"; 
            proxy_pass http://app.com$request_uri;
        }

        location /app2 {
            # JSON policy file for app2
            app_protect_policy_file "/etc/nginx/policy/policy_app2.json"; 
            proxy_pass http://app.com$request_uri;
        }

        location /app3 {
            # JSON policy file for app3
            app_protect_policy_file "/etc/nginx/policy/policy_app3.json"; 
            proxy_pass http://app.com$request_uri;
        }
    }
}

In both configurations, there is a separate app_protect_policy_file directive for each of the apps, assigning each a distinct security policy because they have different security requirements.
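Each file referenced by `app_protect_policy_file` is a declarative JSON policy. As a minimal illustration (the policy name here is an assumption), a policy derived from the NGINX base template with blocking enforcement looks like this:

```json
{
    "policy": {
        "name": "app1_policy",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "applicationLanguage": "utf-8",
        "enforcementMode": "blocking"
    }
}
```

Setting `enforcementMode` to `transparent` instead logs violations without blocking them, which is useful while tuning a policy for a new app.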

For additional NGINX App Protect configurations, see the documentation.

Perimeter Security in the CI/CD Pipeline with NGINX App Protect

NGINX App Protect addresses the security challenges of scalability and automation for modern apps. By inserting NGINX App Protect directly into multiple integration points in your CI/CD pipeline, you can secure your apps, Pods, and Services closer to their code, bridging the gap between development, operations, and security.

Integrating security directly into your app development cycle enables you to perform and automate security testing, discovering risks to your apps and app components early. You can enforce security policies automatically and block or roll back the publication of apps that fail to meet compliance requirements. Effectively, you continuously deliver secure apps by integrating NGINX App Protect into new versions of your apps before they are published.

Ready to Secure Your Perimeter with NGINX App Protect?

Start your free 30-day trial of NGINX App Protect and NGINX Plus today or contact us to discuss your use cases. You can also read the product documentation and learn more about the full set of F5 web app and API protection solutions.



About The Author

Isaac Noumba

Product Manager (Security)

About F5 NGINX

F5, Inc. is the company behind NGINX, the popular open source project. We offer a suite of technologies for developing and delivering modern applications. Together with F5, our combined solution bridges the gap between NetOps and DevOps, with multi-cloud application services that span from code to customer.

Learn more at nginx.com or join the conversation by following @nginx on Twitter.