F5 Distributed Cloud
Introducing Secure MCN features on F5 Distributed Cloud
Introduction
F5 Distributed Cloud Services offers many secure multi-cloud networking (MCN) features. In the video linked below, I demonstrate how to connect a Secure Mesh Customer Edge (CE) site running on VMware on common hardware. This on-prem CE is joined to a site mesh group with three other CEs, two of which run in the public cloud providers AWS and Azure.

Secure Mesh CE is a newly enhanced feature in Distributed Cloud that allows CEs not running in public cloud providers to run on hardware with unique and differing configurations. Specifically, it is now possible to deploy site mesh transit networking to CEs having one, two, or more NICs, with each CE having its own physical networking configuration. See my article on Secure Mesh Site Networking to learn how to set up and configure secure mesh sites.

In addition to secure mesh networking, on-prem CEs can be deployed without app management features, giving organizations the flexibility to conserve deployed resources. Organizations can now choose whether to deploy AppStack CEs, which can manage and run K8s compute workloads deployed at the site, or networking-focused CEs that free up the resources otherwise used to manage apps. Whether deploying an AppStack or Secure Mesh CE, both types support Distributed Cloud's comprehensive set of security features, including DDoS, WAF, API protection, bot defense, and risk management.

Secure MCN deployments include the following capabilities:
- Secure Multi-Cloud Network Fabric (secure connectivity)
- Discover any app running anywhere across your environments
- Cloud/on-prem Customer Edge (CE)
- Private link connectivity orchestration with F5 XC as-a-service using any transport provider
  ➡️ Example: AWS PrivateLink, Azure CloudLink, private transport (IP, MPLS, etc.)
- L3 Network Connect and L7 App Connect capabilities
- L3/L4 DDoS protection and enhanced intent-based firewall policies
- Security service insertion with support for BIG-IP and Palo Alto firewalls
- Application security services: WAF, API protection, L7 DoS, Bot Defense, Client-Side Defense, and more
- SaaS and automation for security, network, and edge compute
- Powerful monitoring dashboards and troubleshooting tools for the entire secure multi-cloud network fabric
- Gain visibility into how and which APIs are being consumed in workflows
  ➡️ Monitor and troubleshoot apps, including their APIs

In the following video, I introduce the components that make up a Secure MCN deployment, then walk through configuring the security features and show how to observe app performance and remediate security-related incidents.
0-3:32 - Overview of Secure MCN features
3:32-9:20 - Product Demo

Resources
Distributed Cloud App Delivery Fabric Workflow Guide (GitHub)

Secure MCN Article Series
Secure MCN Intro: Introducing Secure MCN features on F5 Distributed Cloud
Secure MCN Part 1: Using Distributed Application Security Policies in Secure Multicloud Networking Customer Edge Sites
Secure MCN Part 2: The App Delivery Fabric with Secure Multicloud Networking
Secure MCN Part 3: The Secure Network Fabric with Multicloud Network Segmentation & Private Provider Network Connectivity

Related Technical Articles
🔥 ➡️ Combining the key aspects of Secure MCN with GenAI apps: Protect multi-cloud and Edge Generative AI applications with F5 Distributed Cloud
Scale Your DMZ with F5 Distributed Cloud Services
Driving Down Cost & Complexity: App Migration in the Cloud
How To Secure Multi-Cloud Networking with Routing & Web Application and API Protection
Secure Mesh Site Networking (DevCentral)
A Complete Multi-Cloud Networking Walkthrough (DevCentral)

Product Documentation
How-To Create Secure Mesh Sites

Product Information
Distributed Cloud Network Connect
Distributed Cloud App Connect

How to Split DNS with Managed Namespace on F5 Distributed Cloud (XC) Part 2 – TCP & UDP
Re-Introduction
In Part 1, we covered the deployment of the DNS workloads to our Managed Namespace and creating an HTTPS Load Balancer and Origin Pool for DNS over HTTPS. If you missed Part 1, feel free to jump over and give it a read. In Part 2, we will cover creating a TCP and UDP Load Balancer and Origin Pools for standard TCP and UDP DNS.

TCP Origin Pool
First, we need to create an origin pool. On the left menu, under Manage, Load Balancers, click Origin Pools. Let's give our origin pool a name and add some Origin Servers, so under Origin Servers, click Add Item. In the Origin Server settings, we want to select "K8s Service Name of Origin Server on given Sites" as our type and enter our service name, which is the service name from Part 1 plus our namespace, so "servicename.namespace". For the Site, we select one of the sites we deployed the workload to, and under Select Network on the Site, we want to select vK8s Networks on the Site, then click Apply. Do this for each site we deployed to so we have several servers in our Origin Pool. In Part 1, our Services defined the targetPort as 5553, so we set Port to 5553 on the origin. This is all we need to configure for our TCP origin, so click Save and Exit.

TCP Load Balancer
Next, we are going to make a TCP Load Balancer, since it takes fewer steps (and is quicker) than a UDP Load Balancer (today). On the left menu under Manage, Load Balancers, select TCP Load Balancers. Let's set a name for our TCP LB and set our listen port. Port 53 is reserved on Customer Edge Sites, so we need to use something else; let's use 5553 again. Under origin pools we set the origin that we created previously, and then we get to the important piece, which is Where to Advertise. In Part 1 we advertised to the internet with some extra steps on how to advertise to an internal network; in this part we will advertise internally. Select Advertise Custom, then click edit configuration. Then under Custom Advertise VIP Configuration, click Add Item. We want to select the Site where we are going to advertise and the network interface we will advertise on. Click Apply, then Apply again. We don't need to configure anything else, so click Save and Exit.

UDP Load Balancer
For UDP Load Balancers we need to jump to the Load Balancer section again, but instead of a load balancer, we are going to create a Virtual Host. Virtual Hosts are not listed in the Distributed Applications tile, so from the top "Select Service" drop-down choose the Load Balancers tile. In the left menu under Manage, we go to Virtual Hosts instead of Load Balancers. The first thing we will configure is an Advertise Policy, so let's select that.

Advertise Policy
Let's give the policy a name, select the location we want to advertise on the Site Local Inside Network, and set the port to 5553. Save and Exit.

Endpoints
Now back to Manage, Virtual Hosts, and Endpoints so we can add an endpoint. Name the endpoint and specify the following, based on the screenshot below:
- Endpoint Specifier: Service Selector Info
- Discovery: Kubernetes
- Service: Service Name
- Service Name: service-name.namespace
- Protocol: UDP
- Port: 5553
- Virtual Site or Site or Network: Site
- Reference: Site Name
- Network Type: Site Local Service Network
Save and Exit.

Cluster
The Cluster configuration is simple. From Manage, Virtual Hosts, Clusters, add a Cluster. We just need a name; then under Origin Servers / Endpoints select the endpoint we just created. Save and Exit.

Route
The Route configuration is simple as well. From Manage, Virtual Hosts, Routes, add a Route.
Name the route and, under List of Routes, click Configure, then Add Item. Leave most settings as they are, and under Actions, choose Destination List, then click Configure. Under Origin Pools and Weights, click Add Item. Under Cluster with Weight and Priority, select the cluster we created previously and leave Weight as null for this configuration. Then click Apply on each nested dialog until you are back at the route, and Save and Exit. Now we can finally create a Virtual Host.

Virtual Host
Under Manage, Virtual Hosts, select Virtual Host, then click Add Virtual Host. There are a ton of options here, but we only care about a couple. Give the Virtual Host a name and set:
- Proxy Type: UDP Proxy
- Advertise Policy: the previously created policy

Moment of Truth, Again
Now that we have our services published, we can give them a test. Since they are currently on a non-standard port, and most systems don't let us specify a port in their default configurations, we need to test with dig, nslookup, etc. (a scripted alternative using Python is included at the end of this article).

To test TCP with nslookup:
nslookup -port=5553 -vc google.com 192.168.125.229
Server: 192.168.125.229
Address: 192.168.125.229#5553
Non-authoritative answer:
Name: google.com
Address: 142.251.40.174

To test UDP with nslookup:
nslookup -port=5553 google.com 192.168.125.229
Server: 192.168.125.229
Address: 192.168.125.229#5553
Non-authoritative answer:
Name: google.com
Address: 142.251.40.174

IP Tables for Non-Standard DNS Ports
If we wanted to use the non-standard port TCP/UDP DNS on Linux or macOS, we can use iptables to forward all the traffic for us. There isn't a way to set this up in Windows today, but as in Part 1, Windows Server 2022 supports encrypted DNS over HTTPS, and it can be pushed as policy through Group Policy as well.
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j DNAT --to XXXXXXXXXX:5553
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 -j DNAT --to XXXXXXXXXX:5553

"Nature is a mutable cloud, which is always and never the same." - Ralph Waldo Emerson
We might not wax that philosophically around here, but our heads are in the cloud nonetheless! Join the F5 Distributed Cloud user group today and learn more with your peers and other F5 experts.

Conclusion
I hope this helps with a common use case we are hearing about every day and shows how simple it is to deploy workloads into our Managed Namespaces.
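For readers who prefer a scripted check over manual nslookup runs, the sketch below queries the same record over both transports. This is a minimal example and not part of the original walkthrough: it assumes the third-party dnspython package is installed and reuses the 192.168.125.229 VIP and port 5553 from the nslookup output above as placeholders for your own advertised address.

```python
# Minimal sketch: verify TCP and UDP DNS on the non-standard port with dnspython.
# Assumptions: `pip install dnspython`; replace the VIP below with your advertised address.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.168.125.229"]  # placeholder VIP from the examples above
resolver.port = 5553                        # non-standard listen port of the load balancers

# UDP query (dnspython uses UDP by default)
udp_answer = resolver.resolve("google.com", "A")
print("UDP:", [rdata.address for rdata in udp_answer])

# TCP query (exercises the TCP load balancer path)
tcp_answer = resolver.resolve("google.com", "A", tcp=True)
print("TCP:", [rdata.address for rdata in tcp_answer])
```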
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)

Introduction
For those of you following along with the F5 Hybrid Security Architectures series, welcome back! If this is your first foray into the series and you would like some background, have a look at the intro article. This series uses the F5 Hybrid Security Architectures GitHub repo and CI/CD platform to deploy F5-based hybrid security solutions built on DevSecOps principles. This repo is a community-supported effort to provide not only a demo and workshop, but also a stepping stone for using these practices in your own F5 deployments. If you find any bugs or have any enhancement requests, open an issue, or better yet, contribute!

In this first example solution, we will use Terraform to deploy an application server running the OWASP Juice Shop application serviced by an F5 BIG-IP Advanced WAF Virtual Edition. We will supplement this with F5 Distributed Cloud Web App and API Protection to provide complementary security at the edge. Everything will be tied together using GitHub Actions for CI/CD and Terraform Cloud to maintain state.

Distributed Cloud WAF: Available for SaaS-based deployments in a distributed environment that reduces operational overhead, with an optional fully managed service.
BIG-IP Advanced WAF: Available for on-premises / data center and public or private cloud (Virtual Edition) deployment, for robust, high-performance web application and API security with granular, self-managed controls.

XC WAF + BIG-IP Advanced WAF Workflow
GitHub Repo: F5 Hybrid Security Architectures

Prerequisites:
- F5 Distributed Cloud Account (F5 XC)
- Create an F5 XC API certificate
- AWS Account. Due to the assets being created, a free tier will not work. NOTE: You must be subscribed to the F5 BIG-IP AMI being used in the AWS Marketplace.
- Terraform Cloud Account
- GitHub Account

Assets:
- xc: F5 Distributed Cloud WAAP
- bigip-base: F5 BIG-IP base deployment
- bigip-awaf: F5 BIG-IP Advanced WAF config
- infra: AWS infrastructure (VPC, IGW, etc.)
- juiceshop: OWASP Juice Shop test web application

Tools:
- Cloud Provider: AWS
- Infrastructure as Code: Terraform
- Infrastructure as Code State: Terraform Cloud
- CI/CD: GitHub Actions

Terraform Cloud:
Workspaces: Create a workspace for each asset in the workflow chosen.
Workflow: xc-bigip
Workspaces: infra, bigip-base, bigip-awaf, juiceshop, xc

Workspace Sharing: Under the settings for each workspace, set Remote state sharing to share with each workspace created. Your Terraform Cloud console should resemble the following:

Variable Set: Create a Variable Set with the following values. IMPORTANT: Ensure sensitive values are appropriately marked.
- AWS_ACCESS_KEY_ID: Your AWS Access Key ID - Environment Variable
- AWS_SECRET_ACCESS_KEY: Your AWS Secret Access Key - Environment Variable
- AWS_SESSION_TOKEN: Your AWS Session Token - Environment Variable
- VOLT_API_P12_FILE: Your F5 XC API certificate. Set this to api.p12 - Environment Variable
- VES_P12_PASSWORD: Set this to the password you supplied when creating your F5 XC API key. - Environment Variable
- ssh_key: Your SSH key for access to created BIG-IP and compute assets. - Terraform Variable
- admin_src_addr: The source address of your administrative workstation. - Terraform Variable
- tf_cloud_organization: Your Terraform Cloud Organization name - Terraform Variable

Your Variable Set should resemble the following:

GitHub:
Fork and Clone Repo: F5 Hybrid Security Architectures

Actions Secrets: Create the following GitHub Actions secrets in your forked repo
- P12: The base64-encoded F5 XC API certificate (a small helper for producing this value is included at the end of this article)
- TF_API_TOKEN: Your Terraform Cloud API token
- TF_CLOUD_ORGANIZATION: Your Terraform Cloud Organization
- TF_CLOUD_WORKSPACE_workspace: Create one for each workspace used in your workflow. EX: TF_CLOUD_WORKSPACE_BIGIP_BASE would be created with the value bigip-base

Your GitHub Actions Secrets should resemble the following:

Terraform Local Variables:
Step 1: Rename infra/terraform.tfvars.examples to infra/terraform.tfvars and add the following data
project_prefix = "Your project identifier"
resource_owner = "You"
aws_region = "Your AWS region" ex: us-west-1
azs = "Your AWS availability zones" ex: ["us-west-1a", "us-west-1b"]
#Assets
nic = false
nap = false
bigip = true
bigip-cis = false

Step 2: Rename bigip-base/terraform.tfvars.examples to bigip-base/terraform.tfvars and add the following data
f5_ami_search_name = "F5 BIGIP-16.1.3* PAYG-Adv WAF Plus 25Mbps*"
aws_secretmanager_auth = false
#Provisioning set to nominal or none
asm = "nominal"
apm = "none"

Step 3: Rename bigip-awaf/terraform.tfvars.examples to bigip-awaf/terraform.tfvars and add the following data
awaf_config_payload = "awaf-config.json"

Step 4: Rename xc/terraform.tfvars.examples to xc/terraform.tfvars and add the following data
api_url = "https://<YOUR TENANT>.console.ves.volterra.io/api"
xc_tenant = "Your tenant id available in F5 XC Administration section Tenant Overview"
xc_namespace = "Your XC Namespace"
app_domain = "Your APP FQDN"
xc_waf_blocking = true

Step 5: Commit your changes

Deployment Workflow:
Step 1: Check out a branch for the deploy workflow using the following naming convention
xc-bigip deployment branch: deploy-xc-bigip
Step 2: Push your deploy branch to the forked repo
Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your build
Step 4: Once the pipeline completes, verify your assets were deployed to AWS and F5 XC
Note: Check the Terraform outputs of the bigip-base job for the randomly generated password for BIG-IP GUI access
F5 BIG-IP Terraform Outputs:
Step 5: Verify your app is available by navigating to the app domain FQDN you provided in the setup.
Note: The autocert process takes time. It may be 5 to 10 minutes before Let's Encrypt has provided the cert
F5 XC Terraform Outputs:

Destroy Workflow:
Step 1: From your main branch, check out a new branch for the destroy workflow using the following naming convention
xc-bigip destroy branch: destroy-xc-bigip
Step 2: Push your destroy branch to the forked repo
Step 3: Back in GitHub, navigate to the Actions tab of your forked repo and monitor your workflow
Step 4: Once the pipeline completes, verify your assets were destroyed in AWS and F5 XC

Conclusion
In this article, we have shown how to utilize the F5 Hybrid Security Architectures GitHub repo and CI/CD pipeline to deploy a tiered security architecture utilizing F5 XC WAF and BIG-IP Advanced WAF to protect a test web application. While the code and security policies deployed are generic and not inclusive of all use cases, they can be used as a stepping stone for deploying F5-based hybrid architectures in your own environments.
Workloads are increasingly deployed across multiple, diverse environments and application architectures. Organizations need the ability to protect their essential applications regardless of deployment or architecture circumstances. Equally important is the need to deploy these protections with the same flexibility and speed as the apps they protect. With the F5 WAF portfolio, coupled with DevSecOps principles, organizations can deploy and maintain industry-leading security without sacrificing the time to value of their applications. Not only can Edge and Shift Left principles exist together, they can also work in harmony to provide a more effective security solution.

Teachable Course: You can access a hands-on course for F5 Hybrid XC WAF with BIG-IP Advanced WAF through the following link. Training Course

Article Series:
F5 Hybrid Security Architectures (Intro - One WAF Engine, Total Flexibility)
F5 Hybrid Security Architectures (Part 1 - F5's Distributed Cloud WAF and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
F5 Hybrid Security Architectures (Part 3 - F5 XC API Protection and NGINX Ingress Controller)
F5 Hybrid Security Architectures (Part 4 - F5 XC BOT and DDoS Defense and BIG-IP Advanced WAF)
F5 Hybrid Security Architectures (Part 5 - F5 XC, BIG-IP APM, CIS, and NGINX Ingress Controller)

For further information or to get started:
F5 Distributed Cloud Platform
F5 Distributed Cloud WAAP Services
F5 Distributed Cloud WAAP YouTube series
F5 Distributed Cloud WAAP Get Started
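As referenced in the Actions Secrets section above, the P12 secret expects the base64-encoded contents of your F5 XC API certificate. The snippet below is one minimal way to produce that value; it is a sketch only and assumes the certificate file is named api.p12 and sits in the current directory, matching the Variable Set instructions.

```python
# Minimal helper: base64-encode the F5 XC API certificate for the P12 GitHub Actions secret.
# Assumption: api.p12 (the filename used in the Variable Set section) is in the working directory.
import base64

with open("api.p12", "rb") as cert_file:
    encoded = base64.b64encode(cert_file.read()).decode("ascii")

# Copy the printed value into the P12 secret of your forked repo.
print(encoded)
```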
Mitigation of OWASP API Security Risk: BOPLA using F5 XC Platform

Introduction:
OWASP API Security Top 10 - 2019 has two categories, "Mass Assignment" and "Excessive Data Exposure", which focus on vulnerabilities that stem from manipulation of, or unauthorized access to, an object's properties. For example, consider user information in JSON format: {"UserName": "apisec", "IsAdmin": "False", "role": "testing", "Email": "apisec@f5.com"}. In this object payload, each detail is considered a property, so vulnerabilities around modifying or exposing sensitive properties like Email, role, or IsAdmin fall under these categories.

These risks shed light on the hidden vulnerabilities that can appear when object properties are modified, and they highlight the need for a security solution that validates user access to functions and objects while also enforcing access control for specific properties within objects. Role-based access, sanitizing user input, and schema-based validation play a crucial role in safeguarding your data from unauthorized access and modification. Since these two risks are similar, the OWASP community felt they could be brought under one radar and merged them as "Broken Object Property Level Authorization" (BOPLA) in the newer version, OWASP API Security Top 10 - 2023.

Mass Assignment:
A Mass Assignment vulnerability occurs when client requests are not restricted from modifying immutable internal object properties. Attackers can take advantage of this vulnerability by manually crafting requests to escalate user privileges, bypass security mechanisms, or otherwise exploit API endpoints in invalid ways (a short illustrative sketch of property allow-listing follows the reference links at the end of this article). For more details on the F5 Distributed Cloud mitigation solution, check this link: Mitigation of OWASP API6: 2019 Mass Assignment vulnerability using F5 XC

Excessive Data Exposure:
Application Programming Interfaces (APIs) sometimes lack restrictions and expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN), and Social Security Numbers (SSN). Because of these issues, they are among the most exploited components in cybercrime for gaining access to customer information, and identifying the sensitive information in these huge chunks of API response data is crucial for data safety. For more details on this risk and the F5 Distributed Cloud mitigation solution, check this link: Mitigating OWASP API Security Risk: Excessive Data Exposure using F5 XC

Conclusion:
Wrapping up, this article covers an overview of the BOPLA category newly added in the OWASP API Security Top 10 - 2023 edition. We have also provided details on each part of this risk and reference articles to dig deeper into the F5 Distributed Cloud mitigation solutions.

Reference links or to get started:
F5 Distributed Cloud Services
F5 Distributed Cloud WAAP
Introduction to OWASP API Security Top 10 2023
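To make the Mass Assignment discussion above more concrete, here is a small, purely illustrative sketch (not taken from the linked article) of server-side property allow-listing. It reuses the example user object from the introduction; the handler name and attack payload are hypothetical.

```python
# Illustrative sketch of property allow-listing to block mass assignment.
# The user object mirrors the example in the introduction; the update handler is hypothetical.

ALLOWED_PROPERTIES = {"UserName", "Email"}  # immutable properties like IsAdmin and role are excluded

def apply_profile_update(stored_user: dict, payload: dict) -> dict:
    """Apply only allow-listed properties from a client-supplied payload."""
    rejected = set(payload) - ALLOWED_PROPERTIES
    if rejected:
        print(f"Dropping disallowed properties: {sorted(rejected)}")
    updated = dict(stored_user)
    updated.update({k: v for k, v in payload.items() if k in ALLOWED_PROPERTIES})
    return updated

user = {"UserName": "apisec", "IsAdmin": "False", "role": "testing", "Email": "apisec@f5.com"}
attack_payload = {"Email": "attacker@example.com", "IsAdmin": "True", "role": "admin"}

# IsAdmin and role stay unchanged; only Email is updated.
print(apply_profile_update(user, attack_payload))
```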
This is part of the OWASP API Security TOP 10 mitigation series, and you can refer here for an overview of these categories and F5 Distributed Cloud Platform (F5 XC) Web Application and API protection (WAAP). Introduction to Excessive Data Exposure Application Programming Interfaces (APIs) are the foundation stone of modern evolving web applications which are driving the digital world. They are part of all phases in product development life cycle, starting from design, testing to end customer using them in their day-to-day tasks. Since they don't have restrictions in place, sometimes APIs expose sensitive data such as Personally Identifiable Information (PII), Credit Card Numbers (CCN) and Social Security Numbers (SSN), etc. Because of these issues, they are the most exploited blocks in cybercrime to gain access to customer information which can be sold or further used in other exploits like credential stuffing, etc. Most of the time, the design stage doesn't include this security perspective and relies on 3rd party tools to perform sanitization of the data before displaying the results to customers. Identifying the sensitive information in these huge chunks of API response data is sophisticated and most of the available security tools in the market don't support this capability. So instead of relying on third party tools it's recommended to follow shift left strategies and add security as part of the development phase. During this phase, developers must review and ensure that the API returns only required details instead of providing unnecessary properties to avoid sensitive data exposure. Excessive data exposure attack scenario-1 To showcase this category, we are exposing sensitive details like CCN and SSN in one of the product reviews of Juice shop application (refer links for more info) as below - Overview of Data Guard: Data Guard is F5 XC load balancer feature which shields the responses from exposing sensitive information like CCN/SSN by masking these fields with a string of asterisks (*). Depending on the customer's requirement, they can have multiple rules configured to apply or skip processing for certain paths and routes. Preventing excessive data exposure using F5 Distributed Cloud Step1: Create origin pool - Refer here for more information Step2: Create Web Application Firewall policy (WAF) - Refer here for details Step3: Create https load balancer (LB) with above created pool and WAF policy - Refer here for more information Step4: Upload your application swagger file and add it to above load balancer - Refer here for more details Step5: Configure Data Guard on the load balancer with action and path as below Step6: Validate the sensitive data is masked Open postman/browser, check the product reviews section/API and validate these details are hidden and not exposed as in original application In Distributed Cloud Console expand the security event and check the WAF section to understand the reason why these details are masked as below: Excessive data exposure attack scenario-2 In this demonstration we are using an API based vulnerable application VAmPI (VAmPI is a vulnerable API made with Flask, and it includes vulnerabilities from the OWASP top 10 vulnerabilities for APIs, for more info follow the repo link). Follow below steps to bring up the setup: Step1: Host the VAmPI application inside a virtual machine Step2: Login to XC console, create a HTTP LB and add the hosted application as an origin server Step3: Access the application to check its availability. 
Step 4: Now enable API Discovery and configure a sensitive data discovery policy by adding all the compliance frameworks in your HTTP LB config.
Step 5: Hit the vulnerable API endpoint '/users/v1/_debug', which exposes sensitive data like username, password, etc.
Step 6: Navigate to the security overview dashboard in the XC console and select the API Endpoints tab. Check for vulnerable endpoint details.
Step 7: In the Sensitive Data section, click the ellipsis on the right side to get options for action.
Step 8: Clicking the option 'Add Sensitive Data Exposure Rule' automatically adds the entries for the sensitive data exposure rule to your existing LB configs. Apply the configuration.
Step 9: Now again, hit the vulnerable API endpoint '/users/v1/_debug'.
In the above image, you can see masked values in the response: all letters changed to 'a' and numbers converted to '1'.
Step 10: Optionally, you can also manually configure a sensitive data exposure rule by adding details about the vulnerable API endpoint:
- Log back in to the XC console.
- Start configuring the API Protection rule in the created HTTP LB.
- Click Configure in the Sensitive Data Exposure Rules section.
- Click Add Item to create the first rule.
- In the Target section, enter the path that will respond to the request. Also enter one or more methods with responses containing sensitive information.
- In the Values field of the Pattern section, enter the JSON field value you want to mask. For example, to mask all emails in the array users, enter "users[_].email". Note that an underscore between the square brackets indicates the array's elements.
- Once the above rule is applied, values in the response will be masked as follows: all letters change to a or A (matching case) and all numbers convert to 1.
- Click Apply to save the rule to the list of Sensitive Data Exposure Rules.
- Optionally, click Add Item to add more rules.
- Click Apply to save the list of rules to your load balancer.
Step 11: After completing Step 10, hit the vulnerable API endpoint again.
Here too, you can see masked values in the response, as per the configuration done in Step 10 (a small scripted check for this masking behaviour is included after the reference links below).

Conclusion
As we have seen in the above use cases, sensitive data exposure occurs when an application does not protect sensitive data like PII, CCN, SSN, auth credentials, etc. Leaking such information may lead to serious consequences, so it is critical for organizations to reduce the risk of sensitive data exposure. As demonstrated above, the F5 Distributed Cloud Platform can help protect against the exposure of such sensitive data with its easy-to-use API security offerings.

For further information check the links below:
OWASP API Security - Excessive Data Exposure
OWASP API Security - Overview article
F5 XC Data Guard Overview
OWASP Juice Shop
VAmPI
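As a closing illustration for this article (referenced from Step 11 above), the sketch below shows one way to script the masking check instead of eyeballing the response. It is an assumption-heavy example: the hostname is a placeholder, the third-party requests package is assumed to be installed, and the response is assumed to contain a list of user objects with email and password fields, matching the VAmPI debug output described in the steps.

```python
# Scripted check (illustrative only): confirm the sensitive data exposure rules mask the response.
# Assumptions: `pip install requests`; replace the placeholder domain with your HTTP LB domain;
# the debug endpoint is assumed to return JSON with a "users" list holding email/password fields.
import requests

APP_DOMAIN = "https://vampi.example.com"  # placeholder for the load balancer's app domain

response = requests.get(f"{APP_DOMAIN}/users/v1/_debug", timeout=10)
users = response.json().get("users", [])

def looks_masked(value: str) -> bool:
    # Per the masking behaviour described above: letters become a/A, digits become 1.
    return all(ch in ("a", "A", "1") or not ch.isalnum() for ch in value)

for user in users:
    for field in ("email", "password"):
        if field in user:
            state = "masked" if looks_masked(str(user[field])) else "EXPOSED"
            print(f"{user.get('username', '?')}: {field} is {state}")
```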
Mitigating OWASP API Security Risk: Unrestricted Resource Consumption using F5 Distributed Cloud Platform

Introduction:
An Unrestricted Resource Consumption vulnerability occurs when an API allows end users to over-utilize resources (e.g., CPU, memory, bandwidth, or storage) without proper limits being enforced. This can overwhelm the system and lead to performance degradation, denial of service (DoS), or complete unavailability of the services for valid users.

Attack Scenario:
In this demo, we are going to generate a large volume of traffic and observe the server's behaviour along with its response time.
Fig 1: Using Apache JMeter to send an arbitrary number of requests to an API endpoint continuously in a very short span of time
Fig 2: (From left to right) Response time under normal load and under heavy traffic
The above results show a higher response time when abnormal traffic is sent to a single API endpoint compared with normal usage. With further increases in volume, the server can become unresponsive, deny requests from real users, and suffer a DoS attack.
Fig 3: Attackers performing an arbitrary number of API requests to consume the server's resources

Customer Solution:
F5 Distributed Cloud (XC) WAAP helps address this vulnerability by rate limiting API requests, thereby preventing complete consumption of memory, file system storage, CPU resources, etc. This protects against traffic surges and DoS attacks. This article provides the F5 XC WAAP configuration needed to control the rate of requests sent to the origin server.

Step by Step to configure Rate Limiting in F5 XC:
These are the steps to enable the Rate Limiting feature for APIs and validate it:
1. Add API endpoints with rate limiter values
2. Validate the request rate by violating the threshold limit
3. Verify the blocked request in the F5 XC console

Step 1: Add API Endpoints with Rate Limiter values
Log in to the F5 XC console and navigate to Home > Load Balancers > Manage > Load Balancers. Select the load balancer to which API rate limiting should be applied. Click the menu in the Actions column of the app's load balancer and click Manage Configurations, as shown below, to display the load balancer configs.
Fig 4: Selecting the menu to manage configurations for the load balancer
Once the load balancer configurations are displayed, click the Edit Configuration button at the top right of the page. Navigate to Security Configuration, select "API Rate Limit" in the Rate Limiting dropdown, and click "Add Item" under the API Endpoint section.
Fig 5: Choosing API Rate Limit to configure API endpoints
Fig 6: Configuring a rate limit for the API endpoint
A rate limit is configured for GET requests to the API endpoint "/product/OLJCESPC7Z". Click the Apply button at the bottom right of the screen, then click "Save and Exit" so the above configuration is saved to the load balancer.

Validation of request rate to violate threshold limit
Fig 7: Verifying the request for the first time
The request is sent for the first time after configuring the API endpoint, and the response is returned with status code 200. Requests to the same API endpoint beyond the threshold limit are blocked, as shown below (a small scripted check is included after these steps):
Fig 8: Rate limiting the API request

Verifying blocked requests from the F5 XC console
From the F5 XC console homepage, navigate to WAAP > Apps & APIs > Security and select the load balancer. Click Requests to view the request logs, as below:
Fig 9: Blocked API request details from the F5 XC console
You can see that requests beyond the rate limiter value get dropped and the response code is 429.
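To complement the JMeter test described above, here is a minimal scripted way to observe the rate limiter kicking in. It is a sketch only: the third-party requests package is assumed to be installed, the domain is a placeholder for your load balancer's app domain, and the path is the endpoint configured in Step 1.

```python
# Sketch: repeatedly hit the rate-limited endpoint and report when HTTP 429 appears.
# Assumptions: `pip install requests`; replace the placeholder domain with your app domain.
import requests

URL = "https://app.example.com/product/OLJCESPC7Z"  # endpoint configured in Step 1 (placeholder host)

for attempt in range(1, 31):
    status = requests.get(URL, timeout=10).status_code
    print(f"request {attempt}: HTTP {status}")
    if status == 429:
        print("Rate limit enforced by the load balancer; further requests are dropped.")
        break
```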
Conclusion:
In this article, we have seen that when an application receives an abnormal amount of traffic, F5 XC WAAP protects APIs from being overwhelmed by rate limiting the requests. XC's rate limiting feature helps prevent DoS attacks and ensures service availability at all times.

Related Links:
API4:2019 Lack of Resources and Rate Limiting
API4:2023 Unrestricted Resource Consumption
Creating Load Balancer Steps
F5 Distributed Cloud Security WAAP
F5 Distributed Cloud Platform