Load Balancers

Flow Load Balancers are a fully-managed, highly available network load balancing service. Load balancers distribute traffic to groups of Instances or Kubernetes Clusters, which decouples the overall health of a backend service from the health of a single server to ensure that your services stay online.

Quickstart

  1. Start by clicking the Wizard button in the Control Panel. Click Create Load Balancer.

  2. Name your Load Balancer.

  3. Choose a data center Region.

  4. Confirm the network topology. If you have more than one Private Network, you can select the one you want. By default, each Load Balancer is assigned a public IPv4 address and is reachable via the Internet. If you want the Load Balancer to be reachable only internally, uncheck the IPv4 checkbox. Click on Finish. Deploying a Load Balancer takes a few minutes.

  5. To edit and manage the newly created Load Balancer, click on it in the list.

  6. Create a new pool by clicking on the (+) Plus button under the Balancing Pools tab.

  7. Under Forwarding Rule, choose the protocol. Enable Proxy Protocol only if you want to preserve the client IP for SSL passthrough.

  8. Under Load Balancer Port, specify the listener port.

  9. Under Balancing Algorithm, choose the algorithm. Enable Sticky Session only if you want to enable the Session Persistence feature. Click on Next to proceed.

  10. Under Members Port, specify the backend port. It can be the same as the listener port from step 8 or a different port number.

  11. Under Members, add the Load Balancer members. Click on Next to proceed.

  12. Under Protocol, choose the protocol that the Health Monitor should use to monitor the availability of the pool members. Click on Finish to create a new Balancing Pool.
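
The Health Monitor from step 12 periodically probes each pool member on the configured protocol and port and takes unresponsive members out of rotation. As a minimal sketch of what a member needs to expose, assuming an HTTP Health Monitor, a Members Port of 8080, and a /healthz path (illustrative choices, not Flow defaults), a backend written in Go could look like this:

// health_backend.go: a minimal HTTP backend that an HTTP Health Monitor could probe.
// The port and the /healthz path are assumptions for illustration only.
package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Application traffic forwarded by the Load Balancer arrives on the Members Port.
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from a pool member")
    })

    // Health probe endpoint: any successful response keeps the member in the pool.
    http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
    })

    // Listen on the Members Port configured in step 10 (8080 is an assumption).
    log.Fatal(http.ListenAndServe(":8080", nil))
}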

Protocol Support

A single Load Balancer can be configured to handle multiple protocols and ports. You can control traffic routing with configurable rules that specify the ports and protocols that the load balancer should listen on, as well as the way that it should select and forward requests to the backend servers.

Because Flow Load Balancers are network load balancers, not application load balancers, they do not support directing traffic to specific backends based on URLs, cookies, HTTP headers, etc.

HTTP

Standard HTTP balancing directs requests based on standard HTTP mechanisms. The load balancer sets the X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port headers to give the backend servers information about the original request.
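
A backend can read these headers like any other request headers to recover the original client address and scheme. A minimal sketch in Go, with an assumed Members Port of 8080:

// forwarded.go: reads the X-Forwarded-* headers set by the load balancer.
package main

import (
    "fmt"
    "log"
    "net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
    // r.RemoteAddr is the load balancer, not the client; the original client
    // details arrive in the X-Forwarded-* headers.
    clientIP := r.Header.Get("X-Forwarded-For") // original client IP (may be a comma-separated list)
    proto := r.Header.Get("X-Forwarded-Proto")  // scheme the client used: "http" or "https"
    port := r.Header.Get("X-Forwarded-Port")    // port the client connected to
    fmt.Fprintf(w, "client=%s proto=%s port=%s\n", clientIP, proto, port)
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}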

If user sessions depend on the client always connecting to the same backend, a cookie can be sent to the client to enable sticky sessions.

HTTPS AND HTTP/2

You can balance secure traffic using either HTTPS or HTTP/2. Both protocols can be configured with:

  • SSL termination, which handles the SSL decryption at the load balancer after you add your SSL certificate and private key.

  • SSL passthrough, which forwards encrypted traffic to your backend Instances. This is a good option for end-to-end encryption and for distributing the SSL decryption overhead, but you’ll need to manage the SSL certificates yourself.
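
From the backend's point of view, the difference between the two modes is simply which side terminates TLS. The Go sketch below illustrates this; the ports and certificate file names are assumptions for illustration only:

// tls_modes.go: what a backend serves under SSL termination vs. SSL passthrough.
package main

import (
    "log"
    "net/http"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("ok\n"))
    })

    const passthrough = false // flip to true for SSL passthrough

    if passthrough {
        // SSL passthrough: encrypted traffic is forwarded untouched, so the
        // backend holds the certificate and terminates TLS itself.
        log.Fatal(http.ListenAndServeTLS(":8443", "server.crt", "server.key", mux))
    } else {
        // SSL termination: the load balancer decrypts, the backend speaks plain HTTP.
        log.Fatal(http.ListenAndServe(":8080", mux))
    }
}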

TCP / UDP

TCP / UDP balancing is available for applications that do not speak HTTP. For example, deploying a load balancer in front of a database cluster like Galera would allow you to spread requests across all available machines.
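
A pool member behind a TCP forwarding rule simply accepts raw connections; it does not need to know that a load balancer sits in front of it. A minimal Go sketch with an assumed Members Port of 9000:

// tcp_member.go: a plain TCP backend suitable for a TCP forwarding rule.
package main

import (
    "io"
    "log"
    "net"
)

func main() {
    // Members Port for the TCP pool (9000 is an assumption).
    ln, err := net.Listen("tcp", ":9000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Print(err)
            continue
        }
        // Echo whatever the client sends; the load balancer is transparent here.
        go func(c net.Conn) {
            defer c.Close()
            io.Copy(c, c)
        }(conn)
    }
}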

PROXY Protocol

PROXY protocol is a way to send client connection information (like origin IP addresses and port numbers) to the final backend server rather than discarding it at the load balancer. This information can be helpful for use cases like analyzing traffic logs or changing application functionality based on geographical IP.

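With Proxy Protocol enabled (step 7 of the Quickstart), the load balancer prepends a short header to each connection before the application data, and the backend has to parse it. The Go sketch below parses a version 1 (text) PROXY protocol header; it illustrates the wire format and is not Flow-specific code:

// proxyv1.go: parses a PROXY protocol v1 header from the start of a connection.
package main

import (
    "bufio"
    "fmt"
    "strings"
)

// parseProxyV1 extracts the original client address from a v1 header line,
// e.g. "PROXY TCP4 203.0.113.7 10.0.0.5 56324 443\r\n".
func parseProxyV1(r *bufio.Reader) (srcIP, srcPort string, err error) {
    line, err := r.ReadString('\n')
    if err != nil {
        return "", "", err
    }
    fields := strings.Fields(strings.TrimSpace(line))
    if len(fields) != 6 || fields[0] != "PROXY" {
        return "", "", fmt.Errorf("not a PROXY v1 header: %q", line)
    }
    // Layout: PROXY <TCP4|TCP6> <src addr> <dst addr> <src port> <dst port>
    return fields[2], fields[4], nil
}

func main() {
    data := "PROXY TCP4 203.0.113.7 10.0.0.5 56324 443\r\nGET / HTTP/1.1\r\n"
    ip, port, err := parseProxyV1(bufio.NewReader(strings.NewReader(data)))
    if err != nil {
        panic(err)
    }
    fmt.Printf("original client: %s:%s\n", ip, port)
}
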
Balancing Algorithms

  • Least connections. Requests will be forwarded to the VM with the fewest active connections.

  • Round robin. All VMs will receive requests in a round-robin manner.

  • Source IP. Requests from a unique source IP address will be directed to the same VM.

The Sticky Session option controls session persistence. When it is enabled, the load balancer generates a cookie that is inserted into each response and used to send future requests from that client to the same VM.
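
Conceptually, the three algorithms select a pool member as in the following Go sketch. This illustrates the selection logic only and is not Flow's implementation:

// algorithms.go: conceptual member selection for the three balancing algorithms.
package main

import (
    "fmt"
    "hash/fnv"
)

type member struct {
    addr        string
    activeConns int
}

// roundRobin cycles through the members in order.
func roundRobin(members []member, counter *int) member {
    m := members[*counter%len(members)]
    *counter++
    return m
}

// leastConnections picks the member with the fewest active connections.
func leastConnections(members []member) member {
    best := members[0]
    for _, m := range members[1:] {
        if m.activeConns < best.activeConns {
            best = m
        }
    }
    return best
}

// sourceIP hashes the client address so one client always lands on the same member.
func sourceIP(members []member, clientIP string) member {
    h := fnv.New32a()
    h.Write([]byte(clientIP))
    return members[h.Sum32()%uint32(len(members))]
}

func main() {
    pool := []member{{"10.0.0.10:8080", 3}, {"10.0.0.11:8080", 1}, {"10.0.0.12:8080", 2}}
    counter := 0
    fmt.Println("round robin:      ", roundRobin(pool, &counter).addr)
    fmt.Println("least connections:", leastConnections(pool).addr)
    fmt.Println("source IP:        ", sourceIP(pool, "203.0.113.7").addr)
}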

Plans and Pricing

For pricing details, please consult the pricing page.

Regional Availability

Load Balancers are available in all regions. They are region-specific resources and can only be assigned to Instances within the same region.

Limits

  • At the moment, IPv6 is not supported.
