
Load Balancer

Dynamically distribute your traffic to increase the scalability of your application

Load Balancer makes it easier to ensure the scalability, high availability, and resilience of your applications. It does this by dynamically balancing the traffic load across multiple instances, in multiple regions. Deliver a great experience to your application's users by automatically managing variable traffic and handling peak loads, while keeping costs under control. By combining Load Balancer with Gateway and Floating IP, you can set up a solution that acts as a single entry point to your application, secures the exposure of your private resources, and supports fail-over scenarios.

Built for high availability

Load Balancer is built upon a distributed architecture and is backed by an SLA providing 99.99% availability. Leveraging its health check capability, Load Balancer distributes the load to available instances.


Designed for automated deployment

Choose the load balancer size that fits your needs. Configure and automate with the OpenStack API, UI, or CLI, or with the OVHcloud API. Load Balancer can be deployed with Terraform to automate and balance traffic loads at scale.


Built-in security

To ensure data security and confidentiality, the Load Balancer comes with free HTTPS termination, and benefits from our Anti-DDoS infrastructure, which provides real-time protection against network attacks.

Discover our Load Balancer range

The following table provides indicative values to help you choose the plan that best meets your needs.


 
| Load Balancer size | Bandwidth (all listeners) | Concurrent active sessions (HTTP/TCP/HTTPS*) | Sessions created per second (HTTP/TCP/HTTPS*) | Requests per second (HTTP/HTTPS) | SSL/TLS sessions created per second (TERMINATED_HTTPS*) | Requests per second (TERMINATED_HTTPS*) | Packets per second (UDP) |
|---|---|---|---|---|---|---|---|
| Size S | 200 Mbps (up/down) | 10k | 8k | 10k | 250 | 5k | 10k |
| Size M | 500 Mbps (up/down) | 20k | 10k | 20k | 500 | 10k | 20k |
| Size L | 2 Gbps (up/down) | 40k | 10k | 40k | 1,000 | 20k | 40k |


*The HTTPS listener is passthrough, meaning that SSL/TLS termination is handled by the load balancer members. By contrast, the TERMINATED_HTTPS listener handles SSL/TLS termination itself.

Use cases

Manage high volumes of traffic and seasonal activity

With the load balancer, you can manage traffic growth and decline by seamlessly adding instances to, or removing them from, your configuration as pool members, in just a few clicks or an API call.
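
As a quick sketch, the scale-out and scale-in steps each map to a single member command; the commands below reuse the pool and subnet names from the Usage section of this page, and the instance address is only illustrative.

# add an extra member to absorb a traffic peak (10.0.0.3 is an illustrative instance address)
openstack loadbalancer member create --subnet-id my_subnet --address 10.0.0.3 --protocol-port 80 pool1
# remove it once the peak has passed, using the member ID returned by the previous command
openstack loadbalancer member delete pool1 <member-id>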

Blue-Green or canary deployment

With Floating IPs, you can switch between blue and green environments quickly and with agility, because rollback is easy. Leverage the power of L7 policies and weighted routing to implement canary deployments seamlessly and keep your application's user experience optimal while you roll out changes.
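
As an illustrative sketch, both the blue/green switch and the canary weighting can be driven from the OpenStack CLI; the IDs below are placeholders for values returned by your own deployment, and pool1 reuses the pool name from the Usage section of this page.

# point the Floating IP at the VIP port of the newly deployed ("green") load balancer
openstack floating ip set --port <green-lb-vip-port-id> <floating-ip-id>
# canary: give the new member a low weight so it receives only a fraction of the traffic
openstack loadbalancer member set --weight 1 pool1 <canary-member-id>
openstack loadbalancer member set --weight 10 pool1 <stable-member-id>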

Optimise instances' performance with HTTPS encryption offloading

With HTTPS termination, you can shift TLS/SSL encryption and decryption tasks to the load balancer. As a result, your instances are relieved of this work and can dedicate their resources to your application logic.

Load Balancer scenarios

The load balancer can be used in three main types of architecture. Floating IP and Gateway may or may not be part of the architecture, depending on the given scenario.


Public to Private

Incoming traffic originates from the internet and reaches a Floating IP associated with the load balancer. The instances behind the load balancer are located on a private network and have no public IP, which ensures that they remain completely private and isolated from the internet.

 


Public to Public

Incoming traffic originates from the internet and reaches a Floating IP associated with the Load Balancer. The instances to which the Load Balancer routes traffic are reachable via a public IP. The Load Balancer therefore uses the Floating IP for egress traffic to reach these instances.


Private to Private

Incoming traffic originates from a private network and is routed to instances accessible from this private network. In this case, neither a Floating IP nor a Gateway is needed.

Usage

Our Load Balancer can be used with the OpenStack API or CLI, and will be available later through the Control Panel.

Below are the basic commands to create and configure a Load Balancer.

Create a Load Balancer

openstack loadbalancer create --name test --flavor small --vip-network-id my_private_network
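
Provisioning takes a short while; before adding listeners, you can check that the load balancer has reached the ACTIVE state (test being the load balancer name used in these examples):

openstack loadbalancer show -c provisioning_status -f value test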

Configure an entry point (listener) and a target (pool):

openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 test
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /healthcheck pool1
openstack loadbalancer member create --subnet-id my_subnet --address 10.0.0.1 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id my_subnet --address 10.0.0.2 --protocol-port 80 pool1

Configure the network (note that you need to be inside a vRack for this to work properly; check our guide on deploying a vRack):

# configure the network Gateway
openstack subnet set --gateway 10.0.0.254 my_subnet
# add a vrouter
openstack router create myrouter
openstack router set --external-gateway Ext-Net myrouter
openstack router add subnet myrouter my_subnet
# add the floating IP
openstack floating ip create Ext-Net
# The following IDs should be visible in the output of the previous commands
openstack floating ip set --port <load-balancer-vip-port-id> <floating-ip-id>

Documentation and guides

Networking concepts in Public Cloud
Getting started with Load Balancer on Public Cloud
The differences between Private to Private and Public to Private Load Balancer
Configure HTTPS termination with Let's Encrypt
Ready to get started?

Create an account and launch your services in minutes.

Main features

Types of incoming traffic

Public Cloud Load Balancer supports the following incoming traffic: HTTP/HTTPS (1.1 or 2), SCTP, TCP, UDP. With one LB, you can even receive different traffic types on several ports.
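
For instance, assuming a load balancer named test as in the Usage section of this page, two listeners for different traffic types and ports could be declared as follows:

openstack loadbalancer listener create --name web --protocol HTTP --protocol-port 80 test
openstack loadbalancer listener create --name syslog --protocol UDP --protocol-port 514 test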

Highly available by design

OVHcloud manages the availability of your load balancer.

Multiple load balancing algorithms with optional session persistence

Public Cloud Load Balancer supports the following algorithms for balancing traffic: least-connections, round-robin, source-IP or source-IP-port. You can also define session persistence based on your application's custom cookie, or let the Load Balancer manage it for you.
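
As a minimal sketch, a pool combining least-connections balancing with cookie-based persistence could be created as follows (the load balancer name test and the cookie name are placeholders):

openstack loadbalancer pool create --name sticky-pool --lb-algorithm LEAST_CONNECTIONS --loadbalancer test --protocol HTTP --session-persistence type=APP_COOKIE,cookie_name=MY_APP_COOKIE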

SSL/TLS offloading with HTTPS termination

SSL/TLS encryption and decryption can be handled by the load balancer, so your instances' resources stay focused on your application logic. This feature also lets you manage your certificates in a single location.
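
A minimal sketch, assuming the Barbican key manager (and its CLI plugin) is available to store the certificate bundle; the file server.p12, the secret reference and the load balancer name test are placeholders:

# store the PKCS12 bundle in the key manager, then reference it from a TERMINATED_HTTPS listener
openstack secret store --name my_cert -t 'application/octet-stream' -e base64 --payload="$(base64 < server.p12)"
openstack loadbalancer listener create --name https --protocol TERMINATED_HTTPS --protocol-port 443 --default-tls-container-ref <secret-href> test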

Transmit client connection information to members (PROXY/PROXY2)

Thanks to PROXY/PROXY2 protocol support, your members can receive the client connection information (source IP, source port, etc.).
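
For example, a pool whose members expect PROXY protocol version 2 can be declared like this (the pool and load balancer names are illustrative):

openstack loadbalancer pool create --name proxy-pool --lb-algorithm ROUND_ROBIN --loadbalancer test --protocol PROXYV2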

Monitor the health of pool members

The load balancer can monitor members' health through various requests: HTTP, HTTPS, PING, SCTP, TCP, TLS-HELLO and UDP-CONNECT. If a member is not responding to the health monitor request, the Load Balancer will stop sending traffic to it, resuming when the health check is back to normal.
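
An HTTP health monitor is shown in the Usage section of this page; as another sketch, a UDP pool (here a hypothetical udp-pool) could be monitored with UDP-CONNECT:

openstack loadbalancer healthmonitor create --delay 5 --timeout 3 --max-retries 3 --type UDP-CONNECT udp-pool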

Publication of Prometheus-compliant metrics

The load balancer can publish an endpoint that provides its metrics in a Prometheus-compliant format.
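
Assuming the underlying Octavia release supports the PROMETHEUS listener protocol, the endpoint can be exposed on a dedicated port and scraped at /metrics (names and addresses below are placeholders):

openstack loadbalancer listener create --name metrics --protocol PROMETHEUS --protocol-port 8088 test
curl http://<load-balancer-vip>:8088/metrics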

Incoming HTTP traffic filtering/redirecting/routing

Thanks to L7 policy and rules, the load balancer can filter and redirect traffic to a specific URL or route HTTP traffic to a specific member pool.
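
As a sketch, an L7 policy can route requests whose path starts with /static to a dedicated pool; the names below are placeholders, with listener1 and test reused from the Usage section of this page:

openstack loadbalancer pool create --name static-pool --lb-algorithm ROUND_ROBIN --loadbalancer test --protocol HTTP
openstack loadbalancer l7policy create --name static-policy --action REDIRECT_TO_POOL --redirect-pool static-pool listener1
openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /static static-policy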

Integration with Managed Kubernetes Service

The Public Cloud Load Balancer is fully integrated with our Managed Kubernetes Service offering.
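
In a managed cluster, exposing a workload through a Service of type LoadBalancer is typically all that is needed for a load balancer to be provisioned; the deployment name below is hypothetical:

kubectl expose deployment my-app --port 80 --target-port 8080 --type LoadBalancer
kubectl get service my-app   # the EXTERNAL-IP column is filled in once the load balancer is ready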

Public to Private, Public to Public or Private to Private architecture

This Load Balancer can be used in three architecture types, namely Public to Private, Public to Public or Private to Private. When handling public traffic, Gateway and Floating IP are required.

Redirect traffic to Public Cloud instances, Bare Metal servers or Hosted Private Cloud

Integrated in the vRack Private Network, the Public Cloud Load Balancer can redirect traffic to any host connected to the same VLAN, even if it belongs to another product universe.

Simplified management

Choose the tool that suits you for administration of your Load Balancer: OpenStack Horizon UI or API.

Predictable pay-as-you-go pricing model

Our pricing structure is predictable, as you only pay while using the product. There are no ingress or egress fees charged for traffic.

Public cloud prices

Load Balancer billing

Load Balancer is billed upon usage, on an hourly basis. The service is available in three plans, depending on your traffic profile: Small, Medium, and Large.

FAQ

What is Layer-7 HTTP(S) load balancing?

This describes how application-layer traffic (i.e. web traffic) is transported from a source to backend servers through a load-balancing component that can apply advanced traffic routing policies. These policies include the use of HTTP cookies, PROXY protocol support, different methods of load distribution between the backends, and HTTPS use and offloading.

Why is my Load Balancer spawned per-region?

The availability of Public Cloud solutions depends on OpenStack regions. Each region has its own OpenStack platform, which provides it with its own computing, storage, network resources, etc. You can find out more about regional availability here.

What protocols can I use with my Load Balancer?

At the launch of the product, the supported protocols are: TCP, HTTP, HTTPS, TERMINATED_HTTPS, UDP, SCTP and HTTP/2.

How does Load Balancer verify which hosts are healthy?

Load Balancer uses health monitors to check whether backend services are alive. You can configure a number of protocols for that purpose, including (but not limited to) HTTP, TLS, TCP, UDP, SCTP and PING.

I have my own SSL certificate, can I use it?

Yes, of course. You can either use the OVHcloud Customer Control Panel to upload your own SSL certificate to be used with Load Balancer, or you can perform this operation using the OVHcloud API if you require this action to be automated.

I don't know how to generate an SSL certificate, how can I use HTTPS LBaaS?

That's not an issue! Through the OVHcloud Customer Control Panel, you can create and generate your own Let's Encrypt SSL DV certificate and use it with your Load Balancer, making your deployment easy. The Let's Encrypt SSL DV certificate is included in the price of the Load Balancer at no additional charge.

What is a load balancer in the cloud?

A cloud Load Balancer is a load balancing system that is fully managed in the cloud, which can be quickly instantiated, configured via API and has very high availability. A typical feature of a cloud Load Balancer is pay-per-use billing. This means that you only pay for what you use.

What is the difference between Load Balancer for Kubernetes and Load Balancer?

Load Balancer for Kubernetes works for our Managed Kubernetes offer only. It delivers an interface that is directly compatible with Kubernetes. This means you can easily control your Load Balancer for Kubernetes, with native tools.

Load Balancer is built upon OpenStack Octavia and can be deployed within your Public Cloud project, leveraging the OpenStack API and enabling automation through tools like Terraform, Ansible, or Salt. Load Balancer is planned to support Kubernetes, and we will keep you updated about its availability.