  • HAProxy
    DevOps/Middleware 2020. 5. 3. 01:41

    1. Overview

    HAProxy is a reliable, high-performance load balancer that can operate at the TCP and HTTP networking layers. It is a very popular free and open-source software load balancer that powers many distributed systems, web applications, and websites, and is considered almost an industry standard. It is very easy to set up, but despite its simplicity it offers many advanced features and capabilities that are important for production systems. Note that it officially supports only Linux.

    2. Configuration

    HAProxy's behavior is driven by a configuration file (haproxy.cfg), located in one of two ways:

    • Predefined location: /usr/local/etc/haproxy/haproxy.cfg
    • Command line: haproxy -f haproxy.cfg

    2.1 Sections

    2.1.1 global section

    Parameters for the entire load balancing process (OS-specific)

    2.1.2 Proxies section

    Parameters for proxying incoming traffic to our backend cluster

    • defaults: optional section where we can set default parameters for all of our proxies
    • frontend: describes the listening sockets for incoming client requests and the logic for handling them
    • backend: describes the set of servers that participate in our back-end cluster
    • listen: optional combined section that acts as a frontend + backend in one block
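A minimal sketch of a `listen` section combining a frontend and a backend (the name and addresses here are illustrative, not from a real deployment):

```
listen http-in
    bind *:80
    balance roundrobin
    server server1 127.0.0.1:8000
    server server2 127.0.0.1:8001
```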

    3. Load Balancing

    global
        maxconn 512

    defaults
        mode http
        timeout connect 5000ms
        timeout client 50000ms
        timeout server 50000ms

    frontend http-in
        bind *:80
        default_backend servers

    backend servers
        balance roundrobin
        server server1 127.0.0.1:8000 weight 2
        server server2 127.0.0.1:8001 weight 2
        server server3 127.0.0.1:8002 weight 1
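The weight parameters above mean that out of every five requests, two go to server1, two to server2, and one to server3. A quick sketch of that distribution in Python (a naive version; HAProxy's actual algorithm interleaves servers more smoothly, but the per-cycle totals are the same):

```python
from collections import Counter

def weighted_round_robin(servers, n_requests):
    # Naive weighted round-robin: each server appears `weight` times
    # per cycle, then the cycle repeats.
    cycle = [name for name, weight in servers for _ in range(weight)]
    return [cycle[i % len(cycle)] for i in range(n_requests)]

servers = [("server1", 2), ("server2", 2), ("server3", 1)]
print(Counter(weighted_round_robin(servers, 10)))
# every 5 requests: 2 to server1, 2 to server2, 1 to server3
```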

    4. High Availability

    global
        maxconn 512

    defaults
        mode http
        timeout connect 5000ms
        timeout client 50000ms
        timeout server 50000ms

    frontend http-in
        bind *:80
        default_backend servers

    backend servers
        balance roundrobin
        option httpchk GET /status
        http-check expect string "Server is alive"
        server server1 127.0.0.1:8000 check inter 1s
        server server2 127.0.0.1:8001 check inter 1s
        server server3 127.0.0.1:8002 check inter 1s
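`check inter 1s` makes HAProxy probe each server every second, and `http-check expect string` marks a probe as passed only when the response body contains the expected string. A server is not removed after a single failed probe: by default it is marked DOWN after 3 consecutive failures (`fall`) and UP again after 2 consecutive successes (`rise`). A simplified Python sketch of that state machine:

```python
class CheckedServer:
    # Simplified sketch of HAProxy's rise/fall health-check logic.
    # Defaults mirror HAProxy's: fall=3 failed checks to mark DOWN,
    # rise=2 successful checks to mark UP again.
    def __init__(self, fall=3, rise=2):
        self.fall, self.rise = fall, rise
        self.up = True       # servers start as UP
        self.streak = 0      # consecutive checks contradicting current state

    def record(self, check_ok):
        if check_ok == self.up:
            self.streak = 0  # result agrees with current state
        else:
            self.streak += 1
            threshold = self.fall if self.up else self.rise
            if self.streak >= threshold:
                self.up = check_ok
                self.streak = 0
        return self.up

s = CheckedServer()
print([s.record(ok) for ok in [False, False, False, True, True]])
# → [True, True, False, False, True]
```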

    5. Monitoring

    global
        maxconn 512

    defaults
        mode http
        timeout connect 5000ms
        timeout client 50000ms
        timeout server 50000ms

    frontend http-in
        bind *:80
        default_backend servers

    backend servers
        balance roundrobin
        option httpchk GET /status
        http-check expect string "Server is alive"
        server server1 127.0.0.1:8000 check inter 1s
        server server2 127.0.0.1:8001 check inter 1s
        server server3 127.0.0.1:8002 check inter 1s

    listen stats_page
        bind *:83
        stats enable
        stats uri /

    6. Advanced Routing (ACLs)

    global
        maxconn 512

    defaults
        mode http
        timeout connect 5000ms
        timeout client 50000ms
        timeout server 50000ms

    frontend http-in
        bind *:80
        acl even path_end -i /even
        acl odd path_end -i /odd

        use_backend even_application_nodes if even
        use_backend odd_application_nodes if odd

    backend odd_application_nodes
        balance roundrobin
        option httpchk GET /status
        http-check expect string "Server is alive"
        server server1 127.0.0.1:8000 check
        server server3 127.0.0.1:8002 check

    backend even_application_nodes
        balance roundrobin
        option httpchk GET /status
        http-check expect string "Server is alive"
        server server2 127.0.0.1:8001 check

    listen stats_page
        bind *:83
        stats enable
        stats uri /
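`path_end -i` performs a case-insensitive suffix match on the URL path, and `use_backend` rules are evaluated in order, first match wins. A Python sketch of that routing decision (backend names taken from the config above):

```python
def path_end(path, suffix):
    # Mirrors `acl ... path_end -i <suffix>`: case-insensitive
    # suffix match on the request path (the -i flag).
    return path.lower().endswith(suffix.lower())

def choose_backend(path):
    # `use_backend` rules are evaluated in order; first match wins.
    if path_end(path, "/even"):
        return "even_application_nodes"
    if path_end(path, "/odd"):
        return "odd_application_nodes"
    return None  # no rule matched and this frontend has no default_backend

print(choose_backend("/app/EVEN"))  # → even_application_nodes
print(choose_backend("/odd"))       # → odd_application_nodes
```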

    7. Layer 4 and 7 Load Balancing

    global
        maxconn 512

    defaults
        mode tcp
        timeout connect 10s
        timeout client 50s
        timeout server 50s

    frontend http-in
        bind *:80
        default_backend application_nodes

    backend application_nodes
        balance roundrobin
        option httpchk GET /status
        http-check expect string "Server is alive"
        server server1 127.0.0.1:8000 check
        server server2 127.0.0.1:8001 check
        server server3 127.0.0.1:8002 check

    listen stats_page
        bind *:83
        stats enable
        stats uri /

    After switching to TCP mode, if we open the browser and keep sending requests to localhost, suddenly all our requests go to the same server instead of each request being sent to a different one. The reason for this behavior is that every time we refresh the browser, we send a new HTTP GET request to HAProxy, but all those requests still travel over the same TCP connection. Since our load balancer no longer understands HTTP, as far as it is concerned all those TCP packets belong to the same stream and are therefore sent to the same server. To break the TCP connection and open a new one for each session, we need to close the web browser entirely; once we open a new browser instance and send a new request to localhost, a new TCP connection is established. This new connection is then routed to a different back-end server according to the round-robin policy.
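The difference can be sketched as follows: in layer 4 mode the round-robin decision happens once per TCP connection, not once per HTTP request (a simplified Python illustration):

```python
import itertools

def l4_route(requests_per_connection, server_names):
    # Layer 4: a server is chosen once per TCP connection; every request
    # reusing that connection lands on the same server.
    rr = itertools.cycle(server_names)
    routed = []
    for n_requests in requests_per_connection:
        server = next(rr)            # round-robin per *connection*
        routed.extend([server] * n_requests)
    return routed

# One browser connection carrying 3 refreshes, then a fresh connection:
print(l4_route([3, 2], ["server1", "server2", "server3"]))
# → ['server1', 'server1', 'server1', 'server2', 'server2']
```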

    8. Running HAProxy with Docker

    The great part about Docker for us is that on platforms like Windows, which lack Linux's resource-isolation features, Docker runs a single Linux virtual machine behind the scenes, so we can run the same commands and the same containers on any platform.

    8.1 Example

    8.1.1 Docker Image for Sample Project

    # Build stage: copy the sources and build the fat jar
    FROM maven:3.6.1-jdk-11 AS MAVEN_TOOL_CHAIN_CONTAINER
    COPY src /tmp/src
    COPY ./pom.xml /tmp/
    WORKDIR /tmp/
    RUN mvn package

    # Runtime stage: copy only the built artifact from the build stage
    FROM openjdk:11
    COPY --from=MAVEN_TOOL_CHAIN_CONTAINER /tmp/target/webapp-1.0-SNAPSHOT-jar-with-dependencies.jar /tmp/
    WORKDIR /tmp/
    ENTRYPOINT ["java", "-jar", "webapp-1.0-SNAPSHOT-jar-with-dependencies.jar"]
    # Default arguments: listening port and a display name for this server
    CMD ["80", "Server Name"]

    8.1.2 HAProxy Image

    FROM haproxy:1.7
    COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
    ENTRYPOINT ["haproxy", "-f", "/usr/local/etc/haproxy/haproxy.cfg"]
    

    8.1.3 HAProxy Configuration

    global
        maxconn 500
    
    defaults
        mode http
        timeout connect 10s
        timeout client  50s
        timeout server  50s
    
    frontend http-in
        bind *:80
        default_backend application_nodes
    
    backend application_nodes
        balance roundrobin
        option httpchk GET /status
        http-check expect string "Server is alive"
        server server01 app1:9001 check inter 1s
        server server02 app2:9002 check inter 2s
        server server03 app3:9003 check inter 2s
    
    listen stats 
        bind *:83
        stats enable
        stats uri /
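The hostnames app1, app2, and app3 in the server lines above resolve only if the HAProxy container and the application containers share a Docker network. One way to wire this up is Docker Compose; the sketch below is an assumption (service and image names are hypothetical, not from the original setup), included only to show the hostname/port mapping:

```yaml
# Hypothetical docker-compose.yml; image names are assumptions.
version: "3"
services:
  app1:
    image: webapp            # built from the application Dockerfile above
    command: ["9001", "Server One"]   # args: port, server name
  app2:
    image: webapp
    command: ["9002", "Server Two"]
  app3:
    image: webapp
    command: ["9003", "Server Three"]
  haproxy:
    image: my-haproxy        # built from the HAProxy Dockerfile above
    ports:
      - "80:80"              # frontend
      - "83:83"              # stats page
```

On the default Compose network, each service name becomes a DNS hostname, which is what lets `server server01 app1:9001` resolve.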
    
    
    
    
    

    9. Reference

    • http://www.haproxy.org/
    • https://en.wikipedia.org/wiki/HAProxy
    • https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.1
    • https://hub.docker.com/
