A bit of context:

I have this endpoint behind an AWS ALB, and I do SSL termination at the ALB. To my surprise, the ClientTLSNegotiationErrorCount metric for the ALB shows a substantial number of failed connection attempts due to TLS negotiation errors, perhaps around 5% of total traffic, though this estimate could be off by a substantial margin.
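
A rough way to arrive at that percentage is to compare ClientTLSNegotiationErrorCount with NewConnectionCount over the same window; something along these lines, where the load-balancer dimension and the time range are placeholders:

# Sum of TLS negotiation errors for one day
aws cloudwatch get-metric-statistics \
  --namespace AWS/ApplicationELB \
  --metric-name ClientTLSNegotiationErrorCount \
  --dimensions Name=LoadBalancer,Value=app/my-alb/1234567890abcdef \
  --start-time 2018-01-01T00:00:00Z --end-time 2018-01-02T00:00:00Z \
  --period 86400 --statistics Sum

# The same query for NewConnectionCount gives the denominator
aws cloudwatch get-metric-statistics \
  --namespace AWS/ApplicationELB \
  --metric-name NewConnectionCount \
  --dimensions Name=LoadBalancer,Value=app/my-alb/1234567890abcdef \
  --start-time 2018-01-01T00:00:00Z --end-time 2018-01-02T00:00:00Z \
  --period 86400 --statistics Sum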

The clients are mostly a representative cross-section of the mobile devices currently in use in the US.

The ALB is configured with the default TLS policy that Amazon provides:

| ssl-enum-ciphers: 
|   TLSv1.0: 
|     ciphers: 
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|     compressors: 
|       NULL
|     cipher preference: server
|   TLSv1.1: 
|     ciphers: 
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|     compressors: 
|       NULL
|     cipher preference: server
|   TLSv1.2: 
|     ciphers: 
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|     compressors: 
|       NULL
|     cipher preference: server
|_  least strength: A

Unfortunately, ALBs do not provide error logs, so I could not directly identify why some clients fail TLS negotiation.

Instead, I've pushed SSL termination deeper into the stack, to the Nginx frontend, and replaced the ALB with a plain TCP-based ELB. Now connections to port 443 are forwarded directly to Nginx for SSL negotiation.

I've configured Nginx with the exact same protocol versions and ciphers as the ALB:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA';
ssl_prefer_server_ciphers on;

I've verified with nmap that Nginx reports the same ssl-enum-ciphers list.
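
For reference, the listing above comes from a scan along these lines (the hostname is a placeholder):

# Enumerate the protocol versions and cipher suites the endpoint offers
nmap --script ssl-enum-ciphers -p 443 example.com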

Now in the Nginx error log I get lots of lines like this:

SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:ssl_bytes_to_cipher_list:inappropriate fallback) while SSL handshaking
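
A quick way to see how much of the handshake-failure noise this particular error accounts for (the log path is an assumption):

# Count "inappropriate fallback" failures vs. all handshake failures
grep -c 'inappropriate fallback' /var/log/nginx/error.log
grep -c 'SSL_do_handshake() failed' /var/log/nginx/error.log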

The error message itself is still not very informative, so I've run tcpdump on port 443 on the Nginx instances. As expected, the capture contains a fair number of TLS alerts like this:

Secure Sockets Layer
    TLSv1 Record Layer: Alert (Level: Fatal, Description: Inappropriate Fallback)
        Content Type: Alert (21)
        Version: TLS 1.0 (0x0301)
        Length: 2
        Alert Message
            Level: Fatal (2)
            Description: Inappropriate Fallback (86)

In that same TCP stream there's this Client Hello packet:

Secure Sockets Layer
    TLSv1 Record Layer: Handshake Protocol: Client Hello
        Content Type: Handshake (22)
        Version: TLS 1.0 (0x0301)
        Length: 165
        Handshake Protocol: Client Hello
            Handshake Type: Client Hello (1)
            Length: 161
            Version: TLS 1.0 (0x0301)
            Random
                GMT Unix Time: Jun  7, 2050 12:50:05.000000000 PST
                Random Bytes: da03ff7045a5f76e78edf61c097c75e4e141df6649ef1861...
            Session ID Length: 0
            Cipher Suites Length: 28
            Cipher Suites (14 suites)
                Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA (0xc00a)
                Cipher Suite: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA (0xc009)
                Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
                Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
                Cipher Suite: TLS_ECDHE_ECDSA_WITH_RC4_128_SHA (0xc007)
                Cipher Suite: TLS_ECDHE_RSA_WITH_RC4_128_SHA (0xc011)
                Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033)
                Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
                Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
                Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
                Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)
                Cipher Suite: TLS_RSA_WITH_RC4_128_SHA (0x0005)
                Cipher Suite: TLS_RSA_WITH_RC4_128_MD5 (0x0004)
                Cipher Suite: TLS_FALLBACK_SCSV (0x5600)
            Compression Methods Length: 1
            Compression Methods (1 method)
                Compression Method: null (0)
            Extensions Length: 92
            Extension: renegotiation_info
                Type: renegotiation_info (0xff01)
                Length: 1
                Renegotiation Info extension
                    Renegotiation info extension length: 0
            Extension: server_name
                Type: server_name (0x0000)
                Length: 27
                Server Name Indication extension
                    Server Name list length: 25
                    Server Name Type: host_name (0)
                    Server Name length: 22
                    Server Name: [REDACTED]
            Extension: Extended Master Secret
                Type: Extended Master Secret (0x0017)
                Length: 0
            Extension: SessionTicket TLS
                Type: SessionTicket TLS (0x0023)
                Length: 0
                Data (0 bytes)
            Extension: Application Layer Protocol Negotiation
                Type: Application Layer Protocol Negotiation (0x0010)
                Length: 26
                ALPN Extension Length: 24
                ALPN Protocol
                    ALPN string length: 5
                    ALPN Next Protocol: h2-16
                    ALPN string length: 8
                    ALPN Next Protocol: spdy/3.1
                    ALPN string length: 8
                    ALPN Next Protocol: http/1.1
            Extension: ec_point_formats
                Type: ec_point_formats (0x000b)
                Length: 2
                EC point formats Length: 1
                Elliptic curves point formats (1)
                    EC point format: uncompressed (0)
            Extension: elliptic_curves
                Type: elliptic_curves (0x000a)
                Length: 8
                Elliptic Curves Length: 6
                Elliptic curves (3 curves)
                    Elliptic curve: secp256r1 (0x0017)
                    Elliptic curve: secp384r1 (0x0018)
                    Elliptic curve: secp521r1 (0x0019)

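For reference, a capture like the one above can be collected with something along these lines and then opened in Wireshark (the interface and file names are assumptions):

# Capture full packets on the TLS port for offline analysis
tcpdump -i eth0 -s 0 -w tls-handshakes.pcap 'tcp port 443'
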
It's a little puzzling, because the handshake uses TLS 1.0, which the server definitely supports and which the client is very likely to support as well.

I've seen discussions online saying that the presence of the TLS_FALLBACK_SCSV cipher suite indicates that this failed connection is related to anti-POODLE security measures, and that the client will most likely retry the connection. Is that correct?

Ultimately, I'm trying to answer two questions:

  1. Is this bad? If so, why? (What do the clients need that the endpoint is not providing?)

  2. If it's not bad, then why do we get this constant stream of TLS errors?

It's a little difficult to search the capture file and try to correlate the failed SSL handshake with other, successful connections, because the source IPs are masked by the ELB. There might be a way to rely on the PROXY protocol header to identify IPs, but I'll have to figure out how to do that.
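
If I go down that route, my understanding is that it means enabling the ProxyProtocol policy on the classic ELB and then telling Nginx to read the header. A minimal sketch, assuming a classic ELB named my-elb and the realip module compiled into Nginx (the names, paths, and address range below are placeholders, not my actual setup):

# Enable the PROXY protocol on the classic ELB for the backend TLS port
aws elb create-load-balancer-policy \
  --load-balancer-name my-elb \
  --policy-name EnableProxyProtocol \
  --policy-type-name ProxyProtocolPolicyType \
  --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server \
  --load-balancer-name my-elb \
  --instance-port 443 \
  --policy-names EnableProxyProtocol

And on the Nginx side (inside the http block):

# Accept the PROXY protocol header and recover the real client address
log_format proxied '$proxy_protocol_addr - $remote_addr [$time_local] "$request" $status';

server {
    listen 443 ssl proxy_protocol;
    ssl_certificate     /etc/nginx/ssl/cert.pem;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    set_real_ip_from    10.0.0.0/8;                # the ELB's private range (placeholder)
    real_ip_header      proxy_protocol;            # make $remote_addr the real client IP
    access_log          /var/log/nginx/access.log  proxied;
}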

1 Answer

This actually appears to be normal.

If a TLS client fails to connect for whatever reason (even a plain TCP failure on a bad network), it downgrades the TLS protocol version and tries again, this time including the TLS_FALLBACK_SCSV signaling cipher suite in the ClientHello. When the server sees TLS_FALLBACK_SCSV but supports a higher protocol version than the one the client is now offering, it cannot tell whether the downgrade is an innocent retry or an attacker-induced one, so it responds with the fatal inappropriate fallback alert. A legitimate client will simply try again with a higher protocol version (and indeed the vast majority of our connections end up on TLSv1.2).

This mechanism exists to defend against downgrade-based man-in-the-middle attacks such as POODLE. Naively downgrading the connection step by step from the top opens the door to an attacker who deliberately breaks the first handshake to force the client onto a weaker protocol. By signaling the downgrade with TLS_FALLBACK_SCSV (RFC 7507), the client lets the server reject any fallback to a version lower than the best one both sides support, whether the retry was triggered by a network glitch or by an attack.
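
You can reproduce the same fatal alert by hand: force a TLS 1.0 handshake while advertising the fallback signal against a server that also supports TLS 1.2. For example (the hostname is a placeholder; -fallback_scsv needs a reasonably recent OpenSSL):

# Pretend to be a client retrying after a failed higher-version attempt:
# offer only TLS 1.0 but include TLS_FALLBACK_SCSV in the ClientHello.
# A TLS 1.2-capable server should answer with a fatal "inappropriate
# fallback" alert, just like the one in your capture.
openssl s_client -connect example.com:443 -tls1 -fallback_scsv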

The rate at which we see these errors is consistent with what other companies are seeing.

For more details, see the discussion thread SSL error “inappropriate fallback” and TLS_FALLBACK_SCSV in the June 2017 archives of the openssl-users mailing list:

https://mta.openssl.org/pipermail/openssl-users/2017-June/thread.html#5911
