Anatomy of an HTTPS Request
Every HTTPS request passes through three protocol layers. TCP establishes a reliable connection (SYN, SYN-ACK, ACK), TLS encrypts the channel, and HTTP carries your data. Each layer wraps the one above it, like envelopes inside envelopes. The simulation below lets you watch packets flow through the entire stack.
why does TCP need a handshake?
UDP fires packets and hopes for the best. TCP can’t do that. It guarantees delivery, ordering, and flow control. Before any of that works, both sides need to agree on a few things:
- “Are you there?” The client needs to confirm the server is listening.
- Sequence numbers. Each side picks a random starting number to track bytes in flight (protects against stale connections and spoofing).
- Receive window. How much data the receiver can buffer at once.
The handshake solves all three in exactly three packets. No more, no less.
step by step
1. SYN (Client → Server)
The client picks a random initial sequence number (ISN), say seq=100, and
sends a SYN (synchronize) packet. The client moves to SYN_SENT state and
starts a retransmission timer.
2. SYN-ACK (Server → Client)
The server receives the SYN, picks its own ISN (seq=300), and acknowledges
the client’s sequence number by setting ack=101 (client’s seq + 1). It
sends this back as a SYN-ACK and moves to SYN_RCVD.
3. ACK (Client → Server)
The client receives the SYN-ACK, confirms the server’s sequence number with
ack=301, and sends the final ACK. Both sides move to ESTABLISHED.
Data can now flow in both directions.
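The whole exchange is what the kernel performs inside connect() and accept(). A minimal local sketch, with both endpoints in one process and the port chosen by the OS:

```python
import socket
import threading

# The kernel performs the three-way handshake inside connect()/accept().
# Running both endpoints locally lets the handshake complete in-process.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)                   # queue of completed handshakes
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()      # returns once the final ACK arrives
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.create_connection(("127.0.0.1", port))  # SYN -> SYN-ACK -> ACK
data = client.recv(5)              # both sides are ESTABLISHED; data flows
print(data)                        # b'hello'
client.close()
t.join()
server.close()
```

connect() doesn't return until the handshake completes, which is why a dead host makes it block for the full retransmission schedule.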
sequence numbers matter
The ISN isn’t 0. It’s a random 32-bit number, and that’s deliberate:
- Stale segments. If a previous connection used seq=0, a delayed packet from that connection could be misinterpreted as part of the new one. Random ISNs make collisions astronomically unlikely.
- Spoofing protection. An attacker who can’t see the traffic can’t guess the sequence number to inject packets into the stream.
Modern OS kernels use time-based or cryptographic ISN generation (RFC 6528) to make prediction infeasible.
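The RFC 6528 idea can be sketched as a hash of the connection 4-tuple and a secret key, offset by a timer. The exact kernel algorithm differs; this is an illustration:

```python
import hashlib
import os
import time

# Sketch of RFC 6528-style ISN generation (not the kernel's exact code):
# ISN = timer + cryptographic hash of (secret, connection 4-tuple).

SECRET = os.urandom(16)  # per-boot secret key

def isn(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    four_tuple = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(SECRET + four_tuple).digest()
    hashed = int.from_bytes(digest[:4], "big")
    timer = int(time.monotonic() * 250_000)  # ~4 microsecond tick
    return (timer + hashed) % 2**32          # wrap to a 32-bit value
```

An observer who lacks the secret can't predict the ISN for a new 4-tuple, but sequence numbers for the same connection still advance monotonically with the timer.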
+ what happens when SYN is lost?
If the SYN packet is lost (try the “Drop packet” button in the simulation), the client never gets a SYN-ACK back within its retransmission timeout (RTO).
The default RTO is typically 1 second (RFC 6298). On each successive failure the timeout doubles. This is exponential backoff:
- Attempt 1: wait 1s
- Attempt 2: wait 2s
- Attempt 3: wait 4s
- Attempt 4: wait 8s
Most systems give up after 5 or 6 retries (about 63 seconds total). The SYN-ACK is retransmitted the same way; a lost final ACK is recovered when the server retransmits its SYN-ACK.
This is why a “connection timeout” to a dead host takes about a minute. The kernel is methodically retrying with increasing delays.
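The schedule above can be sketched directly. The 1 s initial RTO and the retry cap are typical defaults, not universal:

```python
# Sketch of the SYN retransmission schedule: initial RTO of 1 s,
# doubled on each failure (exponential backoff), up to max_retries.

def syn_backoff(initial_rto: float = 1.0, max_retries: int = 6) -> list[float]:
    schedule, rto = [], initial_rto
    for _ in range(max_retries):
        schedule.append(rto)
        rto *= 2                 # double the wait after every lost SYN
    return schedule

print(syn_backoff())             # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
print(sum(syn_backoff()))        # 63.0 seconds before the kernel gives up
```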
+ SYN flood attacks
The handshake has an asymmetry that attackers exploit: after receiving a SYN, the server allocates memory for the half-open connection (the Transmission Control Block, or TCB) and waits for the final ACK.
In a SYN flood, the attacker sends thousands of SYN packets with spoofed source IPs. The server fills its backlog with half-open connections that will never complete, starving legitimate connections.
Defenses:
- SYN cookies (RFC 4987). The server encodes state in the ISN itself, so it doesn’t need to store a TCB until the ACK arrives. This is enabled by default on Linux (net.ipv4.tcp_syncookies).
- Increased backlog. net.ipv4.tcp_max_syn_backlog controls how many half-open connections the kernel holds.
- SYN-ACK retries. Reducing net.ipv4.tcp_synack_retries drops half-open connections faster.
- Rate limiting. Firewall rules (iptables, nftables) can throttle SYN packets per source IP.
+ TCP Fast Open (TFO)
The standard handshake adds a full round-trip before any data moves. For short-lived connections (HTTP requests, DNS over TCP), that latency adds up.
TCP Fast Open (RFC 7413) lets the client send data in the SYN packet on repeat connections:
- First connection: normal handshake, but the server issues a TFO cookie in the SYN-ACK.
- Subsequent connections: the client includes the cookie and application data in the SYN. The server validates the cookie and delivers the data immediately, saving one RTT.
Trade-offs:
- Middleboxes (firewalls, NATs) sometimes strip unknown TCP options, breaking TFO silently.
- The SYN data isn’t covered by the full handshake, so the server must handle possible replays at the application layer.
- Adoption is patchy. Linux supports it (net.ipv4.tcp_fastopen), but many networks still block it.
TFO is most impactful on high-latency links (mobile, cross-continent) where saving 100ms+ per connection matters.
after the handshake
Once TCP is established, the simulation shows what comes next. For HTTPS, two more layers stack on top.
TLS 1.3: one round-trip to encryption
The client sends a ClientHello with its supported cipher suites and a key share (typically x25519). The server responds with a ServerHello, choosing a cipher suite and sending its own key share. At this point both sides can derive the shared secret. No more round trips needed for key exchange.
The server then sends its certificate and a Finished message, all encrypted with the newly derived handshake keys. The client verifies the certificate chain, sends its own Finished, and both sides switch to the final application traffic keys. Total cost: one round-trip. TLS 1.2 needed two.
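In Python's ssl module, pinning a client context to TLS 1.3 guarantees the one-round-trip handshake described above. A minimal sketch; no connection is opened here:

```python
import ssl

# Sketch: a client context pinned to TLS 1.3, so any handshake it performs
# is the one-RTT flow described above. Wrapping a connected TCP socket with
# ctx.wrap_socket(sock, server_hostname=...) would send the ClientHello.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

print(ctx.minimum_version.name)   # TLSv1_3
```

create_default_context() also loads the system trust store, which is what backs the certificate-chain verification step.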
Watch the simulation closely. After the TLS handshake completes, a lock icon appears on the TLS layer band. Everything above that layer is now encrypted.
HTTP: your data, wrapped twice
With TLS ready, HTTP/1.1 sends the actual request. A GET / becomes HTTP
headers inside a TLS record inside a TCP segment. The simulation’s packet
inspector shows this nesting. Click on the GET or 200 OK steps to see all
three layers stacked in the inspector.
This is encapsulation. Each layer adds its own header and treats everything above it as opaque payload. TCP doesn’t know it’s carrying TLS. TLS doesn’t know it’s carrying HTTP. Each layer only talks to its counterpart on the other side.
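The nesting can be sketched with a toy protocol stack. The header layout here (8-byte name plus 4-byte length) is invented for illustration, not a real wire format; the point is that each layer treats its payload as opaque bytes:

```python
import struct

# Toy encapsulation, mirroring HTTP-inside-TLS-inside-TCP. Each wrap()
# prepends a header and never inspects the payload it carries.

def wrap(layer: bytes, payload: bytes) -> bytes:
    # 8-byte layer name (null-padded) + 4-byte payload length, then payload
    return struct.pack("!8sI", layer, len(payload)) + payload

def unwrap(packet: bytes) -> tuple[bytes, bytes]:
    layer, length = struct.unpack("!8sI", packet[:12])
    return layer.rstrip(b"\x00"), packet[12:12 + length]

http = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
tls = wrap(b"TLS", http)   # TLS record wraps the HTTP request
tcp = wrap(b"TCP", tls)    # TCP segment wraps the TLS record

# Peeling one header at a time: the TCP layer never looks inside TLS.
layer, inner = unwrap(tcp)
print(layer)               # b'TCP'
print(unwrap(inner)[0])    # b'TLS'
```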
The simulation shows you the textbook version. What follows is what the textbooks tend to skip.
connections don’t exist
There is no pipe between your browser and the server. No dedicated circuit, no persistent channel. A TCP “connection” is just state held at both endpoints: sequence numbers, window sizes, retransmission timers. The network between them is stateless. Every packet is independently routed and could take a completely different path.
This isn’t trivia. It explains real behavior you’ve probably seen:
- Pull the Ethernet cable and plug it back in before the retransmission timeout expires. The “connection” survives because the state at each end is untouched. The network never knew the difference.
- NAT tables, firewalls, and load balancers maintain their own shadow copy of connection state. When that state goes stale (idle timeout, reboot), your “connection” dies even though both endpoints are perfectly fine. The middlebox forgot about you.
- TIME_WAIT exists because the network might still have old packets in flight after both sides agree the connection is closed. The endpoint keeps state around for 2x the maximum segment lifetime (about 60 seconds) just to reject stragglers that show up late.
- TCP keepalives aren’t maintaining a connection. They’re probing whether the other side’s state still exists. If a NAT box between you silently dropped its mapping, the keepalive is how you find out the connection is already gone.
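Keepalives are opt-in, per socket. A sketch assuming Linux option names (TCP_KEEPIDLE and friends vary by platform):

```python
import socket

# Sketch: enable keepalive probes on a socket (Linux option names).
# The probes don't maintain the connection; they test whether the peer's
# state, and any NAT mapping in between, still exists.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle secs before first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # secs between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before reset
```

If five probes in a row go unanswered, the kernel tears down its local state and your next read fails, which is how a long-dead connection finally surfaces as an error.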
Next time you see a “connection reset” error, think about it this way: something destroyed the state at one end. The wire was never involved.
layers all the way down
The simulation shows a clean three-layer stack: TCP, TLS, HTTP. The OSI model says there should be seven. Reality follows neither.
TLS isn’t an OSI layer. It sits between transport and application but doesn’t have a layer number. The OSI model’s “presentation layer” (layer 6) was supposed to handle encryption, but nobody actually builds systems that way. TLS is a shim that both sides agree to wedge into the stack.
HTTP/3 collapses the model. It uses QUIC, which reimplements TCP’s reliability, flow control, and multiplexing inside UDP. The “transport layer” now lives in userspace, inside what’s technically an “application layer” protocol. Layers 4 and 7 merged and nobody asked permission.
VXLAN puts layer 2 inside layer 7. Data center overlay networks wrap Ethernet frames (layer 2) inside UDP (layer 4) inside IP (layer 3) inside Ethernet (layer 2 again). The stack recurses.
The OSI model is a teaching tool, not an architecture. Real protocols leak, merge, and nest in ways no committee anticipated. The mental model that actually holds up is simpler: encapsulation. Each protocol wraps the one above it and treats it as opaque bytes. That pattern works even when the layers don’t match any textbook diagram.