
TCP/IP: The OG Duo That Powers Your Cat Videos

This blog dives into the Transmission Control Protocol (TCP), the unsung hero behind reliable internet communication. From the three-way handshake to flow and congestion control, it explains how TCP ensures your data arrives intact and in order. The post highlights how TCP’s Slow Start mechanism can slow down page loads, especially when a page exceeds the initial congestion window (~14 kB). Real-world studies (like Google’s) show how tuning the initial window size measurably improves load times. It also explains how even small things, like extra whitespace or bloated headers, can trigger extra round trips and delay rendering. The blog wraps up with TCP’s modern upgrades, including BBR and Fast Open, and the rise of QUIC, a newer protocol that promises faster, smarter connections. Whether you’re a dev or just curious, this piece reveals how every scroll, click, and stream rides on TCP’s digital backbone.

What Is TCP?

TCP (Transmission Control Protocol) was developed in the 1970s as part of the ARPANET project by pioneers like Vint Cerf and Bob Kahn. It works on top of the Internet Protocol (IP), adding a layer of reliability and data integrity. Unlike IP, which simply routes packets, TCP ensures that the data actually arrives intact and in order.
Most internet applications like web browsing, email, file transfers, and remote logins rely on TCP because they require dependable data delivery.

How TCP Works

1. The Three-Way Handshake

Before data can be transmitted, TCP performs a connection setup using a three-step process:

  1. SYN: The client initiates the connection.

  2. SYN-ACK: The server acknowledges and responds.

  3. ACK: The client acknowledges the server’s response.

This handshake establishes a reliable communication path between both endpoints.
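In practice you rarely see the handshake directly: the operating system performs it whenever an application opens a connection. Here is a minimal Python sketch (example.com and port 80 are just placeholder values); by the time the connection call returns, the SYN, SYN-ACK, ACK exchange has already completed.

```python
import socket

# The kernel performs the three-way handshake for us: connect() sends the
# SYN, and the call returns once the SYN-ACK has been received and the
# final ACK sent.
HOST, PORT = "example.com", 80  # placeholder endpoint

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # A reliable, ordered byte stream now exists in both directions.
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(sock.recv(1024).decode(errors="replace"))
```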

2. Reliable Data Transmission

TCP breaks data into segments, each with a sequence number. It ensures reliability by:

  1. Requiring acknowledgments (ACKs) for received packets

  2. Retransmitting lost or corrupted packets

  3. Reordering out-of-sequence packets (see the toy sketch after this list)
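To make the last point concrete, here is a toy Python model (deliberately simplified, not real TCP): a receiver that buffers segments arriving ahead of the expected sequence number, only hands bytes to the application once the gap is filled, and returns the cumulative ACK it would send.

```python
class ToyReceiver:
    """Toy model of in-order delivery with cumulative ACKs (not real TCP)."""

    def __init__(self):
        self.expected_seq = 0         # next byte offset we need
        self.buffer = {}              # seq -> payload held out of order
        self.delivered = bytearray()  # in-order bytes handed to the app

    def on_segment(self, seq, payload):
        if seq == self.expected_seq:
            self.delivered += payload
            self.expected_seq += len(payload)
            # Drain any buffered segments that are now contiguous.
            while self.expected_seq in self.buffer:
                data = self.buffer.pop(self.expected_seq)
                self.delivered += data
                self.expected_seq += len(data)
        elif seq > self.expected_seq:
            self.buffer[seq] = payload  # out of order: hold it for later
        # Segments below expected_seq are duplicates and are ignored.
        return self.expected_seq        # the cumulative ACK value

rx = ToyReceiver()
print(rx.on_segment(0, b"Hel"))    # 3  (in order, delivered)
print(rx.on_segment(6, b"world"))  # 3  (out of order, buffered)
print(rx.on_segment(3, b"lo "))    # 11 (gap filled, buffer drained)
print(rx.delivered.decode())       # Hello world
```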

3. Flow and Congestion Control

  1. Flow control ensures the sender doesn’t overwhelm the receiver by adjusting how much data can be sent before waiting for an ACK.

  2. Congestion control avoids network overload through mechanisms like Slow Start and Congestion Avoidance (illustrated in the toy simulation below).
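One way to picture how the two interact: the amount of data a sender keeps in flight is the smaller of the receiver's advertised window (flow control) and the congestion window (congestion control). The toy simulation below uses invented numbers and an invented loss event purely to illustrate the shape of Slow Start, Congestion Avoidance, and the backoff on loss.

```python
# Toy model (counts in segments, not bytes): send window = min(cwnd, rwnd).
def simulate(rounds, rwnd=64, init_cwnd=10, ssthresh=32, loss_rounds=(6,)):
    cwnd = init_cwnd
    for rtt in range(1, rounds + 1):
        send_window = min(cwnd, rwnd)  # flow control caps the sender
        print(f"RTT {rtt:2}: cwnd={cwnd:3} rwnd={rwnd:3} -> send {send_window} segments")
        if rtt in loss_rounds:
            ssthresh = max(cwnd // 2, 2)  # back off after a loss
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                     # Slow Start: exponential growth
        else:
            cwnd += 1                     # Congestion Avoidance: linear growth

simulate(rounds=10)
```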

Why TCP Is Essential

TCP is critical for maintaining data integrity and reliability across unpredictable and often unreliable networks. It enables:

  1. Ordered delivery of web pages and resources

  2. Reliable file downloads and uploads

  3. Secure, interactive sessions (e.g., SSH)

Without TCP, every application would need to implement its own error correction, sequencing, and reliability logic.

TCP Slow Start and Website Performance

The Problem

When a new TCP connection is established, it doesn’t transmit data at full capacity. Instead, it begins cautiously with a small initial congestion window (initcwnd) and grows it over time.

Why It Matters

If a website’s content exceeds the initial window size (usually ~14–15 kB), the server needs multiple round trips to deliver all of the data, and each extra round trip adds a full round-trip time (RTT) of delay.

| Website Size | RTTs Required | User Impact |
| --- | --- | --- |
| ≤14 kB | 1 RTT | Fast, near-instant load |
| 15–50 kB | 2–3 RTTs | Noticeable lag |
| >50 kB | 4+ RTTs | Slower, multi-second loads |

TCP’s Slow Start algorithm can significantly delay page load times, especially on high-latency networks like mobile.
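The RTT counts in the table can be sanity-checked with a back-of-the-envelope calculation. The sketch below assumes ~1460-byte segments, an initial window of 10 segments, a window that doubles every round trip, and zero loss; real connections will of course differ.

```python
MSS = 1460  # assumed maximum segment size in bytes

def rtts_to_deliver(page_bytes, init_cwnd=10):
    """Round trips needed to push page_bytes under idealized Slow Start."""
    cwnd, sent, rtts = init_cwnd, 0, 0
    while sent < page_bytes:
        sent += cwnd * MSS  # one window of data per round trip
        cwnd *= 2           # Slow Start doubles the window each RTT
        rtts += 1
    return rtts

for size_kb in (14, 50, 200, 1000):
    print(f"{size_kb:>5} kB page -> {rtts_to_deliver(size_kb * 1000)} RTT(s)")
```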

Real-World Example

Google found that increasing the initial congestion window to 10 segments (≈15 kB) improved average search latency by roughly 10% without increasing congestion. The change was later standardized in RFC 6928 and is now the default in major operating systems.

How Whitespace and Extra Bytes Hurt Performance

Many developers overlook how seemingly trivial elements, like stray whitespace, bulky response headers, or uncompressed scripts, can hurt performance. Here's why:

  1. Every byte counts in the initial TCP window.

  2. Extra bytes trigger additional RTTs during Slow Start.

  3. Large headers or unminified scripts can delay the time to first render.

Optimizing page weight helps reduce TCP round-trips and speeds up load time.
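As a rough, illustrative check (the HTML blob below is a made-up stand-in, not a real page), the snippet compares a padded document with a whitespace-trimmed and gzip-compressed version and tests which variants still fit inside a ~14 kB initial window.

```python
import gzip

# A deliberately padded, repetitive chunk of markup used only for illustration.
html = ("<div class='card'>\n    <p>   Hello, world!   </p>\n</div>\n") * 280

# Crude minification: strip leading/trailing whitespace and newlines per line.
minified = "".join(line.strip() for line in html.splitlines())

INITCWND_BYTES = 14 * 1024  # rough size of the initial congestion window

for label, payload in [("original", html.encode()),
                       ("minified", minified.encode()),
                       ("minified+gzip", gzip.compress(minified.encode()))]:
    verdict = "fits in one initcwnd" if len(payload) <= INITCWND_BYTES else "needs extra RTTs"
    print(f"{label:>15}: {len(payload):6} bytes -> {verdict}")
```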

TCP Improvements and Evolutions

TCP has evolved over the decades with numerous enhancements:

| Feature | Old TCP | Modern TCP |
| --- | --- | --- |
| Initcwnd | 2–4 segments | 10 segments |
| Max Window Size | 64 KB | ~1 GB (with window scaling) |
| Handshake | 1 RTT | 0-RTT on repeat connections (TCP Fast Open) |
| Congestion Control | Reno/Tahoe | CUBIC, BBR |
| Loss Recovery | Retransmit on timeout | SACK, Fast Retransmit |

Modern congestion control algorithms like BBR optimize throughput and latency by modeling bandwidth and RTT rather than relying solely on packet loss.
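On Linux, several of the knobs in the table are exposed as sysctl files under /proc. The sketch below simply reads and prints them; on other operating systems it reports them as unavailable.

```python
from pathlib import Path

# Standard Linux sysctl paths for a few TCP features (read-only peek).
SYSCTLS = {
    "congestion control":   "/proc/sys/net/ipv4/tcp_congestion_control",
    "available algorithms": "/proc/sys/net/ipv4/tcp_available_congestion_control",
    "TCP Fast Open":        "/proc/sys/net/ipv4/tcp_fastopen",  # bitmask: 1=client, 2=server
    "SACK enabled":         "/proc/sys/net/ipv4/tcp_sack",
    "window scaling":       "/proc/sys/net/ipv4/tcp_window_scaling",
}

for label, path in SYSCTLS.items():
    p = Path(path)
    value = p.read_text().strip() if p.exists() else "unavailable on this system"
    print(f"{label:>22}: {value}")
```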

Conclusion

TCP remains the backbone of the modern web, balancing reliability, efficiency, and fairness. But with rising expectations for speed and responsiveness, it's evolving. From improvements in Slow Start and BBR to the emergence of QUIC, the transport layer is entering a new era.
For developers, understanding TCP’s behavior—especially around latency and initial congestion—is key to building high-performance websites and apps.


