Tag: Network


The HTTP-over-QUIC experimental protocol will be renamed to HTTP/3 and is expected to become the third official version of the HTTP protocol, officials at the Internet Engineering Task Force (IETF) have revealed.

This will become the second Google-developed experimental technology to become an official HTTP protocol upgrade after Google’s SPDY technology became the base of HTTP/2.

HTTP-over-QUIC is a rewrite of the HTTP protocol that uses Google’s QUIC instead of TCP (Transmission Control Protocol) as its base technology.


QUIC stands for “Quick UDP Internet Connections” and is, itself, Google’s attempt at rewriting the TCP protocol as an improved technology that combines HTTP/2, TCP, UDP, and TLS (for encryption), among many other things.

In a mailing list discussion last month, Mark Nottingham, Chair of the IETF HTTP and QUIC Working Group, made the official request to rename HTTP-over-QUIC as HTTP/3 and to pass its development from the QUIC Working Group to the HTTP Working Group.

In the discussions that followed, stretching over several days, Nottingham’s proposal was accepted by fellow IETF members, who gave their official seal of approval for HTTP-over-QUIC to become HTTP/3, the next major iteration of the HTTP protocol, the technology that underpins today’s World Wide Web.

According to web statistics portal W3Techs, as of November 2018, 31.2 percent of the top 10 million websites support HTTP/2, while only 1.2 percent support QUIC.

What is QUIC?

QUIC (Quick UDP Internet Connections) is a new transport protocol for the internet, developed by Google.

QUIC solves a number of transport-layer and application-layer problems experienced by modern web applications while requiring little or no change from application writers. QUIC is very similar to TCP+TLS+HTTP2 but implemented on top of UDP. Having QUIC as a self-contained protocol allows innovations which aren’t possible with existing protocols as they are hampered by legacy clients and middleboxes.

Key advantages of QUIC over TCP+TLS+HTTP2 include:

  • Connection establishment latency
  • Improved congestion control
  • Multiplexing without head-of-line blocking
  • Forward error correction
  • Connection migration

Connection Establishment

QUIC handshakes frequently require zero roundtrips before sending a payload, as compared to 1-3 roundtrips for TCP+TLS.

The first time a QUIC client connects to a server, the client must perform a 1-roundtrip handshake in order to acquire the necessary information to complete the handshake. The client sends an inchoate (empty) client hello (CHLO); the server responds with a rejection (REJ) containing the information the client needs to make forward progress, including the source address token and the server’s certificates. The next time the client sends a CHLO, it can use the cached credentials from the previous connection to immediately send encrypted requests to the server.
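The handshake caching described above can be sketched as a toy model (no real cryptography; the `server_config` fields are stand-ins for the token and certificates mentioned in the text): first contact costs one round trip, while repeat connections can send application data immediately.

```python
# Toy model of QUIC's cached handshake (illustrative only, not the real protocol).
server_config = {"source_address_token": "tok", "certificates": ["cert"]}

class Client:
    def __init__(self):
        self.cached = None  # server credentials from a previous connection

    def connect(self):
        if self.cached is None:
            # Inchoate CHLO -> server REJ with the info we need: 1 round trip.
            self.cached = dict(server_config)
            return 1  # round trips before application data can be sent
        # Full CHLO using cached credentials: encrypted request goes out now.
        return 0

client = Client()
assert client.connect() == 1  # first connection: 1-RTT handshake
assert client.connect() == 0  # repeat connection: 0-RTT
```

The point of the sketch is simply that the round-trip cost is paid once per server, not once per connection.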


Congestion Control

QUIC has pluggable congestion control and provides richer information to the congestion control algorithm than TCP. Currently, Google’s implementation of QUIC uses a reimplementation of TCP Cubic and is experimenting with alternative approaches.

One example of richer information is that each packet, both original and retransmitted, carries a new sequence number. This allows a QUIC sender to distinguish ACKs for retransmissions from ACKs for originals and avoids TCP’s retransmission ambiguity problem. QUIC ACKs also explicitly carry the delay between the receipt of a packet and the sending of its acknowledgment; together with the monotonically increasing sequence numbers, this allows precise roundtrip-time calculation.
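The RTT calculation described above can be sketched as follows (illustrative function names, not the real QUIC API): because every transmission gets a fresh packet number and each ACK carries the receiver’s ack delay, the sender can subtract receiver processing time from the measured round trip.

```python
# Sketch of QUIC-style RTT estimation (times in seconds).
sent_times = {}  # packet number -> time the packet was sent

def on_packet_sent(packet_number, now):
    sent_times[packet_number] = now

def on_ack_received(packet_number, ack_delay, now):
    """ack_delay = time the receiver held the packet before ACKing it."""
    raw_rtt = now - sent_times.pop(packet_number)
    return raw_rtt - ack_delay  # network RTT, excluding receiver processing

on_packet_sent(1, now=0.000)
rtt = on_ack_received(1, ack_delay=0.020, now=0.120)
# 120 ms of wall time minus 20 ms spent at the receiver: 100 ms network RTT
```

TCP, by contrast, cannot reliably take this measurement on retransmitted segments, since it cannot tell which transmission an ACK belongs to.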

Finally, QUIC’s ACK frames support up to 256 NACK ranges, so QUIC is more resilient to reordering than TCP (with SACK), as well as able to keep more bytes on the wire when there is reordering or loss. Both client and server have a more accurate picture of which packets the peer has received.


Multiplexing

One of the larger issues with HTTP2 on top of TCP is head-of-line blocking. The application sees a TCP connection as a stream of bytes. When a TCP packet is lost, no stream on that HTTP2 connection can make forward progress until the packet is retransmitted and received by the far side – not even when the packets carrying data for those streams have already arrived and are waiting in a buffer.

Because QUIC is designed from the ground up for multiplexed operation, lost packets carrying data for an individual stream generally only impact that specific stream. Each stream frame can be immediately dispatched to that stream on arrival, so streams without loss can continue to be reassembled and make forward progress in the application.
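The per-stream delivery described above can be sketched with a minimal reassembly buffer (a simplification of real QUIC stream handling): a lost frame on one stream leaves that stream waiting, while frames for other streams are delivered immediately.

```python
# Sketch: per-stream reassembly, so loss on stream 1 does not stall stream 2.
from collections import defaultdict

class Stream:
    def __init__(self):
        self.buffer = {}       # offset -> data, for out-of-order frames
        self.next_offset = 0
        self.delivered = b""

    def on_frame(self, offset, data):
        self.buffer[offset] = data
        while self.next_offset in self.buffer:  # deliver contiguous data
            chunk = self.buffer.pop(self.next_offset)
            self.delivered += chunk
            self.next_offset += len(chunk)

streams = defaultdict(Stream)

# The frame for stream 1 at offset 0 is lost; later frames still arrive:
streams[1].on_frame(4, b"late")  # buffered, waiting for offset 0
streams[2].on_frame(0, b"ok")    # stream 2 progresses immediately
assert streams[2].delivered == b"ok"
assert streams[1].delivered == b""  # only the stream with loss is stalled
```

Under TCP, the byte at "offset 0" would block delivery of everything behind it on every multiplexed stream, not just stream 1.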

Forward Error Correction

In order to recover from lost packets without waiting for a retransmission, QUIC can complement a group of packets with an FEC packet. Much like RAID-4, the FEC packet contains parity of the packets in the FEC group. If one of the packets in the group is lost, the contents of that packet can be recovered from the FEC packet and the remaining packets in the group. The sender may decide whether to send FEC packets to optimize specific scenarios (e.g., beginning and end of a request).
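The RAID-4-style parity described above can be demonstrated with XOR over a small group of equal-length packets (a toy version of the scheme; real FEC must also handle padding and group bookkeeping):

```python
# Toy forward error correction: one XOR parity packet per group,
# able to recover exactly one lost packet.

def xor_parity(packets):
    parity = bytearray(len(packets[0]))  # assumes equal-length packets
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """received: the packets that arrived, with exactly one missing."""
    return xor_parity(list(received) + [parity])

group = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(group)

# Packet 1 (b"BBBB") is lost in transit; XOR of the rest restores it:
restored = recover([group[0], group[2]], parity)
assert restored == b"BBBB"
```

The trade-off is bandwidth for latency: the sender pays for an extra packet per group but avoids a retransmission round trip on a single loss, which is why the text suggests using it at latency-sensitive points such as the start and end of a request.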

Connection Migration

QUIC connections are identified by a 64-bit connection ID, randomly generated by the client. In contrast, TCP connections are identified by a 4-tuple of source address, source port, destination address, and destination port. This means that if a client changes IP addresses (for example, by moving out of Wi-Fi range and switching over to cellular) or ports (if a NAT box loses and rebinds the port association), any active TCP connections are no longer valid. When a QUIC client changes IP addresses, it can continue to use the old connection ID from the new IP address without interrupting any in-flight requests.
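The difference in connection identity can be sketched as a lookup table keyed by connection ID (hypothetical class and field names): the client’s source address plays no part in finding the connection, so an address change is invisible to in-flight requests.

```python
# Sketch: connections identified by a 64-bit connection ID, not a 4-tuple.
import os

class Server:
    def __init__(self):
        self.connections = {}  # connection ID -> connection state

    def accept(self, conn_id):
        self.connections[conn_id] = {"requests_in_flight": 1}

    def on_datagram(self, conn_id, src_addr):
        # src_addr is ignored for lookup: only the ID names the connection.
        return self.connections.get(conn_id)

server = Server()
cid = os.urandom(8)  # 64-bit connection ID, randomly chosen by the client

server.accept(cid)
wifi = ("203.0.113.5", 5000)
lte = ("198.51.100.9", 6000)

# Client migrates from Wi-Fi to cellular mid-connection:
assert server.on_datagram(cid, wifi) is server.on_datagram(cid, lte)
```

A TCP server performing the same lookup keyed on `(src_ip, src_port, dst_ip, dst_port)` would treat the datagram from the new address as belonging to a different, unknown connection.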

For a detailed explanation, read the book: HTTP/3 Explained by Daniel Stenberg

HTTP/3 explained is a free and open booklet describing the HTTP/3 and QUIC protocols.


Watch this Google Developers QUIC Tech Talk:

Do drop a comment below.

Source: zdnet, Google, Chromium Blog, Chromium



A firewall is a system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in hardware, in software, or as a combination of both, and are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets.

Types of firewall techniques:

  • Packet filter: Each packet entering or leaving the network is checked against user-defined rules and either accepted or rejected. Packet filtering is fairly effective and transparent to users, but it is difficult to configure and is susceptible to IP spoofing.
  • Application gateway: Security mechanisms are applied to specific applications, such as FTP and Telnet servers. This is very effective, but it can degrade performance.
  • Circuit-level gateway: Security mechanisms are applied when a TCP or UDP connection is established. Once the connection is established, packets can flow between the hosts without further checking.
  • Proxy server: All messages entering and leaving the network are intercepted, and the true network addresses are effectively hidden by the proxy server.

Principle of a Firewall:

A firewall system applies a set of predefined rules, under which it can:

  • Authorise the connection (allow)
  • Block the connection and inform the issuer (deny)
  • Reject the connection request without informing the issuer (drop)
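The three actions above can be illustrated with a minimal first-match rule engine (a hypothetical rule format for illustration, not any particular firewall’s syntax):

```python
# Sketch: first-match firewall rule evaluation with the three actions.
RULES = [
    {"port": 22,  "action": "allow"},  # authorise the connection
    {"port": 23,  "action": "deny"},   # block and inform the issuer
    {"port": 445, "action": "drop"},   # reject silently, no response
]

def evaluate(port):
    for rule in RULES:
        if rule["port"] == port:
            return rule["action"]
    return "drop"  # default-deny: anything unmatched is silently dropped

assert evaluate(22) == "allow"
assert evaluate(23) == "deny"
assert evaluate(8080) == "drop"  # falls through to the default
```

The `drop` default at the end reflects the default-deny posture recommended in the best practices below: only traffic explicitly matched by an allow rule gets through.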

Firewall Management Best Practices:

  • Don’t assume that the firewall is the answer to all your network security needs.
  • Prefer a default-deny policy (deny all traffic and allow only what is needed) over a default-allow policy that merely blocks known vulnerable ports.
  • Limit the number of applications (antivirus, VPN, authentication software) running alongside your host-based firewalls to free up CPU cycles and network throughput.
  • Run the firewall services under a unique service ID rather than the generic root/admin ID.
  • Follow good password practices:

                   – Change the default admin or root passwords before connecting the firewall to the internet

                   – Use a long, complex passphrase that is difficult to crack but easy to remember

                   – Change the passwords every six months and whenever a compromise is suspected

  • Use features like stateful inspection, proxies and application level inspections if available in the firewalls.
  • Physical access to the firewall should be controlled.
  • Keep the configurations simple, eliminate unneeded and redundant rules.
  • Audit the firewall rule base regularly.
  • Perform regular security tests on your firewalls: test for new exploits, test after rule changes, and test with the firewall disabled to determine how vulnerable you would be in case of firewall failure.
  • Enable firewall logging and alerting.
  • Use a secure remote syslog server that makes log modification and manipulation difficult for an attacker.
  • Consider outsourcing firewall management to a managed service provider to leverage their expertise, trend analysis, and threat intelligence.
  • Have strong Change Management process to control changes to firewalls.
  • Use personal firewalls/intrusion prevention software, as network firewalls can easily be circumvented when devices connect through USB modems, ADSL links, etc.
  • Back up the firewall rule base regularly and keep the backups offsite.