The HTTP/2 standard was finalized in May 2015. Most major browsers support it, and Google uses it heavily. HTTP/2 leaves the basic concepts of Requests, Responses and Headers intact. The changes are mostly at the transport level, improving the performance of parallel requests - with few changes required in your application. The Go HTTP/2 'gophertiles' demo nicely demonstrates this effect. A new concept in HTTP/2 is Server Push, which allows the server to speculatively start sending resources to the client. This can potentially speed up initial page load times: instead of the browser having to parse the HTML page to find out which other resources to load, the server can start sending them immediately. This article will demonstrate how Server Push affects the load time of the 'gophertiles' demo.
HTTP/2 in a nutshell
The key characteristic of HTTP/2 is that all requests to a server are sent over one TCP connection, and responses can come back in parallel over that connection. Using only one connection reduces the overhead caused by TCP and TLS handshakes. Allowing responses to be sent in parallel is an improvement over HTTP/1.1 pipelining, where responses must be returned in the order the requests were sent, so one slow response blocks all the ones behind it. Additionally, because all requests are sent over one connection, a Header Compression mechanism reduces the bandwidth needed for headers that previously would have had to be repeated for each request.
HTTP/2 is here today, and can provide a considerable improvement in perceived performance - even with few changes to your application. Server Push potentially allows you to improve your page load times even further, but requires careful analysis and tuning - otherwise it might even have an adverse effect.