100 Continue
100 Continue is a status code you might not deal with very often. Generally, as a web developer, you won’t send the 100 Continue status yourself; it’s handled under the hood by your webserver.
So what’s it for? The best example comes from RFC 7231. Say you’re sending a large object to the server using a PUT request; you may include an Expect header like this:
PUT /media/file.mp4 HTTP/1.1
Host: api.example.org
Content-Length: 1073741824
Expect: 100-continue
This tells the server that it should respond with a 100 Continue status code if it is going to be able to accept the request:
HTTP/1.1 100 Continue
When the client receives this, it knows the server will accept the request, and it can start sending the request body.
The big benefit here is that if there’s a problem with the request, a server can immediately respond with an error before the client starts sending the request body.
A simple use case is that a server might first require authentication with 401 Unauthorized, or it might know in advance that the Content-Type the client wants to send is not something it will accept.
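For instance, a server that requires authentication could answer the headers from the earlier example with something like this instead of 100 Continue (the exact response headers here are just illustrative):

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer
Content-Length: 0

The client then knows not to bother transmitting the large request body at all.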
Unfortunately, it’s not always possible to take advantage of this feature. PHP, for example, has no event you can hook into to tell the client early that it should not send the request body. PHP scripts only start running after the full request has been received, which is too late.
An example of a platform that does support this is Node.js. Its http.Server class emits a checkContinue event that you can listen to in order to send an early error response. If you don’t listen for this event, Node.js will send the 100 Continue status by default.
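Here’s a minimal sketch of what that could look like; the content-type check, the response messages and the port number are just illustrative:

const http = require('http');

const server = http.createServer((req, res) => {
  // Requests without an "Expect: 100-continue" header end up here.
  res.writeHead(200);
  res.end('ok');
});

// Emitted for every request that carries "Expect: 100-continue".
// Without a listener, Node.js replies with 100 Continue automatically.
server.on('checkContinue', (req, res) => {
  if (req.headers['content-type'] !== 'video/mp4') {
    // Reject early; the client never has to upload the body.
    res.writeHead(415, { 'Content-Type': 'text/plain' });
    res.end('Only video/mp4 uploads are accepted\n');
    return;
  }

  // Tell the client to go ahead and send the request body.
  res.writeContinue();

  // When this event is handled, the normal 'request' listener is not
  // called for this request, so the body must be consumed here.
  req.on('data', () => { /* store the upload somewhere */ });
  req.on('end', () => {
    res.writeHead(201);
    res.end();
  });
});

server.listen(8080);

Clients that send Expect: 100-continue (curl, for instance, does this automatically for larger request bodies) will receive the 415 before transmitting any of the body.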
My personal experience with 100 Continue is mostly confusion and frustration. For a while I didn’t really understand what it meant, and Nginx in particular didn’t have support for it for many years (it does now). This broke a number of obscure clients I had to deal with for a project I was working on.