An HTTP message consists of a head and an optional body. The message head of an HTTP request consists of a request line and a collection of header fields. The message head of an HTTP response consists of a status line and a collection of header fields. All HTTP messages must include the protocol version. Some HTTP messages can optionally enclose a content body.
HttpCore defines an HTTP message object model that closely follows this definition and provides extensive support for serialization (formatting) and deserialization (parsing) of HTTP message elements.
An HTTP request is a message sent from the client to the server. The first line of that message includes the method to be applied to the resource, the identifier of the resource, and the protocol version in use.
HttpRequest request = new BasicHttpRequest("GET", "/", HttpVersion.HTTP_1_1);

System.out.println(request.getRequestLine().getMethod());
System.out.println(request.getRequestLine().getUri());
System.out.println(request.getProtocolVersion());
System.out.println(request.getRequestLine().toString());
stdout >
GET
/
HTTP/1.1
GET / HTTP/1.1
An HTTP response is a message sent by the server back to the client after having received and interpreted a request message. The first line of that message consists of the protocol version followed by a numeric status code and its associated textual phrase.
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1,
        HttpStatus.SC_OK, "OK");

System.out.println(response.getProtocolVersion());
System.out.println(response.getStatusLine().getStatusCode());
System.out.println(response.getStatusLine().getReasonPhrase());
System.out.println(response.getStatusLine().toString());
stdout >
HTTP/1.1
200
OK
HTTP/1.1 200 OK
An HTTP message can contain a number of headers describing properties of the message such as the content length, content type and so on. HttpCore provides methods to retrieve, add, remove and enumerate headers.
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1,
        HttpStatus.SC_OK, "OK");
response.addHeader("Set-Cookie",
        "c1=a; path=/; domain=localhost");
response.addHeader("Set-Cookie",
        "c2=b; path=\"/\", c3=c; domain=\"localhost\"");
Header h1 = response.getFirstHeader("Set-Cookie");
System.out.println(h1);
Header h2 = response.getLastHeader("Set-Cookie");
System.out.println(h2);
Header[] hs = response.getHeaders("Set-Cookie");
System.out.println(hs.length);
stdout >
Set-Cookie: c1=a; path=/; domain=localhost
Set-Cookie: c2=b; path="/", c3=c; domain="localhost"
2
There is an efficient way to obtain all headers of a given type using the HeaderIterator interface.
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1,
        HttpStatus.SC_OK, "OK");
response.addHeader("Set-Cookie",
        "c1=a; path=/; domain=localhost");
response.addHeader("Set-Cookie",
        "c2=b; path=\"/\", c3=c; domain=\"localhost\"");

HeaderIterator it = response.headerIterator("Set-Cookie");
while (it.hasNext()) {
    System.out.println(it.next());
}
stdout >
Set-Cookie: c1=a; path=/; domain=localhost
Set-Cookie: c2=b; path="/", c3=c; domain="localhost"
HttpCore also provides convenience methods to parse HTTP headers into individual header elements.
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1,
        HttpStatus.SC_OK, "OK");
response.addHeader("Set-Cookie",
        "c1=a; path=/; domain=localhost");
response.addHeader("Set-Cookie",
        "c2=b; path=\"/\", c3=c; domain=\"localhost\"");

HeaderElementIterator it = new BasicHeaderElementIterator(
        response.headerIterator("Set-Cookie"));
while (it.hasNext()) {
    HeaderElement elem = it.nextElement();
    System.out.println(elem.getName() + " = " + elem.getValue());
    NameValuePair[] params = elem.getParameters();
    for (int i = 0; i < params.length; i++) {
        System.out.println(" " + params[i]);
    }
}
stdout >
c1 = a
 path=/
 domain=localhost
c2 = b
 path=/
c3 = c
 domain=localhost
HTTP headers get tokenized into individual header elements only on demand. HTTP headers received over an HTTP connection are stored internally as an array of chars and parsed lazily only when their properties are accessed.
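For example, accessing the elements of a header triggers its parsing on demand (a minimal sketch, reusing the response built in the snippets above):

Header h = response.getFirstHeader("Set-Cookie");
// The raw header value has not been tokenized yet; calling getElements()
// parses it into header elements on demand
HeaderElement[] elems = h.getElements();
System.out.println(elems.length);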
HTTP messages can carry a content entity associated with the request or response. Entities can be found in some requests and in some responses, as they are optional. Requests that use entities are referred to as entity enclosing requests. The HTTP specification defines two entity enclosing methods: POST and PUT. Responses are usually expected to enclose a content entity. There are exceptions to this rule, such as responses to the HEAD method and 204 No Content, 304 Not Modified and 205 Reset Content responses.
HttpCore distinguishes three kinds of entities, depending on where their content originates:
streamed: The content is received from a stream, or generated on the fly. In particular, this category includes entities being received from a connection. Streamed entities are generally not repeatable.
self-contained: The content is in memory or obtained by means that are independent from a connection or other entity. Self-contained entities are generally repeatable.
wrapping: The content is obtained from another entity.
This distinction is important for connection management with incoming entities. For entities that are created by an application and only sent using the HttpCore framework, the difference between streamed and self-contained is of little importance. In that case, it is suggested to consider non-repeatable entities as streamed, and those that are repeatable as self-contained.
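The HttpEntity interface exposes these traits directly, so application code can branch on them; a minimal sketch (the handling shown is illustrative):

HttpEntity entity; // Obtained from a request or response
if (entity.isRepeatable()) {
    // self-contained: the content can safely be read more than once,
    // for example when a request needs to be retried
} else if (entity.isStreaming()) {
    // streamed: the content must be consumed exactly once and the
    // underlying stream released afterwards
}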
An entity can be repeatable, meaning its content can be read more than once. This is only possible with self-contained entities (like ByteArrayEntity or StringEntity).
Since an entity can represent both binary and character content, it has support for character encodings (to support the latter, i.e. character content).
The entity is created when executing a request with enclosed content or when the request was successful and the response body is used to send the result back to the client.
To read the content from the entity, one can either retrieve the input stream via the HttpEntity#getContent() method, which returns a java.io.InputStream, or one can supply an output stream to the HttpEntity#writeTo(OutputStream) method, which will return once all content has been written to the given stream.
The EntityUtils class exposes several static methods to more easily read the content or information from an entity. Instead of reading the java.io.InputStream directly, one can retrieve the whole content body as a string or byte array by using the methods from this class.
When the entity has been received with an incoming message, the HttpEntity#getContentType() and HttpEntity#getContentLength() methods can be used to read common metadata such as the Content-Type and Content-Length headers (if they are available), and HttpEntity#getContentEncoding() can be used to read the Content-Encoding header. A character encoding for text mime-types like text/plain or text/html, if specified, is carried as the charset parameter of the Content-Type header value. If the headers are not available, a length of -1 will be returned, and null for the content type. If the Content-Type header is available, a Header object will be returned.
When creating an entity for an outgoing message, this metadata has to be supplied by the creator of the entity.
StringEntity myEntity = new StringEntity("important message", "UTF-8");

System.out.println(myEntity.getContentType());
System.out.println(myEntity.getContentLength());
System.out.println(EntityUtils.getContentCharSet(myEntity));
System.out.println(EntityUtils.toString(myEntity));
System.out.println(EntityUtils.toByteArray(myEntity).length);
stdout >
Content-Type: text/plain; charset=UTF-8
17
UTF-8
important message
17
In order to ensure proper release of system resources one must close the content stream associated with the entity.
HttpResponse response;
HttpEntity entity = response.getEntity();
if (entity != null) {
    InputStream instream = entity.getContent();
    try {
        // do something useful
    } finally {
        instream.close();
    }
}
Please note that the HttpEntity#writeTo(OutputStream) method is also required to ensure proper release of system resources once the entity has been fully written out. If this method obtains an instance of java.io.InputStream by calling HttpEntity#getContent(), it is also expected to close the stream in a finally clause.
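For instance (a minimal sketch; the file destination is purely illustrative), writing an entity out to a supplied stream and releasing the stream afterwards might look like this:

HttpResponse response; // Initialize response
HttpEntity entity = response.getEntity();
if (entity != null) {
    OutputStream outstream = new FileOutputStream("result.dat"); // hypothetical destination
    try {
        entity.writeTo(outstream);
    } finally {
        outstream.close();
    }
}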
When working with streaming entities, one can use the EntityUtils#consume(HttpEntity) method to ensure that the entity content has been fully consumed and the underlying stream has been closed.
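A minimal sketch of this pattern:

HttpResponse response; // Initialize response
HttpEntity entity = response.getEntity();
try {
    // do something useful with the entity
} finally {
    // ensures the content is fully consumed and the underlying stream
    // is closed, even if processing fails
    EntityUtils.consume(entity);
}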
There are a few ways to create entities. The following implementations are provided by HttpCore:
BasicHttpEntity is, exactly as the name implies, a basic entity that represents an underlying stream. It is generally used for entities received from HTTP messages.
This entity has an empty constructor. After construction it represents no content, and has a negative content length.
One needs to set the content stream, and optionally the length. This can be done with the BasicHttpEntity#setContent(InputStream) and BasicHttpEntity#setContentLength(long) methods respectively.
BasicHttpEntity myEntity = new BasicHttpEntity();
myEntity.setContent(someInputStream);
myEntity.setContentLength(340); // sets the length to 340
ByteArrayEntity is a self-contained, repeatable entity that obtains its content from a given byte array. This byte array is supplied to the constructor.
String myData = "Hello world on the other side!!";
ByteArrayEntity myEntity = new ByteArrayEntity(myData.getBytes());
StringEntity is a self-contained, repeatable entity that obtains its content from a java.lang.String object. It has two constructors: one simply constructs with a given java.lang.String object; the other also takes a character encoding for the data in the string.
StringBuffer sb = new StringBuffer();
Map<String, String> env = System.getenv();
for (Entry<String, String> envEntry : env.entrySet()) {
    sb.append(envEntry.getKey()).append(": ")
            .append(envEntry.getValue()).append("\n");
}

// construct without a character encoding
HttpEntity myEntity1 = new StringEntity(sb.toString());

// alternatively construct with an encoding
HttpEntity myEntity2 = new StringEntity(sb.toString(), "UTF-8");
InputStreamEntity is a streamed, non-repeatable entity that obtains its content from an input stream. It is constructed by supplying the input stream and the content length. The content length is used to limit the amount of data read from the java.io.InputStream. If the length matches the content length available on the input stream, then all data will be sent. Alternatively, a negative content length will read all data from the input stream, which is the same as supplying the exact content length, so the length argument is most often used to limit the amount of data read.
InputStream instream = getSomeInputStream();
InputStreamEntity myEntity = new InputStreamEntity(instream, 16);
FileEntity is a self-contained, repeatable entity that obtains its content from a file. Since this is mostly used to stream large files of different types, one needs to supply the content type of the file: for instance, sending a zip file would require the content type application/zip, and for XML, application/xml.
HttpEntity entity = new FileEntity(staticFile, "application/java-archive");
EntityTemplate is an entity that receives its content from a ContentProducer. Content producers are objects which produce their content on demand, by writing it out to an output stream. They are expected to be able to produce their content every time they are requested to do so. So, when creating an EntityTemplate, one is expected to supply a reference to a content producer, which effectively creates a repeatable entity.
There are no standard content producers in HttpCore. It is basically just a convenience interface to allow wrapping up complex logic into an entity. To use this entity one needs to create a class that implements ContentProducer and override the ContentProducer#writeTo(OutputStream) method. Then an instance of the custom ContentProducer will be used to write the full content body to the output stream. For instance, an HTTP server would serve static files with the FileEntity, but running CGI programs could be done with a ContentProducer, inside which one could implement custom logic to supply the content as it becomes available. This way one does not need to buffer it in a string and then use a StringEntity or ByteArrayEntity.
ContentProducer myContentProducer = new ContentProducer() {
    public void writeTo(OutputStream out) throws IOException {
        out.write("ContentProducer rocks! ".getBytes());
        out.write(("Time requested: " + new Date()).getBytes());
    }
};
HttpEntity myEntity = new EntityTemplate(myContentProducer);
myEntity.writeTo(System.out);
stdout >
ContentProducer rocks! Time requested: Fri Sep 05 12:20:22 CEST 2008
HttpEntityWrapper is the base class for creating wrapped entities. The wrapping entity holds a reference to a wrapped entity and delegates all calls to it. Implementations of wrapping entities can derive from this class and need to override only those methods that should not be delegated to the wrapped entity.
BufferedHttpEntity is a subclass of HttpEntityWrapper. It is constructed by supplying another entity. It reads the content from the supplied entity and buffers it in memory.
This makes it possible to make a repeatable entity from a non-repeatable entity. If the supplied entity is already repeatable, calls are simply passed through to the underlying entity.
myNonRepeatableEntity.setContent(someInputStream);
BufferedHttpEntity myBufferedEntity = new BufferedHttpEntity(
        myNonRepeatableEntity);
HTTP connections are responsible for HTTP message serialization and deserialization. One should rarely need to use HTTP connection objects directly. There are higher level protocol components intended for execution and processing of HTTP requests. However, in some cases direct interaction with HTTP connections may be necessary, for instance, to access properties such as the connection status, the socket timeout or the local and remote addresses.
It is important to bear in mind that HTTP connections are not thread-safe. It is strongly recommended to limit all interactions with HTTP connection objects to one thread. The only method of the HttpConnection interface and its sub-interfaces which is safe to invoke from another thread is HttpConnection#shutdown().
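For example, a watchdog thread may forcibly shut down a connection that another thread is blocked on (a hedged sketch; the timeout value is illustrative):

final DefaultHttpClientConnection conn = new DefaultHttpClientConnection();
// bind the connection to a socket as shown below, then:
Thread watchdog = new Thread(new Runnable() {
    public void run() {
        try {
            Thread.sleep(5000); // hypothetical hard timeout
            // shutdown() is the only method safe to call from another thread
            conn.shutdown();
        } catch (InterruptedException ex) {
        } catch (IOException ex) {
        }
    }
});
watchdog.start();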
HttpCore does not provide full support for opening connections, because the process of establishing a new connection - especially on the client side - can be very complex when it involves one or more authenticating and/or tunneling proxies. Instead, blocking HTTP connections can be bound to any arbitrary network socket.
Socket socket = new Socket(); // Initialize socket

BasicHttpParams params = new BasicHttpParams();
DefaultHttpClientConnection conn = new DefaultHttpClientConnection();
conn.bind(socket, params);
conn.isOpen();
HttpConnectionMetrics metrics = conn.getMetrics();
metrics.getRequestCount();
metrics.getResponseCount();
metrics.getReceivedBytesCount();
metrics.getSentBytesCount();
HTTP connection interfaces, both client and server, send and receive messages in two stages. The message head is transmitted first. Depending on properties of the message head it may be followed by a message body. Please note it is very important to always close the underlying content stream in order to signal that the processing of the message is complete. HTTP entities that stream out their content directly from the input stream of the underlying connection must ensure the content of the message body is fully consumed for that connection to be potentially re-usable.
An over-simplified process of client side request execution may look like this:
Socket socket = new Socket(); // Initialize socket

HttpParams params = new BasicHttpParams();
DefaultHttpClientConnection conn = new DefaultHttpClientConnection();
conn.bind(socket, params);
HttpRequest request = new BasicHttpRequest("GET", "/");
conn.sendRequestHeader(request);
HttpResponse response = conn.receiveResponseHeader();
conn.receiveResponseEntity(response);
HttpEntity entity = response.getEntity();
if (entity != null) {
    // Do something useful with the entity and, when done, ensure all
    // content has been consumed, so that the underlying connection
    // could be re-used
    EntityUtils.consume(entity);
}
An over-simplified process of server side request handling may look like this:
Socket socket = new Socket(); // Initialize socket

HttpParams params = new BasicHttpParams();
DefaultHttpServerConnection conn = new DefaultHttpServerConnection();
conn.bind(socket, params);
HttpRequest request = conn.receiveRequestHeader();
if (request instanceof HttpEntityEnclosingRequest) {
    conn.receiveRequestEntity((HttpEntityEnclosingRequest) request);
    HttpEntity entity = ((HttpEntityEnclosingRequest) request)
            .getEntity();
    if (entity != null) {
        // Do something useful with the entity and, when done, ensure all
        // content has been consumed, so that the underlying connection
        // could be re-used
        EntityUtils.consume(entity);
    }
}
HttpResponse response = new BasicHttpResponse(HttpVersion.HTTP_1_1,
        200, "OK");
response.setEntity(new StringEntity("Got it"));
conn.sendResponseHeader(response);
conn.sendResponseEntity(response);
Please note that one should rarely need to transmit messages using these low level methods and should use appropriate higher level HTTP service implementations instead.
HTTP connections manage the process of the content transfer using the HttpEntity interface. HTTP connections generate an entity object that encapsulates the content stream of the incoming message. Please note that HttpServerConnection#receiveRequestEntity() and HttpClientConnection#receiveResponseEntity() do not retrieve or buffer any incoming data. They merely inject an appropriate content codec based on the properties of the incoming message. The content can be retrieved by reading from the content input stream of the enclosed entity using HttpEntity#getContent().
The incoming data will be decoded automatically, completely transparently to the data consumer. Likewise, HTTP connections rely on the HttpEntity#writeTo(OutputStream) method to generate the content of an outgoing message. If an outgoing message encloses an entity, the content will be encoded automatically based on the properties of the message.
Default implementations of HTTP connections support three content transfer mechanisms defined by the HTTP/1.1 specification:
Content-Length delimited: The end of the content entity is determined by the value of the Content-Length header. Maximum entity length: Long#MAX_VALUE.
Identity coding: The end of the content entity is demarcated by closing the underlying connection (end of stream condition). For obvious reasons the identity encoding can only be used on the server side. Max entity length: unlimited.
Chunk coding: The content is sent in small chunks. Max entity length: unlimited.
The appropriate content stream class will be created automatically depending on properties of the entity enclosed with the message.
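As a hedged sketch, an application can influence which mechanism is used for an outgoing entity through the entity's own properties (the content source is illustrative):

BasicHttpEntity entity = new BasicHttpEntity();
entity.setContent(someInputStream); // hypothetical content source

// A known, non-negative length normally results in Content-Length
// delimited transfer
entity.setContentLength(1024);

// Alternatively, marking the entity as chunked requests the chunk
// coding (HTTP/1.1 only)
// entity.setChunked(true);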
HTTP connections can be terminated either gracefully by calling HttpConnection#close() or forcibly by calling HttpConnection#shutdown(). The former tries to flush all buffered data prior to terminating the connection and may block indefinitely. The HttpConnection#close() method is not thread-safe. The latter terminates the connection without flushing internal buffers and returns control to the caller as soon as possible without blocking for long. The HttpConnection#shutdown() method is thread-safe.
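A common usage pattern (a minimal sketch) is to attempt a graceful close and fall back to a forcible shutdown if that fails:

HttpClientConnection conn; // Initialize connection
try {
    conn.close(); // graceful: flushes buffered data, may block
} catch (IOException ex) {
    try {
        conn.shutdown(); // forcible: returns without blocking for long
    } catch (IOException ignore) {
    }
}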
All HttpCore components potentially throw two types of exceptions: IOException in case of an I/O failure such as a socket timeout or a socket reset, and HttpException that signals an HTTP failure such as a violation of the HTTP protocol. Usually I/O errors are considered non-fatal and recoverable, whereas HTTP protocol errors are considered fatal and cannot be automatically recovered from.
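A minimal sketch of how a caller might treat the two failure types differently (the handling shown is illustrative, not prescribed by HttpCore):

HttpClientConnection conn; // Initialize connection
HttpRequest request = new BasicHttpRequest("GET", "/");
try {
    conn.sendRequestHeader(request);
    HttpResponse response = conn.receiveResponseHeader();
} catch (IOException ex) {
    // I/O failure (e.g. socket timeout or reset): usually non-fatal and
    // potentially recoverable on a fresh connection
} catch (HttpException ex) {
    // HTTP protocol violation: considered fatal and not automatically
    // recoverable
}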
An HTTP protocol interceptor is a routine that implements a specific aspect of the HTTP protocol. Usually protocol interceptors are expected to act upon one specific header or a group of related headers of the incoming message, or to populate the outgoing message with one specific header or a group of related headers. Protocol interceptors can also manipulate content entities enclosed with messages; transparent content compression / decompression is a good example. Usually this is accomplished by using the 'Decorator' pattern, where a wrapper entity class is used to decorate the original entity. Several protocol interceptors can be combined to form one logical unit.
An HTTP protocol processor is a collection of protocol interceptors that implements the 'Chain of Responsibility' pattern, where each individual protocol interceptor is expected to work on the particular aspect of the HTTP protocol it is responsible for.
Usually the order in which interceptors are executed should not matter as long as they do not depend on a particular state of the execution context. If protocol interceptors have interdependencies and therefore must be executed in a particular order, they should be added to the protocol processor in the same sequence as their expected execution order.
Protocol interceptors must be implemented as thread-safe. Similarly to servlets, protocol interceptors should not use instance variables unless access to those variables is synchronized.
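For instance, an interceptor that keeps no state of its own is trivially thread-safe (a minimal sketch; the header name is purely illustrative):

HttpRequestInterceptor interceptor = new HttpRequestInterceptor() {
    public void process(
            HttpRequest request,
            HttpContext context) throws HttpException, IOException {
        // No instance variables are used; all state lives in the
        // per-message request and context objects
        request.setHeader("X-Example", "value"); // hypothetical header
    }
};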
HttpCore comes with a number of the most essential protocol interceptors for client and server HTTP processing.
RequestContent is the most important interceptor for outgoing requests. It is responsible for delimiting the content length by adding the Content-Length or Transfer-Encoding headers based on the properties of the enclosed entity and the protocol version. This interceptor is required for the correct functioning of client side protocol processors.
ResponseContent is the most important interceptor for outgoing responses. It is responsible for delimiting the content length by adding the Content-Length or Transfer-Encoding headers based on the properties of the enclosed entity and the protocol version. This interceptor is required for the correct functioning of server side protocol processors.
RequestConnControl is responsible for adding the Connection header to the outgoing requests, which is essential for managing persistence of HTTP/1.0 connections. This interceptor is recommended for client side protocol processors.
ResponseConnControl is responsible for adding the Connection header to the outgoing responses, which is essential for managing persistence of HTTP/1.0 connections. This interceptor is recommended for server side protocol processors.
RequestDate is responsible for adding the Date header to the outgoing requests. This interceptor is optional for client side protocol processors.
ResponseDate is responsible for adding the Date header to the outgoing responses. This interceptor is recommended for server side protocol processors.
RequestExpectContinue is responsible for enabling the 'expect-continue' handshake by adding the Expect header. This interceptor is recommended for client side protocol processors.
RequestTargetHost is responsible for adding the Host header. This interceptor is required for client side protocol processors.
RequestUserAgent is responsible for adding the User-Agent header. This interceptor is recommended for client side protocol processors.
Usually HTTP protocol processors are used to pre-process incoming messages prior to executing application specific processing logic and to post-process outgoing messages.
BasicHttpProcessor httpproc = new BasicHttpProcessor();
// Required protocol interceptors
httpproc.addInterceptor(new RequestContent());
httpproc.addInterceptor(new RequestTargetHost());
// Recommended protocol interceptors
httpproc.addInterceptor(new RequestConnControl());
httpproc.addInterceptor(new RequestUserAgent());
httpproc.addInterceptor(new RequestExpectContinue());

HttpContext context = new BasicHttpContext();
HttpRequest request = new BasicHttpRequest("GET", "/");
httpproc.process(request, context);

HttpResponse response = null;
Send the request to the target host and get a response.
httpproc.process(response, context);
Please note the BasicHttpProcessor class does not synchronize access to its internal structures and therefore may not be thread-safe.
Protocol interceptors can collaborate by sharing information - such as a processing state - through an HTTP execution context. HTTP context is a structure that can be used to map an attribute name to an attribute value. Internally HTTP context implementations are usually backed by a HashMap. The primary purpose of the HTTP context is to facilitate information sharing among various logically related components. HTTP context can be used to store a processing state for one message or several consecutive messages. Multiple logically related messages can participate in a logical session if the same context is reused between consecutive messages.
BasicHttpProcessor httpproc = new BasicHttpProcessor();
httpproc.addInterceptor(new HttpRequestInterceptor() {
    public void process(
            HttpRequest request,
            HttpContext context) throws HttpException, IOException {
        String id = (String) context.getAttribute("session-id");
        if (id != null) {
            request.addHeader("Session-ID", id);
        }
    }
});

HttpRequest request = new BasicHttpRequest("GET", "/");
httpproc.process(request, context);
HttpContext instances can be linked together to form a hierarchy. In the simplest form one context can use the content of another context to obtain default values of attributes not present in the local context.
HttpContext parentContext = new BasicHttpContext();
parentContext.setAttribute("param1", Integer.valueOf(1));
parentContext.setAttribute("param2", Integer.valueOf(2));

HttpContext localContext = new BasicHttpContext();
localContext.setAttribute("param2", Integer.valueOf(0));
localContext.setAttribute("param3", Integer.valueOf(3));

HttpContext stack = new DefaultedHttpContext(localContext, parentContext);

System.out.println(stack.getAttribute("param1"));
System.out.println(stack.getAttribute("param2"));
System.out.println(stack.getAttribute("param3"));
System.out.println(stack.getAttribute("param4"));
stdout >
1
0
3
null
The HttpParams interface represents a collection of immutable values that define the runtime behavior of a component. In many ways HttpParams is similar to HttpContext. The main distinction between the two lies in their use at runtime. Both interfaces represent a collection of objects that are organized as a map of textual names to object values, but they serve distinct purposes:
HttpParams is intended to contain simple objects: integers, doubles, strings, collections, and objects that remain immutable at runtime. HttpParams is expected to be used in the 'write once - read many' mode. HttpContext is intended to contain complex objects that are very likely to mutate in the course of HTTP message processing.
The purpose of HttpParams is to define the behavior of other components. Usually each complex component has its own HttpParams object. The purpose of HttpContext is to represent an execution state of an HTTP process. Usually the same execution context is shared among many collaborating objects.
HttpParams, like HttpContext, can be linked together to form a hierarchy. In the simplest form one set of parameters can use the content of another one to obtain default values of parameters not present in the local set.
HttpParams parentParams = new BasicHttpParams();
parentParams.setParameter(CoreProtocolPNames.PROTOCOL_VERSION,
        HttpVersion.HTTP_1_0);
parentParams.setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET,
        "UTF-8");

HttpParams localParams = new BasicHttpParams();
localParams.setParameter(CoreProtocolPNames.PROTOCOL_VERSION,
        HttpVersion.HTTP_1_1);
localParams.setParameter(CoreProtocolPNames.USE_EXPECT_CONTINUE,
        Boolean.FALSE);

HttpParams stack = new DefaultedHttpParams(localParams, parentParams);

System.out.println(stack.getParameter(
        CoreProtocolPNames.PROTOCOL_VERSION));
System.out.println(stack.getParameter(
        CoreProtocolPNames.HTTP_CONTENT_CHARSET));
System.out.println(stack.getParameter(
        CoreProtocolPNames.USE_EXPECT_CONTINUE));
System.out.println(stack.getParameter(
        CoreProtocolPNames.USER_AGENT));
stdout >
HTTP/1.1
UTF-8
false
null
Please note the BasicHttpParams class does not synchronize access to its internal structures and therefore may not be thread-safe.
The HttpParams interface allows for a great deal of flexibility in handling the configuration of components. Most importantly, new parameters can be introduced without affecting binary compatibility with older versions. However, HttpParams also has a certain disadvantage compared to regular Java beans: HttpParams cannot be assembled using a DI framework. To mitigate this limitation, HttpCore includes a number of bean classes that can be used to initialize HttpParams objects using standard Java bean conventions.
HttpParams params = new BasicHttpParams();
HttpProtocolParamBean paramsBean = new HttpProtocolParamBean(params);
paramsBean.setVersion(HttpVersion.HTTP_1_1);
paramsBean.setContentCharset("UTF-8");
paramsBean.setUseExpectContinue(true);

System.out.println(params.getParameter(
        CoreProtocolPNames.PROTOCOL_VERSION));
System.out.println(params.getParameter(
        CoreProtocolPNames.HTTP_CONTENT_CHARSET));
System.out.println(params.getParameter(
        CoreProtocolPNames.USE_EXPECT_CONTINUE));
System.out.println(params.getParameter(
        CoreProtocolPNames.USER_AGENT));
stdout >
HTTP/1.1
UTF-8
true
null
HttpService is a server side HTTP protocol handler based on the blocking I/O model that implements the essential requirements of the HTTP protocol for server side message processing, as described by RFC 2616.
HttpService relies on an HttpProcessor instance to generate mandatory protocol headers for all outgoing messages and apply common, cross-cutting message transformations to all incoming and outgoing messages, whereas HTTP request handlers are expected to take care of application specific content generation and processing.
HttpParams params; // Initialize HTTP parameters
HttpProcessor httpproc; // Initialize HTTP processor

HttpService httpService = new HttpService(
        httpproc,
        new DefaultConnectionReuseStrategy(),
        new DefaultHttpResponseFactory());
httpService.setParams(params);
The HttpRequestHandler interface represents a routine for processing a specific group of HTTP requests. HttpService is designed to take care of protocol specific aspects, whereas individual request handlers are expected to take care of application specific HTTP processing. The main purpose of a request handler is to generate a response object with a content entity to be sent back to the client in response to the given request.
HttpRequestHandler myRequestHandler = new HttpRequestHandler() {
    public void handle(
            HttpRequest request,
            HttpResponse response,
            HttpContext context) throws HttpException, IOException {
        response.setStatusCode(HttpStatus.SC_OK);
        response.addHeader("Content-Type", "text/plain");
        response.setEntity(
                new StringEntity("some important message"));
    }
};
HTTP request handlers are usually managed by an HttpRequestHandlerResolver that matches a request URI to a request handler. HttpCore includes a very simple implementation of the request handler resolver based on a trivial pattern matching algorithm: HttpRequestHandlerRegistry supports only three formats: *, <uri>* and *<uri>.
HttpService httpService; // Initialize HTTP service

HttpRequestHandlerRegistry handlerResolver =
        new HttpRequestHandlerRegistry();
handlerResolver.register("/service/*", myRequestHandler1);
handlerResolver.register("*.do", myRequestHandler2);
handlerResolver.register("*", myRequestHandler3);

// Inject handler resolver
httpService.setHandlerResolver(handlerResolver);
Users are encouraged to provide more sophisticated implementations of HttpRequestHandlerResolver - for instance, based on regular expressions.
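A hedged sketch of what a regular-expression based resolver could look like (the pattern and handler are illustrative; HttpCore does not ship such an implementation):

final Pattern servicePattern = Pattern.compile("^/service/.+\\.do$"); // hypothetical pattern
final HttpRequestHandler serviceHandler = myRequestHandler1; // hypothetical handler

HttpRequestHandlerResolver regexResolver = new HttpRequestHandlerResolver() {
    public HttpRequestHandler lookup(String requestURI) {
        return servicePattern.matcher(requestURI).matches()
                ? serviceHandler : null;
    }
};
httpService.setHandlerResolver(regexResolver);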
When fully initialized and configured, the HttpService can be used to execute and handle requests for active HTTP connections. The HttpService#handleRequest() method reads an incoming request, generates a response and sends it back to the client. This method can be executed in a loop to handle multiple requests on a persistent connection. The HttpService#handleRequest() method is safe to execute from multiple threads. This allows processing of requests on several connections simultaneously, as long as all the protocol interceptors and request handlers used by the HttpService are thread-safe.
HttpService httpService; // Initialize HTTP service
HttpServerConnection conn; // Initialize connection
HttpContext context; // Initialize HTTP context

boolean active = true;
try {
    while (active && conn.isOpen()) {
        httpService.handleRequest(conn, context);
    }
} finally {
    conn.shutdown();
}
HttpRequestExecutor is a client side HTTP protocol handler based on the blocking I/O model that implements the essential requirements of the HTTP protocol for client side message processing, as described by RFC 2616. HttpRequestExecutor relies on an HttpProcessor instance to generate mandatory protocol headers for all outgoing messages and apply common, cross-cutting message transformations to all incoming and outgoing messages. Application specific processing can be implemented outside HttpRequestExecutor once the request has been executed and a response has been received.
HttpClientConnection conn; // Create connection
HttpParams params; // Initialize HTTP parameters
HttpProcessor httpproc; // Initialize HTTP processor
HttpContext context; // Initialize HTTP context

HttpRequestExecutor httpexecutor = new HttpRequestExecutor();
BasicHttpRequest request = new BasicHttpRequest("GET", "/");
request.setParams(params);
httpexecutor.preProcess(request, httpproc, context);
HttpResponse response = httpexecutor.execute(request, conn, context);
response.setParams(params);
httpexecutor.postProcess(response, httpproc, context);

HttpEntity entity = response.getEntity();
EntityUtils.consume(entity);
Methods of HttpRequestExecutor are safe to execute from multiple threads. This allows execution of requests on several connections simultaneously, as long as all the protocol interceptors used by the HttpRequestExecutor are thread-safe.
The ConnectionReuseStrategy interface is intended to determine whether the underlying connection can be re-used for processing of further messages after the transmission of the current message has been completed. The default connection re-use strategy attempts to keep connections alive whenever possible. Firstly, it examines the version of the HTTP protocol used to transmit the message. HTTP/1.1 connections are persistent by default, while HTTP/1.0 connections are not. Secondly, it examines the value of the Connection header. The peer can indicate whether it intends to re-use the connection on the opposite side by sending Keep-Alive or Close values in the Connection header. Thirdly, the strategy makes the decision whether the connection is safe to re-use based on the properties of the enclosed entity, if available.
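A minimal sketch of consulting the strategy directly after a response has been received (variable initialization omitted, in keeping with the snippets above):

HttpClientConnection conn; // Initialize connection
HttpContext context; // Initialize HTTP context
HttpResponse response; // Received response

ConnectionReuseStrategy reuseStrategy = new DefaultConnectionReuseStrategy();
if (!reuseStrategy.keepAlive(response, context)) {
    // the strategy decided the connection cannot be re-used
    conn.close();
}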