Web server - UTCC
Web server
Definition
• A computer that is responsible for accepting
HTTP requests from clients, which are known as
Web browsers, and serving them Web pages,
which are usually HTML documents and linked
objects (images, etc.).
• A computer program that provides the
functionality described in the first sense of the
term.
Web server programs
Basic common features
• HTTP responses to HTTP requests: every Web server
program operates by accepting HTTP requests from the
network, and providing an HTTP response to the
requester.
– The HTTP response typically consists of an HTML document,
but can also be a raw text file, an image, or some other type of
document; if something bad is found in the client request or while
trying to serve it, the Web server has to send an error
response, which may include a custom HTML or text
message to better explain the problem to end users. (A minimal
request/response sketch follows after this list.)
• Logging: Web servers usually also have the capability of
logging detailed information about client requests
and server responses to log files; this allows the
Webmaster to collect statistics by running log analyzers
on the log files. (A log-line sketch also follows below.)
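A minimal request/response sketch, using Python only for illustration: the port (8080), the document root ("./htdocs") and the error page are assumptions of the sketch, not features of any particular server.

```python
# Minimal sketch: accept HTTP requests on a TCP socket and answer each one
# with either the requested file or a custom HTML error page.
import socket

DOC_ROOT = "./htdocs"   # illustrative document root

def handle(conn):
    request = conn.recv(4096).decode("iso-8859-1")
    # The first request line looks like: "GET /index.html HTTP/1.1"
    path = request.split(" ")[1] if " " in request else "/"
    try:
        with open(DOC_ROOT + path, "rb") as f:
            body = f.read()
        head = ("HTTP/1.1 200 OK\r\n"
                "Content-Type: text/html\r\n"
                f"Content-Length: {len(body)}\r\n\r\n")
    except OSError:
        # Error response with a custom HTML message for the end user
        body = b"<html><body><h1>404 Not Found</h1></body></html>"
        head = ("HTTP/1.1 404 Not Found\r\n"
                "Content-Type: text/html\r\n"
                f"Content-Length: {len(body)}\r\n\r\n")
    conn.sendall(head.encode("iso-8859-1") + body)
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 8080))
srv.listen(5)
while True:
    conn, addr = srv.accept()
    handle(conn)
```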
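A log-line sketch in the same spirit: each served request is appended to an access log as one line in a common-log-format style; the field values shown are illustrative.

```python
# Sketch: append one log entry per request to access.log, so a log analyzer
# can later compute statistics. Field values are illustrative.
from datetime import datetime, timezone

def log_request(client_ip, request_line, status, size, logfile="access.log"):
    # e.g. 127.0.0.1 - - [01/Dec/2005:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326
    stamp = datetime.now(timezone.utc).strftime("%d/%b/%Y:%H:%M:%S %z")
    entry = f'{client_ip} - - [{stamp}] "{request_line}" {status} {size}\n'
    with open(logfile, "a") as f:
        f.write(entry)

log_request("127.0.0.1", "GET /index.html HTTP/1.1", 200, 2326)
```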
Web servers implement the following features:
• Configurability of available features by configuration
files or even by an external user interface.
• Authentication, optional authorization request
(request of user name and password) before allowing
access to some or all kinds of resources.
• Handling of not only static content (file content recorded
in the server's filesystem(s)) but of dynamic content too, by
supporting one or more related interfaces (SSI, CGI,
SCGI, FastCGI, PHP, ASP, ASP.NET, Server APIs such
as NSAPI, ISAPI, etc.). (A minimal CGI sketch follows after this list.)
• Module support, in order to allow the extension
of server capabilities by adding or modifying
software modules which are linked to the server
software or that are dynamically loaded (on
demand) by the core server.
• HTTPS support (by SSL or TLS) in order to allow
secure (encrypted) connections to the server on
the standard port 443 instead of the usual port 80.
• Content compression (e.g. by gzip encoding) to
reduce the size of the responses (to lower
bandwidth usage, etc.). (See the compression sketch after this list.)
• Virtual hosting, to serve many Web sites
using one IP address.
• Large file support, to be able to serve
files whose size is greater than 2 GB on a
32-bit OS.
• Bandwidth throttling, to limit the speed of
responses so as not to saturate the
network and to be able to serve more
clients.
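As an example of the dynamic-content interfaces mentioned above, here is a minimal CGI script sketch (Python is used only for illustration). The server runs the program for each matching request and sends its standard output back to the client; the greeting is illustrative, while QUERY_STRING and REMOTE_ADDR are standard CGI environment variables.

```python
#!/usr/bin/env python3
# Minimal CGI script sketch: the web server executes this program and
# returns whatever it prints to standard output to the client.
import os

query = os.environ.get("QUERY_STRING", "")    # part of the CGI convention
client = os.environ.get("REMOTE_ADDR", "unknown")

# A CGI response starts with headers, then a blank line, then the body.
print("Content-Type: text/html")
print()
print("<html><body>")
print(f"<p>Hello from a CGI script. Query: {query}, client: {client}</p>")
print("</body></html>")
```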
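For content compression, a server can gzip-encode the response body when the client's Accept-Encoding header allows it. A minimal sketch follows; the header negotiation is simplified (real servers also handle quality values).

```python
# Sketch: gzip-compress a response body if the client advertises gzip support.
import gzip

def maybe_compress(body: bytes, accept_encoding: str):
    if "gzip" in accept_encoding.lower():
        compressed = gzip.compress(body)
        # Content-Encoding tells the browser to decompress before rendering.
        return compressed, {"Content-Encoding": "gzip",
                            "Content-Length": str(len(compressed))}
    return body, {"Content-Length": str(len(body))}

body, headers = maybe_compress(b"<html>...</html>" * 100, "gzip, deflate")
```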
Origin of returned content
The origin of the content sent by the server is called:
• static if it comes from an existing file lying on a
filesystem;
• dynamic if it is dynamically generated by some other
program or script or API called by the Web server.
• Serving static content is usually much faster (from 2 to
100 times) than serving dynamic content, especially if
the latter involves data pulled from a database.
• A Server Application Programming Interface (Server API) is the API
used by programs such as PHP to interface with Web servers.
Path translation
• Web servers usually translate the path
component of a Uniform Resource Locator
(URL) into a local file system resource.
– The URL path specified by the client is relative to
the Web server's root directory.
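A sketch of this path translation, assuming a document root of /var/www/html (Python for illustration); the normalisation step also rejects ".." components that would let a request escape the root directory.

```python
# Sketch: translate the path component of a URL into a local filesystem path.
import os

DOC_ROOT = "/var/www/html"   # assumed document root

def translate_path(url_path: str) -> str:
    # Strip any query string, then resolve the path relative to the root.
    url_path = url_path.split("?", 1)[0]
    candidate = os.path.normpath(os.path.join(DOC_ROOT, url_path.lstrip("/")))
    if not candidate.startswith(DOC_ROOT):
        # e.g. "/../../etc/passwd" would normalise to a path outside the root
        raise PermissionError("path escapes the document root")
    return candidate

print(translate_path("/index.html"))        # /var/www/html/index.html
print(translate_path("/images/logo.png"))   # /var/www/html/images/logo.png
```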
Concurrency
Server Models
A Web server program, like any other server, can be
implemented using one of these server
models:
1. single process, finite state machine and non blocking
or even asynchronous I/O;
2. multi process, finite state machine and non blocking or
even asynchronous I/O;
3. single process, forking a new process for each
request;
4. multi process, with adaptive pre-forking of processes;
5. single process, multithreaded;
6. multi process, multithreaded.
Finite state machine servers
• To minimize context switches and to
maximize scalability, many small Web
servers are implemented as a single process (or
at most as one process per CPU) and a finite state
machine.
• Every task is split into two or more small steps
that are executed as needed (typically on
demand); by keeping the internal state of each
connection and by using non-blocking I/O or
asynchronous I/O, it is possible to implement
ultra fast Web servers, at least for serving static
content.
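A minimal single-process, non-blocking sketch in this spirit, using an event loop and per-connection state (Python's selectors module; the port and the canned response are illustrative assumptions).

```python
# Sketch: a single-process, finite-state-machine style server. Each
# connection's state (the bytes received so far) is kept beside its socket,
# and work is done only when the OS reports the socket is ready.
import selectors, socket

sel = selectors.DefaultSelector()

def accept(srv):
    conn, _ = srv.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, data=b"")   # per-connection state

def serve(conn, received):
    chunk = conn.recv(4096)
    received += chunk
    if not chunk or b"\r\n\r\n" in received:   # peer closed, or full header seen
        if received:
            body = b"<html><body>Hello</body></html>"
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: "
                         + str(len(body)).encode() + b"\r\n\r\n" + body)
        sel.unregister(conn)
        conn.close()
    else:
        sel.modify(conn, selectors.EVENT_READ, data=received)  # keep state, wait

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 8080))
srv.listen(64)
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ, data=None)

while True:
    for key, _ in sel.select():
        if key.data is None:
            accept(key.fileobj)           # new connection on the listening socket
        else:
            serve(key.fileobj, key.data)  # progress an existing connection
```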
Thread-based servers
• Many Web servers are multithreaded.
• This means that inside each server's
process, there are two or more threads,
each one able to execute its own task
independently from the others.
• When a user visits a Web site, a Web server will use a
thread to serve the page to that user.
• If another user visits the site while the previous user is
still being served, the Web server can serve the second
visitor by using a different thread.
• Thus, the second user does not have to wait for the first
visitor to be served.
• This is very important because not all users have the
same Internet connection speed.
• A slow user should not delay all other visitors from
downloading a Web page.
• Threads are often used to serve dynamic
content.
• For better performance, threads used by
Web servers and other Internet services
are typically pooled and reused to
eliminate even the small overhead
associated with creating a thread.
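A minimal sketch of the pooled-thread approach (Python for illustration; the pool size, port and canned response are assumptions of the sketch).

```python
# Sketch: serve each accepted connection from a pool of reused worker
# threads, so a slow client does not delay the others and no thread is
# created per request.
import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn):
    conn.recv(4096)                              # read the request
    body = b"<html><body>Hello</body></html>"
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: "
                 + str(len(body)).encode() + b"\r\n\r\n" + body)
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 8080))
srv.listen(64)

with ThreadPoolExecutor(max_workers=16) as pool:   # pooled, reused threads
    while True:
        conn, _ = srv.accept()
        pool.submit(handle, conn)                  # a free thread serves it
```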
Process-based servers
• For reliability and security reasons, some Web
servers that use multiple processes (rather than
multiple threads within a single process), such as
Apache 1.3, remain in production use.
• A pool of processes is used and reused; once a
process has served a certain threshold of requests,
it is replaced by a new one.
• Because threads share a main process context,
a crashing thread may more easily crash the
whole application, and a buffer overflow can
have more disastrous consequences.
• Moreover, a memory leak in system libraries that are
out of the control of the application programmer cannot
be dealt with using threads, but can be dealt
with by using a pool of processes with a limited lifetime
(because the OS automatically frees all the memory
allocated by a process when the process
dies).
• Another problem relates to the wide variety of third-party
libraries that might be used by an application (a PHP
extension library, for instance) and that might not be
thread-safe.
• Using multiple processes also makes it possible to deal with
situations that can benefit from privilege separation
techniques, to achieve better security and to work around
some OS limits, which very often are per-process.
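A minimal pre-forking sketch for a POSIX system, with the limited worker lifetime described above (Python for illustration; the worker count and the per-worker request quota are assumptions of the sketch).

```python
# Sketch: the parent pre-forks a pool of worker processes that share the
# listening socket; each worker exits after serving a fixed number of
# requests, so any leaked memory is reclaimed when the process dies.
import os, socket

MAX_REQUESTS_PER_WORKER = 1000   # limited lifetime of each worker
NUM_WORKERS = 4

def worker(srv):
    for _ in range(MAX_REQUESTS_PER_WORKER):
        conn, _ = srv.accept()
        conn.recv(4096)
        body = b"<html><body>Hello</body></html>"
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: "
                     + str(len(body)).encode() + b"\r\n\r\n" + body)
        conn.close()
    os._exit(0)                  # the OS frees all memory the worker allocated

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 8080))
srv.listen(64)

for _ in range(NUM_WORKERS):     # pre-fork the initial pool
    if os.fork() == 0:
        worker(srv)              # children never return (os._exit above)

while True:                      # parent: replace each worker that retires
    os.waitpid(-1, 0)
    if os.fork() == 0:
        worker(srv)
```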
Mixed model servers
• To leverage the advantages of finite state
machines, threads and processes, many
Web servers implement a mixture of all
these programming techniques, trying to
use the best model for each task (e.g. for
serving static or dynamic content, etc.).
Load Limits
• A web server (program) has defined load limits, because
it can handle only a limited number of concurrent client
connections (usually between 2 and 60,000, by default
between 500 and 1,000) per IP address (and IP port)
and it can serve only a certain maximum number of
requests per second depending on:
– its own settings;
– the HTTP request type;
– content origin (static or dynamic);
– the fact that the served content is or is not cached;
– the hardware and software limits of the OS where it is working.
When a Web server is near or over its load limits, it becomes
overloaded and thus unresponsive.
Overload Causes
• too much legitimate Web traffic (e.g. thousands or even
millions of clients hitting the Web site in a short interval of time);
• DDoS (Distributed Denial of Service) attacks;
• Computer worms that sometimes cause abnormal traffic
because of millions of infected computers (not coordinated among
them);
• Internet Web robot traffic that is not filtered/limited on large Web
sites with very few resources (bandwidth, etc.);
• Internet (network) slowdowns, so that client requests are
served more slowly and the number of connections increases so
much that server limits are reached;
• partial unavailability of Web servers (computers); this can
happen because of required/urgent maintenance or upgrades, HW
or SW failures, back-end (e.g. DB) failures, etc.; in these cases the
remaining Web servers get too much traffic and of course they
become overloaded.
Overload Symptoms
• The symptoms of an overloaded Web
server are:
– requests are served with noticeably long
delays (from 1 second to a few hundred
seconds);
– HTTP 500 and 503 errors are returned to clients
(sometimes an unrelated 404 or even a
408 error may also be returned);
– TCP connections are refused or reset before
any content is sent to clients.
Anti Overload Techniques
• managing network traffic, by using:
– Firewalls to block unwanted traffic coming from bad IP
sources or having bad patterns;
– HTTP traffic managers to drop, redirect or rewrite requests
having bad HTTP patterns;
– Bandwidth management and Traffic shaping, in
order to smooth down peaks in network usage;
• deploying Web cache techniques;
• using different domain names to serve different (static
and dynamic) content by separate Web servers, e.g.:
– http://images.example.com
– http://www.example.com
• using many Web servers (programs) per computer, each
one bound to its own network card and IP address;
• using many Web servers (computers) that are grouped
together so that they act or are seen as one big Web
server, see also: Load balancer;
• adding more HW resources (e.g. RAM, disks) to each
computer;
• tuning OS parameters for HW capabilities and usage;
• using more efficient computer programs for Web servers,
etc.;
• using other workarounds, especially if dynamic content is
involved.
Software
The four most common Web or HTTP server programs
are:
1. Apache HTTP Server from the Apache Software
Foundation.
2. Internet Information Services (IIS) from Microsoft.
3. Sun Java System Web Server from Sun Microsystems,
formerly Sun ONE Web Server, iPlanet Web Server, and
Netscape Enterprise Server.
4. Zeus Web Server from Zeus Technology.
There are thousands of different Web server programs
available; many of them are specialized for particular uses
and can be tailored to satisfy specific needs.
Statistics
• The most popular Web servers, used for public
Web sites, are tracked by Netcraft Web Server
Survey, with details given by Netcraft Web
Server Reports.
• The Apache HTTP Server Project is an effort to
develop and maintain an open-source HTTP
server for modern operating systems including
UNIX and Windows NT. The goal of this project
is to provide a secure, efficient and extensible
server that provides HTTP services in sync with
the current HTTP standards.
• Apache has been the most popular Web server
on the Internet since April of 1996.
– The November 2005 Netcraft Web Server Survey
found that more than 70% of the Web sites on the
Internet are using Apache, thus making it more widely
used than all other Web servers combined.
– The Apache HTTP Server is a project of the Apache
Software Foundation
• Another site that provides statistics is SecuritySpace [1];
they also provide a detailed breakdown for
each version of Web server [2].
Web Server Survey
Across All Domains
Market Share Change (Total servers: 16,236,196), December 1st, 2005

Server (1)   November Count   November %   October Count   October %   Change
Apache       11,705,062       72.09%       11,508,481      71.95%      +0.14%
Microsoft     3,588,469       22.10%        3,561,256      22.27%      -0.17%
Zeus            123,100        0.76%          125,218       0.78%      -0.02%
Netscape         80,711        0.50%           82,783       0.52%      -0.02%
WebSTAR          65,289        0.40%           58,201       0.36%      +0.04%
WebSite          14,792        0.09%           14,888       0.09%      +0.00%
Other           658,773        4.06%          643,314       4.02%      +0.04%

(1) Servers are ordered according to their global market share.
Copyright © 1998-2006 E-Soft Inc. Excerpts of this report may be reproduced providing that E-Soft and the URL http://www.securityspace.com are attributed.
Three main categories of firewalls
• Network layer firewalls. An example would
be iptables.
• Application layer firewalls. An example
would be TCP Wrappers.
• Application firewalls. An example would be
restricting FTP services through the
/etc/ftpaccess file.
Network layer firewalls
• operate at a (relatively) low level of the TCP/IP protocol
stack as IP-packet filters, not allowing packets to pass
through the firewall unless they match the rules. The
firewall administrator may define the rules; or default
built-in rules may apply (as in some inflexible firewall
systems).
• A more permissive setup could allow any packet to pass
the filter as long as it does not match one or more
"negative-rules", or "deny rules". Today network firewalls
are built into most computer operating systems and
network appliances.
• Modern firewalls can filter traffic based on many packet
attributes like source IP address, source port, destination
IP address or port, destination service like WWW or FTP.
They can filter based on protocols, TTL values, netblock
of originator, domain name of the source, and many
other attributes.
Application-layer firewalls
• work on the application level of the TCP/IP stack (i.e., all
browser traffic, or all telnet or ftp traffic), and may
intercept all packets traveling to or from an application.
They block other packets (usually dropping them without
acknowledgement to the sender). In principle, application
firewalls can prevent all unwanted outside traffic from
reaching protected machines.
• By inspecting all packets for improper content, firewalls
can even prevent the spread of the likes of viruses. In
practice, however, this becomes so complex and so
difficult to attempt (given the variety of applications and
the diversity of content each may allow in its packet
traffic) that comprehensive firewall design does not
generally attempt this approach.
• The XML firewall exemplifies a more recent kind of
application-layer firewall.
A proxy device
• (running either on dedicated hardware or as software on
a general-purpose machine) may act as a firewall by
responding to input packets (connection requests, for
example) in the manner of an application, whilst blocking
other packets.
• Proxies make tampering with an internal system from the
external network more difficult and misuse of one
internal system would not necessarily cause a security
breach exploitable from outside the firewall (as long as
the application proxy remains intact and properly
configured). Conversely, intruders may hijack a publicly
reachable system and use it as a proxy for their own
purposes; the proxy then masquerades as that system to
other internal machines. While use of internal address
spaces enhances security, crackers may still employ
methods such as IP spoofing to attempt to pass packets
to a target network.
• http://en.wikipedia.org/wiki/Webserver