Thursday, December 27, 2012
Install git client on CentOS5
# Add the repository
rpm -Uvh http://repo.webtatic.com/yum/centos/5/latest.rpm
# Install the latest version of git
yum install -y --enablerepo=webtatic git
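A quick way to confirm what was installed afterwards (standard commands, nothing specific to the Webtatic build):
git --version
yum info git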
Wednesday, December 26, 2012
JS Study
========
Closure
========
<html>
<head>
<title>Closure</title>
</head>
<body>
<script>
var fade = function (node) {
    var level = 0; // level lives on in the closure
    // step is a closure: it keeps access to level and node
    var step = function () {
        var hex = level.toString(16);
        // node is also reached through the closure
        node.style.backgroundColor = '#' + hex + hex + hex + hex + hex + hex;
        if (level < 15) {
            level++;
            setTimeout(step, 1000);
        }
    };
    setTimeout(step, 100);
};

var quo = function (stat) {
    // the returned object's method still sees stat via the closure
    return {
        getStatus: function () {
            return stat;
        }
    };
};

var a = quo("Good");
document.writeln(a.getStatus());
// The script now runs inside <body>, so document.body exists here.
fade(document.body);
</script>
</body>
</html>
Saturday, December 8, 2012
Notes
HTTP RFC: http://tools.ietf.org/html/rfc2068#page-34
Autonomous System: http://www.cisco.com/web/about/ac123/ac147/archived_issues/ipj_9-1/autonomous_system_numbers.html
http://as.robtex.com/as38636.html
Whois: http://www.team-cymru.org/Services/ip-to-asn.html
http://whois.cymru.com/cgi-bin/whois.cgi
CIDR
http://www.cidr-report.org/cgi-bin/as-report?as=AS2527&v=4&view=2.0
# SNMP
http://www.cyberciti.biz/nixcraft/linux/docs/uniqlinuxfeatures/mrtg/mrtg_config_step_3.php
http://www.paessler.com/info/snmp_mibs_and_oids_an_overview
http://tools.cisco.com/Support/SNMP/do/BrowseOID.do
# Cacti
http://www.cyberciti.biz/faq/fedora-rhel-install-cacti-monitoring-rrd-software/
# Percona plugin for Cacti
http://www.percona.com/doc/percona-monitoring-plugins/cacti/mysql-templates.html
http://www.percona.com/doc/percona-monitoring-plugins/cacti/installing-templates.html
http://www.percona.com/downloads/percona-monitoring-plugins/
# NTP Server and Client
http://www.cyberciti.biz/faq/rhel-fedora-centos-configure-ntp-client-server/
# MySQL partition
http://dev.mysql.com/doc/refman/5.1/en/partitioning-management-range-list.html
==== grub4dos ==============================
1. Install grub4dos to USB/External HDD
2. Copy ISO file to USB/External HDD
3. Edit menu.lst as follows:
title HirentBoot9.9v3.iso (0xFF)
find --set-root /HirentBoot9.9v3.iso
map /HirentBoot9.9v3.iso (0xFF)
map --hook
root (0xFF)
chainloader (0xFF)
title CentOS57.iso (0xFF)
find --set-root /CentOS57.iso
map /CentOS57.iso (0xFF)
map --hook
root (0xFF)
chainloader (0xFF)
Note: Use CDBurnerXP to create the ISO file.
=========================================
phpMyAdmin installation
1. Download the source code
2. Extract it under /var/www/html (here: /var/www/html/phpMyAdmin-2.11.11-english)
3. Create /etc/httpd/conf.d/phpmyadmin.conf as follows:
Alias /phpmyadmin "/var/www/html/phpMyAdmin-2.11.11-english"
<Directory "/var/www/html/phpMyAdmin-2.11.11-english">
Options None
AllowOverride None
Order allow,deny
Allow from 119.15.160.25/32 210.86.225.160/28
</Directory>
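After dropping the file into /etc/httpd/conf.d/, reload Apache so the alias takes effect (assuming the stock CentOS init script):
service httpd reload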
Hide db:
vim path/to/config.inc.php
$cfg['Servers'][$i]['hide_db'] = '^information_schema|mysql|test$';
Allow/Deny user
$cfg['Servers'][$i]['AllowDeny']['order'] = 'deny,allow';
$cfg['Servers'][$i]['AllowDeny']['rules'] = array('deny admin from all'); // Deny user admin
MySQL
SHOW GRANTS FOR 'bbdev'@'localhost';
mysqldump -uroot -p<password> --single-transaction --databases db1 db2
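The dump command above writes to stdout; a minimal backup-and-restore round trip might look like this (backup.sql is just an example file name):
mysqldump -uroot -p --single-transaction --databases db1 db2 > backup.sql
mysql -uroot -p < backup.sql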
vi/vim
Create ~/.exrc and add settings there, such as
set ts=4
=================================
Cisco Router
=================================
Enable SSH login
1. Set hostname
yourname#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
yourname(config)#hostname LabRouter
LabRouter(config)#
2. Set domain name
LabRouter(config)#ip domain-name CiscoLab.com
3. Generate an RSA key pair, which SSH uses to secure the session, with the crypto key generate rsa command
LabRouter(config)#crypto key generate rsa
4. Configure the vty lines
LabRouter(config)#line vty 0 4
LabRouter(config-line)#login local
LabRouter(config-line)#transport input ssh
5. Create router's account
LabRouter(config)#username XXXX privilege 15 secret XXXX
6. Set the SSH version (global configuration)
LabRouter(config)#ip ssh version 2
Port forwarding
R1841_Punch(config)#ip nat inside source static tcp 10.0.0.9 80 <public ip> 80 extendable
R1841_Punch(config)#ip nat inside source static tcp 10.0.0.9 443 <public ip> 443 extendable
R1841_Punch(config)#ip nat inside source static tcp 10.0.0.9 943 <public ip> 943 extendable
R1841_Punch(config)#ip nat inside source static udp 10.0.0.9 1194 <public ip> 1194 extendable
=================================
syslog-ng
=================================
[root@abc ~]# cat /etc/syslog-ng/syslog-ng.conf
# syslog-ng configuration file.
#
# This should behave pretty much like the original syslog on RedHat. But
# it could be configured a lot smarter.
#
# See syslog-ng(8) and syslog-ng.conf(5) for more information.
#
options {
    sync (0);
    time_reopen (10);
    log_fifo_size (1000);
    long_hostnames (no);
    use_dns (yes);
    use_fqdn (yes);
    create_dirs (yes);
    keep_hostname (no);
};
source s_sys {
    file ("/proc/kmsg" log_prefix("kernel: "));
    unix-stream ("/dev/log");
    internal();
    # udp(ip(0.0.0.0) port(514));
};
destination d_cons { file("/dev/console"); };
destination d_mesg { file("/var/log/messages"); };
destination d_auth { file("/var/log/secure"); };
destination d_mail { file("/var/log/maillog" sync(10)); };
destination d_spol { file("/var/log/spooler"); };
destination d_boot { file("/var/log/boot.log"); };
destination d_cron { file("/var/log/cron"); };
destination d_kern { file("/var/log/kern"); };
destination d_mlal { usertty("*"); };
filter f_kernel { facility(kern); };
filter f_default { level(info..emerg) and
not (facility(mail)
or facility(authpriv)
or facility(cron)); };
filter f_auth { facility(authpriv); };
filter f_mail { facility(mail); };
filter f_emergency { level(emerg); };
filter f_news { facility(uucp) or
(facility(news)
and level(crit..emerg)); };
filter f_boot { facility(local7); };
filter f_cron { facility(cron); };
#log { source(s_sys); filter(f_kernel); destination(d_cons); };
log { source(s_sys); filter(f_kernel); destination(d_kern); };
log { source(s_sys); filter(f_default); destination(d_mesg); };
log { source(s_sys); filter(f_auth); destination(d_auth); };
log { source(s_sys); filter(f_mail); destination(d_mail); };
log { source(s_sys); filter(f_emergency); destination(d_mlal); };
log { source(s_sys); filter(f_news); destination(d_spol); };
log { source(s_sys); filter(f_boot); destination(d_boot); };
log { source(s_sys); filter(f_cron); destination(d_cron); };
# vim:ft=syslog-ng:ai:si:ts=4:sw=4:et:
# Define all the sources of localhost generated syslog
# messages and label it "s_localhost"
#source s_localhost {
# pipe ("/proc/kmsg" log_prefix("kernel: "));
# unix-stream ("/dev/log");
# internal();
#};
# Define all the sources of network generated syslog
# messages and label it "s_network"
source s_network {
tcp(max-connections(5000));
udp();
};
# Define the destination "d_localhost" log directory
#destination d_localhost {
# file ("/var/log/syslog-ng/$YEAR.$MONTH.$DAY/localhost/$FACILITY.log");
#};
# Define the destination "d_network" log directory
destination d_network {
file ("/var/log/syslog-ng/$YEAR.$MONTH.$DAY/$HOST/$FACILITY.log");
};
# Any logs that match the "s_localhost" source should be logged
# in the "d_localhost" directory
#log { source(s_localhost);
# destination(d_localhost);
#};
# Any logs that match the "s_network" source should be logged
# in the "d_network" directory
log { source(s_network);
destination(d_network);
};
[root@abc ~]#
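To point a classic syslogd client at this server (a sketch; "logserver" is a placeholder for the real hostname or IP), add a forwarding rule to the client's /etc/syslog.conf and restart its syslog daemon (service syslog restart on a CentOS 5 style client):
*.info          @logserver
service syslog restart
Messages should then appear on the server under /var/log/syslog-ng/$YEAR.$MONTH.$DAY/<client>/$FACILITY.log, as defined by d_network above.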
=================================
OpenSSL
=================================
Q: First - what happens if I don't give a passphrase? Is some sort of pseudo random phrase used? I'm just looking for something "good enough" to keep casual hackers at bay.
Second - how do I generate a key pair from the command line, supplying the passphrase on the command line?
A: If you don't use a passphrase, then the private key is not encrypted with any symmetric cipher - it is output completely unprotected.
You can generate a keypair, supplying the password on the command line, using an invocation like the following (in this case, the password is foobar):
openssl genrsa -aes128 -passout pass:foobar 2048
However, note that this passphrase could be grabbed by any other process running on the machine at the time, since command-line arguments are generally visible to all processes.
A better alternative is to write the passphrase into a temporary file that is protected with file permissions, and specify that:
openssl genrsa -aes128 -passout file:passphrase.txt 2048
Or supply the passphrase on standard input:
openssl genrsa -aes128 -passout stdin 2048
You can also use a named pipe with the file: option, or a file descriptor.
To then obtain the matching public key, you need to use openssl rsa, supplying the same passphrase with the -passin parameter as was used to encrypt the private key:
openssl rsa -passin file:passphrase.txt -pubout
(This expects the encrypted private key on standard input; you can instead read it from a file using -in <file>.)
Example of creating a 2048-bit private and public key pair in files, with the private key encrypted with the password foobar:
openssl genrsa -aes128 -passout pass:foobar -out privkey.pem 2048
openssl rsa -in privkey.pem -passin pass:foobar -pubout -out privkey.pub
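To check that the two files really belong together, you can compare the key moduli (standard openssl options; the file names and password are from the example above):
openssl rsa -in privkey.pem -passin pass:foobar -noout -modulus | openssl md5
openssl rsa -pubin -in privkey.pub -noout -modulus | openssl md5
The two digests should match.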
===========================
Linux Firewall
===========================
Tuning Linux firewall connection tracker ip_conntrack
Overview
If your Linux server has to handle a large number of connections, you can run into the limits of the ip_conntrack iptables module. It caps the number of simultaneous connections your system can track; the default value (in CentOS and most other distros) is 65536.
To check how many entries in the conntrack table are occupied at the moment:
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count
Or you can dump the whole table:
cat /proc/net/ip_conntrack
The conntrack table is a hash table (hash map) of fixed size (8192 buckets by default), which is used for the primary lookup. When the slot in the table is found, it points to a list of conntrack structures, so the secondary lookup is done by list traversal. 65536 / 8192 gives 8, the average list length. You may want to experiment with this value on heavily loaded systems.
Modifying conntrack capacity
To see the current conntrack capacity:
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
You can modify it by echoing a new value there:
# echo 131072 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
131072
Changes are immediate, but temporary – they will not survive a reboot.
Modifying the number of buckets in the hash table
As mentioned above, just raising the connection cap will give you some relief if your server was at the limit, but it is not an ideal setup. For 1M connections the average list length becomes 1048576 / 8192 = 128, which is a bit too much.
To see the current size of the hash table:
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_buckets
which is a read-only alias for the module parameter:
cat /sys/module/ip_conntrack/parameters/hashsize
You can change it on the fly as well:
# echo 32768 > /sys/module/ip_conntrack/parameters/hashsize
# cat /sys/module/ip_conntrack/parameters/hashsize
32768
Persisting the changes
Making these changes persistent is a bit tricky.
For the total number of connections, just edit /etc/sysctl.conf (CentOS, Red Hat, etc.) and you are done:
# conntrack limits
net.ipv4.netfilter.ip_conntrack_max = 131072
Not so easy with the hash table size. You need to pass the parameter to the kernel module at boot time, so add to /etc/modprobe.conf:
options ip_conntrack hashsize=32768
Memory usage
You can find how much kernel memory each conntrack entry occupies by grepping /var/log/messages:
ip_conntrack version 2.4 (8192 buckets, 65536 max) - 304 bytes per conntrack
So 1M connections would require roughly 304MB of kernel memory.
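A quick one-liner to keep an eye on how close the table is to the cap (it just reads the two /proc files used above):
echo "$(cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count) of $(cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max) conntrack entries in use"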
======================
RPM
======================
Listing installed packages by install date and time:
rpm -qa --qf '%{INSTALLTIME} (%{INSTALLTIME:date}): %{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | sort -n
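rpm can also do the sorting itself; --last lists installed packages newest first, which is a handy variant of the query above:
rpm -qa --last | head -20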
==========================
BASH SHELL
==========================
A list of handy tput command line options
- tput bold - Bold effect
- tput rev - Display inverse colors
- tput sgr0 - Reset everything
- tput setaf {CODE} - Set foreground color, see the color {CODE} table below for more information.
- tput setab {CODE} - Set background color, see the color {CODE} table below for more information.
Various color codes for the tput command:
Color {code} | Color
0 | Black
1 | Red
2 | Green
3 | Yellow
4 | Blue
5 | Magenta
6 | Cyan
7 | White
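Putting the options together, a minimal sketch that prints a bold red message and then resets the terminal:
echo "$(tput bold)$(tput setaf 1)WARNING: low disk space$(tput sgr0)"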
export PS1='\[\e[1;32m\][\u@\w]\$\[\e[00m\] '
export LSCOLORS=gxfxcxdxbxegedabagacad
BEGIN_COLOR="\e[0;31m"
END_COLOR="\e[m"
export PS1="[\u@\h($BEGIN_COLOR master $END_COLOR) \W]# "
==========================
OpenLDAP
==========================
Generate userPassword
# slappasswd
New password:
Re-enter new password:
{SSHA}xNkreAEiJpX2oyHbjiai0BUdqiEdwcYo
Generate sambaNTPassword
************************************
#!/usr/bin/perl
# Generate the LM/NT hashes used for the sambaLMPassword / sambaNTPassword attributes
use Crypt::SmbHash;
$password = $ARGV[0];
if ( !$password ) {
    print "Not enough arguments\n";
    print "Usage: $0 password\n";
    exit 1;
}
my ($lm, $nt) = ntlmgen $password;
print "LM = $lm\n";
print "NT = $nt\n";
************************************
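Usage of the script above (a sketch; the file name gen_smbhash.pl is made up and the hash values are placeholders):
perl gen_smbhash.pl 'MySecret'
LM = <32 hex characters>
NT = <32 hex characters>
The NT value is what goes into the sambaNTPassword attribute.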
==========================
OpenVPN
==========================
yum install openvpn -y
cp /usr/share/doc/openvpn-2.3.1/sample/sample-config-files/server.conf /etc/openvpn/server.conf
Follow comments to modify /etc/openvpn/server.conf
Download easy-rsa from below:
wget https://github.com/downloads/OpenVPN/easy-rsa/easy-rsa-2.2.0_master.tar.gz
Extract the package:
tar -zxvf easy-rsa-2.2.0_master.tar.gz
Copy to OpenVPN directory:
cp -R easy-rsa-2.2.0_master/easy-rsa/ /etc/openvpn/
Now let’s create the certificate:
cd /etc/openvpn/easy-rsa/2.0
chmod 755 *
source ./vars
./clean-all
Build CA:
./build-ca
Country Name: may be filled or press Enter
State or Province Name: may be filled or press Enter
City: may be filled or press Enter
Org Name: may be filled or press Enter
Org Unit Name: may be filled or press Enter
Common Name: your server hostname
Email Address: may be filled or press Enter
Build the server key:
./build-key-server server
Almost the same as ./build-ca, but note the changes and additions:
Common Name: server
A challenge password: leave blank
Optional company name: fill in or press Enter
Sign the certificate: y
1 out of 1 certificate requests: y
Build Diffie-Hellman parameters (wait a moment until the process finishes):
./build-dh
Generate the client key:
./build-key-pass client
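The generated files end up under /etc/openvpn/easy-rsa/2.0/keys/. A plausible next step is to copy them where server.conf expects them and start the service (a sketch; the Diffie-Hellman file name depends on the KEY_SIZE set in ./vars, e.g. dh1024.pem or dh2048.pem):
cp /etc/openvpn/easy-rsa/2.0/keys/ca.crt /etc/openvpn/
cp /etc/openvpn/easy-rsa/2.0/keys/server.crt /etc/openvpn/
cp /etc/openvpn/easy-rsa/2.0/keys/server.key /etc/openvpn/
cp /etc/openvpn/easy-rsa/2.0/keys/dh2048.pem /etc/openvpn/
service openvpn start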
Thursday, November 29, 2012
Nginx: the High-Performance Web Server and Reverse Proxy
Source: http://www.linuxjournal.com/magazine/nginx-high-performance-web-server-and-reverse-proxy?page=0,0
Having performance issues with your Web server? Maybe the Russians can help.
Apache is the most popular Web server and one of the most successful
open-source projects of all time. Since April 1996, Apache has served
more Web sites than any other Web server. Many of the world's largest
Web sites, including YouTube, Facebook, Wikipedia and Craigslist,
use Apache to serve billions of page views per month. Over the years,
Apache has proven itself to be a very stable, secure and configurable
Web server. Although Apache is an excellent Web server, what if there
were an alternative with the same functionality, a simpler configuration
and better performance? That Web server exists, and it's called Nginx.
Nginx, pronounced “Engine X”, is a high-performance Web server and reverse proxy. It was created by Igor Sysoev for www.rambler.ru, Russia's second-largest Web site. Rambler has used Nginx since summer 2004, and it's currently serving about 500 million requests per day. Like Apache, Nginx is used by some of the largest Web sites in the US, including WordPress (#26), YouPorn (#27), Hulu and MochiMedia. As of May 2008, Nginx is the fourth-most-popular Web server, and it is currently serving more than two million Web sites. As it is only trailing behind Apache, IIS and GFE, it is effectively the second-most-popular Web server available for Linux.
Like Apache, Nginx has all the features you would expect from a leading
Web server:
- Static file serving.
- SSL/TLS support.
- Virtual hosts.
- Reverse proxying.
- Load balancing.
- Compression.
- Access controls.
- URL rewriting.
- Custom logging.
- Server-side includes.
- WebDAV.
- FLV streaming.
- FastCGI.
It is stable, secure and very easy to configure, as you will see later
in the article. However, the main advantages of Nginx over Apache are
performance and efficiency.
I ran a simple test against Nginx v0.5.22 and Apache v2.2.8 using ab (Apache's benchmarking tool). During the tests, I monitored the system with vmstat and top. The results indicate that Nginx outperforms Apache when serving static content. Both servers performed best with a concurrency of 100. Apache used four worker processes (threaded mode), 30% CPU and 17MB of memory to serve 6,500 requests per second. Nginx used one worker, 15% CPU and 1MB of memory to serve 11,500 requests per second.
Nginx is able to serve more requests per second with less resources because of its architecture. It consists of a master process, which delegates work to one or more worker processes. Each worker handles multiple requests in an event-driven or asynchronous manner using special functionality from the Linux kernel (epoll/select/poll). This allows Nginx to handle a large number of concurrent requests quickly with very little overhead. Apache can be configured to use either a process per request (pre-fork) or a thread for each request (worker). Although Apache's threaded mode performs much better than its pre-fork mode, it still uses more memory and CPU than Nginx's event-driven architecture.
Nginx is available in most Linux distributions. For this article, I
use Ubuntu 8.04 (Hardy), which includes Nginx version 0.5.33. If your
distro does not have Nginx, or if you want to run a newer version,
you always can download the latest stable version (v0.6.31 at the time of this
writing)
and install from source.
Run the following command as root to install Nginx:
# apt-get install nginx
Now that Nginx is installed, you can use the startup script to start, stop or restart the Web server:
# /etc/init.d/nginx start
# /etc/init.d/nginx stop
# /etc/init.d/nginx restart
Most configuration changes do not require a restart, in which case you can use the reload command. It is generally a good idea to test the Nginx configuration file for errors before reloading:
# nginx -t
# /etc/init.d/nginx reload
Let's go ahead and start the server:
# /etc/init.d/nginx start
Nginx now should be running on your machine. If you open http://127.0.0.1/ in your browser, you should see a page with “Welcome to nginx!”.
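If curl is installed, the same check works from the command line; the Server response header should mention nginx:
# curl -I http://127.0.0.1/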
Now that Nginx is installed, let's take a look at its config file,
located at /etc/nginx/nginx.conf. This file contains the server-wide
settings for Nginx, and it should look similar to this:
user www-data;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log;
    sendfile on;
    keepalive_timeout 65;
    tcp_nodelay on;
    gzip on;
    include /etc/nginx/sites-enabled/*;
}
We are not going to change any of these settings, but let's talk about some of them to help us understand how Nginx works. The worker_processes setting tells Nginx how many child processes to start. If your server has more than one processor or is performing large amounts of disk IO, you might want to try increasing this number to see if you get better performance. The worker_connections setting limits the number of concurrent connections per worker process. To determine the maximum number of concurrent requests, you simply multiply worker_processes by worker_connections.
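For example, with the values above (1 worker process × 1024 worker connections) the ceiling is roughly 1024 concurrent connections; raising worker_processes to 4 would lift it to about 4 × 1024 = 4096.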
The error_log and access_log settings indicate the default logging locations. You also can configure these settings on a per-site basis, as you will see later in the article. Like Apache, Nginx is configured to run as the www-data user, but you easily can change this with the user setting. The startup script for Nginx needs to know the process ID for the master process, which is stored in /var/run/nginx.pid, as indicated by the pid setting.
The sendfile setting allows Nginx to use a special Linux system call to send a file over the network in a very efficient manner. The gzip option instructs Nginx to compress each response, which uses more CPU but saves bandwidth and decreases response time. Additionally, Nginx provides another compression module called gzip precompression (available as of version 0.6.24). This module looks for a compressed copy of the file with a .gz extension in the same location and serves it to gzip-enabled clients. This prevents having to compress the file each time it's requested.
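To get a feel for the precompression idea, the .gz copies can be produced ahead of time with plain gzip (a sketch; the path is hypothetical, and the feature itself is enabled with the gzip_static directive when the module has been compiled in):
# gzip -9 -c /var/www/example/index.html > /var/www/example/index.html.gz
Nginx can then serve index.html.gz directly to clients that send Accept-Encoding: gzip.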
The last setting we are concerned with is the include directive for the sites-enabled directory. Inside /etc/nginx, you'll see two other directories, /etc/nginx/sites-available and /etc/nginx/sites-enabled. For each Web site you want to host with Nginx, you should create a config file in /etc/nginx/sites-available, then create a symlink in /etc/nginx/sites-enabled that points to the config file you created. The main Nginx config file includes all the files in /etc/nginx/sites-enabled. This helps organize your configuration files and makes it very easy to enable and disable specific Web sites.
Static Web Server
Now that we covered the main configuration file, let's create a config file for a basic Web site. Before we begin, we need to disable the default site that Ubuntu created for us:
# rm -f /etc/nginx/sites-enabled/default
Now, create a new configuration file called /etc/nginx/sites-available/basic with the following contents:
server {
    listen 127.0.0.1:80;
    server_name basic;
    access_log /var/log/nginx/basic.access.log;
    error_log /var/log/nginx/basic.error.log;
    location / {
        root /var/www/basic;
        index index.html index.htm;
    }
}
Create the root directory and index.html file:
# mkdir /var/www/basic # cd /var/www/basic # echo "Basic Web Site" > index.html
Enable the site and restart Nginx:
# cd /etc/nginx/sites-enabled # ln -s ../sites-available/basic . # /etc/init.d/nginx restart
If you open http://127.0.0.1/ in your browser, you should see a page with “Basic Web Site”. As you can see, it is very easy to create a new site using Nginx.
Let's go over the new configuration file we created. The server directive is used to define a new virtual server, and all of its settings are enclosed in braces. The listen directive indicates the IP and port on which this server will accept requests, and server_name sets the hostname for your virtual server. As I mentioned earlier, the access_log and error_log settings can be set on a per-site basis. It is usually a good idea to provide each site with its own set of log files.
Next is the location directive, which allows you to modify the settings for different parts of your site. In our case, we have only one location for the entire site. However, you can have multiple location directives, and you can use regular expressions to define them. We have two other directives inside our location block: root and index. The root directive is used to define the document root for this location. This means a request for /img/test.gif would look for the file /var/www/localhost/img/test.gif. Finally, the index directive tells Nginx what files to use as the default file for this location.
Some Web sites, such as on-line stores, require secure communication
(HTTPS) to protect credit-card transactions and customer information.
Like
Apache, Nginx supports HTTPS via an SSL module, and it's very easy
to set up.
First, you need to generate an SSL certificate. The openssl command will ask you a bunch of questions, but you simply can press Enter for each one:
# apt-get install openssl # mkdir /etc/nginx/ssl # cd /etc/nginx/ssl # openssl req -new -x509 -nodes -out server.crt -keyout server.key
Create a new config file called /etc/nginx/sites-available/secure, which contains the following:
server {
    listen 127.0.0.1:443;
    server_name secure;
    access_log /var/log/nginx/secure.access.log;
    error_log /var/log/nginx/secure.error.log;
    ssl on;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    location / {
        root /var/www/secure;
        index index.html index.htm;
    }
}
Create the root directory and index.html file:
# mkdir /var/www/secure # cd /var/www/secure # echo "Secure Web Site" > index.html
Enable the site and restart Nginx:
# cd /etc/nginx/sites-enabled # ln -s ../sites-available/secure . # /etc/init.d/nginx restart
If you open https://127.0.0.1/ in your browser (note the https), you probably will get a warning about not being able to verify the certificate. That's because we are using a self-signed certificate for this example. Go ahead and tell your browser to accept the certificate, and you should see a page with “Secure Web Site”.
This config file is very similar to our previous config, but there are a few differences. First, notice that this new server is listening on port 443, which is the standard port for HTTPS. Second, we enabled the SSL module with the line ssl on;. If you compiled Nginx yourself instead of using the Ubuntu package, you need to make sure you specified --with-http_ssl_module when you ran ./configure; otherwise, the SSL module will not be available. Third, we used the ssl_certificate and ssl_certificate_key directives to point to the certificate and key we created earlier.
In many cases, you will want to run multiple Web sites from a single
server. This is called virtual hosting, and Nginx supports both IP- and
name-based vhosts.
Let's create two virtual hosts: one.example.com and two.example.com. First, we need to add a line to our /etc/hosts file, so that one.example.com and two.example.com point to our server (normally you would do this using DNS):
# echo "127.0.0.1 one.example.com two.example.com" >> /etc/hosts
Now, we need to create a configuration file for each site. First, create a file called /etc/nginx/sites-available/one with the following contents:
server {
    listen 127.0.0.1:80;
    server_name one.example.com;
    access_log /var/log/nginx/one.access.log;
    error_log /var/log/nginx/one.error.log;
    location / {
        root /var/www/one;
        index index.html index.htm;
    }
}
Then, make a copy of that file called /etc/nginx/sites-available/two, and replace each occurrence of “one” with “two”:
# cd /etc/nginx/sites-available # cp one two # sed -i "s/one/two/" two
Create the root directories and index.html files:
# mkdir /var/www/{one,two} # echo "Site 1" > /var/www/one/index.html # echo "Site 2" > /var/www/two/index.html
Enable the sites and restart Nginx:
# cd /etc/nginx/sites-enabled # ln -s ../sites-available/one . # ln -s ../sites-available/two . # /etc/init.d/nginx restart
If you open http://one.example.com/ in your browser, you should see a page with “Site 1”. For http://two.example.com/, you should see “Site 2”.
We just created two name-based virtual hosts running on 127.0.0.1 by changing the server_name directive. For IP-based virtual hosts, simply change the listen directive to use a different IP for each site.
Now, go ahead and disable these two virtual hosts:
# rm -f /etc/nginx/sites-enabled/one # rm -f /etc/nginx/sites-enabled/two # /etc/init.d/nginx restart
Don't forget to remove the line we added to /etc/hosts when you are done.
Reverse Proxy and Load Balancer
In addition to being an extremely fast static Web server, Nginx also is a load balancer and reverse proxy. A load balancer is a device used to spread work out across multiple servers or processes, and a reverse proxy is a server that transparently hands off requests to another server. Among other things, this allows Nginx to handle requests for static content and to load-balance requests for dynamic content across many different back-end servers or processes.
For this example, let's create a very simple Python Web server to serve up some dynamic content. Don't worry if you are not familiar with Python; we're just using it to display a Web page that indicates on which port the server is running. Save the following to a file called /tmp/server.py:
import sys,BaseHTTPServer as B
class Handler(B.BaseHTTPRequestHandler):
    def do_GET(self):
        self.wfile.write("Served from port %s" % port)
    def log_message(self, *args):
        pass
if __name__ == '__main__':
    host,port = sys.argv[1:3]
    server = B.HTTPServer((host,int(port)), Handler)
    server.serve_forever()
Now we can start two of these local servers, each on a different port:
# python /tmp/server.py 127.0.0.1 8001 &
# python /tmp/server.py 127.0.0.1 8002 &
If you open http://127.0.0.1:8001/ in your browser, you should see “Served from port 8001”, and if you open http://127.0.0.1:8002/, you should see “Served from port 8002”.
Now, create a new configuration file called /etc/nginx/sites-available/proxy with the following contents:
upstream python_servers {
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}
server {
    listen 127.0.0.1:8000;
    server_name proxy;
    access_log /var/log/nginx/proxy.access.log;
    error_log /var/log/nginx/proxy.error.log;
    location / {
        proxy_pass http://python_servers;
    }
}
Enable the site and restart Nginx:
# cd /etc/nginx/sites-enabled # ln -s ../sites-available/proxy . # /etc/init.d/nginx restart
If you open http://127.0.0.1:8000/ in your browser, you should see a page with either “Served from port 8001” or “Served from port 8002”, and it should alternate each time you refresh the page.
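Refreshing a browser works, but a quick curl loop (assuming curl is installed) shows the round-robin behaviour just as well:
# for i in 1 2 3 4; do curl -s http://127.0.0.1:8000/; echo; done
The output should alternate between the two “Served from port …” responses.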
Let's go over some of these new settings. The upstream block defines a name for a group of back-end servers. In our case, we defined a group named python_servers, which contains the two local Python servers we started on port 8001 and 8002. We then configured Nginx to hand off all requests to our back-end servers with the line proxy_pass http://python_servers;. Nginx automatically load-balances the requests to each Python server in a round-robin manner. You also can set weights for each back end, so you can direct more or fewer requests to specific servers.
Nginx handles back-end failures automatically and will stop sending requests to a failed back-end server until it starts responding again. To demonstrate this, we can kill off the Python process that's running on port 8001. Use the jobs command to find the job number for the Python process running on port 8001, and use kill %<job number> to kill the process:
# jobs # kill %1
Open http://127.0.0.1:8000/ in your browser and keep refreshing the page; you should see only the “Served from port 8002” page. Nginx detected that the back-end server on port 8001 was not responding, so it stopped sending requests to that server. Now, restart the Python process for port 8001:
# python /tmp/server.py 127.0.0.1 8001 &
Keep refreshing the page, and you should see your browser start alternating between “Served from port 8001” and “Served from port 8002” again. Nginx detected that the port 8001 back end was responding and began sending requests to it.
Whether you are looking to get the most out of your VPS or are attempting
to scale one of the largest Web sites in the world, Nginx may be the
best tool for the job. It's fast, stable and easy to use. Thanks to
Igor Sysoev for creating this excellent piece of software.
Resources
Nginx Web Site: wiki.codemongers.com/Main
Module Comparison Index: wiki.codemongers.com/NginxModuleComparisonMatrix
Testimonials: wiki.codemongers.com/NginxWhyUseIt
Nginx at WordPress: barry.wordpress.com/2008/04/28/load-balancer-update
Facebook App Using Nginx: highscalability.com/friends-sale-architecture-300-million-page-view-month-facebook-ror-app
Will Reese has worked with Linux for the past ten years, primarily scaling
Web applications running on Apache, Python and PostgreSQL. He enjoys
beating Cory Wright at foosball and Wii Tennis.