Wednesday, March 27, 2013

DDoS Spamhaus


A squabble between a group fighting spam and a Dutch company that hosts Web sites said to be sending spam has escalated into one of the largest computer attacks on the Internet, causing widespread congestion and jamming crucial infrastructure around the world.

Millions of ordinary Internet users have experienced delays in services like Netflix or been unable to reach a particular Web site for a short time.

However, for the Internet engineers who run the global network, the problem is more worrisome. The attacks are becoming increasingly powerful, and computer security experts worry that if they continue to escalate, people may not be able to reach basic Internet services like e-mail and online banking.

The dispute started when the spam-fighting group, called Spamhaus, added the Dutch company Cyberbunker to its blacklist, which is used by e-mail providers to weed out spam. Cyberbunker, named for its headquarters, a five-story former NATO bunker, offers hosting services to any Web site “except child porn and anything related to terrorism,” according to its Web site.

A spokesman for Spamhaus, which is based in Europe, said the attacks began on March 19, but had not stopped the group from distributing its blacklist.

Patrick Gilmore, chief architect at Akamai Technologies, a digital content provider, said Spamhaus’s role was to generate a list of Internet spammers.

Of Cyberbunker, he added: “These guys are just mad. To be frank, they got caught. They think they should be allowed to spam.”

Mr. Gilmore said that the attacks, which are generated by swarms of computers called botnets, concentrate data streams that are larger than the Internet connections of entire countries. He likened the technique, which uses a long-known flaw in the Internet’s basic plumbing, to using a machine gun to spray an entire crowd when the intent is to kill one person.

The attacks were first mentioned publicly last week by CloudFlare, an Internet security firm in Silicon Valley that was trying to defend against the attacks and as a result became a target.

“These things are essentially like nuclear bombs,” said Matthew Prince, chief executive of CloudFlare. “It’s so easy to cause so much damage.”

The so-called distributed denial of service, or DDoS, attacks have reached previously unknown magnitudes, growing to a data stream of 300 billion bits per second.

“It is a real number,” Mr. Gilmore said. “It is the largest publicly announced DDoS attack in the history of the Internet.”

Spamhaus, one of the most prominent groups tracking spammers on the Internet, uses volunteers to identify spammers and has been described as an online vigilante group.

In the past, blacklisted sites have retaliated against Spamhaus with denial-of-service attacks, in which they flood Spamhaus with traffic requests from personal computers until its servers become unreachable. But in recent weeks, the attackers hit back with a far more powerful strike that exploited the Internet’s core infrastructure, called the Domain Name System, or DNS.

That system functions like a telephone switchboard for the Internet. It translates the names of Web sites into the strings of numbers that the Internet’s underlying technology can understand. Millions of computer servers around the world perform the actual translation.

In the latest incident, attackers sent messages, masquerading as ones coming from Spamhaus, to those machines, which were then amplified drastically by the servers, causing torrents of data to be aimed back at the Spamhaus computers.

When Spamhaus requested aid from CloudFlare, the attackers began to focus their digital ire on the companies that provide data connections for both Spamhaus and CloudFlare.

Questioned about the attacks, Sven Olaf Kamphuis, an Internet activist who said he was a spokesman for the attackers, said in an online message that, “We are aware that this is one of the largest DDoS attacks the world had publicly seen.” Mr. Kamphuis said Cyberbunker was retaliating against Spamhaus for “abusing their influence.”

“Nobody ever deputized Spamhaus to determine what goes and does not go on the Internet,” Mr. Kamphuis said. “They worked themselves into that position by pretending to fight spam.”

A typical denial-of-service attack tends to affect only a small number of networks. But in the case of a Domain Name System flood attack, data packets are aimed at the victim from servers all over the world. Such attacks cannot easily be stopped, experts say, because those servers cannot be shut off without halting the Internet.

“The No. 1 rule of the Internet is that it has to work,” said Dan Kaminsky, a security researcher who years ago pointed out the inherent vulnerabilities of the Domain Name System. “You can’t stop a DNS flood by shutting down those servers because those machines have to be open and public by default. The only way to deal with this problem is to find the people doing it and arrest them.”

The heart of the problem, according to several Internet engineers, is that many large Internet service providers have not set up their networks to make sure that traffic leaving their networks is actually coming from their own users. The potential security flaw has long been known by Internet security specialists, but it has only recently been exploited in a way that threatens the Internet infrastructure.

An engineer at one of the largest Internet communications firms said the attacks in recent days have been as many as five times larger than what was seen recently in attacks against major American banks. He said the attacks were not large enough to saturate the company’s largest routers, but they had overwhelmed important equipment.

Cyberbunker brags on its Web site that it has been a frequent target of law enforcement because of its “many controversial customers.” The company claims that at one point it fended off a Dutch SWAT team.

“Dutch authorities and the police have made several attempts to enter the bunker by force,” the site said. “None of these attempts were successful.”

At CloudFlare, we deal with large DDoS attacks every day. Usually, these attacks are directed at large companies or organizations that are reluctant to talk about their details. It's fun, therefore, whenever we have a customer that is willing to let us tell the story of an attack they saw and how we mitigated it. This is one of those stories.
Yesterday, Tuesday, March 19, 2013, CloudFlare was contacted by the non-profit anti-spam organization Spamhaus. They were suffering a large DDoS attack against their website and asked if we could help mitigate the attack.
Spamhaus provides one of the key backbones that underpins much of the anti-spam filtering online. Run by a tireless team of volunteers, Spamhaus patrols the Internet for spammers and publishes a list of the servers they use to send their messages in order to empower email system administrators to filter unwanted messages. Spamhaus's services are so pervasive and important to the operation of the Internet's email architecture that, when a lawsuit threatened to shut the service down, industry experts testified [PDF, full disclosure: I wrote the brief back in the day] that doing so risked literally breaking email since Spamhaus is directly or indirectly responsible for filtering as much as 80% of daily spam messages.
Beginning on March 18, the Spamhaus site came under attack. The attack was large enough that the Spamhaus team wasn't sure of its size when they contacted us. It was sufficiently large to fully saturate their connection to the rest of the Internet and knock their site offline. These very large attacks, which are known as Layer 3 attacks, are difficult to stop with any on-premise solution. Put simply: if you have a router with a 10Gbps port, and someone sends you 11Gbps of traffic, it doesn't matter what intelligent software you have to stop the attack because your network link is completely saturated.
While we don't know who was behind this attack, Spamhaus has made plenty of enemies over the years. Spammers aren't always the most lovable of individuals and Spamhaus has been threatened, sued, and DDoSed regularly. Spamhaus's blocklists are distributed via DNS and there is a long list of volunteer organizations that mirror their DNS infrastructure in order to ensure it is resilient to attacks. The website, however, was unreachable.
Filling Up the Series of Tubes
Very large Layer 3 attacks are nearly always originated from a number of sources. These many sources each send traffic to a single Internet location, effectively creating a tidal wave that overwhelms the target's resources. In this sense, the attack is distributed (the first D in DDoS -- Distributed Denial of Service). The sources of attack traffic can be a group of individuals working together (e.g., the Anonymous LOIC model, although this is Layer 7 traffic and even at high volumes usually much smaller in volume than other methods), a botnet of compromised PCs, a botnet of compromised servers, misconfigured DNS resolvers, or even home Internet routers with weak passwords.
Since an attacker attempting to launch a Layer 3 attack doesn't care about receiving a response to the requests they send, the packets that make up the attack do not have to be accurate or correctly formatted. Attackers will regularly spoof all the information in the attack packets, including the source IP, making it look like the attack is coming from a virtually infinite number of sources. Since packet data can be fully randomized, techniques like IP filtering, even upstream, become virtually useless.
Spamhaus signed up for CloudFlare on Tuesday afternoon and we immediately mitigated the attack, making the site once again reachable. (More on how we did that below.) Once on our network, we also began recording data about the attack. At first, the attack was relatively modest (around 10Gbps). There was a brief spike around 16:30 UTC, likely a test, that lasted approximately 10 minutes. Then, around 21:30 UTC, the attackers let loose a very large wave.
The graph below is generated from bandwidth samples across a number of the routers that sit in front of servers we use for DDoS scrubbing. The green area represents in-bound requests and the blue line represents out-bound responses. While there is always some attack traffic on our network, it's easy to see when the attack against Spamhaus started and then began to taper off around 02:30 UTC on March 20, 2013. As I'm writing this at 16:15 UTC on March 20, 2013, it appears the attack is picking up again.
How to Generate a 75Gbps DDoS
The largest source of attack traffic against Spamhaus came from DNS reflection. I've written about these attacks before and in the last year they have become the source of the largest Layer 3 DDoS attacks we see (sometimes well exceeding 100Gbps). Open DNS resolvers are quickly becoming the scourge of the Internet and the size of these attacks will only continue to rise until all providers make a concerted effort to close them. (It also makes sense to implement BCP-38, but that's a topic for another post at another time.)
The basic technique of a DNS reflection attack is to send a request for a large DNS zone file with the source IP address spoofed to be the intended victim to a large number of open DNS resolvers. The resolvers then respond to the request, sending the large DNS zone answer to the intended victim. The attackers' requests themselves are only a fraction of the size of the responses, meaning the attacker can effectively amplify their attack to many times the size of the bandwidth resources they themselves control. 
In the Spamhaus case, the attacker was sending requests for a large DNS zone file to open DNS resolvers. The attacker spoofed the CloudFlare IPs we'd issued for Spamhaus as the source in their DNS requests. The open resolvers responded with the DNS zone file, collectively generating approximately 75Gbps of attack traffic. The requests were likely approximately 36 bytes long (e.g. dig ANY @X.X.X.X +edns=0 +bufsize=4096, where X.X.X.X is replaced with the IP address of an open DNS resolver) and the response was approximately 3,000 bytes, translating to a 100x amplification factor.
We recorded over 30,000 unique DNS resolvers involved in the attack. This translates to each open DNS resolver sending an average of 2.5Mbps, which is small enough to fly under the radar of most resolver operators. Because the attacker used DNS amplification, they only needed to control a botnet or cluster of servers capable of generating 750Mbps -- which is possible with a small botnet or a handful of AWS instances. It is worth repeating: open DNS resolvers are the scourge of the Internet, and these attacks will become more common and larger until service providers make a serious effort to close them.
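The arithmetic above can be sanity-checked with a quick shell calculation. The 75Gbps, 30,000-resolver, and 100x figures are the ones stated in this post; nothing here is measured independently:

```shell
# Back-of-the-envelope check of the attack numbers described above.
total_bps=$((75 * 1000 * 1000 * 1000))      # ~75 Gbps of aggregate attack traffic
resolvers=30000                             # unique open resolvers observed
amplification=100                           # ~36-byte request -> ~3,000-byte response

per_resolver_bps=$((total_bps / resolvers)) # traffic each resolver contributes
attacker_bps=$((total_bps / amplification)) # bandwidth the attacker must source

echo "per-resolver: ${per_resolver_bps} bps"   # 2500000 bps, i.e. 2.5 Mbps
echo "attacker:     ${attacker_bps} bps"       # 750000000 bps, i.e. 750 Mbps
```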
How You Mitigate a 75Gbps DDoS
While large Layer 3 attacks are difficult for an on-premise DDoS solution to mitigate, CloudFlare's network was specifically designed from the beginning to stop these types of attacks. We make heavy use of Anycast. That means the same IP address is announced from every one of our 23 worldwide data centers. The network itself load-balances requests to the nearest facility. Under normal circumstances, this helps us ensure a visitor is routed to the nearest data center on our network.
When there's an attack, Anycast serves to effectively dilute it by spreading it across our facilities. Since every data center announces the same IP address for any CloudFlare customer, traffic cannot be concentrated in any one location. Instead of the attack being many-to-one, it becomes many-to-many with no single point on the network acting as a bottleneck.
Once diluted, the attack becomes relatively easy to stop at each of our data centers. Because CloudFlare acts as a virtual shield in front of our customers' sites, with Layer 3 attacks none of the attack traffic reaches the customer's servers. Traffic to Spamhaus's network dropped back below the levels seen when the attack started as soon as they signed up for our service.
Other Noise
While the majority of the traffic involved in the attack was DNS reflection, the attacker threw in a few other attack methods as well. One was a so-called ACK reflection attack. When a TCP connection is established there is a handshake: the machine initiating the session sends a SYN (synchronize) packet, the receiving server replies with a SYN-ACK (synchronize-acknowledge), and the initiator completes the handshake with a final ACK. After that, data can be exchanged.
In an ACK reflection, the attacker sends a number of SYN packets to servers with a spoofed source IP address pointing to the intended victim. The servers then respond to the victim's IP with an ACK. Like the DNS reflection attack, this disguises the source of the attack, making it appear to come from legitimate servers. However, unlike the DNS reflection attack, there is no amplification factor: the bandwidth from the ACKs is symmetrical to the bandwidth the attacker has to generate the SYNs. CloudFlare is configured to drop unmatched ACKs, which mitigates these types of attacks.
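On a Linux host, dropping unmatched ACKs can be approximated with connection tracking. The rule below is my own sketch of the idea, not CloudFlare's actual configuration:

```
# Drop packets (including stray ACKs) that don't belong to any tracked
# connection; conntrack marks such TCP packets as INVALID.
# Rule order matters -- place this before any broad ACCEPT rules.
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
```

Because a reflected ACK never had a corresponding outbound SYN from this host, it matches no tracked connection and is dropped before it consumes application resources.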
Whenever we see one of these large attacks, network operators write to us upset that we are attacking their infrastructure with abusive DNS queries or SYN floods. In fact, it is their infrastructure that is being used to reflect an attack at us. By working with and educating these network operators, we help them clean up their networks, which helps to solve the root cause of these large attacks.
History Repeats Itself
Finally, it's worth noting how similar this battle against DDoS attacks and open DNS relays is to Spamhaus's original fight. If DDoS is the network scourge of tomorrow, spam was its clear predecessor. Paul Vixie, the father of the DNSBL, set out in 1997 to use DNS to help shut down the spam source of the day: open email relays. These relays were being used to disguise the origin of spam messages, making them more difficult to block. What was needed was a list of mail relays that mail servers could query and use to decide whether to accept messages.
While it wasn't originally designed with the idea in mind, DNS proved a highly scalable and efficient means to distribute a queryable list of open mail relays that email service providers could use to block unwanted messages. Spamhaus arose as one of the most respected and widely used DNSBLs, effectively blocking a huge percentage of daily spam volume.
As open mail relays were shut down, spammers turned to virus writers to create botnets that could be used to relay spam. Spamhaus expanded its operations to list the IPs of known botnets, trying to stay ahead of spammers. CloudFlare's own history grew out of Project Honey Pot, which started as an automated service to track the resources used by spammers and publishes the HTTP:BL.
Today, as Spamhaus's success has eroded the business model of spammers, botnet operators are increasingly renting their networks to launch DDoS attacks. At the same time, DNSBLs proved that there were many functions that the DNS protocol could be used for, encouraging many people to tinker with installing their own DNS resolvers. Unfortunately, these DNS resolvers are often mis-configured and left open to abuse, making them the DDoS equivalent of the open mail relay.
If you're running a network, take a second to make sure you've closed any open resolvers before DDoS explodes into an even worse problem than it already is.

Tuesday, March 26, 2013

Monitoring Authentication Attempts on Cisco Routers with Syslog


One of the great things about the syslog logging standard is the capability to collect system notifications from a variety of network hosts and direct them to a central store for analysis. In this demo I will configure a Cisco router to log system messages via syslog to a central Linux server. Specifically, I am interested in logging authentication attempts to the router.
My preferred syslog daemon, which I will be running on my Linux syslog server, is rsyslog. There are also many syslog servers available for Windows if you choose to go that route. Kiwi is one with a nice interface, but the full-featured version is payware. Your choice of syslog server should be immaterial to this discussion, as the configuration steps on the Cisco router are the same.

Configure Syslog Server to Accept Messages
To start, we’ll make sure that the syslog server is configured to accept messages from the IP address of your router.  This should be the IP of the interface on the router that is closest to the syslog server.  For example, suppose the router has an external and an internal interface.  Our syslog server is on the same LAN that the internal interface is connected to.  The syslog server should be configured to accept messages from the IP address of the internal interface.  We also have the option to manually configure the interface the syslog messages are sourced from.
The syslog standard sends log messages identified with a certain facility and severity. Generally the facility is used to identify the message as coming from a particular program or service. This has more use when the source of the syslog messages is a full-blown computer server. Cisco routers by default send syslog messages marked as coming from the "local7" facility, so we need to make sure that the syslog server accepts messages from this facility. The source facility can be changed if you so desire.
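With rsyslog, accepting the router's messages typically means enabling UDP reception and routing the local7 facility to its own file. A minimal sketch using rsyslog's legacy directive syntax (the log file path is my own choice; adjust to taste):

```
# /etc/rsyslog.conf (fragment)
$ModLoad imudp            # load the UDP input module
$UDPServerRun 514         # listen on the standard syslog port

# Write everything the router marks as local7 to a dedicated file
local7.*    /var/log/cisco.log
```

Restart rsyslog after editing, and make sure UDP port 514 is permitted through any host firewall between the router and the server.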
In addition, syslog messages have a severity attached which gives information on the priority or urgency of the message.  If you are familiar with syslog you know that higher numbers represent lower severity levels.  Here is a list of the minimum severity levels that a Cisco router can be configured with which to send messages to the syslog server.

Router(config)#logging trap ?
<0-7>          Logging severity level
alerts         Immediate action needed           (severity=1)
critical       Critical conditions               (severity=2)
debugging      Debugging messages                (severity=7)
emergencies    System is unusable                (severity=0)
errors         Error conditions                  (severity=3)
informational  Informational messages            (severity=6)
notifications  Normal but significant conditions (severity=5)
warnings       Warning conditions                (severity=4)
Configure Cisco Router with Secret Passwords
First let’s enter global config mode.

Router#conf t
Now we need to make sure that we have a secret password set to enter enable mode.  I’ll use the “enable secret” command to encrypt the password using the type 5 MD5 hash algorithm, which is much more secure than the older type 7 encryption.

Router(config)#enable secret EnablePassword
Now we’ll set up username authentication.  This needs to be turned on or our authentication attempts will not be logged.  Logging of authentication does not appear to work if you only use passwords set directly on the virtual telnet/ssh terminal lines.

Router(config)#username aaron secret MyPassword
We need to configure our telnet/ssh terminal lines to use local username authentication.

Router(config)#line vty 0 4
Router(config-line)#login local
Configure Logging Options
Now we’ll set the router to direct messages to be logged to the IP address or hostname of our syslog server host.

We can set the minimum severity level that log messages need to be if they are logged to the syslog server. The minimum level for logging failed authentication attempts is warning (4) and for successful authentications is notifications (5).  To capture both I will configure the minimum level to be notifications. Dial this back to warnings and above if there are too many messages being forwarded to your server, but remember that the successful logins will no longer be logged.

Router(config)#logging trap notifications
I’ll choose to activate login checking for both successful and failed login attempts.  Specifying “log” will generate the syslog messages.  Optionally we can have the router generate a log after a certain number of attempts, but in this case I’ll log them all.

Router(config)#login on-success log
Router(config)#login on-failure log
We also need to set up a quiet-mode time period; logging of failed logins will not work without this. The “login block-for” command will create an ACL for a certain period of time that, as the name suggests, blocks logins after a certain number of failed attempts. In this case logins will be disabled for 120 seconds if there are 3 failed attempts within a 60-second span. This also works well for deterring a brute-force attack on the router.

Router(config)#login block-for 120 attempts 3 within 60
Optional Logging Parameters
As I mentioned at the beginning, by default the syslog messages sent by the router will appear as coming from the interface closest to the syslog server. If you want to change this behavior, you can manually specify the interface the messages appear to come from.
Router(config)#logging source-interface FastEthernet0/0
We can also activate a delay which will slow login attempts. In this case there will be a 5 second delay between when a bad username/password combo is entered and when the next login prompt is presented.

Router(config)#login delay 5
That should do it. You can now test a successful or failed login attempt and the messages should show up on the syslog server!
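For reference, the login messages that arrive at the syslog server look roughly like the following. The exact format varies by IOS version, and the user, source address, port, and timestamps here are invented for illustration:

```
%SEC_LOGIN-5-LOGIN_SUCCESS: Login Success [user: aaron] [Source:] [localport: 22] at 20:11:23 UTC Wed Mar 27 2013
%SEC_LOGIN-4-LOGIN_FAILED: Login failed [user: aaron] [Source:] [localport: 22] [Reason: Login Authentication Failed] at 20:12:45 UTC Wed Mar 27 2013
```

Note the severities match what we configured: successes arrive at severity 5 (notifications) and failures at severity 4 (warnings), which is why the trap level had to be notifications to capture both.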

How to configure syslog server in Linux


Sample exam question: You are a system administrator. Using log files makes it very easy to monitor the system. There are 40 servers running Mail, Web, Proxy, DNS services, etc. Your task is to centralize the logs from all servers onto one LOG server. How will you configure the LOG server to accept logs from remote hosts?

Answer with Explanation

An important part of maintaining a secure system is keeping track of the activities that take place on the system. If you know what usually happens, such as understanding when users log into your system, you can use log files to spot unusual activity. You can configure what syslogd records through the /etc/syslog.conf configuration file.
The syslogd daemon manages all the logs on your system and coordinates with any of the logging operations of other systems on your network. Configuration information for syslogd is held in the /etc/syslog.conf file, which contains the names and locations for your system log files.
By default the system accepts only logs generated on the local host. In this example we will configure a log server that accepts logs from the client side.
For this example we are using two systems: one Linux server and one Linux client. To complete the prerequisites for the log server, follow this link:
Network configuration in Linux
  • A Linux server with IP address and hostname Server
  • A Linux client with IP address and hostname Client1
  • An updated /etc/hosts file on both Linux systems
  • Running portmap and xinetd services
  • The firewall should be off on the server
We suggest you review that article before starting the log server configuration. Once you have completed the necessary steps, follow this guide.
Check that the syslog, portmap, and xinetd services are turned on in the system services list:

 #setup
 Select "System services" from the list:
 [*]portmap
 [*]xinetd
 [*]syslog
Now restart the xinetd and portmap services:

service xinetd restart
service portmap restart

To keep these services on after a reboot, enable them via the chkconfig command:

chkconfig portmap on
chkconfig xinetd on
chkconfig syslog on

After a reboot, verify their status. They must be in running condition:

service portmap status
service xinetd status
service syslog status

Now open the /etc/sysconfig/syslog file:

vi /etc/sysconfig/syslog

and locate the SYSLOGD_OPTIONS line.

Add the -r option to this line so the daemon accepts logs from clients:

SYSLOGD_OPTIONS="-m 0 -r"

-m 0 disables 'MARK' messages.
-r enables logging from remote machines.
-x disables DNS lookups on messages received with -r.

After saving the file, restart the service:

service syslog restart


On Linux client

Ping the log server from the client to confirm connectivity, then open the /etc/syslog.conf file.

Now go to the end of the file and add an entry for the server:

user.* @[server IP]

After saving the file, restart the service:
service syslog restart

Now restart the client so it can send a log entry to the server. (Note that these logs are generated when the client boots, so restart it rather than shutting it down.)
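As a quicker check than rebooting (this is my own suggestion, not part of the original steps), the client can emit a test message with the logger utility. It logs to the user facility, which matches the user.* selector configured above:

```
# Send a test message at facility "user", severity "notice";
# it should appear in the server's messages log within a few seconds.
logger -p user.notice "remote logging test from Client1"
```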


Check client logs on the log server

To check the client's messages on the server, open the /var/log/messages file:

less /var/log/messages

At the end of this file you can see the logs from the clients.

Sunday, March 10, 2013

Reading queue

# FreeRadius + WLAN


Certificate and Crypto



Tuesday, March 5, 2013

CentOS / RHEL: Install and Configure phpMyAdmin Administration Of MySQL Database Server



July 25, 2012
How do I install phpMyAdmin to handle the administration of MySQL database server over the World Wide Web under Fedora / Scientific / CentOS / RHEL / Red Hat Enterprise Linux 6.x server systems?

phpMyAdmin is a tool written in PHP intended to handle the administration of MySQL over the World Wide Web. Most frequently used operations are supported by the user interface (managing databases, tables, fields, relations, indexes, users, and permissions), while you still have the ability to directly execute any SQL statement. It comes with an intuitive web interface and support for most MySQL features.

Step #1: Turn on EPEL repo

phpMyAdmin is not included in default RHEL / CentOS repo. So turn on EPEL repo as described here:
$ cd /tmp
$ wget
# rpm -ivh epel-release-6-5.noarch.rpm

Step #2: Install phpMyAdmin

Type the following command:
# yum search phpmyadmin
# yum -y install phpmyadmin

Sample outputs:
Loaded plugins: rhnplugin
Setting up Install Process
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.
--> Running transaction check
---> Package phpMyAdmin.noarch 0:3.5.1-1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
 Package                  Arch                 Version                   Repository          Size
 phpMyAdmin               noarch               3.5.1-1.el6               epel               4.2 M
Transaction Summary
Install       1 Package(s)
Total download size: 4.2 M
Installed size: 17 M
Downloading Packages:
phpMyAdmin-3.5.1-1.el6.noarch.rpm                                          | 4.2 MB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : phpMyAdmin-3.5.1-1.el6.noarch                                                  1/1
  Verifying  : phpMyAdmin-3.5.1-1.el6.noarch                                                  1/1
  phpMyAdmin.noarch 0:3.5.1-1.el6

Step #3: Configure phpMyAdmin

You need to edit /etc/httpd/conf.d/phpMyAdmin.conf, enter:
# vi /etc/httpd/conf.d/phpMyAdmin.conf
It allows only localhost by default. You can set up HTTPD SSL as described here (mod_ssl) and allow LAN / WAN users or a DBA user to manage the database over www. Find the line that reads as follows:
Require ip
Replace with your workstation IP address:
Require ip
Again find the following line:
Allow from
Replace as follows:
Allow from
Save and close the file. Restart Apache / httpd server:
# service httpd restart
Open a web browser and type the following url:
Sample outputs:
Fig.01: phpMyAdmin in Action
Please note that you will be prompted for a username and password. You need to provide your database username and password to log in to the user interface. If you want to manage all databases, use the MySQL admin user account called root. The phpMyAdmin configuration file is located at /etc/phpMyAdmin/ You can edit this file using a text editor:
# vi /etc/phpMyAdmin/
All directives are explained in Documentation.html and on phpMyAdmin wiki.


Compile PHP with mysql extension

1. Download PHP at

2. Compile and install:
./configure --enable-mbstring --with-mysql=yes --with-mcrypt=yes --with-libdir=lib64 --with-mysql-sock=/var/lib/mysql/mysql.sock --with-apxs2=/usr/sbin/apxs --prefix=/usr
make
make install

Linux Tune Network Stack



Linux Tune Network Stack (Buffers Size) To Increase Networking Performance

May 20, 2009 (last updated July 8, 2009)
I have two servers located in two different data centers. Both servers handle a lot of concurrent large file transfers, but network performance is very poor for large files and degrades as file size grows. How do I tune TCP under Linux to solve this problem?

By default the Linux network stack is not configured for high-speed large file transfer across WAN links. This is done to save memory resources. You can easily tune the Linux network stack by increasing network buffer sizes for high-speed networks that connect server systems, so it can handle more network packets.
The default maximum Linux TCP buffer sizes are way too small. TCP memory is calculated automatically based on system memory; you can find the actual values by typing the following commands:
$ cat /proc/sys/net/ipv4/tcp_mem
The default and maximum amount for the receive socket memory:
$ cat /proc/sys/net/core/rmem_default
$ cat /proc/sys/net/core/rmem_max

The default and maximum amount for the send socket memory:
$ cat /proc/sys/net/core/wmem_default
$ cat /proc/sys/net/core/wmem_max

The maximum amount of option memory buffers:
$ cat /proc/sys/net/core/optmem_max

Tune values

Set the max OS send buffer size (wmem) and receive buffer size (rmem) to 12 MB for queues on all protocols. In other words, set the amount of memory that is allocated for each TCP socket when it is opened or created while transferring files:
WARNING! The default value of rmem_max and wmem_max is about 128 KB in most Linux distributions, which may be enough for a low-latency general purpose network environment or for apps such as DNS / Web server. However, if the latency is large, the default size might be too small. Please note that the following settings going to increase memory usage on your server.
# echo 'net.core.wmem_max=12582912' >> /etc/sysctl.conf
# echo 'net.core.rmem_max=12582912' >> /etc/sysctl.conf

You also need to set the minimum, default, and maximum sizes in bytes:
# echo 'net.ipv4.tcp_rmem= 10240 87380 12582912' >> /etc/sysctl.conf
# echo 'net.ipv4.tcp_wmem= 10240 87380 12582912' >> /etc/sysctl.conf
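The 12 MB figure isn't arbitrary: a TCP window must hold roughly bandwidth × round-trip time worth of data to keep a link busy. A quick check with assumed numbers (a 1 Gbit/s path and 100 ms RTT between the two data centers; your values will differ):

```shell
# Bandwidth-delay product: bytes in flight needed to fill the pipe.
bits_per_sec=1000000000   # assumed 1 Gbit/s WAN link
rtt_ms=100                # assumed 100 ms round-trip time

bdp_bytes=$((bits_per_sec / 8 * rtt_ms / 1000))
echo "${bdp_bytes}"       # 12500000 bytes, ~12 MB, in line with 12582912 above
```

Measure your actual RTT with ping and redo the arithmetic before settling on a buffer size.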

Turn on TCP window scaling, which allows the transfer window to grow beyond 64 KB:
# echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf
Enable timestamps as defined in RFC1323:
# echo 'net.ipv4.tcp_timestamps = 1' >> /etc/sysctl.conf
Enable selective acknowledgments (SACK):
# echo 'net.ipv4.tcp_sack = 1' >> /etc/sysctl.conf
By default, TCP saves various connection metrics in the route cache when the connection closes, so that connections established in the near future can use these to set initial conditions. Usually, this increases overall performance, but may sometimes cause performance degradation. If set, TCP will not cache metrics on closing connections.
# echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf
Set the maximum number of packets queued on the INPUT side when the interface receives packets faster than the kernel can process them:
# echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf
Now reload the changes:
# sysctl -p
Use tcpdump to view changes for eth0:
# tcpdump -ni eth0

Recommended readings: