NFS Optimization

===Server Side===
====/etc/exports====
<syntaxhighlight lang=bash>
/mnt/raid5 192.168.15.142/32(rw,async,no_root_squash)
</syntaxhighlight>
async - dramatic throughput increase, but dangerous: the server acknowledges writes before committing them to disk, so data can be lost or corrupted if the server restarts uncleanly.
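
If you edit /etc/exports on a running server, the new export options can be applied without restarting the NFS daemons:
<syntaxhighlight lang=bash>
# Re-export everything in /etc/exports, applying any changed options
exportfs -ra
</syntaxhighlight>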


====Tuning /etc/sysctl.conf====
<syntaxhighlight lang=bash>
net.core.rmem_default = 262144
net.core.rmem_max = 262144
#
# Increase the fragmented packet queue length
net.ipv4.ipfrag_high_thresh = 524288
net.ipv4.ipfrag_low_thresh = 393216
</syntaxhighlight>
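
Settings in /etc/sysctl.conf are applied at boot; to load them immediately on a running system:
<syntaxhighlight lang=bash>
# Reload kernel parameters from /etc/sysctl.conf
sysctl -p
</syntaxhighlight>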


<syntaxhighlight lang=bash>
# Disable TCP selective acknowledgements and TCP timestamps
# to shave a little per-packet processing overhead
echo 0 > /proc/sys/net/ipv4/tcp_sack
echo 0 > /proc/sys/net/ipv4/tcp_timestamps
</syntaxhighlight>
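These /proc writes take effect immediately but are lost on reboot; set net.ipv4.tcp_sack = 0 and net.ipv4.tcp_timestamps = 0 in /etc/sysctl.conf to make them persistent.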


====TCP Segmentation offload====
This offloads TCP segmentation to the network card, taking some of the TCP overhead off the CPU, if your card supports it.
<syntaxhighlight lang=bash>
# ethtool -K ethN tso on
</syntaxhighlight>
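
To check whether the card supports TSO and whether it is currently enabled, query the offload settings with the lowercase -k flag (uppercase -K sets, lowercase -k shows):
<syntaxhighlight lang=bash>
# Show the current offload settings for the interface
ethtool -k ethN | grep tcp-segmentation-offload
</syntaxhighlight>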


===Client Side===
====/etc/fstab====
NFSv4 Client:
<syntaxhighlight lang=bash>
192.168.15.20:/mnt/raid5 /mnt/raid5 nfs defaults 0 0
</syntaxhighlight>

NFSv3 Client:
<syntaxhighlight lang=bash>
192.168.15.20:/mnt/raid5 /mnt/raid5 nfs rsize=32768,wsize=32768,intr,hard 0 0
</syntaxhighlight>
[rw]size=32768 - read/write transfer size in bytes; the NFSv3 maximum<br/>
intr - if the mount drops, you'll still be able to ^C out of whatever operation you're running<br/>
hard - retry NFS requests indefinitely instead of returning an error to the application; safer for data integrity than soft<br/>
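
After mounting, you can confirm which options the client actually negotiated (the server may clamp rsize/wsize below what you requested):
<syntaxhighlight lang=bash>
# Mount the share and list the mount options in effect
mount /mnt/raid5
nfsstat -m
</syntaxhighlight>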
