Distributed Systems tutorial 10



 NETWORK FILE SYSTEM (NFS)
Introduction

Ø  NFS, the Network File System, is probably the most prominent network service using RPC. It allows users to access files on remote hosts in exactly the same way as they would access local files. This is made possible by a mixture of kernel functionality on the client side (which uses the remote file system) and an NFS server on the server side (which provides the file data). This file access is completely transparent to the client, and works across a variety of server and host architectures.
NFS offers a number of advantages:
Ø  Data accessed by all users can be kept on a central host, with clients mounting this directory at boot time. For example, you can keep all user accounts on one host, and have all hosts on your network mount /home from that host. If NFS is installed alongside NIS, users can then log into any system and still work on one set of files.
Ø  Data consuming large amounts of disk space may be kept on a single host. For example, all files and programs relating to LaTeX and METAFONT could be kept and maintained in one place.
Ø  Administrative data may be kept on a single host. No need to use rcp anymore to install the same stupid file on 20 different machines.

Preparing NFS
Ø  Before you can use NFS, be it as server or client, you must make sure your kernel has NFS support compiled in. Newer kernels have a simple interface on the proc filesystem for this, the /proc/filesystems file, which you can display using cat:
          $ cat /proc/filesystems
          minix
          ext2
          msdos
          nodev   proc
          nodev   nfs
Ø  If nfs is missing from this list, then you have to compile your own kernel with NFS enabled. Configuring the kernel network options is explained in the section ``Kernel Configuration''.
Ø  For older kernels (prior to 1.1), the easiest way to find out whether your kernel has NFS support enabled is to actually try to mount an NFS file system. For this, you can create a directory below /tmp and try to mount a local directory on it:
     # mkdir /tmp/test
     # mount localhost:/etc /tmp/test
Ø  If this mount attempt fails with an error message saying ``fs type nfs not supported by kernel'', you must build a new kernel with NFS enabled. Any other error message is harmless, as you haven't configured the NFS daemons on your host yet.
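The /proc/filesystems check above can also be scripted. A minimal sketch follows; the sample listing is a hypothetical snapshot, and on a live system you would read /proc/filesystems directly as noted in the comment:

```shell
# Hypothetical snapshot of /proc/filesystems; on a live system use:
#   filesystems=$(cat /proc/filesystems)
filesystems='minix
ext2
msdos
nodev   proc
nodev   nfs'

# grep -w matches "nfs" only as a whole word, so a line such as
# "nodev   nfs" matches but an unrelated name like "nfsd" would not
if printf '%s\n' "$filesystems" | grep -qw nfs; then
    echo "kernel has NFS support"
else
    echo "NFS missing: rebuild the kernel with NFS enabled"
fi
```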

Mounting an NFS Volume

Ø  NFS volumes are mounted very much the way usual file systems are mounted. You invoke mount using the following syntax:
              # mount -t nfs nfs_volume local_dir -o options
 nfs_volume is given as remote_host:remote_dir. Since this notation is unique to NFS file systems, you can leave out the -t nfs option.
Ø  There are a number of additional options that you may specify to mount upon mounting an NFS volume. These may either be given following the -o switch on the command line, or in the options field of the /etc/fstab entry for the volume. In both cases, multiple options are separated from each other by commas. Options specified on the command line always override those given in the fstab file.
Ø  A sample entry in /etc/fstab might be
     # volume              mount point       type  options
     news:/usr/spool/news  /usr/spool/news   nfs   timeo=14,intr

        This volume may then be mounted using
     # mount news:/usr/spool/news

Ø  In the absence of an fstab entry, NFS mount invocations look a lot uglier. For instance, suppose you mount your users' home directories from a machine named moonshot, which uses a default block size of 4K for read/write operations. You might decrease the block size to 2K to suit a link with a smaller datagram size limit by issuing
     # mount moonshot:/home /home -o rsize=2048,wsize=2048

Ø  The list of all valid options is described in its entirety in the nfs(5) manual page that comes with Rick Sladkey's NFS-aware mount tool (which can be found in Rik Faith's util-linux package). The following is an incomplete list of those you will probably want to use:

   rsize=n and wsize=n
These specify the datagram size used by the NFS clients on read and write requests, respectively. They currently default to 1024 bytes, due to the limit on UDP datagram size described above.

  timeo=n
This sets the time (in tenths of a second) the NFS client will wait for a request to complete. The default value is 0.7 seconds.
  hard
Explicitly mark this volume as hard-mounted. This is on by default.
  soft
Soft-mount the volume (as opposed to hard-mounting it).
  intr
Allow signals to interrupt an NFS call. Useful for aborting when the server doesn't respond.
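Several of these options are typically combined in a single fstab entry. A sketch, reusing the moonshot example from above (the host name and paths are illustrative):

```shell
# /etc/fstab entry: 2K read/write blocks, a 1.4-second timeout,
# soft-mounted and interruptible so a dead server doesn't hang clients
# volume          mount point   type  options
moonshot:/home    /home         nfs   rsize=2048,wsize=2048,timeo=14,soft,intr
```

With this entry in place, `mount /home` picks up the options automatically.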
Ø  If you want to provide NFS service to other hosts, you have to run the nfsd and mountd daemons on your machine. As RPC-based programs, they are not managed by inetd, but are started up at boot time, and register themselves with the portmapper. Therefore, you have to make sure to start them only after rpc.portmap is running. Usually, you include the following two lines in your rc.inet2 script:
            if [ -x /usr/sbin/rpc.mountd ]; then
                    /usr/sbin/rpc.mountd; echo -n " mountd"
            fi
            if [ -x /usr/sbin/rpc.nfsd ]; then
                    /usr/sbin/rpc.nfsd; echo -n " nfsd"
            fi
Ø  The ownership information of files an NFS daemon provides to its clients usually contains only numerical user and group IDs. If both client and server associate the same user and group names with these numerical IDs, they are said to share the same uid/gid space. For example, this is the case when you use NIS to distribute the passwd information to all hosts on your LAN.
Ø  On some occasions, however, they do not match. Rather than updating the uids and gids of the client to match those of the server, you can use the ugidd mapping daemon to work around this. Using the map_daemon option explained below, you can tell nfsd to map the server's uid/gid space to the client's uid/gid space with the aid of ugidd running on the client.
ugidd is an RPC-based server, and is started from rc.inet2 just like nfsd and mountd.
            if [ -x /usr/sbin/rpc.ugidd ]; then
                  /usr/sbin/rpc.ugidd; echo -n " ugidd"
            fi
Ø  While the options above apply to the client's NFS configuration, there is a different set of options on the server side that configures its per-client behavior. These options must be set in the /etc/exports file.
Ø  By default, mountd will not allow anyone to mount directories from the local host, which is a rather sensible attitude. To permit one or more hosts to NFS-mount a directory, it must be exported, that is, it must be specified in the exports file. A sample file may look like this:
             # exports file for vlager
             /home             vale(rw) vstout(rw) vlight(rw)
             /usr/X386         vale(ro) vstout(ro) vlight(ro)
             /usr/TeX          vale(ro) vstout(ro) vlight(ro)
             /                 vale(rw,no_root_squash)
             /home/ftp         (ro)
Ø  The host name is followed by an optional, comma-separated list of flags, enclosed in parentheses. These flags may take the following values:
insecure
Permit non-authenticated access from this machine.

unix-rpc
Require UNIX-domain RPC authentication from this machine. This simply requires that requests originate from a reserved internet port (i.e. the port number has to be less than 1024). This option is on by default.

secure-rpc
Require secure RPC authentication from this machine. This has not been implemented yet. See Sun's documentation on Secure RPC.

kerberos
Require Kerberos authentication on accesses from this machine. This has not been implemented yet. See the MIT documentation on the Kerberos authentication system.

root_squash
This is a security feature that denies the super user on the specified hosts any special access rights by mapping requests from uid 0 on the client to uid 65534 (-2) on the server. This uid should be associated with the user nobody.

no_root_squash
Don't map requests from uid 0. This option is on by default.
ro
Mount file hierarchy read-only. This option is on by default.
rw
Mount file hierarchy read-write.

link_relative
Convert absolute symbolic links (where the link contents start with a slash) into relative links by prepending the necessary number of ../'s to get from the directory containing the link to the root on the server. This option only makes sense when a host's entire file system is mounted; otherwise, some of the links might point to nowhere, or even worse, to files they were never meant to point to.
This option is on by default.
link_absolute
Leave all symbolic links as they are (the normal behavior for Sun-supplied NFS servers).




map_daemon
This option tells the NFS server to assume that client and server do not share the same uid/gid space. nfsd will then build a list mapping IDs between client and server by querying the client's ugidd daemon.
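As an example of how these flags combine, a single /etc/exports line might look like this (the host name vstout is reused from the sample file above for illustration):

```shell
# /etc/exports: export /home read-write to vstout, mapping root to
# the unprivileged user nobody (root_squash) and translating other
# uids/gids via the client's ugidd daemon (map_daemon)
/home    vstout(rw,root_squash,map_daemon)
```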

The Automounter

Ø  Sometimes it is wasteful to mount all NFS volumes users might possibly want to access, either because of the sheer number of volumes to be mounted, or because of the time this would take at startup. A viable alternative to this is a so-called automounter: a daemon that automatically and transparently mounts NFS volumes as needed, and unmounts them after they have not been used for some time.
Ø  One of the clever things about an automounter is that it is able to mount a certain volume from alternative places. For instance, you may keep copies of your X programs and support files on two or three hosts, and have all other hosts mount them via NFS. Using an automounter, you may specify all three of them to be mounted on /usr/X386; the automounter will then try each of them in turn until one of the mount attempts succeeds.
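On Linux systems, one way to express such a replicated mount is with the autofs automounter. A sketch follows, assuming autofs is installed; the host names reuse the exports example above, and the file names are illustrative:

```shell
# /etc/auto.master: direct map -- mount points are given by
# absolute paths in the map file /etc/auto.x386 (name illustrative)
/-    /etc/auto.x386

# /etc/auto.x386: a replicated entry; the automounter tries each
# server in turn until one mount of /usr/X386 succeeds
/usr/X386    -ro    vale:/usr/X386 vstout:/usr/X386 vlight:/usr/X386
```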
