Cluster Computing System: Installation and Administration Guide

Ekasit Kijsipongse
Shobhna Srivastava
Sissades Tongsima

High Performance Computing Laboratory
National Electronics and Computer Technology Center
Ministry of Science, Technology and Environment
112 Phahon Yothin Rd., Klong Luang, Pathumthani 12120, Thailand

September 30, 2002

Contents

1 Prologue
  1.1 Computing environment

2 Cluster Computing Hardware
  2.1 Introduction
  2.2 File server
  2.3 Graphics workstation
  2.4 Miscellaneous
  2.5 Overall features

3 Linux Installation
  3.1 Booting the installation program
  3.2 Selecting the right mouse
  3.3 Types of Installation
  3.4 Partitioning Hard Disk
  3.5 Choosing which partition to be formatted
  3.6 Linux Loader Installation (LILO)
  3.7 Networking
  3.8 Account Configuration
  3.9 Authentication Configuration
  3.10 Selecting Packages
  3.11 X Configuration Tool
  3.12 Start installing
  3.13 Installation complete

4 Optimizing 3D Graphics Card
  4.1 Introduction
  4.2 XFree86 4.0.1
  4.3 Installation Steps for XFree86 4.0.1
  4.4 Nvidia Driver for Linux
  4.5 Installing the Driver
  4.6 Modifying XF86Config

5 Interconnection Networking
  5.1 Introduction
  5.2 Choosing IP Addresses
  5.3 Setting Up the Software
  5.4 Network Configuration

6 Network Information Service (NIS)
  6.1 Introduction
  6.2 How NIS works
  6.3 Setting Up the Portmapper
  6.4 Setting up an NIS Server (Master)
  6.5 Setting up slave NIS server
  6.6 Setting up NIS Client

7 Network File System
  7.1 Introduction
      7.1.1 User space NFS server
      7.1.2 Kernel-space NFS server
  7.2 Setting Up NFS for the Impatient
  7.3 Setting up an NFS server
  7.4 Setting Up NFS Client

8 Autofs: Automounting File System
  8.1 Introduction
  8.2 Autofs setup
      8.2.1 Prerequisite
      8.2.2 Configuration Files
      8.2.3 How to Start and Stop?

9 IPTables
  9.1 Introduction
  9.2 Basic Format
      9.2.1 Tables
      9.2.2 Commands
      9.2.3 Match
      9.2.4 Target/Jump
  9.3 Configuration File
  9.4 Examples

10 Configuring IP Masquerade
  10.1 Introduction
  10.2 Configuring

11 Disk Quota Implementation
  11.1 Introduction
  11.2 Quota Installation
      11.2.1 Linux with preinstalled quota software
      11.2.2 Linux without preinstalled quota software
  11.3 Details of quotas
  11.4 Assigning Quotas
  11.5 Points to remember

12 Network Time Protocol
  12.1 Introduction
  12.2 Connecting to the Server
  12.3 Configuring RedHat Linux As the Server or Client
  12.4 Starting/Stopping NTP Service

13 RAID: Redundant Array of Inexpensive [Independent] Disks
  13.1 Introduction
  13.2 Setting Up RAID Level 0
  13.3 Setting Up RAID Level 5
  13.4 Performance Tuning
  13.5 Recovery in RAID Level 5

14 User Administration
  14.1 Add New User
  14.2 Delete User

15 LessTif
  15.1 Introduction
  15.2 Installation

16 libGLw: OpenGL Widget Library
  16.1 Introduction
  16.2 Installation

List of Figures

1.1 Multicomputer architecture
1.2 Cluster computing system from user's view
2.1 Cluster system for Thailand Tropical Diseases Research Programme
3.1 Introduction screen for RedHat 7.0
3.2 Mouse configuration interface
3.3 Showing different kinds of installation: GNOME, KDE, Server, Custom, and Upgrade
3.4 Disk Druid graphical user interface for RedHat 7.0
3.5 Showing partition table's information
3.6 Interface for Linux Loader configuration
3.7 Networking Configuration interface
3.8 Setup account for super-user and regular users
3.9 Authentication configuration
3.10 Categories of different packages in RedHat 7.0
3.11 Monitor configuration interface
3.12 Graphics card configuration interface
3.13 Monitor configuration interface
3.14 The final screen that you will see before leaving the installation
5.1 Using bus topology to connect multiple PCs to form a cluster
5.2 4-node cluster computing system using completely connected topology
5.3 Assigning private IP addresses to the 4-node cluster using completely connected interconnection topology
5.4 Swapping the transmission and receiving ports for a twisted pair wire
5.5 The 4-node cluster connecting to the Internet through a gateway
7.1 Exporting the directory /usr/local from machine cluster3 via NFS to cluster2 machine
10.1 Assigning one masquerade node to control the incoming and outgoing packets from/to the Internet
13.1 Demonstration of RAID level 0 using four hard disks and four stripes for breaking up data
13.2 Demonstration of RAID level 1
13.3 Demonstration of RAID level 4
13.4 Demonstration of RAID level 5
13.5 Showing how Left-symmetric algorithm operates
13.6 Showing how Left-asymmetric algorithm operates

Chapter 1

Prologue

The computer market trend shows that the price-performance ratio of commodity computer and network hardware has improved significantly over the past 10 years. The performance of a single off-the-shelf CPU is now catching up with that of high-end CPUs; in terms of clock speed, for example, one can expect a processor running at or above 2 GHz in the near future. This suggests that parallel computational systems built from commodity components can compete with buying CPU time on costly supercomputers. With this motivation, many research groups are interested in building an in-house cluster computing system to cover the supercomputer CPU time needed by their projects.

Cluster computing [?] is very similar to the notion of a multicomputer, in which two or more computers are connected via an interconnection network (see Figure 1.1), as presented in many works in the literature [?, ?]. Each computer in this system, called a node, has its own CPU and memory. The concept of cluster computing has been adopted and implemented by many research organizations, with system sizes ranging from 2 to more than 512 nodes, reflecting a high degree of scalability. This class of computer can be regarded as one big supercomputer rather than as a network of workstations (NOW) in which many users own and operate the individual consoles (see Figure 1.2). Users benefit greatly from this monolithic perspective (a single machine) as opposed to stealing CPU time from other workstations at night, when nobody is logged on, to complete their parallel jobs.

1.1 Computing environment

The target operating system for the cluster computing machine is the free UNIX-like operating system Linux. Like other PCs running Linux, this cluster system does not contain any custom hardware component that Linux does not support.

Figure 1.1: Multicomputer architecture

Figure 1.2: Cluster computing system from user's view

The Symmetric Multiprocessor (SMP) feature is recognized by most Linux distributions. To form a parallel computer, a parallel computing environment needs to be set up. Freely available message passing libraries, e.g. PVM, MPICH and LAM, can be installed to create a usable parallel virtual supercomputer. Furthermore, for numerical computation purposes, parallel numerical libraries, e.g. PETSc, can be provided. These message passing libraries may require users to have a deep understanding of how to parallelize their applications. Doing so helps improve the performance of parallel applications running on the cluster computing system; unfortunately, training users to achieve such a goal is costly. Like many typical supercomputers, this cluster computing system can provide services to many users. We can use queueing management software, e.g. the Distributed Queueing System (DQS), to manage jobs submitted by different users. The parallelization scale for this top-level management software is coarse-grained compared with the message passing libraries. This software suite can bridge the gap between using existing sequential codes and writing new parallel codes. In other words, running sequential programs on the cluster computing system can still provide parallelism: two or more programs can be run in parallel under the queueing management software.

Chapter 2

Cluster Computing Hardware

2.1 Introduction

This chapter lists the main components used in assembling the 3-node cluster system for the Thailand Tropical Diseases Research Programme (T2). The system comprises three PCs, or nodes, each of which is a dual Pentium III 850 MHz machine. One node is designated as a file server and houses 300 Gigabytes of external SCSI hard disks. Figure 2.1 conceptually demonstrates the structure of this cluster system. The node called ‘‘teetwo’’ is a backend file server serving two frontend nodes, ‘‘linda’’ and ‘‘arnold’’. All three computers are connected via a high-speed interconnection network, fast ethernet at 100 Mbps. Only the two frontends are visible to users logging in. The following sections describe the detailed specifications of this system.

2.2 File server

The hardware of the teetwo file server is tabulated in Table 2.1.

Figure 2.1: Cluster system for Thailand Tropical Diseases Research Programme

HARDWARE       UNIT  DESCRIPTION
CPU            2     Intel Pentium III 850 MHz running on 100 MHz Bus
Motherboard    1     Tyan Thunder i840 (S2520) with 2 sets of LVD Ultra-160 Internal Cable and 2 sets of LVD Terminators
RAM            4     Kingston SDRAM 128 MB Bus 133 MHz
HardDisk       4     Seagate Ultra-160 SCSI 72 GB (ST173404LW)
LAN Card       1     Intel EtherExpress Pro 100+
Graphics Card  1     Asus AGP-V3800 TNT2 Vanta SDRAM 16 MB
CD-ROM         1     HP IDE CD Writer (8X,8X,32X)
Floppy Disk    1     FDD 1.44 MB
Case           1     Net server ATX Case with 2 sets of Redundant 300 W Power Supply
Keyboard       1     Microsoft PS/2 Keyboard
Mouse          1     Microsoft PS/2 Mouse

Table 2.1: File server hardware description

Highlight features of the file server hardware are listed as follows:

1. The motherboard has a dual-CPU shared-memory architecture.

2. Both CPUs work together, so several programs can run at the same time.

3. The SCSI controller is a dual SCSI Ultra-160 controller with a maximum throughput of 320 Megabytes/sec.

4. We use software RAID to help improve I/O performance by accessing the hard drives in parallel.

5. The power supply for the file server is redundant: if one unit goes down, the other keeps serving the system without interruption.

2.3 Graphics workstation

The hardware of the linda and arnold graphics workstations (their specifications are identical) is tabulated in Table 2.2.

HARDWARE       UNIT  DESCRIPTION
CPU            2     Intel Pentium III 850 MHz/100 MHz Bus
Motherboard    1     Tyan Thunder i840 (S2520) with 2 sets of LVD Ultra-160 Internal Cable and 2 sets of LVD Terminators
RAM            4     Kingston SDRAM 128 MB Bus 133 MHz
HardDisk       1     Seagate Ultra-160 SCSI 9 GB (ST39236LW)
LAN Card       2     Intel EtherExpress Pro 100+
Graphics Card  1     Asus AGP-V6800 GeForce256 DDR 32 MB
Floppy Disk    1     FDD 1.44 MB
Keyboard       1     Microsoft PS/2 Keyboard
Mouse          1     Microsoft PS/2 Mouse
Monitor        1     Viewsonic G810
Sound Card     1     Creative SoundBlaster Vibra128
Speaker        1     Creative Cambridge SoundWorks

Table 2.2: Graphics workstation hardware description

Highlight features of the graphics workstation hardware are listed as follows:

1. The motherboard has a dual-CPU shared-memory architecture.

2. Both CPUs work together, so several programs can run at the same time.

3. The SCSI controller is a dual SCSI Ultra-160 controller with a maximum throughput of 320 Megabytes/sec.

4. The graphics card uses the GeForce256 DDR Graphics Processing Unit developed by Nvidia. This chip supports and optimizes both 2D and 3D OpenGL graphics rendering.

2.4 Miscellaneous

An Uninterruptible Power Supply (UPS) from Socomec (EGYS 2000VA/1200W) is provided, as well as a KVM switch which lets the three nodes share a keyboard, a monitor, and a mouse.

2.5 Overall features

1. The cluster computing technology increases the capability of a regular PC by combining several PCs together.

2. The secondary storage of the proposed system can be expanded up to 4 Terabytes.

3. All the software installed on this system is free software.

4. The price/performance ratio is low when compared with other commercial solutions.

5. The system is equipped with many development software packages which users can use to develop high performance scientific applications in the future.

Chapter 3

Linux Installation

Currently the Linux operating system is bundled and distributed by many vendors as so-called distributions, such as RedHat, SuSE, Caldera, Mandrake, Turbo Linux, etc. Different distributions have their own strengths and weaknesses, but choosing one distribution over another should not cause any difference in terms of performance. Due to its popularity and well-organized packaging, we have chosen RedHat Linux for the target cluster system. For more information about the pros and cons of different distributions, please refer to http://www.linux.org. Since there are several installation methods provided by RedHat Linux, the following information focuses only on the major steps of the installation procedure. RedHat Linux provides both graphical (GUI) and text user interfaces for installing the software. For a novice the GUI is a good choice. However, focusing only on the GUI would not help much if the interface changes drastically. Hence, this document concentrates on the few points which are important to a successful installation.

3.1 Booting the installation program

It seems that every installation instruction must begin with this topic, so we give you a bit of advice before actually doing it. There are two ways to boot the installation program:

Linux bootable CD: If you have a Linux bootable CD, things are as simple as "put the CD in, reboot the machine and follow the instructions given".

Linux boot disk: If you do not have a CD drive, you will likely install via the network. What you really need to make this work is the boot floppy, which can be obtained from your nearest Linux FTP site:

# ftp opensource.thai.net
Connected to opensource.thai.net.
220 jedi.links.nectec.or.th FTP server (BeroFTPD 1.3.4(1)
530 Please login with USER and PASS.
530 Please login with USER and PASS.
KERBEROS_V4 rejected as an authentication type
Name (opensource.thai.net:stongsim): ftp
331 Guest login ok, send your complete e-mail address as password.
Password:
230 Guest login ok, access restrictions apply.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> cd pub/linux-tle/current/dosutils
ftp> get rawrite.exe
ftp> cd ../images
ftp> get bootnet.img

After performing the above steps, you should obtain two files: rawrite.exe and bootnet.img. The first is used to dump (block-by-block) the image (bootnet.img) to a floppy disk. This program is a DOS utility and can be invoked when you run DOS or Windows:

C:> rawrite
Enter disk image source file name: bootnet.img
Enter Target diskette drive: a
Please insert a formatted diskette into drive A: and press -Enter-

Another way to create a boot disk is to use a unix utility program called dd. If you have access to a Linux machine and have the bootnet.img image stored on it, the following shows the alternative:

# dd if=bootnet.img of=/dev/fd0

This assumes that bootnet.img is in the directory where you invoke dd and that your default floppy drive is accessed through the unix device /dev/fd0. Once you have the boot media, you are ready to start the installation process. Simply put the floppy or CD in the drive and reboot the machine. Make sure that the computer BIOS is set to include that drive at the beginning of the boot sequence. Figure 3.1 shows the introduction screen.

Figure 3.1: Introduction screen for RedHat 7.0

If you don't want to use the GUI for installation, you can enter the boot command "text" to continue in text mode.

3.2 Selecting the right mouse

Most mice on the market now use the PS/2 protocol (the PS/2 mouse port on the back of the computer is located right beside the keyboard port, which is also PS/2). A recommended mouse is one that has three buttons; the middle button is usually used to paste copied text, which is very handy. If you do not have a three-button mouse, i.e. yours has only two buttons, don't forget to select "Emulate 3 Buttons". Note that selecting the mouse type here only applies to using the mouse in non-graphics mode (the Linux console). Using the mouse in the graphics environment, called the "X Window Environment", will be discussed next. Figure 3.2 demonstrates the graphical user interface for mouse configuration.

Figure 3.2: Mouse configuration interface

3.3 Types of Installation

RedHat Linux offers various kinds of installation: "GNOME Workstation", "KDE Workstation", "Server", "Customization", and "Upgrade". The default installation type is "GNOME Workstation", which sets up the GNOME desktop environment, i.e. the window manager and other utilities displayed on the desktop comply with the GNOME standard look and behavior. Similar comments apply to "KDE Workstation", except that KDE is the default environment. For a standalone system either of these options is sufficient. However, these options include only a limited set of utility programs, and some of the uninstalled programs may be needed in the future. Of course these programs can be installed at a later time. If the target system has plenty of disk space, choosing the "Customization" option and selecting everything to be installed is a good choice. In summary, if it is not "Customization", a set of preselected packages will be installed. From our experience, it is likely that we will have to install other packages which are not part of the preselected software. We recommend choosing "Customization" and installing everything to minimize this hassle. Finally, the "Upgrade" option is used for upgrading a previously installed RedHat Linux system. It only upgrades the software packages in the current system; if new software is bundled with the new version of RedHat Linux, it will not be installed with this selection. Reinstalling the system software to its own system partition, separated from user and local software space, can be a good alternative. The interface for this part is shown in Figure 3.3.

Figure 3.3: Showing different kinds of installation: GNOME, KDE, Server, Custom, and Upgrade

3.4 Partitioning Hard Disk

RedHat Linux provides easy-to-use tools for partitioning hard disks ("Disk Druid" and fdisk). The Disk Druid tool, shown in Figure 3.4, seems to be easier to use than fdisk; however, for experienced users fdisk gives more freedom to partition. Please note that you can add a partition by moving the focus to the "Add" button (in the GUI you can click on the button) and pressing enter. The "Edit" button is only for changing the mount point (see the later description for more information about this term) of an existing partition. If you want to change anything else, you must delete the partition using the "Delete" button and recreate it using "Add". The two most important partitions which you should create are the root and swap partitions. The root partition is for storing all the software packages you selected in the previous step. The size of this partition can take up almost the whole hard disk, leaving a small portion for the swap partition. For example,

Figure 3.4: Disk Druid graphical user interface for RedHat 7.0

if the hard disk has a capacity of 20 Gigabytes, about 19 Gigabytes or more can be used for this partition. Make sure that the allocated partition is mounted at "/"; this means that you must enter "/" in the entry box of the partitioning program when prompted for a mount point. The next step of the installation process assigns the allocated disk to the directory "/" as specified. The mounting task is done internally by the RedHat Linux installation script. After mounting, the script will create various subdirectories which are used to store programs, configuration scripts, etc. The following are important directories which store the system files: /boot, /usr, /bin, /lib, /usr/X11R6, /etc, /var. The whole file system will be generated to support the multiuser environment. By default the directory /home is created for storing the different user directories; this can be changed when a new user is created. Please refer to the later section for more information about account administration. The swap partition serves as system virtual memory. One example where swap space is useful is when physical memory is exhausted and a new program is invoked: some of the idle processes (programs that were executed but have had no activity for a certain period of time) will be moved to the swap partition. The concept of virtual memory makes it possible to run a program whose size is bigger than physical memory. We normally allocate a swap partition whose size is double the physical memory. For example, if your physical memory is 512 Megabytes, you should reserve 1 Gigabyte of hard disk for the swap partition.
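If you later want to confirm how much memory and swap the installed system actually ended up with, two standard commands give a quick overview. This is only a hedged aside; the output format varies slightly between releases:

# free -m        # physical memory and swap usage, in megabytes
# swapon -s      # list the active swap partitions and their sizes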

3.5 Choosing which partition to be formatted

At this step, it is recommended that you select every partition except the swap one to be formatted (see Figure 3.5). However, if you just want to re-install the software and do not want to remove the user directories, you can select only the partitions which contain the system files to be formatted. This is possible only if you specify at least one partition for user files, e.g., /home should be put in another partition. You can also create /usr/local to store other local programs which do not come with the Linux distribution.

Figure 3.5: Showing partition table’s information

3.6 Linux Loader Installation (LILO)

In order to boot up your system, you must install the Linux Loader (LILO). This part of the installation is demonstrated in Figure 3.6. If you skip this procedure, make sure that you have created a boot disk or that there is another way to bring up the Linux system without using LILO. There are two places to install LILO.


Figure 3.6: Interface for Linux Loader configuration

The master boot record (MBR): The MBR is the special area of the disk which the computer's BIOS reads to start the boot process. This is the recommended place to install LILO, since it is the starting point before any operating system can be loaded. Note that an operating system is like any other program: it has to be loaded into memory before it can run. After installation, LILO takes control of the boot process and loads the chosen operating system, i.e., LILO can manage multiple operating system images and presents a list of images which a user can select to run. When booting a system with LILO installed in the MBR, LILO prompts you with LILO: at which you can enter the name of the image you want to boot. The configuration file (lilo.conf) for this routine is kept under the /etc directory. If you want to change the setup of this program, please update this file and rerun /sbin/lilo to propagate the change to the MBR.

The first sector of the root partition: This is recommended if you are going to use another boot loader, e.g., the OS/2 boot manager or the NT boot manager, to manage different operating systems. The external boot loader takes control first and loads Linux if its image is selected. For more information on configuring LILO please see the manpage lilo.conf.
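For reference, a minimal /etc/lilo.conf might look like the sketch below. The disk device names, kernel file name and label are placeholders for illustration only; adapt them to your own layout and remember to rerun /sbin/lilo after editing the file.

boot=/dev/hda                   # install LILO in the MBR of the first IDE disk
map=/boot/map
install=/boot/boot.b
prompt                          # show the LILO: prompt at boot time
timeout=50                      # wait 5 seconds before booting the default image
default=linux

image=/boot/vmlinuz-2.2.16-22   # kernel image (example version only)
        label=linux             # boot label entered at the LILO: prompt
        root=/dev/hda1          # partition mounted as /
        read-only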


The boot label shown in the LILO installation screen is an alias for a particular boot image (operating system). At boot time, LILO waits for one of these labels to be entered. (To see the list of available boot labels at the LILO prompt, press the Tab key. DO NOT install LILO if you have Windows NT installed; in that case, make sure that you have selected the "create a boot disk" option.)

3.7 Networking

The following information should be requested from your local network administrator before proceeding with the installation in this section. The configuration interface which needs this information is shown in Figure 3.7.

Figure 3.7: Networking Configuration interface

Hostname: a name which you want to call your computer, e.g., teetwo. (If you do not wish to test the networking features of Linux, you can still specify a hostname in the provided entry box; if nothing is entered, your computer will be known as "localhost".)

Domainname: a name which represents the network of your organization, e.g., bio.mahidol.ac.th. Hence your computer will be recognized by other machines in the system by Hostname + Domainname: teetwo.bio.mahidol.ac.th

IP address: This address will be assigned by the network administrator. With this address you will be able to access your organization's network.

Gateway IP address

Domain Name Service (DNS) address

Use the above information to fill in the entry list in the network configuration interface. Note that you are not strictly required to complete this step in order to complete the installation; setting up networking can also be done after every package is installed. This part of the installation merely configures the system so that it can communicate with other machines. The installation script attempts to do this so that your system is ready to connect to the Internet once everything is installed.
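The DNS address you are given ends up in /etc/resolv.conf. The fragment below is only an illustrative sketch; both the search domain and the nameserver address are hypothetical values and must be replaced by the ones your administrator provides:

# /etc/resolv.conf (example values only)
search bio.mahidol.ac.th        # default domain appended to unqualified names
nameserver 192.0.2.53           # IP address of your DNS server (placeholder)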

3.8 Account Configuration

The Linux installation program will ask for a root password. (A root password should be at least six characters and should not be a word in a dictionary, your name, a phone number, or anything else that can be easily guessed.) You must enter this password and confirm it by keying the same password into the entry box called "confirm". The information given in the graphical user interface is straightforward. The account configuration screen is displayed in Figure 3.8. Furthermore, you should also create an account for a regular user to log in with when the installation is complete. (It is very dangerous to have only the root account to access the system. The root user is a privileged user who can cause the most damage to the system if not careful, and it is also prone to having hackers breach the system. Logging in with such a privilege should be done only for system maintenance or administration.)

Figure 3.8: Setup account for super-user and regular users

3.9 Authentication Configuration

The interface for this part is shown in Figure 3.9. Basically, you should select the following two options:

Enable MD5 password: This option allows a long password (up to 256 characters) to be used instead of just 8 characters or less. This increases the password complexity and makes a password very difficult to crack.

Enable shadow password: Even though passwords are encrypted, the password file is still readable by anybody on the system. Enabling this option separates the passwords into /etc/shadow, which is readable only by root.

Figure 3.9: Authentication configuration

The last option in the authentication configuration list is NIS. We recommend delaying this setting until after the cluster system has been assembled. For more information about NIS, please refer to the Network Information Service chapter (Chapter 6).

3.10 Selecting Packages

Packages are grouped and categorized into several components based on their relationships and functions (e.g., C Development, X Window System, Printer Support, etc.). Choose the components which you want to include, or choose everything if you have enough disk space. Selecting individual packages yourself may cause conflicts arising from software dependencies, i.e., in order to install one piece of software, you must also have another piece of software that provides a service to the first. Note that there are some packages which are required by the system, such as the kernel and some required system libraries; you will not be able to change their status (they are selected by default). To make it easier for you, please select everything and let the computer install everything onto your computer.
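If you skip a component and need one of its packages later, it can be added from the installation CD with rpm. This is only a hedged sketch: the mount point and the package file name (gcc is used here as an example) depend on your CD and RedHat release.

# mount /mnt/cdrom                                  # mount the RedHat CD
# rpm -ivh /mnt/cdrom/RedHat/RPMS/gcc-*.rpm         # install a package that was not preselected
# rpm -q gcc                                        # verify that the package is now installed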

Figure 3.10: Categories of different packages in RedHat 7.0


3.11 X Configuration Tool

Most Linux distributions come with the graphical user interface called the X Window System. RedHat Linux ships the latest development version of the X server, XFree86, which supports more state-of-the-art graphics cards. For example, graphics cards that use the GeForce 256 chip by Nvidia are not supported by the previous version (XFree86 3.3.6) but are now supported in XFree86 4.0.1. For more information on how to set up X, please refer to Chapter 4. The X configuration interface is demonstrated in Figures 3.11–3.13.

Set up the monitor. To set up the graphics environment, you must first select the kind of monitor used in your system. After detecting the type of monitor, the installation script will try to match the monitor frequencies (vertical and horizontal) against the monitor database. If it cannot find the right match, please do not be alarmed: simply check the monitor's manual and enter the correct frequencies manually. Please be warned that similar monitor models may differ significantly in their underlying frequencies. If you are not certain about the frequencies, please do not proceed, since doing so may overclock the monitor and damage or destroy it.

Configure the video adapter. Configuring the video hardware (graphics card) is not very difficult. If the graphics card is supported, configuration should be very simple. The graphics card will be probed by the X configuration tool, which reports any video hardware you have in the system. If you don't have a supported card, i.e., the probe shows that your card is not listed, here is the comment given by RedHat: "... If you have technical knowledge about your card, you may choose Unlisted card and attempt to configure it by matching your card's video chipset with one of the available X servers ..." If your card is supported, you will be asked for the amount of memory your video card has. Of course you won't destroy the card if you choose the wrong amount of memory; however, the card might not deliver its full functionality. Once everything is set, we recommend that you test your configuration options. You can also choose to customize your X configuration: this part of the installation allows you to change the resolution and the color depth of the display. This part is optional and can be postponed until later.
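The horizontal and vertical frequencies discussed above end up in the Monitor section of the X configuration file. The fragment below is only a sketch with made-up values; always take HorizSync and VertRefresh from your monitor's manual:

Section "Monitor"
    Identifier  "MyMonitor"
    HorizSync   30-96        # horizontal sync range in kHz (check your manual)
    VertRefresh 50-160       # vertical refresh range in Hz (check your manual)
EndSection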

Figure 3.11: Monitor configuration interface

Figure 3.12: Graphics card configuration interface


Figure 3.13: Monitor configuration interface

3.12 Start installing

After configuring everything mentioned above, it is time to start installing (copying the files to your computer). If you are installing Linux over some existing files and are not certain about removing them, terminating the installation here is a good idea. To cancel the installation process, simply use the Ctrl + Alt + Del soft reboot key binding. Once you continue the installation, your hard drive(s) will be written to (updating the partition table).

3.13 Installation complete

Once the installation is complete, remove the boot floppy or CD and press return to reboot the machine into Linux.

Figure 3.14: The final screen that you will see before leaving the installation


Chapter 4

Optimizing 3D Graphics Card

4.1 Introduction

In order to utilize the power of a graphics card optimized for OpenGL, the new version of the X server, XFree86 4.0.1, should be installed (some of the new cards may not be supported by previous versions of the X server). Some of the new video cards may even come with drivers provided by their manufacturers. The ASUS-V6800 video card is equipped with the GeForce256 DDR graphics processing unit produced by the NVIDIA company. Even though the company does not give out many detailed instructions on how to install the drivers, NVIDIA is one of the few manufacturers which officially supports the use of their products in the Linux environment.

4.2 XFree86 4.0.1

The X server which comes with the RedHat 6.2 distribution is version 3.3.6, which does not support the new OpenGL-optimized graphics card GeForce256. The new version of XFree86 has just come out and it does support the new chipset. We need to install the precompiled libraries and drivers for the X server over the existing one. Of course, the very first step is downloading the underlying software. The latest release of XFree86 can be downloaded from http://www.xfree86.org. This website lists numerous mirror sites which you can use to download the software in case the Internet traffic to the selected site is congested. It is advised to download binary files which were compiled using the glibc-2.1 library. Download all the files from the target directory; note that some of these files may not be needed in some cases. The following are the required packages that must be installed:

1.  Xinstall.sh   The installer script
2.  extract       The utility for extracting tarballs
3.  Xbin.tgz      X clients/utilities and run-time libraries
4.  Xlib.tgz      Some data files required at run-time
5.  Xman.tgz      Manual pages
6.  Xdoc.tgz      XFree86 documentation
7.  Xfnts.tgz     Base set of fonts
8.  Xfenc.tgz     Base set of font encoding data
9.  Xetc.tgz      Run-time configuration files
10. Xvar.tgz      Run-time data
11. Xxserv.tgz    XFree86 X server
12. Xmod.tgz      XFree86 X server modules

The following list shows the optional packages which you can choose to install if you need them:

1.  Xfsrv.tgz     Font server
2.  Xnest.tgz     Nested X server
3.  Xprog.tgz     X header files, config files and compile-time libs
4.  Xprt.tgz      X Print server
5.  Xvfb.tgz      Virtual framebuffer X server
6.  Xf100.tgz     100dpi fonts
7.  Xfcyr.tgz     Cyrillic fonts
8.  Xflat2.tgz    Latin-2 fonts
9.  Xfnon.tgz     Some large bitmap fonts
10. Xfscl.tgz     Scalable fonts (Speedo and Type1)
11. Xhtml.tgz     HTML version of the documentation
12. Xps.tgz       PostScript version of the documentation
13. Xjdoc.tgz     Documentation in Japanese

4.3 Installation Steps for XFree86 4.0.1

To install XFree86 4.0.1, the following steps must be performed by the privileged root user. Note that if you are using this document as a guideline to upgrade the X server, make sure that you back up your working X server, namely the /usr/X11R6 and /etc/X11 directories, before installing.

1. You can check whether the downloaded packages are the right ones for your system by running the script Xinstall.sh; first change to the directory where you keep the downloaded files and perform the following:

# ./Xinstall.sh -check

2. If everything goes well, i.e. the output of the above invocation reports the correct set of binaries that you just downloaded, you can now install the software using the same command but with no options:

# ./Xinstall.sh

3. The installer program will walk you through the process. You need to answer the questions prompted by the program. The following summarizes what you may be informed of or asked:

You are recommended to back everything up before installing over the existing XFree86 software.

The installer attempts to detect whether you are installing from within an X session. This is done by checking whether the $DISPLAY shell variable is set.

The OS version, the C library and the output executable file format are checked.

In the new version of XFree86, all X configuration files are moved to the /etc/X11 directory. The old /usr/X11R6/lib/X11 still remains for compatibility purposes, but its files are linked to /etc/X11.

4. After copying all these files to their proper locations, you now need to create a configuration file for the new X server. Two methods are advised to create this configuration file:

(a) Running the XFree86 server with the option -configure. Note that unlike the previous XFree86, there is only one X server, called XFree86, in this new version.

# XFree86 -configure

(b) Using a utility program called xf86config.

5. The configuration file generated by the above methods is not ready to be used yet. You should edit the file to specify the right kind of graphics device the X server is controlling. The following section discusses how to install the nvidia driver so that the new X server can utilize the advanced features of graphics cards which use a graphics processor from NVidia.

4.4 Nvidia Driver for Linux

Nvidia has released drivers for their products. The released driver includes support for the latest GeForce 256 DDR as well as TNT2-, TNT-, and Quadro-based graphics accelerators. These are the popular series of their graphics processing units (GPUs) used in many well-known graphics cards. The driver improves the 2D functionality of the X server and adds OpenGL acceleration, AGP support, support for most flat panels, and 2D multiple monitor support. The driver package has two major components: a kernel module and XFree86 4.0.1 modules for 2D and 3D acceleration. The current version of the driver at the time of writing is 0.9-5, and there are two files needed for installing the driver:

NVIDIA_GLX-0.9-5.tar.gz
NVIDIA_kernel-0.9-5.tar.gz

Both of these files can be downloaded directly from the Nvidia website www.nvidia.com. Note that they also provide binaries and source files in both the RPM (Redhat Package Manager) format and the regular tarred-gzipped format. In my opinion, the driver will be included in the next XFree86 release.

4.5 Installing the Driver

The standard installation procedures apply to the two packages: unpack them and either compile the source files (if you have the sources) or copy the binaries into the proper places. The kernel driver package NVIDIA_kernel produces a kernel module called NVdriver.o which is inserted into the kernel when the X server runs. The other package provides the driver for XFree86 4.0.1 to use with the kernel driver; it supports 2D and 3D graphics and video acceleration. Both installations must be performed as root. Another point that should be clarified here is that the source code differs between non-SMP (single CPU) and SMP (multiple CPU) systems, so please make sure to download the right one for your system. Currently the available driver sets are:

1. For RedHat uni-processor Linux: NVIDIA_GLX-0.9-5.i386.rpm and NVIDIA_kernel-0.9-5.i386.rpm

2. For RedHat multi-processor Linux: NVIDIA_GLX-0.9-5.i386.rpm and NVIDIA_kernel-smp-0.9-5.i386.rpm

3. For any 2.2.x Linux (in RPM format): NVIDIA_GLX-0.9-5.src.rpm and NVIDIA_kernel-smp-0.9-5.src.rpm

4. For any 2.2.x Linux (in tarred-gzipped format): NVIDIA_GLX-0.9-5.tar.gz and NVIDIA_kernel-smp-0.9-5.tar.gz

The kernel module can be compiled and installed by the following steps:

# tar zxvf NVIDIA_kernel-0.9-5.tar.gz
# cd NVIDIA_kernel-0.9-5
# make

The above procedure will create the NVdriver module and insert it into the kernel. The above commands have been tested on RedHat Linux version 6.2 with no errors; however, they should also work under other versions running a Linux kernel newer than the 2.2.12 product release. For the XFree86 4.0.1 module, the installation can be done similarly. Installing the GLX package will create the following files:

/usr/X11R6/lib/modules/drivers/nvidia_drv.o
/usr/X11R6/lib/modules/extensions/libglx.so.1.0.5
/usr/lib/libGL.so.1.0.5
/usr/lib/libGLcore.so.1.0.5

These files may conflict with the ones that come with XFree86 4.0.1, so you should remove any of the following previously installed libraries:

/usr/X11R6/lib/modules/extensions/libGLcore.a
/usr/X11R6/lib/modules/extensions/libglx.a
/usr/lib/libGL.so
/usr/X11R6/lib/libGL.so
/usr/X11R6/lib/libGLcore.so

In the future this fix will not be necessary; however, in this release the renaming/replacing scheme is the cleanest approach. To check that the provided files are in use, look at the log file /var/log/XFree86.0.log to see whether the libglx module reports an NVIDIA Corporation copyright. Installing the package from the tarred-gzipped format involves the typical unpacking and making. Since this part installs new libraries into the system, executing ldconfig as root is also required. The following shows how to compile and install this part of the driver:

# tar zxvf NVIDIA_GLX-0.9-5.tar.gz
# cd NVIDIA_GLX-0.9-5
# make
# ldconfig -v
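After the X server has been restarted with the new driver, a quick way to confirm that the NVIDIA components are actually in use is sketched below. This is a hedged example; the exact log wording differs between driver releases:

# lsmod | grep -i nvdriver                    # the NVdriver kernel module should be listed
# grep -i nvidia /var/log/XFree86.0.log       # the glx/nvidia modules should report an NVIDIA copyright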

4.6 Modifying XF86Config

To use the new driver nvidia_drv.o, at least one entry in /etc/X11/XF86Config must be modified. (For XFree86 4.0.1, the file XF86Config-4 is also acceptable as the X11 configuration file; if both XF86Config and XF86Config-4 are present, the latter is used to describe the configuration for XFree86 4.0.1, at least in RedHat 7.0.) The following is one portion of the XF86Config file generated by running XFree86 -configure as mentioned above:

Section "Device"
    Identifier "Mycard"
    Driver "nv"
    # Insert Clocks lines here if appropriate
EndSection

The pound sign "#" is widely used to begin a comment line in many configuration files and scripts. The line Driver "nv" tells the X server to load the driver module nv_drv.o from the /usr/X11R6/lib/modules/drivers directory. Replacing this line with Driver "nvidia" allows us to use the new 2D module:

Section "Device"
    Identifier "Mycard"
    Driver "nvidia"
    # Insert Clocks lines here if appropriate
EndSection

OpenGL support is provided through the new 3D module glx. This module comes from the NVIDIA_GLX package and has the same name as the one XFree86 4.0.1 provides. After resolving the conflict described above, the new glx module should be loaded properly. Make sure that Load "glx" is present in the Module section of the XF86Config file, i.e.:

Section "Module"
    ...
    Load "glx"
    ...
EndSection

Chapter 5

Interconnection Networking

5.1 Introduction

Setting up the interconnection network for a small cluster requires some effort to successfully put the system together. Normally, interconnecting computers to build a cluster system requires at least one network interface card (NIC) per PC and one network switch or hub to exchange information among the PCs. This is the bus topology, which is the most commonly used in many do-it-yourself cluster systems and elsewhere. The underlying data exchange device, such as an ethernet switch, has to avoid contention during data transfer operations, which happen quite often on cluster systems. Using this kind of device helps promote the scalability of the system, i.e., more PCs can be added to the system by connecting them to the switch (see Figure 5.1).

For a small cluster system which has at most 4 PCs, a completely connected network topology (see Figure 5.2) can be used to interconnect the machines. The reason we restrict this to four machines is the limit on the number of PCI slots on the PC motherboards in the market. However, this setting needs no network switches. Two NICs are paired up to create a link, and there is a unique link for every possible pair of nodes in the proposed cluster system. Mathematically speaking, for an n-node cluster system we need exactly C(n,2) = n(n-1)/2 links, i.e., that many directly connected pairs of NICs, to create a completely connected topology cluster. Hence the 3-node and 4-node cluster systems need exactly 3 and 6 links (6 and 12 NICs) respectively. It is highly advised that anyone building a cluster system containing a larger number of nodes should seriously consider using the combination of NICs and switch(es).


Figure 5.1: Using bus topology to connect multiple PCs to form a cluster

Figure 5.2: 4-node Cluster computing system using completely connected topology


5.2 Choosing IP Addresses

Internet Protocol (IP) addresses are represented in a "dotted quad" format, with each byte separated by a "." character and converted to a decimal number (0–255). Each interface (NIC) is normally assigned a unique IP address (multiple interfaces of a single machine having the same IP address is legal in some cases). These addresses are split into a common portion called the network portion, while the remaining digits belong to the host portion. The netmasking technique is used to separate the network portion out of these addresses by bitwise ANDing the netmask with them. For example, if we use 255.255.255.0 as the netmask, the two different IP addresses 10.0.0.12 and 10.0.0.21 have 10.0.0.x as their network portion and x.x.x.12 and x.x.x.21 as their two different host portions.

This document focuses on building a small cluster using the completely connected interconnection scheme. To select an IP address for each interface, we propose the following approach:

Since these addresses are local and will be used only by nodes in the cluster system, private IP addresses such as 10.x.x.x (where x represents any decimal number from 0 to 255) are used. Each NIC must have a unique IP address, and the first three dotted quads represent the system network address, i.e., 10.0.0.x. The host portion of each NIC is represented by a unique number (making all IP addresses different). Note that each machine will therefore have multiple IP addresses, one per NIC. The selection scheme for these numbers is described next.

Each machine is numerically and sequentially named, e.g., the names 1, 2, 3, and 4 are used for a four-node cluster system. In this example, each machine is equipped with 3 NICs. For a NIC on machine 1 that has a direct connection to a NIC on machine 2, the NIC on machine 1 gets the host portion "12", which makes its IP address 10.0.0.12. This rule is generalized to all the NICs in the system (see Figure 5.3).

Figure 5.3: Assigning private IP addresses to the 4-node cluster using completely connected interconnection topology

Note that when setting up the physical link (either twisted-pair or optical fiber cable) between any two NICs, please make sure that the cable is wired so that the line coming out of the transmission port (TX) on one NIC goes directly to the receiving port (RX) on the other side and vice versa (a crossed cable). For twisted-pair cable, the RJ45 connectors on both ends of the cable have to be made carefully; Figure 5.4 demonstrates how to arrange the wires to create a "crossed" cable for a direct connection between two PCs. For optical fiber cables, since there are only two main fibers, crossing the RX and TX can be done easily by swapping the lines on the other end.

Figure 5.4: Swapping the transmission and receiving ports for a twisted pair wire
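To make the addressing scheme above easier to work with, the interface addresses can also be given symbolic names in /etc/hosts on every node. The fragment below is only a hypothetical sketch for node 1 of the 4-node example; the host names are made up for illustration:

# /etc/hosts fragment on node 1 (hypothetical names)
127.0.0.1    localhost
10.0.0.21    node2       # NIC on node 2 that faces node 1
10.0.0.31    node3       # NIC on node 3 that faces node 1
10.0.0.41    node4       # NIC on node 4 that faces node 1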

5.3 Setting Up the Software

The Linux kernel that comes with RedHat already has networking enabled, so it is not likely that you will have to recompile the kernel. The current Linux kernel (2.2.x) supports many NICs, such as cards from the 3COM, NE2000-compatible, or Intel vendors. For 10/100 Mbps ethernet cards, you should not need to do anything extra. If you do, we recommend that you buy a new set of supported ethernet cards, which are now very cheap and can save you from many tedious troubles. If you insist on recompiling the kernel, please consult the Kernel-HOWTO, especially the part on compiling network modules, for more information.
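Before going further, it is worth confirming that the kernel actually detected all of the installed NICs. A simple hedged check (device names and driver messages vary by card) is:

# dmesg | grep -i eth            # one line per detected interface, e.g. eth0, eth1, ...
# cat /proc/net/dev              # every interface known to the kernel, with traffic counters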

5.4 Network Configuration

Most Linux distributions provide a graphical user interface (GUI) for network configuration. For educational purposes, this document will not discuss the GUI tools but rather focus on how to change the system network configuration files manually. Using ifconfig is a typical way to configure networking devices, but it has to be repeated whenever the system is rebooted. As stated above, most ethernet drivers attempt to detect the cards and assign a device name eth[n] to each NIC on a first-come, first-served basis. The following demonstrates how each device can be configured manually:

# ifconfig eth1 10.0.0.12 netmask 255.255.255.255 up
# ifconfig eth2 10.0.0.13 netmask 255.255.255.255 up
# ifconfig eth3 10.0.0.14 netmask 255.255.255.255 up

Take the first line as an example: eth1 is set to the IP address 10.0.0.12 and the netmask is 255.255.255.255. The "up" in the command lines indicates that the interface is active (this option can be omitted since it is "up" by default). To deactivate the device, use the "down" option instead. There are many other options which may be of interest to you; for more information about this command, please consult the unix man page.

RedHat allows us to do network configuration (like the above steps) at boot time by modifying the configuration files under the /etc/sysconfig/ directory. There are several files located in this directory, but the ones we want to focus on are:

network: This file should contain the general network configuration. For example:

NETWORKING=yes
FORWARD_IPV4=false
HOSTNAME=vivian
DOMAINNAME=hpcc.nectec.or.th
GATEWAY=203.150.240.6
GATEWAYDEV=eth0

where NETWORKING indicates whether we want to enable networking on this host. FORWARD_IPV4 should be set to yes if you want this host to act as a gateway between the other hosts inside the cluster system and other networks (see also Configuring IP Masquerade, Chapter 10). HOSTNAME and DOMAINNAME are the name and domain of this host, e.g., in this case HOSTNAME + DOMAINNAME, or vivian.hpcc.nectec.or.th, becomes the symbolic name of this computer. GATEWAY is the IP address of a gateway to the Internet. GATEWAYDEV is the device name which is attached to the same network as the GATEWAY. Figure 5.5 indicates where gateways exist in the system connecting to the Internet.

Figure 5.5: The 4-node cluster connecting to the Internet through a gateway

static-routes: Due to the nature of the completely connected network, all NICs are connected in a point-to-point fashion. A single network address, e.g., 10.0.0.x, is used to define the system's internal network address. That is, to communicate among nodes in the cluster, only a host portion is needed to establish the link between any two parties. A static routing table must therefore be created to give a correct mapping between source and destination nodes. For RedHat Linux the file static-routes (located under /etc/sysconfig) should have entries in the format:

any host <dest> dev <device>

where <dest> is the IP address of the other end and <device> is the source machine's ethernet device name, e.g., eth0. For example, the following portion is used to set up the static-routes on node 1, which connects to nodes 2, 3, and 4 (recall that we name each machine with a logical number to determine the host name):

any host 10.0.0.21 dev eth1
any host 10.0.0.31 dev eth2
any host 10.0.0.41 dev eth3

ifcfg-eth[0...n] under the network-scripts sub-directory: Create a file called ifcfg-<device> for each ethernet device. For example, the files for devices eth0, eth1, eth2 will be named ifcfg-eth0, ifcfg-eth1 and ifcfg-eth2 respectively. Each file should contain

DEVICE=<device>
BOOTPROTO=static
IPADDR=<address>
NETMASK=<netmask>
ONBOOT=yes

where <device> represents the ethernet device name, e.g., eth0, eth1. The <address> is the IP address corresponding to the device, e.g., 10.0.0.12, etc. The <netmask> must be 255.255.255.255 if the device is point-to-point to the other host in the cluster system. A common mistake is to copy the file from one of your existing ones and forget to change the entry on the DEVICE line to match the real device name.
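As a concrete illustration of the template above, the ifcfg-eth1 file on node 1 of the example cluster (using the addressing scheme from Section 5.2) could look like this sketch:

# /etc/sysconfig/network-scripts/ifcfg-eth1 on node 1
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.0.12           # NIC on node 1 that links directly to node 2
NETMASK=255.255.255.255    # point-to-point link, so a host-only netmask
ONBOOT=yes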





After finishing the above setup, please reboot the system to activate the changes. For more detail please refer to sysconfig.txt, which comes with the initscripts package.
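A quick way to verify the configuration after the reboot (a hedged example using the node-1 addresses from this chapter) is to inspect the routing table and ping a neighbour:

# route -n                 # the table should show one host route per directly connected node
# ping -c 3 10.0.0.21      # node 2, as seen from node 1, should answer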


Chapter 6

Network Information Service (NIS)

6.1 Introduction

Most Linux systems nowadays are installed as part of a network of workstations or a beowulf cluster. To make it easier for a system administrator to manage information shared among the computers in the system, such as the password file, we need a network administrative service able to share such information with consistency and validity. The Network Information Service is a service that provides information which has to be known by all machines on the network. For example, login, password and group information is usually served by NIS, so that a user needs only a single login/password pair to log on to a whole cluster of computers.

6.2 How NIS works

There must be at least one NIS server, or a set of cooperating NIS servers where one is the master NIS server and all the others are called slave NIS servers, serving information to NIS clients. Slave servers are used to provide redundancy for NIS: one or more slave servers means higher availability. Whenever an NIS server goes down or is too slow in responding to requests, an NIS client connected to that server will try to find one that is up or faster. The master NIS server maintains the original copy of the NIS database in DBM format. Shared information, usually stored in ASCII files such as /etc/passwd, must be converted to DBM format using a program called makedbm, which comes with the software distribution. Any update to the ASCII files must be propagated to the NIS database accordingly. Slave servers, on the other hand, only have copies of the NIS databases. They must be notified of any change to the NIS databases and automatically retrieve the necessary changes in order to synchronize the databases.

6.3 Setting Up the Portmapper

The portmapper (man portmap) is a server that converts RPC program numbers into TCP/IP (or UDP/IP) protocol port numbers. It must be running so that NIS clients can make RPC calls to NIS servers. You can start it manually in Redhat with the following command:

# /etc/rc.d/init.d/portmap start

You can also use the /usr/sbin/setup program to tell linux to start the portmapper at boot time. For secure RPC, the time service should be enabled on all hosts. The following is a sample of what must appear in the file /etc/inetd.conf in order to enable the time service:

time    stream  tcp     nowait  root    internal
time    dgram   udp     wait    root    internal

Normally this service is disabled by default. To enable it, simply remove the pound sign in front of each line above (a pound sign "#" prepended to a line marks it as a comment). Make sure that you restart inetd so that the changes take effect. It should be noted that from version 7.0, RedHat changed the format of the internet service daemon configuration file, a.k.a. inetd.conf. In order to include the above setup, the file /etc/xinetd.conf should be modified by adding the following:

service time
{
        id              = time-stream
        type            = INTERNAL
        socket_type     = stream
        user            = root
        wait            = no
}

service time
{
        id              = time-dgram
        type            = INTERNAL
        socket_type     = dgram
        user            = root
        wait            = yes
}

Then, restart the xinetd server.
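To confirm that the portmapper is actually running and registered, a quick check is the following (the exact version numbers may differ on your system, but the output should include lines similar to these):

# rpcinfo -p localhost
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper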

6.4 Setting up an NIS Server (Master)

The following information pertains to RedHat linux (version 6.2). It might be useful for setting up an NIS server on other linux distributions with slight modifications; however, the authors do not guarantee it.

1. Check if the ypserv software is installed

# rpm -q ypserv

For Redhat 6.2, the above command should report "ypserv-1.3.x-x".

2. Assign the NIS domain name
Edit the NIS domain name in /etc/sysconfig/network by adding the following line

NISDOMAIN=<your domain>

where <your domain> represents your given domain name, e.g., "teetwo". Make sure that the change is active by either rebooting the system or running the command

# domainname <your domain>

3. Limit access from client machines
Edit /var/yp/securenets to include the IP or network addresses of the client machines that you want to allow to use NIS. Note that the entry

0.0.0.0     0.0.0.0

should be commented out since it provides access to everyone.

4. Setting up the system
Uncomment the following lines in /etc/ypserv.conf to increase security:

* : passwd.byname : port : yes
* : passwd.byuid  : port : yes

5. Create an NIS server list
Put all NIS server names (including master and slave servers) in /var/yp/ypservers, one per line.

6. Check the TCP wrapper
Make sure that your NIS server allows the other nodes in the cluster to access it. This is critical since the new (since RedHat 6.2) version of portmap has been compiled with the TCP wrapper library, i.e., unlike explicitly wrapped services such as imapd, popd, rshd, talkd and others, it has no entry in the inetd.conf file. If /etc/hosts.allow and /etc/hosts.deny are not set properly, the NIS client(s) will not be able to connect to the NIS server. Please read the system man page on hosts.allow/hosts.deny for more information. For example, /etc/hosts.deny should reject access to all services from other hosts by default, so it must contain only the line:

ALL: ALL

Then, we allow access on a host-by-host basis. For example, /etc/hosts.allow may contain a line:

ALL: 10.0.0.13

to allow host #1 to access all services which are secured by the TCP wrapper (see the chapter on Improving Security).

7. Start the NIS server
ypserv can be started manually by running the command:

# /usr/sbin/ypserv

or by the rc script:

# /etc/rc.d/init.d/ypserv start

8. Generate the NIS (YP) database
On the master, execute:

/usr/lib/yp/ypinit -m

The NIS database should be created under the directory /var/yp/<yourdomain>.

9. Update the database (map) file
If you need to update a map, run make in the /var/yp directory on the NIS master. This will update a map if the source file is newer, and push the files to the slave servers. Please don't use ypinit for updating a map.

10. Make the change permanent
Create an automatic startup script which will get this service up and running at boot time. This can be done by using the /usr/sbin/setup program or manually by the following commands (not recommended):

# cd /etc/rc.d/rc3.d
# ln -s ../init.d/ypserv S16ypserv
# ln -s ../init.d/yppasswdd S66yppasswdd

Reboot the machine to reflect any changes made above.
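As a concrete illustration of step 3 above (a sketch only; adjust the addresses to your own network), /var/yp/securenets for the 10.0.0.x cluster might contain:

# always allow the local host
255.0.0.0       127.0.0.0
# allow only the cluster's private network
255.255.255.0   10.0.0.0

Each line gives a netmask followed by a network address; any host not matching one of these lines is refused by ypserv.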

6.5 Setting up a slave NIS server

1. Check if the ypserv software is installed

# rpm -q ypserv

For Redhat 6.2, the above command should report "ypserv-1.3.x-x".

2. Assign the NIS domain name
Edit the NIS domain name in /etc/sysconfig/network by adding the following line

NISDOMAIN=<your domain>

where <your domain> represents your given domain name, e.g., "teetwo". Make sure that the change is active by either rebooting the system or running the command

# domainname <your domain>

3. Limit access from client machines
Edit /var/yp/securenets to include the IP or network addresses of the client machines that you want to allow to use NIS. Note that the entry

0.0.0.0     0.0.0.0

should be commented out since it provides access to everyone.

4. Setting up the system
Uncomment the following lines in /etc/ypserv.conf to increase security:

* : passwd.byname : port : yes
* : passwd.byuid  : port : yes

5. Check the TCP wrapper
Make sure that your NIS server allows the other nodes in the cluster to access it. This is critical since the new (since RedHat 6.2) version of portmap has been compiled with the TCP wrapper library, i.e., unlike explicitly wrapped services such as imapd, popd, rshd, talkd and others, it has no entry in the inetd.conf file. If /etc/hosts.allow and /etc/hosts.deny are not set properly, the NIS client(s) will not be able to connect to the NIS server. Please read the system man page on hosts.allow/hosts.deny for more information. For example, /etc/hosts.deny should reject access to all services from other hosts by default, so it must contain only the line:

ALL: ALL

Then, we allow access on a host-by-host basis. For example, /etc/hosts.allow may contain a line:

ALL: 10.0.0.14

to allow host #1 to access all services which are secured by the TCP wrapper (see the chapter on Improving Security).

6. Import the NIS (YP) database from the master server

/usr/lib/yp/ypinit -s <master server>

A copy of the database should be created under /var/yp/<yourdomain>.




7. Synchronize the database with the master server
The NIS database imported from the NIS master server requires synchronization: if there are changes in the master database, they should be distributed to all slave servers. The distribution is done with the yppush program, which is already invoked from /var/yp/Makefile on the master server. To enable the synchronization, change the NOPUSH variable in /var/yp/Makefile to:

NOPUSH=false

The next time you change the shared information, you can execute cd /var/yp; make on the master server to update the NIS database on both the master and the slave servers.

8. Start the NIS server
ypserv can be started manually by running the command:

# /usr/sbin/ypserv

or by the rc script:

# /etc/rc.d/init.d/ypserv start

9. Make the change permanent
Create an automatic startup script which will get this service up and running at boot time. This can be done by using the /usr/sbin/setup program or manually by the following commands (not recommended):

# cd /etc/rc.d/rc3.d
# ln -s ../init.d/ypserv S16ypserv

Reboot the machine to reflect any changes made above.

6.6 Setting up an NIS Client

1. Check that the NIS client software is installed

# rpm -q -a | grep ypbind

If the output shows that ypbind is installed, then you are all set to configure the client side.

2. Specify the NIS domain
Similar to what we did for the NIS server, we specify the NIS domain by updating the entry in /etc/sysconfig/network:

NISDOMAIN=<your domain>

and either invoke the following command or reboot the machine for it to take effect:

# domainname <your domain>

3. Update the configuration file
Now it is time to tell the client where your NIS server is. If you know the server, you can update /etc/yp.conf manually. The entry which we recommend is

domain <your domain> server <NIS server>

If there is more than one entry for a domain, the last one in the list will be used first. If the current NIS server is down, the NIS client will connect to the other servers in the list. For example,

domain teetwo server 10.0.0.3

specifies teetwo as the NIS domain while the NIS server's IP address is 10.0.0.3. Note: even though the instructions given in yp.conf say that the NIS server can be given as a host name which has an entry in the file /etc/hosts, we found that it is necessary to use the host's real IP address to tell the NIS client where the server is, as in the example above.

4. Start the NIS binding service
We can either manually start the service called "ypbind":

# ypbind

or use the rc script:

# /etc/rc.d/init.d/ypbind start

5. Make the change permanent
Create an automatic startup script which will get this service up and running at boot time. This can be done by using the /usr/sbin/setup program or manually by the following command:


# cd /etc/rc.d/rc3.d; ln -s ../init.d/ypbind S17ypbind

The above setting will not be effective right away. To activate the change, the ypbind script must be run manually, e.g., by invoking /etc/init.d/ypbind with the argument start. Otherwise, the machine must be rebooted to make the changes effective. Note that to run ypbind successfully, the directory /var/yp must exist. There should be no problem on a Redhat-installed system, since this directory is generated while installing the software. The following command checks whether the ypbind service is running:

# rpcinfo -u localhost ypbind
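Assuming the server from the example above, you can also verify the binding and the shared maps directly (the output shown is only indicative):

# ypwhich
10.0.0.3
# ypcat passwd | head -3

ypwhich prints the NIS server the client is currently bound to, and ypcat dumps a map served by NIS, here the passwd map.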


Chapter 7

Network File System

7.1 Introduction

The purpose of the network file system (NFS) is to make sharing of files over a network possible. This is one of the important features which define the character of a cluster computing system. It allows a file system on one machine to be mounted on other machines in the NFS-ready domain, and users are able to read/write files in this shared space. There are two implementations of NFS servers under linux: user-space and kernel-space. Each has some advantages and disadvantages over the other, which are shown below. As you will see, the kernel-space NFS server is preferred over the user-space one due to its performance. This document describes how to set up the NFS services (both client(s) and server(s)) using the kernel-space NFS server (nfsd.o), which is included in recent versions of Redhat.

7.1.1 User-space NFS server

Advantages

- The directory hierarchy exported by the user-space NFS server transparently includes any file systems mounted under the exported directory; thus the client sees the same directory hierarchy as is seen from the server.
- Since the server runs in user space, it is easy to do uid and gid mapping as well as to access DNS, NIS, NIS+, etc.

Disadvantages

- Since it runs in user space, it has to do extra memory copies between user and kernel space, and there is context-switch overhead. These can be very expensive.
- File and record locks over NFS are not available.
- The current implementation is not multithreaded, which impacts its performance.
- Since the export control is initialized at startup time, the server process has to be restarted when /etc/exports is changed.

7.1.2 Kernel-space NFS server

Advantages

- Since it runs in kernel space and makes RPC calls inside the kernel, it does not have to move memory between kernel and user space, so its memory operations are much more efficient, and there is no context-switch overhead.
- Since it is a kernel process, it can take advantage of kernel threads.
- File and record locks work across NFS. This is very important when you have a heterogeneous NFS environment.
- The file export control is implemented as a system call. When /etc/exports is changed, there is no need to restart the NFS server if there are no changes to mountd.

Disadvantages

- Unlike the user-space NFS server, each entry in /etc/exports can export only a directory in one file system. Any file systems mounted under the exported directory are not exported; a separate entry is needed for each mounted file system.
- Since it is not easy to do callbacks from the kernel, uid/gid mapping and other functionality which requires access to DNS, NIS, NIS+ and other user-space services is not supported.

7.2 Setting Up NFS for the Impatient

The following are the steps used to set up NFS on a newly installed linux machine. For more complete information about how to set up NFS in general, please consult the next section and/or the online HOWTO document.


General System Configuration

There are three machines, cluster1, cluster2, and cluster3, participating in the cluster computing project. The machine cluster3 is being used as the primary file server. The other two machines, namely cluster1 and cluster2, will share a file system owned by cluster3. The directory on cluster3 which we want to share is the /usr/local directory. Assume that an empty /usr/local directory, used as a mount point, exists on both cluster1 and cluster2. After the share is made, the /usr/local directory will simply be the space exported by cluster3. Figure 7.1 portrays the effect of making the /usr/local directory on cluster3 visible (available) to the cluster2 machine.

Figure 7.1: Exporting the directory /usr/local from machine cluster3 via NFS to cluster2 machine.

Step by Step Instructions

1. The following must be performed as a privileged user (root). On the file server side, update the file /etc/exports by adding the lines:

/usr/local    cluster1(rw)
/usr/local    cluster2(rw)

2. Make sure that the NFS server is running properly. The following may be overkill, but it gets the job done.

# /etc/rc.d/init.d/nfs stop
# /etc/rc.d/init.d/nfs start

3. Run the program exportfs to allow both cluster1 and cluster2 machines to mount the /usr/local directory. Usually exportfs is already included in /etc/rc.d/init.d/nfs, so you might skip this step. # /usr/sbin/exportfs

4. Login to cluster1 and cluster2 to perform the mounting operation individually on each machine. # mount -tnfs cluster3:/usr/local /usr/local

These steps should do the trick for most situations.
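To double-check the result from one of the clients (the output shown is only indicative), you can query the server's export list and confirm that the mount is in place:

# /usr/sbin/showmount -e cluster3
Export list for cluster3:
/usr/local cluster1,cluster2
# df /usr/local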

7.3 Setting up an NFS server

You should make sure that the networking is sane; that is, you should be able to telnet among the machines you want to register for the NFS service. If this does not function properly, please consult the documentation on how to set up networking on your system and/or your network administrator.

1. Start the portmapper
It is either called portmap or rpc.portmap and it should live in the /usr/sbin directory (on some machines it is called rpcbind). It should be started every time you boot your machine, so you need to make/edit the relevant rc scripts, which is outside the scope of this document. It is likely that the system you are currently using will have a script (portmap) for starting the


portmap service under the /etc/rc.d/init.d directory (see the note below about the setup tool). You can start portmap by hand now:

# /etc/rc.d/init.d/portmap start

Check that it lives by running ps aux and rpcinfo -p.

2. Start the NFS server
The core of the NFS server is two programs called mountd (rpc.mountd) and nfsd (rpc.nfsd). Before running these two programs we should tell the system how to share files by editing the file /etc/exports. As a general example, if you want to share the file system /usr/local, which resides on the machine cluster3, with the machines called cluster1 and cluster2, you should put the following in the exports file on the cluster3 machine:

/usr/local    cluster1(rw)
/usr/local    cluster2(rw)

The above configuration tells NFS to allow cluster1 and cluster2 to read from and write to the /usr/local file system (notice the (rw) option on each line). There are other options, such as read-only (ro), which can be specified to vary the sharing behavior. For more information please consult the man pages, e.g., try man exports. After this step, you are ready to start both the mountd and nfsd daemon programs. Running the two utilities should be very easy on a system installed from the Redhat Linux distribution. A single startup script, called nfs, located under the /etc/rc.d/init.d directory, is responsible for starting the two daemons above. To run it, simply execute the following commands as root:

# /etc/rc.d/init.d/nfs stop
# /etc/rc.d/init.d/nfs start

The first command stops the NFS service while the second command reactivates it. The alternative to issuing the two commands is to use the option restart. However, experience shows us that the restart option can be very unstable, i.e., the daemons are not brought up properly; issuing the two commands above somehow yields better results. As with other system services, the nfs script should be set to start when the machine boots, using the setup utility as described above.

Note: in most linux distributions such as Redhat, Mandrake etc., the setup process normally makes the portmap service start by default. Redhat linux also comes with a setup tool called /usr/sbin/setup which allows us to configure some system features, e.g., mouse/keyboard/sound/timezone configurations, and the system services, where you can select the services to be started at boot time.

3. Maintaining Export Files
Once you have everything running, you might sometimes need to change /etc/exports, which specifies the directories to share among machines. The utility called exportfs is used to create the export table from /etc/exports. The export table is stored in /var/lib/nfs/xtab, which is read by mountd when a remote system requests access to mount a file tree. So, after you change the /etc/exports file, run the command:

# exportfs -r

In Redhat, this utility is already invoked from /etc/rc.d/init.d/nfs, so in general you can run it by:

# /etc/rc.d/init.d/nfs reload
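As a variation on the /etc/exports entries shown above (a sketch only; adjust the path and addresses to your setup), the whole 10.0.0.x private network can be allowed in a single entry instead of listing the clients one by one:

/usr/local    10.0.0.0/255.255.255.0(rw)

After editing the file, run exportfs -r (or reload the nfs script) as described in step 3.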

7.4 Setting Up an NFS Client

1. Mounting Exported File Systems
On the client side, making those exported file systems available is quite simple. A system installed with Redhat Linux likely has an NFS-ready kernel. Put another way, the linux kernel must support foreign file systems other than the native ext2, for example NFS, VFAT, iso9660, etc. If you are lucky, you won't need to do anything further with the kernel to make this feature available. If everything goes well, you should be able (as root) to enter a mount command to connect the remote file system to your local file system. The new file system will be available at a mount point in the client's file system structure. Using the previous example, the server side has published, or exported, the file system /usr/local to both the cluster1 and cluster2 machines. The command below shows how to mount /usr/local located on cluster3 at the mount point directory /mnt on cluster1 or cluster2. Note that you must telnet to either of these machines to execute the command below as root.


# mount -tnfs cluster3:/usr/local /mnt

Make sure you have the /mnt directory as a mount point. The file system is now available under /mnt: you can cd there, ls in it, and look at the individual files. You will notice that it's not as fast as a local file system, but a lot more convenient than ftp. If, instead of mounting the file system, mount produces an error message like

mount: cluster3:/usr/local failed, reason given by server: Permission denied

then the exports file is wrong, or you forgot to run exportfs after editing the exports file. If it says

mount clntudp_create: RPC: Program not registered

it means that nfsd or mountd is not running on the server. This problem, however, may also stem from the configuration of hosts.allow or hosts.deny, which are used to prevent unauthorized access. Please take a look at the documentation pertaining to the tcp-wrappers for more information. Finally, unmounting the file system is simply a matter of invoking

# umount /mnt

2. Mounting at boot time
Recall that regular file systems can be detected and mounted every time a system is rebooted by modifying entries in the /etc/fstab file. Like other file systems, the NFS space can be mounted if the following entry is added to the fstab file (see man fstab):

cluster3:/usr/local    /mnt    nfs    rsize=1024,wsize=1024    0 0

To make your life easier, there are file system tools which allow us to modify this file given the template. If you are uncertain about how to modify the file, please look for a system administration program like linuxconf.

Chapter 8

Autofs: Automounting File System

8.1 Introduction

An automounting file system is one that mounts the needed file system on demand, and the entire process is transparent to the user. There are two kinds of automount software used in Linux. The first is 'amd', which is implemented in user space, and the second is 'autofs', which is implemented in kernel space. This manual will focus on autofs only. Using an automounting file system has at least two advantages:

- The client and server are loosely coupled, leading to increased reliability of your LAN, because servers whose file systems are not currently in use can remain offline.
- Since only the file systems in use are mounted, the others cannot be seen by users, which increases security for these servers.

8.2 Autofs setup

If you have a recent version of RedHat Linux it is most likely that it will have the autofs software installed. To check whether it is installed, type the following command:

rpm -qa | grep autofs

8.2.1 Prerequisite

The server from which you want to mount must have network file sharing (NFS or Samba) configured.


8.2.2 Configuration Files

The main configuration file for autofs is /etc/auto.master. For example:

/share    /etc/auto.share    --timeout=60

The first field is the mount point under which the file systems are mounted. The second field is the map file, which can have any name. The last field states that a file system can be unmounted 60 seconds after use. The map file /etc/auto.share has the following format:

kernel  -ro                 ftp.kernel.org:/pub/linux
tmp     -fstype=nfs,rw      rho:/&
*       -fstype=nfs,rw      rho:/home/&
SAT     -fstype=smbfs       ://terabyte/share/SAT

The first field is the key, which specifies the exact mount point. Next is the mount option, and last is the location it is mounted from. In the first entry, ftp.kernel.org:/pub/linux is the NFS export to be mounted under /share/kernel. The '&' in the second entry will be substituted by the key value 'tmp' and mounted under /share/tmp. In the third entry, '*' refers to any key value, for which '&' is substituted, allowing all home directories to be mounted. Note that the root mount point is /share, so the directories will be mounted under /share/... The last entry is an example of mounting a Samba file system under /share/SAT.

8.2.3 How to Start and Stop?

To start or stop autofs, use the following command:

/etc/rc.d/init.d/autofs start|stop
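Assuming the sample maps above are in place and autofs has been started, simply touching a key under /share should trigger the mount automatically; a quick (illustrative) check is:

# ls /share/tmp
# mount | grep /share

The first command causes the corresponding file system to be mounted on demand, and the second lists the automounted file systems currently active.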


Chapter 9

IPTables

9.1 Introduction

IPtables is a tool that performs packet filtering in the Linux 2.4 kernel. It is a replacement for ipchains in the previous kernel versions. All data transferred through networks is in the form of packets. The headers of the packets give us the information required for making routing decisions and other administrative details, while the actual data being transferred belongs to the body. To filter a packet, its header is examined and an appropriate action is taken. The question then arises: why do we need to examine the header and filter packets? The most important reason is to enhance the security of the network; for example, we might want to protect our system from malicious outsiders. Another reason is that we might want to restrict or control the usage of the resources that belong to our network; for example, we might want to allow only a limited amount of traffic to pass through.

9.2 Basic Format The basic format of the command is as follows: iptables [-t table] command [match] [target/jump].

9.2.1 Tables

The first option is the table. There are three kinds of tables, namely nat, mangle and filter.

Nat: This table is used for network address translation. It has three chains: PREROUTING, OUTPUT and POSTROUTING. The PREROUTING chain is used to alter packets as soon as they enter the firewall, the OUTPUT chain is used to alter locally generated packets, and the POSTROUTING chain is used to alter packets as they are leaving the firewall.

Mangle: This table is used to mangle packets and has two default chains, PREROUTING and OUTPUT. It should not be used to either filter packets or do any address translation. As with the nat table, PREROUTING is used to mangle packets as they enter the firewall, and OUTPUT is used to mangle packets that are generated locally. This table changes packets and how their headers appear, for example the TTL or TOS fields.

Filter: This table is used to actually filter packets; if the table option is not specified, a command is applied to this table. It has three chains: INPUT, OUTPUT and FORWARD. The INPUT chain is applied to packets destined for the local host, OUTPUT, as above, is applied to locally generated packets, and FORWARD is applied to packets that are routed through the host. The action taken on each chain can be DROP, LOG, ACCEPT or REJECT.


9.2.2 Commands

-A, --append
iptables -A INPUT ...
This command appends a rule at the end of the chain.

-D, --delete
iptables -D INPUT -p tcp --dport 80 -j DROP
iptables -D INPUT 1
There are two ways to delete a rule in a chain: the first is to specify the rule to be deleted, as in the first example; the second is to specify the number of the rule, as in the second example.

-R, --replace
iptables -R INPUT 1 -s 192.168.0.1 -j DROP
This is used to replace the old entry at a specific line.

-I, --insert
iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
This inserts the rule at the specified location.

-L, --list
iptables -L INPUT
This command is used to list the rules in the specified chain or table.

-F, --flush
iptables -F INPUT
This flushes all the rules in the specified chain or table. It is equivalent to deleting all the rules at once.

-N, --new-chain
iptables -N givenName
This adds a new chain named "givenName" to the specified table.

-X, --delete-chain
iptables -X givenName
This is used to delete the entire chain along with the rules in it.

-P, --policy
iptables -P INPUT DROP
This sets the default policy for the specified chain. It applies to all the packets that do not match any rule in the chain.

9.2.3 Match

This is used for extended packet matching modules. There are two options, -p (--protocol) and -m (--match), which are followed by further options. The protocols that can be matched are tcp, udp and icmp. The match options are mark, limit, owner, ttl, tos, state, etc. The three important ones are ttl, tos and state.

ttl and tos are self-explanatory. The state option tells which state of the packet is to be matched by the rule. There are four states: INVALID, ESTABLISHED, NEW and RELATED. The following are some examples of the usage of match:

iptables -A INPUT -m state --state RELATED,ESTABLISHED
iptables -A OUTPUT -m ttl --ttl 60
iptables -A INPUT -p tcp --dport 22

9.2.4 Target/Jump

When the rules in a chain are examined and there is no rule which the packet matches, then the default target (the chain policy) is put into action. The targets that are allowed are ACCEPT, DROP, QUEUE or RETURN; the policy target is set with -P. The jump option (-j) is similar: it specifies the target of a rule when the packet matches that rule.

9.3 Configuration File

The location of the configuration file is specified in the startup script /etc/rc.d/init.d/iptables. This configuration file is read by iptables when it starts, so to make changes permanent we have to edit this file. However, there is an option in the startup script, 'save', which saves the currently loaded rules to this file without our having to edit it manually. So after setting all the rules, run the following command:

/etc/rc.d/init.d/iptables save

The actual location of this file is /etc/sysconfig/iptables.

9.4 Examples

SSH

# Allow ssh
iptables -A INPUT  -i $IFACE -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o $IFACE -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT

WWW

# Allow www to 80.
iptables -A INPUT  -i $IFACE -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o $IFACE -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT

# Allow www to 443.
iptables -A INPUT  -i $IFACE -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o $IFACE -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT

TELNET

# Allow telnet outbound.
iptables -A INPUT  -i $IFACE -p tcp --sport 23 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o $IFACE -p tcp --dport 23 -m state --state NEW,ESTABLISHED -j ACCEPT

Note: To disallow everything else, we need to set the default policy to DROP.
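For completeness, the default-DROP policies mentioned in the note would be set as follows. Be careful to load them together with the ACCEPT rules above, otherwise you may lock yourself out of a remote session:

iptables -P INPUT   DROP
iptables -P OUTPUT  DROP
iptables -P FORWARD DROP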


Chapter 10

Configuring IP Masquerade

10.1 Introduction

IP masquerade is a technique that allows machines behind a firewall or a masquerade host to connect to the Internet even if they use private IP addresses (e.g., addresses beginning with 10.x.x.x, like our cluster system). It is also used to secure the internal network by hiding internal information such as IP addresses. Nobody outside the network knows about the internal machines, since they are hidden behind the masquerade host. Therefore, the security of this cluster system depends heavily on the strength of the masquerade host, which is normally desirable in most situations: in other words, only one door has to be secured, not many doors. Figure 10.1 shows the 4-node cluster system configuration where one node from the cluster serves as the only door to the Internet.

10.2 Configuring

Figure 10.1: Assigning one masquerade node to control the incoming and outgoing packets from/to the Internet.

At the gateway between the inside and outside of the cluster

1. Enable the IP forwarding option
A gateway is a machine that allows packets from one network to be forwarded to another network. To enable IP forwarding, assign 'yes' to the following variable in the file /etc/sysconfig/network. (Note that since RedHat 7.0, this setting has been changed to net.ipv4.ip_forward in the /etc/sysctl.conf file.)

FORWARD_IPV4=yes

2. Create the IPCHAINS configuration file
The file /etc/sysconfig/ipchains contains the rules which the gateway must respect in order to handle packets. The following rules specify the criteria for the gateway to masquerade only packets coming from inside the cluster to any place outside:

-P forward REJECT
-A forward -j MASQ -s 10.0.0.0/255.255.255.0 -d 0.0.0.0/0

where the first rule sets reject as the default policy and the second specifies the masquerading. See man ipchains for more options.

3. Make the change permanent
Users can activate/deactivate the ipchains rules at any time by running the rc script:

# /etc/rc.d/init.d/ipchains start
# /etc/rc.d/init.d/ipchains stop

However, to activate the ipchains rules at boot time, run the utility:

# setup

then enter the System services menu and follow the on-screen information to enable the ipchains service.

4. Reboot the machine
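On a Linux 2.4 kernel that uses iptables (Chapter 9) instead of ipchains, a roughly equivalent masquerading setup would be the following sketch (it assumes eth0 is the external device, as in the network configuration of Chapter 5):

# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING -s 10.0.0.0/255.255.255.0 -o eth0 -j MASQUERADE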

For each host inside the cluster


1. Set the masquerade host as the default gateway
The masquerade host acts like a gateway between the inside and outside of the cluster, as we configured in the above section. You might think that you can set the masquerade host as the default gateway by assigning it to the variable GATEWAY in the file /etc/sysconfig/network. However, that doesn't work in our case. Since we use point-to-point communication among the hosts inside the cluster, the masquerade host cannot be reached until the file /etc/sysconfig/static-routes has been processed, which always happens after /etc/sysconfig/network. So we reasonably move the default gateway setting into /etc/sysconfig/static-routes. Append the following line to it:

any net default gw <masquerade host>

where <masquerade host> is the IP address of the masquerade host that this host has a point-to-point connection to. For example, on hosts 2, 3 and 4, the masquerade host addresses are 10.0.0.12, 10.0.0.13 and 10.0.0.14 respectively.

2. Reboot the machine
Then try to connect to a place outside by using network tools such as ping, telnet, the web, etc.


Chapter 11

Disk Quota Implementation

11.1 Introduction

The main purpose of quota is to limit the disk space or the number of inodes used by a user or group. Disk space refers to the amount of space allocated to the user, whereas inodes refer to the number of files that a user can own. On every server where disk space is shared among users there is a need to restrict each user to a certain amount of disk space. This is done to prevent some users from hogging the entire disk space.

11.2 Quota Installation

11.2.1 Linux with preinstalled quota software

If you have a recent version of RedHat Linux it is most likely that it will have the quota software installed. If so, you can skip to the next section.

11.2.2 Linux without preinstalled quota software

1. First you have to reconfigure your kernel to turn on the quota option. You can use the command make menuconfig or make xconfig under the directory /usr/src/linux?.?.?:

Quota support (CONFIG_QUOTA) n [y]

For further details on how to set up kernel options refer to other manuals.

2. After this, you should install the downloaded package on your system using the following steps.


$ gzip -dc <quota-package>.tar.gz | tar xvf -
$ cd quota-tools        (or whatever directory the software is put in)
$ ./configure
$ make
$ su
# make install

3. Finally, you need to modify the init scripts to make sure that quotas are turned on at each boot. Quotas should be turned on only after the file systems in /etc/fstab have been mounted. The script is as follows:

# Check quota and then turn quota on.
if [ -x /usr/sbin/quotacheck ]
then
        echo "Checking quotas. This may take some time."
        /usr/sbin/quotacheck -avug
        echo " Done."
fi
if [ -x /usr/sbin/quotaon ]
then
        echo "Turning on quota."
        /usr/sbin/quotaon -avug
fi

This script may be placed in rc.local.

11.3 Details of quotas

Disk quotas are implemented per user (or per group) per partition. At boot time each partition is checked to see whether quota has been enabled on it or not. To enable user or group quotas for a given partition, we specify the usrquota or grpquota options in the fourth field of the fstab file:

/dev/hda2 /usr ext2 defaults,usrquota,grpquota 1 1

Then, in the root directory of that partition, we create the files 'quota.user' and 'quota.group' ('aquota.user' and 'aquota.group' for version 2 of the quota tools) using the touch command. The contents of these files are not in human-readable form. Most importantly, to initialize these files we have to run a quota check for both group and user. If we use the default -av option, it only checks the user quotas and utilizes the quota.user file. We use

quotacheck -avug

to check both user and group quotas. To activate the quotas we need to reboot the system. Now we can turn the quotas on and edit the quotas for each user in turn. For versions where the quota software is included, the quota option is on by default and there is a script that checks the quotas at boot time.

11.4 Assigning Quotas

Quotas can be assigned either to groups or to users. The command used to do so is edquota, located in /usr/sbin/edquota.

User quota (-u): /usr/sbin/edquota -u username
This opens an editor where you can assign the limits for inodes and disk space. It has three fields, Soft limit, Hard limit and Grace period, which are discussed below.

Group quota (-g): /usr/sbin/edquota -g groupname
Its usage is exactly the same as the user quota, except that it takes a group name instead of a user name.

Soft limit: This is the limit of the space or the inodes that the user has. Once past this limit, the user gets a warning about exceeding it.

Hard limit: This is the absolute limit placed on the user; under no condition can he/she exceed it. However, it only works when a Grace period is set.

Grace period: This is the time that is given before the soft limit is enforced on the user. The command used to set it is: /usr/sbin/edquota -t
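To inspect the resulting limits and the current usage (illustrative commands; exact paths may vary slightly between quota versions), you can use:

# /usr/sbin/repquota -a
# quota -u username

repquota prints a summary report for all quota-enabled file systems, while quota shows the limits and usage for a single user.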

11.5 Points to remember

1. "Currently the quota interface has diverged between the Linus 2.4.x kernels and the Alan Cox 2.4.x kernels. Running quota-enabled Samba compiled on an Alan Cox kernel (shipped by default with RedHat 7.x) works, but fails on a Linus kernel." http://de.samba.org/samba/whatsnew/samba-2.2.3.html

2. To implement quotas on the root file system we need to check and enable the quotas while the file system is in read-only mode.


Chapter 12

Network Time Protocol

12.1 Introduction

The Network Time Protocol (NTP) is used to synchronize the time of computers between clients and servers (or among peers). Since the protocol is designed to compensate for network latency, local clock drift and loss of connectivity to servers, it works with high reliability and can provide very accurate time. In a LAN environment it can provide accuracy within a few milliseconds, and over the Internet within a few tens of milliseconds. Without NTP, the clocks in computers, particularly PCs, tend to go out of synchronization in a very short time.

Servers in NTP are connected in a hierarchical structure where the first level (stratum) is called the Primary Server. The Primary Servers must connect directly to a reference time source such as an atomic clock. The second-level servers connect to the Primary Servers, the third-level servers connect to the second-level servers, and so on. The clocks in the higher levels, closer to the Primary Servers, are more accurate than those in the lower levels.

12.2 Connecting to the Server

There are lots of public NTP servers available around the world, of which only a few are first or second level, and those have very restrictive rules for anyone who wants to connect to them. The next levels (3-4) are easily available and have enough accuracy for most applications. In order to synchronize your machines with those servers, just run the NTP daemon, which speaks the NTP protocol to them. Note that some servers still require you to follow their rules before allowing the connection, e.g., they require you to send an email notification, etc. In RedHat Linux, the software daemon called ntpd should be installed by default. To check whether it is already installed, type the following command:

rpm -qa | grep ntpd

12.3 Configuring RedHat Linux As the Server or Client

The main configuration file is at /etc/ntp.conf, as shown in the following lines:

server ntp.adelaide.edu.au      # University of Adelaide, South Australia
server ntp0.cornell.edu         # Cornell University, Ithaca, NY
server ntp.shim.org             # Singapore

server 127.127.1.0              # local clock
fudge  127.127.1.0 stratum 10

restrict 127.0.0.1
restrict 10.0.0.0 mask 255.255.255.0 nomodify
restrict default ignore

driftfile /etc/ntp/drift
broadcastdelay 0.008

The server lines specify a list of NTP servers; you can have as many lines as needed to provide reliability. The restrict lines provide access control, giving each address/mask listed the stated permissions. In the example, localhost has no restriction, and the private network 10.x.x.x is allowed only to query and synchronize. Finally, with the default restrict line, all access from any other host is ignored. During the synchronization process, the clock of the machine shifts gradually towards the correct time in order to prevent a clock jump, which might harm time-dependent applications. This process takes time if the current time is very far from the correct time. In order to shorten the process, RedHat Linux allows you to use a program called ntpdate to set the clock directly to the correct time at system boot, before any applications start. The configuration file for ntpdate is /etc/ntp/step-tickers and contains a list of NTP servers, for example:

ntp.adelaide.edu.au
ntp0.cornell.edu
ntp.shim.org
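You can also run ntpdate by hand to step the clock once, while ntpd is not yet running (both use UDP port 123). For example, with one of the servers listed above:

# /usr/sbin/ntpdate ntp.adelaide.edu.au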

12.4 Starting/Stopping the NTP Service

Once you have configured ntpd and ntpdate, as root use the command:

/etc/rc.d/init.d/ntp start

or

/etc/rc.d/init.d/ntp stop

to start/stop ntpdate and ntpd. To have the NTP service started at every system boot, use the command:

/sbin/chkconfig ntp on

You can check whether the time is being synchronized with the NTP servers by using the command:

/usr/sbin/ntpq

At the ntpq prompt, type peers and wait for a result like the one shown below:

ntpq> peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 LOCAL(0)        LOCAL(0)        10 l    6   64  377    0.000    0.000   0.008
*cudns.cit.corne gps1.otc.psu.ed  2 u  113 1024  357  360.835   40.299  75.875
 202.56.133.180  0.0.0.0         16 u    - 1024    0    0.000    0.000 4000.00
+maverick.mcc.ac ntp2.ja.net      2 u  641 1024  267  880.344 -204.18  247.985
 bernina.ethz.ch 0.0.0.0         16 u    - 1024    0    0.000    0.000 4000.00
+129.127.40.3    ns.saard.net     3 u  132 1024  377  472.290   67.351 222.704

Then, type quit to exit from ntpq prompt. ntpq> quit


Chapter 13

RAID: Redundant Array of Inexpensive [Independent] Disks

13.1 Introduction

CPU performance is roughly doubling every 18 months, which is a much faster rate than that of disk performance. In 1988 a new concept of disk organization, which mimics the parallel processing paradigm, was introduced. This technology, called Redundant Array of Inexpensive Disks (RAID), was naturally adopted by the industry. This organization allows concurrent access (read/write) to hard disks. Intuitively speaking, a RAID system is a box (or boxes) full of hard drives controlled by either a hardware or a software RAID controller. In our proposed system, we use Linux's software RAID, which is available free of charge. Obviously, the software approach consumes some of the cycles of the processor running it; with current CPU technology, this concern does not seem to be major. In addition to appearing like a single large disk to applications, RAID technology offers different methods for distributing data over the disks to allow parallel I/O operations; these are known as RAID levels. The major levels discussed here are levels 0, 1, 4 and 5 (the term level is somewhat misleading since there is no hierarchy involved). RAID level 0 is shown in Figure 13.1. In this level, the virtual single-disk image is divided up into strips of k sectors each, with sectors 0 to k-1 being strip 0, sectors k to 2k-1 strip 1, and so on. RAID level 0 writes data to consecutive strips (4 strips per RAID block) in a round-robin fashion. Distributing data over the RAID disks like this is called "striping", and it allows information to be distributed over several physical disks. When reading one block of the RAID disk (4 strips), a single read command is spawned into

4 read commands that retrieve data from the four strip disks in parallel. Writing a data block is likewise striped over all the disks (see Figure 13.1 for more detail on the RAID strip layout). Therefore we get transparent parallel I/O service for applications.

Figure 13.1: Demonstration of RAID level 0 using four hard disks and four stripes for breaking up data

RAID level 1: This level is very similar in concept to level 0. This option, however, is a true RAID which offers both better performance and reliability. On a write, the data is written twice, on different disks, to create replica images. On a read, either copy of the disks (see Figure 13.2) can be used. With this characteristic, the read performance can be improved by up to a factor of two. If a drive crashes, its copy can be used to replace it.

Figure 13.2: Demonstration of RAID level 1

RAID level 4: This level also works with striping. Level 4 is like level 0 but with a parity strip written onto an extra hard disk. Figure 13.3 demonstrates how striping is organized in level 4. The parity strip is created by exclusive-oring 4 strips together, e.g., (P0-3) is created by exclusive-oring strips 0, 1, 2 and 3 together, giving a parity strip (P0-3) k bytes long. If a drive crashes, it can be recovered by using the exclusive-or of all the remaining drives and the parity drive (see Figure 13.3).

Figure 13.3: Demonstration of RAID level 4

RAID level 5: This level was introduced to diminish the load due to updates of data. This overhead occurs in level 4 since it is necessary to read all the drives to recalculate the parity data; the heavy load on the parity drive in level 4 creates a bottleneck. In RAID level 5 the parity strips are evenly distributed over all drives in a round-robin style.

Figure 13.4: Demonstration of RAID level 5

Linux's software RAID supports RAID levels 0, 1, 4 and 5. However, only the configurations for levels 0 and 5 are explained in this document, since the other two are considered less effective: level 1 has the smallest usable storage, though it provides the best redundancy, and level 4 is dominated by level 5 in terms of write performance.

13.2 Setting Up RAID Level 0

1. Check if the linux kernel already supports the software RAID driver


# more /proc/mdstat
Personalities : [raid0]
read_ahead 1024 sectors
unused devices: <none>

However, if RAID level 0 is compiled as a module, the Personalities field might be empty. You should then try to insert the RAID level 0 module

# insmod raid0

and eventually see the same result.

2. Edit the configuration file /etc/raidtab to add a new RAID device
An example for RAID level 0 is shown below.

raiddev /dev/md0
        raid-level              0
        persistent-superblock   1
        chunk-size              32k
        nr-raid-disks           3

        device                  /dev/sdb1
        raid-disk               0

        device                  /dev/sdc1
        raid-disk               1

        device                  /dev/sdd1
        raid-disk               2

where /dev/sdb1, /dev/sdc1 and /dev/sdd1 are free partitions of the Linux native or Linux raid autodetect types. See man raidtab for further detail.

3. Initialize the new RAID device
All previous data in the disk array will be destroyed, then the RAID superblock is created.

# mkraid --really-force /dev/md0

The RAID device /dev/md0 must be specified in /etc/raidtab.

4. Start the new RAID device
The command mkraid also starts the RAID device being initialized automatically. Check the started RAID device by:

# more /proc/mdstat
Personalities : [raid0]
read_ahead 1024 sectors
md0 : active raid0 sdd1[2] sdc1[1] sdb1[0] 244233600 blocks 32k chunks
unused devices: <none>

The size of the RAID device is reported as a number of blocks (each block has 1024 bytes). If the device was not started automatically, you can start it manually with the command:

# raidstart /dev/md0

5. Create the file system on the RAID device
The RAID device will appear to users just like a typical disk partition. The following command creates an ext2 file system on the RAID device.

# mkfs /dev/md0

6. Mount the file system into the directory tree

# mount /dev/md0 /RAID0

Finally, you must edit /etc/fstab to make the modification persistent, as shown below.

LABEL=/      /        ext2    defaults    1 1
none         /proc    proc    defaults    0 0
/dev/sda5    swap     swap    defaults    0 0
/dev/md0     /RAID0   ext2    defaults    1 2

13.3 Setting Up RAID Level 5

1. Check if the linux kernel already supports the software RAID driver

# more /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
unused devices: <none>

However, if RAID level 5 is compiled as a module, the Personalities field might be empty. You should then try to insert the RAID level 5 module

# insmod raid5

and eventually see the same result.

2. Edit the configuration file /etc/raidtab to add a new RAID device
An example for RAID level 5 is shown below.

raiddev /dev/md0
        raid-level              5
        chunk-size              32k
        parity-algorithm        left-symmetric
        persistent-superblock   1
        nr-raid-disks           4

        # Spare disks for hot reconstruction
        nr-spare-disks          0

        device                  /dev/sdb1
        raid-disk               0

        device                  /dev/sdc1
        raid-disk               1

        device                  /dev/sdd1
        raid-disk               2

        device                  /dev/sde1
        raid-disk               3

where /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1 are free partitions of the Linux native or Linux raid autodetect types. left-symmetric is the parity algorithm that offers maximum performance on typical disks: when you traverse the striping units sequentially, it allows each disk to be accessed once before any disk is accessed twice. The left-symmetric and left-asymmetric algorithms are illustrated in Figures 13.5 and 13.6. See man raidtab for further detail.

3. Initialize the new RAID device

Figure 13.5: Showing how Left-symmetric algorithm operates

Figure 13.6: Showing how Left-asymmetric algorithm operates


All previous data in the disk array will be destroyed, then the RAID superblock is created.

# mkraid --really-force /dev/md0

The RAID device /dev/md0 must be specified in /etc/raidtab.

4. Start the new RAID device
The command mkraid also starts the RAID device being initialized automatically. Check the started RAID by:

# more /proc/mdstat
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0] 16780800 blocks
      level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
      resync=10.1% finish=588.4min
unused devices: <none>

The size of the RAID device is reported as a number of blocks (each block has 1024 bytes). The 4/4 means that 4 out of 4 disks are working normally. If there is a corrupted disk, the corresponding letter U in UUUU is replaced by an underscore, for example [4/3] [UUU_]. When the RAID device is successfully created, the synchronization process begins. /proc/mdstat shows the progress of the synchronization and the approximate remaining time. During synchronization the RAID device can be used for reading/writing, but redundancy is not guaranteed.







However, if this is not the case, you can start the RAID device manually with the command

# raidstart /dev/md0

5. Create a file system on the RAID device
The RAID device will appear to users just like a typical disk partition. The following command creates an ext2 file system on the RAID device.

# mkfs /dev/md0

6. Mount the file system into the directory tree

# mount /dev/md0 /RAID5


Finally, you must edit the /etc/fstab to make the modification persistent as shown below.

LABEL=/      /        ext2    defaults    1 1
none         /proc    proc    defaults    0 0
/dev/sda5    swap     swap    defaults    0 0
/dev/md0     /RAID5   ext2    defaults    1 2

13.4 Performance Tuning

There are many factors that affect RAID performance:

- Hard disks: transfer rate, seek time
- Controllers: transfer rate, single or multichannel
- OS: buffer and cache size
- RAID parameters: chunk size, the number of disks in a stripe unit
- File system configuration: block size, file system type
- Applications: sequential or random access, single or multiple users

Experimentation is required to find the optimal performance.
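As a rough starting point for such an experiment (a sketch only, assuming the RAID-0 device from Section 13.2 is mounted at /RAID0), you can measure the raw read throughput and the sequential write throughput with:

# /sbin/hdparm -t /dev/md0
# time dd if=/dev/zero of=/RAID0/testfile bs=1024k count=512
# rm /RAID0/testfile

Repeating the measurements with different chunk sizes or file system block sizes gives an idea of which parameters matter for your workload.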

13.5 Recovery in RAID Level 5

As mentioned in the previous section, RAID level 5 can tolerate one disk failure. If this situation occurs, the RAID device can continue working in degraded mode by using the parity information to recover the original data. Performance certainly drops, but the applications survive. However, working in degraded mode should be avoided because of the performance drop and the risk of a second disk failure. Thus, it is necessary to replace the failed disk with a new one as soon as possible, with the least system interruption. Once the failed disk is replaced, the RAID device will start resynchronizing the new disk with the rest of the array. Below are brief instructions:

1. Power down the system
2. Replace the failed disk
3. Power up the system again



4. Use the following command to re-insert the disk into the array

# raidhotadd /dev/mdX /dev/sdX

5. Then wait for the resynchronization to complete

RAID level 5 can support automatic failed-disk replacement by allowing users to specify a number of spare disks in the array. Below is a sample configuration in /etc/raidtab to include a spare disk in the RAID device (see man raidtab).

nr-spare-disks          1

device                  /dev/sdf1
spare-disk              0

It is very important to note that, in our testing, when one disk failed, redundancy was not 100% successful. Data integrity on some data blocks could not be maintained continuously across the failure; about 10% of the tests produced garbage data which could be harmful to applications. Fortunately, the loss of integrity only persisted until the system was rebooted. Therefore, it is recommended to restart the system as soon as a disk failure is detected, so that there is the least damage from using the garbage data.

It is also possible that the RAID device runs into a multiple disk failure. Theoretically, all data should be lost if more than one disk is corrupted. However, in most cases multiple disks fail simply because of a power problem: they are attached to the same power cable, which has lost power, while the remaining disks are on other cables. If this is the case, the RAID device will report a multiple disk failure and not respond to further operations. Below is a hopeful solution.

1. Unmount the RAID device
2. Stop the RAID device

# raidstop /dev/mdX

3. Turn on the disk power
4. Rewrite the RAID superblock

# mkraid --really-force /dev/mdX

5. Check the file system consistency

# fsck /dev/mdX

6. If you are lucky enough, you can remount the RAID device back in place

# mount /dev/mdX


Chapter 14

User Administration

14.1 Add New User

Login as root on the NIS server and run the command:

# adduser [-u uid] [-g group] [-d home_directory] [-s shell] [-c comment] [-p password] [-m [-k template]] username

This command will add a new user into /etc/passwd and /etc/shadow.

Unlock the new user: the adduser command will lock a newly-created user account by default. To unlock the account, check in /etc/shadow whether the second field of that user is !!. If so, remove the !! before going to the next step.

Update the NIS database:

# cd /var/yp; make

Assign a password:

# passwd username
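For example (a sketch with a hypothetical user name and uid; adjust to your own site), the complete sequence on the NIS master might be:

# adduser -u 1001 -g users -d /home/somchai -s /bin/bash -m somchai
# cd /var/yp; make
# passwd somchai

Remember to check /etc/shadow and remove the !! lock first if the account was created locked, as described above.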

14.2 Delete User

Login as root on the NIS server. Run the command:

# userdel -r username

where -r will remove the user's home directory. Update the NIS database:

# cd /var/yp; make


Chapter 15

LessTif

15.1 Introduction

LessTif is free software licensed under the GNU Library General Public License (LGPL). It is a clone of the Motif library; this means that the same code compiled with either library should perform exactly the same. The current version of LessTif is 0.91.8, which is available for download from the www.lesstif.org website.

15.2 Installation

The typical way of installing most software from its source files is to unpack the package (usually in tarred-gzipped format), then automatically create a proper Makefile by running the configure script, and finally compile and install the package with the make utility. The following steps demonstrate the installation process:

1. After downloading the source files, unzip and unpack the package. Note that there are two forms of the compressed file, using gzip and bzip2. The bzip2-compressed file normally has a .bz2 extension; in order to decompress it, you must use the unix utility bunzip2, which is available with your RedHat Linux system. The following shows the typical way of unpacking the tarred-gzipped package:

# tar zxvf lesstif-0.91.8.tar.gz

For the other format, you must perform the following:

# bunzip2 lesstif-0.91.8.tar.bz2

# tar xvf lesstif-0.91.8.tar

2. Create a suitable Makefile for compiling LessTif. Normally the configure script will probe the system for the utilities and libraries required for compiling the package and create the Makefile for us. If you want to override any default setup, you can do that via the command line options of the configure script.

# cd lesstif-0.91.8
# ./configure

3. You can now compile the source and install them. Note that this step is going to take a long time. # make install

4. The installation process will place all the compiled libraries and headers under the /usr/local/Lesstif directory. The script creates symbolic links to the libraries in the /usr/local/lib directory. You may need to include this directory in the personal environment variable $LD_LIBRARY_PATH, for example in the bash shell:

# export LD_LIBRARY_PATH=/usr/local/lib

Otherwise, you can include the Lesstif library directory in the system searchable library path in /etc/ld.so.conf and run the program:

# /sbin/ldconfig

to update the new entry.
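If you took the /etc/ld.so.conf route, a quick (illustrative) way to confirm that the dynamic linker now finds the Motif-compatible libraries installed by LessTif is:

# /sbin/ldconfig -p | grep libXm

which should list libXm.so entries under /usr/local/lib if everything went well.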


Chapter 16

libGLw: OpenGL Widget Library

16.1 Introduction

This library is a part of the Mesa/OpenGL library development. The Mesa/OpenGL 3D graphics library set is now shipped with XFree86 (the current version is XFree86 4.0.1) in order to speed up both 2D and 3D graphics rendering, as mentioned in the discussion of XFree86. In order to fully utilize the high-performance features of the GeForce 256 graphics card, we must use the core OpenGL library provided by the vendor (Nvidia). However, there is an application, "MolMol", that requires an API which does not exist in the system, namely the OpenGL Xt/Motif Widget Library (libGLw.so). This section discusses how to create this library from the Mesa/OpenGL source files.

16.2 Installation

1. Download and extract the Mesa archive MesaLib-3.2.1.tar.gz. The current version at the time of writing is 3.2.1.

2. Change directory to widgets-sgi under the Mesa directory tree.

3. Modify the SOURCES line in Makefile.X11 to:

SOURCES = GLwDrawA.c GLwMDrawA.c

and comment out the line:

include depend

4. Compile the libraries:

# make linux-elf

After issuing the above command you will see some compilation messages. Compiling libGLw for this version of MesaLib will produce error messages at the end. These are not really errors from compiling the library; they merely come from your system not being able to mv some library files. The required libraries have been created, which you can verify by listing the contents of the directory. The following libraries and symbolic links are created there:

libGLw.a
libGLw.so
libGLw.so.1
libGLw.so.1.0.0

5. Move the libraries to /usr/local/lib.

6. Include /usr/local/lib in the file /etc/ld.so.conf.

7. Run the command:

# ldconfig -v

8. Make sure that you copy all the header files (.h) from the widgets-sgi directory to /usr/include/X11/GLw:

# mkdir /usr/include/X11/GLw
# cp *.h /usr/include/X11/GLw
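To check that the library and headers are usable, you could try linking a small test program against it. Here gltest.c is a hypothetical file name, and the exact list of X/Motif libraries needed may vary with your setup:

# gcc -o gltest gltest.c -I/usr/include/X11/GLw \
      -L/usr/local/lib -lGLw -lGL -lGLU -lXm -lXt -lX11 -lm

If the link succeeds, applications such as MolMol that expect libGLw.so should also be able to find it.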


