Environment at PDC
The Guided Tours will help you get acquainted with the PDC environment and learn how to resolve the most common problems other users have encountered.
You are of course always welcome to contact us at PDC. Contact information is available in the Information section of this website.
See this helpdesk page for more details. You need to load the heimdal module to get kpasswd in your path.
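For instance, changing your password might look like this (a sketch; kpasswd will prompt for your old and new passwords):

```shell
module add heimdal   # puts the Heimdal Kerberos tools in your PATH
kpasswd              # change your Kerberos password
```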
There is a guided tour describing the proper procedure. PDC will occasionally send mail to its users using their e-mail address at PDC. It is therefore crucial that you forward your mail from PDC to a location where you regularly read your mail.
At PDC there are some user mailing lists concerning the use of hardware and software resources at PDC. There you can find discussions, questions and answers about various topics that may be of interest to you. Information about these mailing lists is available on our contact page.
The intention is that all PDC users have their home directories in the distributed file system AFS. However, AFS is not the best file system for every purpose. A guide is available to aid in choosing file systems (the Storage Services Chooser's Guide).
Initially, new users get a quota of about 500 Mbyte. This can be raised if you provide a motivation, but we prefer that large files are instead produced in one of the following locations:
- /scratch/ On most systems, both login nodes and compute nodes have a local filesystem /scratch/ that can store data sets ranging between 2 Gbyte and 60 Gbyte. No backups are made of /scratch/. The /scratch/ filesystem is never visible outside a node or computer. The batch system clears /scratch/ on all batch nodes after the completion of each batch run.
On interactive and login nodes, which are shared, it is considered good practice to store files in a sub-directory of /scratch/, e.g. mkdir /scratch/foo/ . It is also considered good behaviour to be aware that /scratch/ is shared: do not use all of it, and remove any data as soon as it is no longer needed.
- ~/projects/ On request, users may get an AFS-resident volume with a reasonable quota for longer-term storage. N.B. it is the responsibility of the user to back up these volumes; PDC does not back them up. AFS is usually slower than the local /scratch/ disk.
- /gpfs/scratch/f/foo/ Users on the IBM SP systems may use the significantly faster file system /gpfs/ for high-performance I/O. GPFS is distributed across all nodes within a particular SP system. No backups are made. Files are allowed to stay for at least two weeks, but should there be a shortage of space, staff may remove data prematurely. Never use /gpfs/ for large numbers of small files (small meaning less than 1 Gbyte). Read more about GPFS (/gpfs) in the GPFS howto.
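The good-practice pattern for the shared /scratch/ area described above can be sketched as follows (the directory and file names are placeholders; the temporary-directory fallback exists only so the sketch can be tried on a machine without a /scratch/ mount):

```shell
#!/bin/sh
# Sketch: good-practice use of the shared /scratch/ area.
SCRATCH=/scratch
# Fall back to a temporary directory so the sketch can be tried off-site.
[ -d "$SCRATCH" ] && [ -w "$SCRATCH" ] || SCRATCH=$(mktemp -d)

MYDIR="$SCRATCH/${USER:-$(id -un)}"   # your own sub-directory, not the top level
mkdir -p "$MYDIR"

: > "$MYDIR/output.dat"               # placeholder for large job output
# ... run the job, writing its large files under "$MYDIR" ...

rm -rf "$MYDIR"                       # remove data as soon as it is no longer needed
```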
At PDC we use AFS, a global, distributed file system. Given proper authentication, any file in your home directory at PDC is accessible from any computer in the world running an AFS client. AFS data is stored on a number of AFS server machines. Client machines request file data from the servers when necessary and cache it locally. When authenticated, the access you have to your files is the same from any machine (in another part of the country, perhaps).
AFS is more efficient and it scales better than NFS, for example. It has greater flexibility than an ordinary Unix file system. It also provides for greater stability through replication of system files. Because all authentication is based on Kerberos, AFS provides for considerably better security as well.
More information about AFS can be found in the AFS guide.
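As an illustration, once you are authenticated your PDC home directory is reachable through the global AFS namespace (the path below is an assumption for the sake of example, not the actual layout):

```shell
klist                         # show your Kerberos tickets
tokens                        # show your AFS tokens
ls /afs/pdc.kth.se/home/...   # your home directory in the global namespace
```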
At NADA, as well as at PDC, there is a variety of machines, operating systems, projects, and different versions of the same code, each with its own special set of programs to run. To make this large set of programs maintainable and usable we provide the concept of modules, invoked by the `module' command. With the command `module add sp2' you add the proper SP2-related directories to your PATH variable. You will certainly need `module add heimdal'. You may also find `module add local' useful. To list other available modules, type `module avail'. Finally, add these commands to the proper login file, usually at the end of .profile in your home directory (.cshrc if you are using tcsh).
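The end of your login file might then look like this (a sketch; which modules you need depends on the systems you use):

```shell
# At the end of ~/.profile (or ~/.cshrc if you are using tcsh):
module add heimdal   # Kerberos tools such as kinit, kauth and kpasswd
module add sp2       # SP2-related paths, if you use the SP systems
module add local     # optional local additions
```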
We use Kerberos for local and remote authentication. One of the unpleasant realities of today is that computer networks must be considered unsafe, and it is therefore important that secrets like passwords are never sent in cleartext across the network. The password is sent exactly like that when you log in remotely with telnet or rlogin, or when you transfer files with ftp. It is therefore important to note that high security must begin at the local machine, which is why we distribute what we call the Kerberos Travelkit, which should be installed on your local machine before you log in to PDC.
In the PDC environment, we have replaced telnet, rsh, rlogin, rcp (and hopefully soon also ftp) with kerberized equivalents that do not use your password to authenticate to the remote system. Instead they use a so-called ticket. This ticket is acquired in a secure way on the local machine (usually via one of the commands kinit or kauth) before a connection to the remote system is opened.
Unfortunately the ticket is bound to the machine it is created on, so it cannot be brought over to the remote machine. Therefore there is no ticket present at the remote machine (i.e. at PDC) when you log in via, for instance, kerberized telnet (which is the preferred method). Since a ticket is needed to access the file system, you must acquire one first of all. This is done with the command kauth, which asks for your password. Since the telnet connection is encrypted, it is safe (enough) to submit the password here.
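A typical login session might therefore look like this (the hostname and realm name are placeholders, not actual PDC values):

```shell
# On your local machine, with the Kerberos Travelkit installed:
kinit you@SOME.REALM          # acquire a ticket on the local machine
telnet some.host.pdc.kth.se   # kerberized, encrypted telnet; no password sent

# Once logged in at PDC there is no ticket yet, so acquire one
# to get access to your AFS home directory:
kauth                         # asks for your password over the encrypted connection
```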
To prevent replay attacks, authentication expires over time. You will have to prove your identity at regular intervals (usually once a day). This is usually only a problem for batch jobs that sit in a queue for a long time, and for that particular problem there are special measures available.
pdc-staff, $Date: 2004/11/15 15:57:51 $