LCG Generic Installation and Configuration



Document identifier: LCG-GIS-MI
Date: 21 June 2005
Author: Guillermo Diez-Andino, Laurence Field, Oliver Keeble, Antonio Retico, Alessandro Usai, Louis Poncet
Version: v2.5.0-0
Abstract: LCG Generic Installation Guide

Contents

Introduction to Manual Installation and Configuration

This document is addressed to Site Administrators in charge of LCG middleware installation and configuration.
It is a generic guide to the manual installation and configuration of any supported LCG node type.
It provides a fast method to install and configure the LCG middleware on the various LCG node types (WN, UI, CE, SE ...) on top of the supported Linux distribution (currently Scientific Linux 3; see 2.). The proposed installation and configuration method is based on the Debian apt-get tool and on a set of shell scripts built within the yaim framework [2].
The provided scripts can be used by Site Administrators with no need for in-depth knowledge of specific middleware configuration details.
Site Administrators are only requested to insert local site-specific data in a single configuration file, according to provided examples.
The resulting configuration is a default site configuration. Local customizations and tuning of the middleware, if needed, can then be done manually (see footnote 1).

New versions of this document will be distributed synchronously with the LCG middleware releases, and they will contain the current ``state of the art'' of the installation and configuration procedures.
A companion document with the upgrade procedures to manually update the configuration of the nodes from the previous LCG version to the current one is also part of the release.

Since release LCG-2_3_0, the manual installation and configuration of LCG nodes has been supported by a set of scripts.
Nevertheless, the automatic configuration of some particular node types has intentionally been left uncovered. This mostly happens when a particular configuration is not recommended or is obsolete within the LCG-2 production environment (e.g. a Computing Element with OpenPBS).
Two lists of ``supported'' and ``not recommended'' node configurations follow.

The ``supported'' node types are:

For the node types listed above, both installation and configuration scripts are provided.

The ``deprecated'' node types are: For the node types listed above, only the installation script is provided. Site Administrators who, for any reason, choose to use a non-recommended node type can perform the configuration according to the guidelines provided in the generic technical configuration reference [1].

In the following sections, the simple steps needed to get a site installed are described.

OS Installation

The current version of the LCG Middleware runs on Scientific Linux 3 (SL3).
We strongly recommend that all LCG production sites switch at least their service nodes to the SL3 operating system as soon as possible.
All the information needed can be found on the following web page:

http://www.scientificlinux.org

The sources and the ISO images needed to create the installation CDs can be found at:

ftp://ftp.scientificlinux.org/linux/scientific/30x/iso/


Java Installation

You should install the Java SDK 1.4.2 on your system before installing the middleware. Download it from the Sun Java web site (version 1.4.2 is required: http://java.sun.com/j2se/1.4.2/download.html). You must install the J2SDK 1.4.2 RPM package (if you do not install it in RPM format you will not be able to install the middleware): on the Sun Java web page follow the link ``RPM in self-extracting file'' and then follow the instructions provided by Sun.
Then set the variable JAVA_LOCATION in your site-info.def (the YAIM configuration file) to your Java installation directory.
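For example, if the Sun RPM installed Java under /usr/java/j2sdk1.4.2_08 (the exact directory is only an example and depends on the SDK sub-version you downloaded), the corresponding line in site-info.def would be:

JAVA_LOCATION="/usr/java/j2sdk1.4.2_08"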


Node Synchronization

A general requirement for the LCG nodes is that they are synchronized. This requirement may be fulfilled in several ways. If your nodes run under AFS most likely they are already synchronized. Otherwise, you can use the NTP protocol with a time server.

Instructions and examples for an NTP client configuration are provided in this section. If you are not planning to use a time server on your machine, you can just skip this section.

NTP Software Installation

In order to install the NTP client, you need the following rpms to be installed. The following versions of these rpms have been proven to work on our OS configuration (the list includes the corresponding links to the download sites):

NTP Configuration
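A minimal sketch of a typical NTP client set-up follows (the time server entries are placeholders to be replaced with your site or national time servers). Add your time server(s) to /etc/ntp.conf:

restrict <time_server_IP_address> mask 255.255.255.255 nomodify notrap noquery
server <time_server_hostname>

Then restart the NTP daemon, enable it at boot time and check the synchronization:

> service ntpd restart
> chkconfig ntpd on
> ntpq -p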

Configuration Tool: YAIM

From now on we will refer to the node to be installed as the target node.
In order to work with the yaim installation and configuration tool, yaim must be installed on the target node.
In order to download and install yaim, install the lcg-yaim package. You then have the directory /opt/lcg/yaim created and the yaim tool installed.
If yaim was already installed, make sure you have the latest version using
> apt-get install lcg-yaim
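To check which version of yaim is currently installed, you can for example query the RPM database:

> rpm -q lcg-yaim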

Site Configuration File

All the configuration values relevant to a site have to be set in a site configuration file using key-value pairs.
The site configuration file is shared among all the different node types, so we suggest editing it once and keeping it in a safe place so that it does not have to be edited again for each installation.
Any modifications to the specification of the site configuration file will be published with this document.
An up-to-date example of the site configuration file is in any case provided in the file /opt/lcg/yaim/examples/site-info.def


Specification of the Site Configuration File

The general syntax of the file is a sequence of bash-like variable assignments (<variable>=<value>).
Comments, introduced by the character "#", can be used within the file.

WARNING: The Site Configuration File is sourced by the configuration scripts. Therefore there must be no spaces around the equal sign.

Example of wrong configuration:

SITE_NAME = my-site
Example of correct configuration:
SITE_NAME=my-site
A good syntax test for your Site Configuration file (e.g. my-site-info.def) is to try and source it manually, running the command
> source my-site-info.def
and checking that no error messages are produced.
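For illustration, a small excerpt of a valid site configuration file might look as follows (all host names and values are hypothetical and must be replaced with your own site data):

SITE_NAME=my-site
MY_DOMAIN=my-domain.org
CE_HOST=ce.my-domain.org
SE_HOST=se.my-domain.org
BDII_HOST=bdii.my-domain.org
VOS="atlas cms dteam"
JAVA_LOCATION="/usr/java/j2sdk1.4.2_08"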

The complete specification of the configurable variables follows.
We strongly recommend that, if the meaning of a configuration variable is not clear to you, you contact us and stick to the values provided in the examples.
You may instead understand the meaning of a variable but be in doubt about the value to configure for it.
This may happen, for instance, if you are running a very small site and are not configuring the whole set of nodes, and therefore have to refer to some ``public'' service (e.g. RB, BDII ...).
In this case, if you have a reference site, please ask them for indications. Otherwise, send a message to the "LCG-ROLLOUT@cclrclsv.RL.AC.UK" mailing list.
If you only need to configure a limited set of nodes, you may be able to skip the configuration of some of the variables below. In that case, you might find table 6.1. useful, where the list of variables needed for the configuration of each single node type is shown.

FUNCTIONS_DIR :
The directory where yaim will find its functions.
DPMFSIZE :
The maximum file size managed (e.g. 200M).
OUTPUT_STORAGE :
Default Output directory for the jobs.
BDII_HTTP_URL :
URL pointing to the BDII configuration file.
CE_SMPSIZE :
Number of cpus in an SMP box (WN specification).
SITE_VERSION :
Site middleware version.
BDII_HOST :
BDII Hostname.
GLOBUS_TCP_PORT_RANGE :
Port range for Globus IO.
DPM_HOST :
Host name of the DPM.
INSTALL_ROOT :
Installation home of the re-locatable distribution.
BDII_REGIONS :
List of node types publishing information on the BDII. For each item listed in the BDII_REGIONS variable you need to create a set of new variables as follows:
BDII_<REGION>_URL :
URL of the information producer (e.g. BDII_CE_URL="URL of the CE information producer", BDII_SE_URL="URL of the SE information producer").
CE_SI00 :
Performance index of your fabric in SpecInt 2000 (WN specification). For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html.
CE_BATCH_SYS :
Implementation of site batch system. Available values are ``torque'', ``lsf'', ``pbs'', ``condor'' etc.
MON_HOST :
MON Box Hostname.
GRIDMAP_AUTH :
List of ldap servers in edg-mkgridmap.conf which authenticate users.
DPMPOOL :
.
JAVA_LOCATION :
Path to Java VM installation. It can be used in order to run a different version of java installed locally.
CE_SF00 :
Performance index of your fabric in SpecFloat 2000 (WN specification). For some examples of Spec values see http://www.specbench.org/osg/cpu2000/results/cint2000.html.
CE_CPU_MODEL :
Model of the CPU used by the WN (WN specification). This parameter is a string whose domain is not defined yet in the GLUE Schema. The value used for Pentium III is "PIII".
REG_HOST :
RGMA Registry hostname.
LFC_HOST :
Set this if you are setting up an LFC server at your site, not if you are just using LFC clients.
DPM_PORT_RANGE :
Optional variable for the port range with default value "20000,25000".
DCACHE_ADMIN :
Host name of the server node which manages the pool of nodes.
VOS :
List of supported VOs. For each item listed in the VOS variable you need to create a set of new variables as follows:
VO_<VO-NAME>_SW_DIR :
Area on the WN for the installation of the experiment software. If a predefined shared area has been mounted on the WNs where VO managers can pre-install software, then this variable should point to this area. If instead there is no shared area and each job must install the software, then this variable should contain a dot ( . ). In any case the mounting of shared areas, as well as the local installation of VO software, is not managed by yaim and should be handled locally by Site Administrators. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_SE :
Default SE used by the VO. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_SGM :
ldap directory with the VO software managers list. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_QUEUES :
The queues that the VO can use on the CE.
VO_<VO-NAME>_USERS :
ldap directory with the VO users list. WARNING: VO-NAME must be in upper case.
VO_<VO-NAME>_STORAGE_DIR :
Mount point on the Storage Element for the VO. WARNING: VO-NAME must be in upper case.
CE_OUTBOUNDIP :
TRUE if outbound connectivity is enabled at your site, FALSE otherwise (WN specification).
GRID_TRUSTED_BROKERS :
List of the DNs of the Resource Brokers host certificates which are trusted by the Proxy node (ex: /O=Grid/O=CERN/OU=cern.ch/CN=host/testbed013.cern.ch).
PX_HOST :
PX hostname.
MY_DOMAIN :
The site's domain name.
DCACHE_POOLS :
List of pool nodes managed by the DCACHE_ADMIN server node.
WN_LIST :
Path to the list of Worker Nodes. The list of Worker Nodes is a file to be created by the Site Administrator, which contains a plain list of the batch nodes. An example of this configuration file is given in /opt/lcg/yaim/examples/wn-list.conf.
VO_SW_DIR :
Directory for installation of experiment software.
JOB_MANAGER :
The name of the job manager used by the gatekeeper.
DCACHE_PORT_RANGE :
dCache port range. This variable is optional; the default value is "20000,25000".
USERS_CONF :
Path to the list of Linux users (pool accounts) to be created. The list of users is a file to be created by the Site Administrator, which contains a plain list of the users and IDs. An example of this configuration file is given in /opt/lcg/yaim/examples/users.conf.
SE_TYPE :
Type of Storage Element, as advertised in the information system: disk or srm_v1 (use srm_v1 for dCache).
QUEUES :
The name of the queues for the CE. These are by default set as the VO names.
LCG_REPOSITORY :
APT repository with LCG middleware (use the one in the example).
LFC_DB_PASSWORD :
db password for LFC user.
CE_CPU_VENDOR :
Vendor of the CPU used by the WN (WN specification). This parameter is a string whose domain is not yet defined in the GLUE Schema. The value used for Intel is ``intel''.
CA_REPOSITORY :
APT repository with Certification Authorities (use the one in the example).
CE_INBOUNDIP :
TRUE if inbound connectivity is enabled at your site, FALSE otherwise (WN specification).
CE_CPU_SPEED :
Clock frequency in Mhz (WN specification).
CA_WGET :
URL of the repository of the Certification Authorities. The configuration of this value is needed only if the installation is being done without root privileges (e.g. installing a re-locatable distribution in user space).
CE_RUNTIMEENV :
List of software tags supported by the site. The list can include VO-specific software tags. In order to assure backward compatibility it should include the entry 'LCG-2', the current middleware version and the list of previous middleware tags.
CE_OS :
Operating System name (WN specification).
MYSQL_PASSWORD :
mysql password for the accounting info collector.
DPMDATA :
Directory where the data is stored (absolute path, e.g. /storage).
CE_CLOSE_SE :
Here a label-name to identify an SE is defined. For each item listed in the CE_CLOSE_SE variable you need to create a set of new variables as follows:
CE_CLOSE_<LABEL-NAME>_HOST :
Hostname of the SE.
CE_CLOSE_<LABEL-NAME>_ACCESS_POINT :
Mount point of the data partition on the SE.
CE_OS_RELEASE :
Operating System release (WN specification).
GRIDICE_SERVER_HOST :
GridICE server host name (monitoring). It is usually run on the SE node.
CE_HOST :
Computing Element Hostname.
DPMMGR :
db user account for the DPM.
CE_MINPHYSMEM :
RAM size in kblocks (WN specification).
SITE_EMAIL :
The e-mail address as published by the information system.
SE_HOST :
Storage Element Hostname.
FTS_SERVER_URL :
URL of the File Transfer Service server.
CRON_DIR :
Yaim writes all cron jobs to this directory. Change it if you want to turn off Yaim's management of cron.
DPMUSER_PWD :
Password of the db user account.
CE_MINVIRTMEM :
Virtual Memory size in kblocks (WN specification).
SITE_NAME :
Your GIIS.
RB_HOST :
Resource Broker Hostname.

In case you are not configuring a whole site but are interested only in some particular nodes, you might find table 6.1. useful, which gives the correspondence between node types and needed variables.


Table 1: Variables used per node
BDII BDII_HOST, BDII_HTTP_URL, BDII_REGIONS, BDII_<REGION>_URL, INSTALL_ROOT, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, USERS_CONF, SITE_NAME
CE BDII_HOST, SITE_VERSION, CE_SMPSIZE, BDII_HTTP_URL, MON_HOST, CE_BATCH_SYS, CE_SI00, BDII_REGIONS, BDII_<REGION>_URL, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR
classic_SE BDII_HOST, SITE_VERSION, CE_SMPSIZE, MON_HOST, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR
SE_dpm_mysql BDII_HOST, SITE_VERSION, CE_SMPSIZE, DPMFSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, DPMPOOL, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, MYSQL_PASSWORD, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, DPMDATA, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, DPMMGR, RB_HOST, SITE_NAME, CE_MINVIRTMEM, DPMUSER_PWD
SE_dpm_oracle BDII_HOST, SITE_VERSION, CE_SMPSIZE, DPMFSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, DPMPOOL, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, DPMDATA, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, DPMMGR, RB_HOST, SITE_NAME, CE_MINVIRTMEM, DPMUSER_PWD
SE_dpm_disk BDII_HOST, SITE_VERSION, CE_SMPSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, DPMUSER_PWD
MON MON_HOST, INSTALL_ROOT, REG_HOST, JAVA_LOCATION, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, USERS_CONF, MYSQL_PASSWORD, CE_HOST, SITE_NAME
PX BDII_HOST, SITE_VERSION, CE_SMPSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM
RB BDII_HOST, SITE_VERSION, CE_SMPSIZE, MON_HOST, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, MYSQL_PASSWORD, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR
SE_dcache BDII_HOST, SITE_VERSION, CE_SMPSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, DCACHE_ADMIN, LFC_HOST, DCACHE_POOLS, MY_DOMAIN, PX_HOST, USERS_CONF, DCACHE_PORT_RANGE, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR
UI BDII_HOST, OUTPUT_STORAGE, MON_HOST, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, JAVA_LOCATION, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, PX_HOST, JOB_MANAGER, CE_HOST, SE_HOST, RB_HOST, SITE_NAME, FTS_SERVER_URL
WN BDII_HOST, MON_HOST, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, JAVA_LOCATION, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, PX_HOST, USERS_CONF, JOB_MANAGER, CE_HOST, SE_HOST, RB_HOST, SITE_NAME, FTS_SERVER_URL
TAR_UI BDII_HOST, OUTPUT_STORAGE, MON_HOST, INSTALL_ROOT, DPM_HOST, REG_HOST, JAVA_LOCATION, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, PX_HOST, CA_WGET, CE_HOST, SE_HOST, RB_HOST, FTS_SERVER_URL
TAR_WN BDII_HOST, MON_HOST, INSTALL_ROOT, DPM_HOST, REG_HOST, JAVA_LOCATION, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, PX_HOST, CA_WGET, CE_HOST, SE_HOST, FTS_SERVER_URL
LFC_mysql BDII_HOST, SITE_VERSION, CE_SMPSIZE, MON_HOST, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, LFC_DB_PASSWORD, CE_CPU_SPEED, CE_INBOUNDIP, MYSQL_PASSWORD, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM
CE_torque BDII_HOST, SITE_VERSION, CE_SMPSIZE, BDII_HTTP_URL, MON_HOST, CE_BATCH_SYS, CE_SI00, BDII_REGIONS, BDII_<REGION>_URL, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, LFC_HOST, WN_LIST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_<LABEL-NAME>_HOST, CE_CLOSE_<LABEL-NAME>_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR
WN_torque BDII_HOST, MON_HOST, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, JAVA_LOCATION, VOS, VO_<VO-NAME>_SW_DIR, VO_<VO-NAME>_SE, VO_<VO-NAME>_SGM, VO_<VO-NAME>_USERS, VO_<VO-NAME>_QUEUES, VO_<VO-NAME>_STORAGE_DIR, PX_HOST, USERS_CONF, JOB_MANAGER, CE_HOST, SE_HOST, RB_HOST, SITE_NAME, FTS_SERVER_URL


RPM Installation Tool: APT-GET

Before you proceed further, please MAKE SURE that Java is installed on your system (see 3.).


Middleware Installation

In order to install the node with the desired middleware packages run the command

> /opt/lcg/yaim/scripts/install_node <site-configuration-file> <meta-package> [ <meta-package> ... ]

The complete list of the meta-packages available with this release is provided in 8.1. (SL3).

For example, in order to install a CE with Torque, after the configuration of the site-info.def file is done, you have to run:

> /opt/lcg/yaim/scripts/install_node site-info.def lcg-CE-torque

WARNING: The ``bare-middleware'' versions of the WN and CE meta-packages are provided in case you are running an LRMS that is not covered by the release.
Consider that if you have chosen the ``bare-middleware'' installation of, for instance, the CE, then you will need to run

> /opt/lcg/yaim/scripts/install_node site-info.def lcg-torque

on the machine, in order to complete the installation with Torque.

WARNING: There is a known installation conflict between the 'torque-clients' rpm and the 'postfix' mail client (Savannah bug #5509).
In order to work around the problem you can either uninstall postfix or remove the file /usr/share/man/man8/qmgr.8.gz from the target node.
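For example, the second workaround amounts to running, as root on the target node:

> rm -f /usr/share/man/man8/qmgr.8.gz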

You can install multiple node types on one machine:

> /opt/lcg/yaim/scripts/install_node site-info.def <meta-package> <meta-package> ...


meta-packages-SL3

In the following table the list of SL3 meta-packages is provided.

Table 2: meta-packages available for SL3
Node Type | meta-package Name | meta-package Description
Worker Node (middleware only) | lcg-WN | It does not include any LRMS
Worker Node (with Torque client) | lcg-WN-torque | It includes the 'Torque' LRMS
Computing Element (middleware only) | lcg-CE | It does not include any LRMS
Computing Element (with Torque) | lcg-CE-torque | It includes the 'Torque' LRMS
User Interface | lcg-UI | User Interface
LCG-BDII | lcg-LCG-BDII | LCG-BDII
MON-Box | lcg-MON | RGMA-based monitoring system collector server
Proxy | lcg-PX | Proxy Server
Resource Broker | lcg-RB | Resource Broker
Classic Storage Element | lcg-SECLASSIC | Storage Element on local disk
DPM Storage Element (mysql) | lcg-SE_dpm_mysql | Storage Element with SRM interface
DPM Storage Element (Oracle) | lcg-SE_dpm_oracle | Storage Element with SRM interface
dCache Storage Element (Oracle) | lcg-SEDCache | Storage Element interfaced to dCache
LCG File Catalog (mysql) | lcg-LFC-mysql | LCG File Catalog
Re-locatable distribution | lcg-TAR | Can be used to set up a Worker Node or a UI
Torque LRMS | lcg-torque | Torque client and server, to be used in combination with the 'bare middleware' version of the CE and WN meta-packages


Certification Authorities

The installation of the up-to-date version of the Certification Authorities (CA) is automatically done by the Middleware Installation described in 8.
However, as the list and structure of the Certification Authorities (CA) accepted by the LCG project can change independently of the middleware releases, the rpm list related to the CA certificates and URLs has been decoupled from the standard LCG release procedure. You should consult the page

http://grid-deployment.web.cern.ch/grid-deployment/lcg2CAlist.html

in order to ascertain what the version number of the latest set of CA rpms is. In order to upgrade the CA list of your node to the latest version, you can simply run on the node the command:
> apt-get update && apt-get -y install lcg-CA

In order to keep the CA configuration up to date on your node, we strongly recommend that Site Administrators set up a periodic upgrade procedure for the CAs on the installed nodes (e.g. running the above command via a daily cron job).
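As a minimal sketch, such a daily cron job could be implemented by placing a small script in /etc/cron.daily (the file name lcg-CA-update is just an example):

#!/bin/sh
# example file: /etc/cron.daily/lcg-CA-update
# daily upgrade of the CA rpms via apt
apt-get update && apt-get -y install lcg-CA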

Host Certificates

CE, SE, PROXY and RB nodes require the host certificate/key files to be in place before you start their installation.
Contact your national Certification Authority (CA) to understand how to obtain a host certificate if you do not have one already.
Instructions on how to obtain the CA list can be found at
http://grid-deployment.web.cern.ch/grid-deployment/lcg2CAlist.html

From the CA list so obtained you should choose a CA close to you.

Once you have obtained a valid certificate, i.e. the two files

hostcert.pem (containing the host certificate/public key)
hostkey.pem (containing the private key)

make sure to place them on the target node in the directory

/etc/grid-security
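As a sketch, assuming the two files are initially in the current directory, you could copy them into place and set the permissions commonly required for host credentials (the key must not be world-readable):

> cp hostcert.pem hostkey.pem /etc/grid-security/
> chmod 644 /etc/grid-security/hostcert.pem
> chmod 400 /etc/grid-security/hostkey.pem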

Middleware Configuration

The general procedure to configure the middleware packages that have been installed on the node via the procedure described in 8. is to run the command:

> /opt/lcg/yaim/scripts/configure_node <site-configuration-file> <node-type> [ <node-type> ... ]
The complete list of the node types available with this release is provided in 11.1.

For example, in order to configure the WN with Torque that you installed before, after the site-info.def file has been configured, you have to run:

> /opt/lcg/yaim/scripts/configure_node site-info.def WN_torque



Available Node Types

In this paragraph a reference to all the available configuration scripts is given.
For those items in the list marked with an asterisk (*), there are some particularities in the configuration procedure or extra configuration details to be considered; these are described in a dedicated section below.
For all the unmarked node types, the general configuration procedure described above applies.



Table 3: Available Node Types
Node | Node Type | Description
Worker Node (middleware only) | WN | It does not configure any LRMS
Worker Node (with Torque client) | WN_torque | It also configures the 'Torque' LRMS client
Computing Element (middleware only) | CE | It does not configure any LRMS
Computing Element (with Torque) * | CE_torque | It also configures the 'Torque' LRMS client and server (see 12.1. for details)
User Interface | UI | User Interface
LCG-BDII | BDII | LCG-BDII
MON-Box | MON | RGMA-based monitoring system collector server
Proxy | PX | Proxy Server
Resource Broker | RB | Resource Broker
Classic Storage Element | classic_SE | Storage Element on local disk
Disk Pool Manager (Oracle) * | SE_dpm_oracle | Storage Element with SRM interface and Oracle backend (see 12.4. for details)
Disk Pool Manager (mysql) * | SE_dpm_mysql | Storage Element with SRM interface and mysql backend (see 12.4. for details)
dCache Storage Element | SE_dcache | Storage Element interfaced with dCache
Re-locatable distribution * | TAR_UI or TAR_WN | It can be used to set up a Worker Node or a UI (see 12.2. for details)
LCG File Catalog server * | LFC_mysql | Sets up a mysql-based LFC server (see 12.5. for details)


Installing multiple node types on one machine

You can use yaim to install more than one node type on a single machine. In this case, you should install all the relevant software first, and then run the configure script. For example, to install a combined RB and BDII, you should do the following:

> /opt/lcg/yaim/scripts/install_node site-info.def lcg-RB lcg-LCG-BDII
> /opt/lcg/yaim/scripts/configure_node site-info.def RB BDII

Note that one combination known not to work is the CE/RB, due to a conflict between the GridFTP servers.

Node-specific extra configuration steps

In this section we list the configuration steps that are needed to complete the configuration of the desired node but are not covered by the automatic configuration scripts.
If a given node type does not appear in this section, it means that its configuration is complete.


CE-torque

WARNING: in the CE configuration context (and also for the 'torque' LRMS), a file with the list of managed nodes needs to be created. An example of this configuration file is given in /opt/lcg/yaim/examples/wn-list.conf
The variable WN_LIST in the Site Configuration File must then be set to the path of this file (see 6.1.).
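As a sketch, the file is simply a plain list of the batch node host names, one per line (the host names below are hypothetical):

wn001.my-domain.org
wn002.my-domain.org
wn003.my-domain.org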

The Maui scheduler configuration provided with the script is currently very basic.
More advanced configuration examples, to be implemented manually by Site Administrators, can be found in [5].


The relocatable distribution

Introduction

We are now supplying a tarred distribution of the middleware which can be used to install a UI or a WN. You can untar the distribution somewhere on a local disk, or replicate it across a number of nodes via a network share. You can also use this distribution to install a UI without root privileges.

Once you have the middleware directory available, you must edit the site-info.def file as usual, putting the location of the middleware into the variable INSTALL_ROOT.

If you are sharing the distribution among a number of nodes, commonly WNs, then they should all mount the tree at INSTALL_ROOT. You should configure the middleware on one node (remember you'll need to mount the share with appropriate privileges), and the configuration will then work for all the others, provided you set up your batch system and the CA certificates in the usual way. If you'd rather have the CAs on your share, the yaim function install_certs_userland may be of interest. You may want to re-mount your share read-only after the configuration has been done.
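As a sketch, assuming the distribution is exported read-only via NFS from a hypothetical server nfs.my-domain.org and that INSTALL_ROOT=/opt/LCG, the corresponding /etc/fstab entry on each WN could look like:

nfs.my-domain.org:/export/LCG  /opt/LCG  nfs  ro,hard,intr  0 0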

Dependencies

The middleware in the relocatable distribution has certain dependencies.

We've made this software available as a second tar file which you can download and untar under $EDG_LOCATION. This means that if you untarred the main distribution under /opt/LCG, you must untar the supplementary files under /opt/LCG/edg.
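As a sketch, assuming the supplementary tar file (see ``Getting the software'' below) has been downloaded to the current directory and the main distribution was untarred under /opt/LCG, the unpacking could look like this (adjust the target directory if the tar file's internal layout differs):

> mkdir -p /opt/LCG/edg
> tar -xzf LCG-2_5_0-userdeps-sl3.tar.gz -C /opt/LCG/edg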

If you have administrative access to the nodes, you could alternatively use the TAR dependencies rpm.

> /opt/lcg/yaim/scripts/install_node site-info.def lcg-TAR

To configure a UI or WN

Run the configure_node script, adding the type of node as an argument:

> /opt/lcg/yaim/scripts/configure_node site-info.def [ TAR_WN | TAR_UI ]

Note that the script will not configure any LRMS. If you're configuring torque for the first time, you may find the config_users and config_torque_client yaim functions useful. These can be invoked as follows:

> /opt/lcg/yaim/scripts/run_function site-info.def config_users
> /opt/lcg/yaim/scripts/run_function site-info.def config_torque_client

Installing a UI as a non-root user

If you don't have root access, you can use the supplementary tarball mentioned above to ensure that the dependencies of the middleware are satisfied. The middleware requires Java (see 3.), which you can install in your home directory if it's not already available. Please make sure you set the JAVA_LOCATION variable in your site-info.def. You'll probably want to alter the OUTPUT_STORAGE variable there too, as it's set to /tmp/jobOutput by default and it may be better to point it at a directory in your home area.

Once the software is all unpacked, you should run

> $INSTALL_ROOT/lcg/yaim/scripts/configure_node site-info.def TAR_UI
to configure it.

Finally, you'll have to set up some way of sourcing the environment necessary to run the grid software. A script is available under $INSTALL_ROOT/etc/profile.d for this purpose. Source grid_env.sh or grid_env.csh depending upon your choice of shell.
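For example, for a bash-like shell:

> source $INSTALL_ROOT/etc/profile.d/grid_env.sh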

Installing a UI this way puts all the CA certificates under $INSTALL_ROOT/etc/grid-security and adds a user cron job to download the CRLs. However, please note that you'll need to keep the CA certificates themselves up to date. You can do this by running

> /opt/lcg/yaim/scripts/run_function site-info.def install_certs_userland

Further information

In [3] there is more information on using this form of the distribution, including a description of what the configure script does. You should check this reference if you'd like to customise the relocatable distribution.

This distribution is used at CERN to make its lxplus system available as a UI. You can take a look at the docs for this too [4].

Getting the software

You can download the tar file for each operating system from

http://grid-deployment.web.cern.ch/grid-deployment/download/relocatable/LCG-2_5_0-sl3.tar.gz

You can download supplementary tar files for the userland installation from

http://grid-deployment.web.cern.ch/grid-deployment/download/relocatable/LCG-2_5_0-userdeps-sl3.tar.gz


SE-dpm

There are several extra configuration steps to perform in order to configure a DPM SE, mostly dealing with the backend systems.

All the information can be found at

http://goc.grid.sinica.edu.tw/gocwiki/How_to_install_the_Disk_Pool_Manager_%28DPM%29


LFC

There are several extra configuration steps to perform in order to configure an LCG File Catalog (LFC), mostly dealing with the backend systems.

All the information can be found at

http://goc.grid.sinica.edu.tw/gocwiki/How_to_set_up_an_LFC_service

Firewalls

No automatic firewall configuration is provided by this version of the configuration scripts.
If your LCG nodes are behind a firewall, you will have to ask your network manager to open a few "holes" to allow external access to some LCG service nodes.
A complete map of which ports have to be accessible for each service node can be found at the URL
http://lcgdeploy.cvs.cern.ch/cgi-bin/lcgdeploy.cgi/lcg2/docs/lcg-port-table.pdf

The reference mentioned above is strongly ``component-oriented'' and it might be difficult to apply for Site Administrators who are not familiar with which particular sub-systems are actually running on a given node type.
For further information about firewall configuration mapped onto the different node types, see the dedicated guide in [1].

Contacts

For questions and suggestions regarding this document please contact the address
<support-lcg-manual-install@cern.ch>

Bibliography

[1] G. Diez-Andino, K. Oliver, A. Retico, and A. Usai. LCG Configuration Reference, 2004. ../../../../../../gis/lcg-GCR/index.html

[2] L. Field and L. Poncet. The New Manual Install, 2004. Presentation given at the LCG Workshop on Operational Issues, CERN, November 2004. http://agenda.cern.ch/askArchive.php?base=agenda&categ=a044377&id=a044377s11t3/transparencies

[3] O. Keeble. LCG-2 Tar Distribution, 2004. http://grid-deployment.web.cern.ch/grid-deployment/documentation/Tar-Dist-Use/

[4] O. Keeble. Using lxplus as a UI, 2004. http://www.cern.ch/grid-deployment/documentation/UI-lxplus/

[5] S. Lemaitre and S. Traylen. Maui Cookbook, 2004. http://grid-deployment.web.cern.ch/grid-deployment/documentation/Maui-Cookbook/


Change History



Table 4: Change History
version date description
v2.5.0-1 17/Jul/05 Removing Rh 7.3 support completely.
v2.3.0-2 10/Jan/05 6.1.: CA_WGET variable added in site configuration file.
v2.3.0-3 2/Feb/05 Bibliography: Link to Generic Configuration Reference changed.
" " 12.1., 6.1.: Details added on WN and users lists.
" " script ``configure_torque''. no more available: removed from the list.
v2.3.0-4 16/Feb/05 Configure apt to find your OS rpms.
v2.3.0-5 22/Feb/05 Remove apt prefs stuff, mention multiple nodes on one box.
v2.3.0-6 03/Mar/05 Better lcg-CA update advice.
v2.3.1-1 03/Mar/05 LCG-2_3_1 locations
v2.3.4-0 01/Apr/05 LCG-2_4_4 locations
v2.3.4-1 08/Apr/05 external variables section inserted
v2.3.4-2 31/May/05 4.: fix in firewall configuration
" " 11.: verbatim line fixed
v2.5.0-0 20/Jun/05 6.1.: New variables added
" " 11.1.: New nodes added (dpm)
" " 12.4.: paragraph added
" " 12.5.: paragraph added




Footnotes

1. (... manually) A technical reference of the configuration actions done by the yaim scripts, which can be used to learn more about the LCG configuration, can be found in [1].
2. (... installed) The apt tool may already be installed, depending on the OS distribution in use at your site.

