Document identifier: | LCG-GIS-MI |
---|---|
Date: | 21 June 2005 |
Author: | Guillermo Diez-Andino, Laurence Field, Oliver Keeble, Antonio Retico, Alessandro Usai, Louis Poncet |
Version: | v2.5.0-0 |
New versions of this document will be distributed synchronously with the
LCG middleware releases and will describe the current
``state of the art'' of the installation and configuration procedures.
A companion document with the upgrade procedures needed to manually update the
configuration of the nodes from the previous LCG version to the current one is
also part of the release.
Since the release LCG-2_3_0, the manual installation and configuration
of LCG nodes is supported by a set of scripts.
Nevertheless, the automatic configuration of some particular node types has
intentionally been left uncovered. This is mostly the case when a particular
configuration is not recommended or is obsolete within the LCG-2
production environment (e.g. a Computing Element with Open-PBS).
Two lists, of ``supported'' and ``not recommended'' node
configurations, follow.
The ``supported'' node types are:
http://www.scientificlinux.org
The site where the sources and the CD images (iso) can be
found is
ftp://ftp.scientificlinux.org/linux/scientific/30x/iso/
http://grid-deployment.web.cern.ch/grid-deployment/download/RpmDir/release/ntp-4.1.1-1.i386.rpm
http://grid-deployment.web.cern.ch/grid-deployment/download/RpmDir/release/libcap-devel-1.10-8.i386.rpm
http://grid-deployment.web.cern.ch/grid-deployment/download/RpmDir/release/libcap-1.10-8.i386.rpm
restrict <time_server_IP_address> mask 255.255.255.255 nomodify notrap noquery
server <time_server_name>

Additional time servers can be added for better performance results. For each server, the hostname and IP address are required. Then, for each time-server you are using, add a couple of lines similar to the ones shown above into the file /etc/ntp.conf.
For example, the following pair of time-server IP addresses could be used: 137.138.16.69 137.138.17.69
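For illustration, a minimal sketch of the corresponding /etc/ntp.conf entries, assuming the two time-server IP addresses above (the hostnames are placeholders; replace both with your own time servers):

restrict 137.138.16.69 mask 255.255.255.255 nomodify notrap noquery
server <time-server-1-hostname>
restrict 137.138.17.69 mask 255.255.255.255 nomodify notrap noquery
server <time-server-2-hostname>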
If you are using ipchains, you can add the following to /etc/sysconfig/ipchains:

-A input -s <NTP-serverIP-1> -d 0/0 123 -p udp -j ACCEPT
-A input -s <NTP-serverIP-2> -d 0/0 123 -p udp -j ACCEPT

If you are using iptables, you can add the following to /etc/sysconfig/iptables:

-A INPUT -s <NTP-serverIP-1> -p udp --dport 123 -j ACCEPT
-A INPUT -s <NTP-serverIP-2> -p udp --dport 123 -j ACCEPT
Remember that, in the provided examples, rules are parsed in order, so ensure that there are no matching REJECT lines preceding those that you add. You can then reload the firewall
> /etc/init.d/ipchains restart
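If you added the rules to /etc/sysconfig/iptables instead, reload the iptables service in the same way (a sketch, assuming the standard SL3 init scripts):

> /etc/init.d/iptables restart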
> ntpdate <your ntp server name>
> service ntpd start
> chkconfig ntpd on
> ntpq -p
> wget http://www.cern.ch/grid-deployment/gis/yaim/lcg-yaim-x.x.x-x.noarch.rpm
> rpm -ivh lcg-yaim-x.x.x-x.noarch.rpm
> apt-get install lcg-yaim
WARNING: The Site Configuration File is sourced by the configuration
scripts. Therefore there must be no spaces around the equal sign.
Example of wrong configuration:
SITE_NAME = my-site

Example of correct configuration:

SITE_NAME=my-site

A good syntax test for your Site Configuration File (e.g. my-site-info.def) is to try and source it manually, running the command

> source my-site-info.def

and checking that no error messages are produced.
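For illustration only, a minimal fragment of a Site Configuration File could look like the following (all values are placeholders, not recommendations; the full set of variables is specified below):

MY_DOMAIN=example.org
SITE_NAME=my-site
CE_HOST=ce01.$MY_DOMAIN
SE_HOST=se01.$MY_DOMAIN
RB_HOST=rb01.$MY_DOMAIN
BDII_HOST=bdii01.$MY_DOMAIN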
The complete specification of the configurable variables follows.
We strongly recommend that, if the meaning of a configuration variable is not
clear to you, you report it to us and stick to the values provided in the
examples.
It may also happen that, although you understand the meaning of a variable,
you are in doubt about the value to configure for it.
This may happen, for instance, if you are running a very small site and you
are not configuring the whole set of nodes, and therefore you have to refer
to some ``public'' service (e.g. RB, BDII ...).
In this case, if you have a reference site, please ask them for indications.
Otherwise, send a message to the "LCG-ROLLOUT@cclrclsv.RL.AC.UK" mailing list.
However, if you only need to configure a limited set of nodes, you may be able to skip
the configuration of some of the variables listed below. In that case you might find
table 6.1. useful: it shows the correspondence between node types and the
variables needed to configure each of them (a quick check sketch is also given right after the table).
BDII | BDII_HOST, BDII_HTTP_URL, BDII_REGIONS, BDII_REGION_URL, INSTALL_ROOT, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, USERS_CONF, SITE_NAME, |
CE | BDII_HOST, SITE_VERSION, CE_SMPSIZE, BDII_HTTP_URL, MON_HOST, CE_BATCH_SYS, CE_SI00, BDII_REGIONS, BDII_REGION_URL, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR, |
classic_SE | BDII_HOST, SITE_VERSION, CE_SMPSIZE, MON_HOST, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR, |
SE_dpm_mysql | BDII_HOST, SITE_VERSION, CE_SMPSIZE, DPMFSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, DPMPOOL, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, MYSQL_PASSWORD, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, DPMDATA, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, DPMMGR, RB_HOST, SITE_NAME, CE_MINVIRTMEM, DPMUSER_PWD, |
SE_dpm_oracle | BDII_HOST, SITE_VERSION, CE_SMPSIZE, DPMFSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, DPMPOOL, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, DPMDATA, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, DPMMGR, RB_HOST, SITE_NAME, CE_MINVIRTMEM, DPMUSER_PWD, |
SE_dpm_disk | BDII_HOST, SITE_VERSION, CE_SMPSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, DPMUSER_PWD, |
MON | MON_HOST, INSTALL_ROOT, REG_HOST, JAVA_LOCATION, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, USERS_CONF, MYSQL_PASSWORD, CE_HOST, SITE_NAME, |
PX | BDII_HOST, SITE_VERSION, CE_SMPSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, |
RB | BDII_HOST, SITE_VERSION, CE_SMPSIZE, MON_HOST, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, MYSQL_PASSWORD, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR, |
SE_dcache | BDII_HOST, SITE_VERSION, CE_SMPSIZE, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, DCACHE_ADMIN, LFC_HOST, DCACHE_POOLS, MY_DOMAIN, PX_HOST, USERS_CONF, DCACHE_PORT_RANGE, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR, |
UI | BDII_HOST, OUTPUT_STORAGE, MON_HOST, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, JAVA_LOCATION, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, PX_HOST, JOB_MANAGER, CE_HOST, SE_HOST, RB_HOST, SITE_NAME, FTS_SERVER_URL, |
WN | BDII_HOST, MON_HOST, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, JAVA_LOCATION, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, PX_HOST, USERS_CONF, JOB_MANAGER, CE_HOST, SE_HOST, RB_HOST, SITE_NAME, FTS_SERVER_URL, |
TAR_UI | BDII_HOST, OUTPUT_STORAGE, MON_HOST, INSTALL_ROOT, DPM_HOST, REG_HOST, JAVA_LOCATION, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, PX_HOST, CA_WGET, CE_HOST, SE_HOST, RB_HOST, FTS_SERVER_URL, |
TAR_WN | BDII_HOST, MON_HOST, INSTALL_ROOT, DPM_HOST, REG_HOST, JAVA_LOCATION, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, PX_HOST, CA_WGET, CE_HOST, SE_HOST, FTS_SERVER_URL, |
LFC_mysql | BDII_HOST, SITE_VERSION, CE_SMPSIZE, MON_HOST, CE_BATCH_SYS, CE_SI00, INSTALL_ROOT, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, LFC_HOST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, LFC_DB_PASSWORD, CE_CPU_SPEED, CE_INBOUNDIP, MYSQL_PASSWORD, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, |
CE_torque | BDII_HOST, SITE_VERSION, CE_SMPSIZE, BDII_HTTP_URL, MON_HOST, CE_BATCH_SYS, CE_SI00, BDII_REGIONS, BDII_REGION_URL, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, CE_CPU_MODEL, CE_SF00, JAVA_LOCATION, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, CE_OUTBOUNDIP, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, LFC_HOST, WN_LIST, MY_DOMAIN, PX_HOST, USERS_CONF, JOB_MANAGER, QUEUES, SE_TYPE, CE_CPU_VENDOR, CE_CPU_SPEED, CE_INBOUNDIP, CE_OS, CE_RUNTIMEENV, CE_HOST, GRIDICE_SERVER_HOST, CE_OS_RELEASE, CE_CLOSE_SE, CE_CLOSE_LABEL-NAME_HOST, CE_CLOSE_LABEL-NAME_ACCESS_POINT, SE_HOST, SITE_EMAIL, CE_MINPHYSMEM, RB_HOST, SITE_NAME, CE_MINVIRTMEM, CRON_DIR, |
WN_torque | BDII_HOST, MON_HOST, INSTALL_ROOT, DPM_HOST, GLOBUS_TCP_PORT_RANGE, REG_HOST, JAVA_LOCATION, VOS, VO_VO-NAME_SW_DIR, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_USERS, VO_VO_NAME_QUEUES, VO_VO-NAME_STORAGE_DIR, PX_HOST, USERS_CONF, JOB_MANAGER, CE_HOST, SE_HOST, RB_HOST, SITE_NAME, FTS_SERVER_URL, |
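As a quick sanity check (a sketch only, not part of yaim; adapt the variable list from the table above to your node type), you can verify that the plain variables required for a given node are actually set in your Site Configuration File:

> for v in BDII_HOST MON_HOST INSTALL_ROOT CE_HOST SE_HOST RB_HOST PX_HOST SITE_NAME; do grep -q "^${v}=" site-info.def || echo "missing: $v"; done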
> wget ftp://ftp.scientificlinux.org/linux/scientific/30x/i386/SL/RPMS/apt-0.5.15cnc6-4.SL.i386.rpm
> rpm -ivh apt-0.5.15cnc6-4.SL.i386.rpm
Please note that for the dependencies of the middleware to be met, you'll have to make sure that apt can find and download your OS rpms. This typically means you'll have to install an rpm called 'apt-sourceslist', or else create an appropriate file in your /etc/apt/sources.list.d directory.
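For example, a hand-written source file for the OS rpms could look like the following (the mirror URL, path and component names are assumptions; adapt them to your local Scientific Linux 3 mirror):

# /etc/apt/sources.list.d/sl3-os.list (example only)
rpm http://<your-SL3-mirror>/linux scientific/30x/i386 os updates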
SL3:
LCG_REPOSITORY="rpm http://grid-deployment.web.cern.ch/grid-deployment/gis apt/LCG-2_5_0/sl3/en/i386 lcg_sl3 lcg_sl3.updates"
In order to install the node with the desired middleware packages run the command
> /opt/lcg/yaim/scripts/install_node <site-configuration-file> <meta-package> [ <meta-package> ... ]
The complete list of the meta-packages available with this release is
provided in 8.1. (SL3).
For example, in order to install a CE with Torque, after the configuration of the site-info.def file is done, you have to run:
> /opt/lcg/yaim/scripts/install_node site-info.def lcg-CE-torque
WARNING: The ``bare-middleware'' versions of the WN and CE meta-packages are
provided in case you are running an LRMS not covered by the standard meta-packages.
Consider that if you have chosen to go for the ``bare-middleware''
installation, for instance of the CE, then you will also need to run

> /opt/lcg/yaim/scripts/install_node site-info.def lcg-torque

on the machine, in order to get the installation completed with Torque.
WARNING: There is a known installation conflict between the 'torque-clients'
rpm and the 'postfix' mail client (Savannah bug #5509).
To work around the problem you can either uninstall postfix or remove the file
/usr/share/man/man8/qmgr.8.gz from the target node.
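For instance, either of the following commands on the target node should be enough (a sketch of the two alternative workarounds):

# alternative 1: uninstall postfix
> rpm -e postfix
# alternative 2: remove the conflicting man page
> rm -f /usr/share/man/man8/qmgr.8.gz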
You can install multiple node types on one machine
> /opt/lcg/yaim/scripts/install_node site-info.def <meta-package> <meta-package> ...
Node Type | meta-package Name | meta-package Description |
Worker Node (middleware only) | lcg-WN | It does not include any LRMS |
Worker Node (with Torque client) | lcg-WN-torque | It includes the 'Torque' LRMS |
Computing Element (middleware only) | lcg-CE | It does not include any LRMS |
Computing Element (with Torque) | lcg-CE-torque | It includes the 'Torque' LRMS |
User Interface | lcg-UI | User Interface |
LCG-BDII | lcg-LCG-BDII | LCG-BDII |
MON-Box | lcg-MON | RGMA-based monitoring system collector server |
Proxy | lcg-PX | Proxy Server |
Resource Broker | lcg-RB | Resource Broker |
Classic Storage Element | lcg-SECLASSIC | Storage Element on local disk |
DPM Storage Element (mysql) | lcg-SE_dpm_mysql | Storage Element with SRM interface |
DPM Storage Element (Oracle) | lcg-SE_dpm_oracle | Storage Element with SRM interface |
dCache Storage Element (Oracle) | lcg-SEDCache | Storage Element interfaced to dCache |
LCG File Catalog (mysql) | lcg-LFC-mysql | LCG File Catalog |
Re-locatable distribution | lcg-TAR | can be used to set up a Worker node or a UI |
Torque LRMS | lcg-torque | Torque client and server to be used in combination with the 'bare middleware' version of CE and WN packages |
> apt-get update && apt-get -y install lcg-CA
In order to keep the CA configuration up to date on your node, we strongly recommend that Site Administrators set up a periodic upgrade procedure for the CAs on the installed nodes (e.g. running the above command via a daily cron job).
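One possible way to do this (an example sketch; the file name and schedule are arbitrary) is a daily cron entry such as:

# /etc/cron.d/lcg-CA-update (example)
05 3 * * * root apt-get update && apt-get -y install lcg-CA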
CE, SE, PROXY, RB nodes require the host certificate/key files before you
start their installation.
Contact your national Certification Authority (CA) to understand how to
obtain a host certificate if you do not have one already.
Instructions on how to obtain a CA list can be found at
http://grid-deployment.web.cern.ch/grid-deployment/lcg2CAlist.html
From the CA list so obtained you should choose a CA close to you.
Once you have obtained a valid certificate, i.e. a file containing the host
certificate (conventionally hostcert.pem) and a file containing the private key
(hostkey.pem), make sure both files are placed on the target node in the directory
/etc/grid-security
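A typical sequence on the target node would be the following (a sketch assuming the usual hostcert.pem/hostkey.pem file names; check the conventions of your CA):

> cp hostcert.pem hostkey.pem /etc/grid-security/
> chown root:root /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem
> chmod 644 /etc/grid-security/hostcert.pem
> chmod 400 /etc/grid-security/hostkey.pem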
The general procedure to configure the middleware packages that have been installed on the node via the procedure described in 8. is to run the command:

> /opt/lcg/yaim/scripts/configure_node <site-configuration-file> <node-type> [ <node-type> ... ]

The complete list of the node types available with this release is provided in 11.1.
For example, in order to configure the WN with Torque that you installed earlier, after the configuration of the site-info.def file is done, you have to run:
> /opt/lcg/yaim/scripts/configure_node site-info.def WN_torque
A reference to all the available configuration node types is given in the
following table.
Node | Node Type (configure_node argument) | Description |
Worker Node (middleware only) | WN | It does not configure any LRMS |
Worker Node (with Torque client) | WN_torque | It configures also the 'Torque' LRMS client |
Computing Element (middleware only) | CE | It does not configure any LRMS |
Computing Element (with Torque) * | CE_torque | It configures also the 'Torque' LRMS client and server (see 12.1. for details) |
User Interface | UI | User Interface |
LCG-BDII | BDII | LCG-BDII |
MON-Box | MON | RGMA-based monitoring system collector server |
Proxy | PX | Proxy Server |
Resource Broker | RB | Resource Broker |
Classic Storage Element | classic_SE | Storage Element on local disk |
Disk Pool Manager (Oracle) * | SE_dpm_oracle | Storage Element with SRM interface and Oracle backend (see 12.4. for details) |
Disk Pool Manager (mysql) * | SE_dpm_mysql | Storage Element with SRM interface and mysql backend (see 12.4. for details) |
dCache Storage Element | SE_dcache | Storage Element interfaced with dCache |
Re-locatable distribution * | TAR_UI or TAR_WN | It can be used to set up a Worker Node or a UI (see 12.2. for details) |
LCG File Catalog server * | LFC_mysql | Set up a mysql based LFC server (see 12.5. for details) |
You can use yaim to install more than one node type on a single machine. In this case, you should install all the relevant software first, and then run the configure script. For example, to install a combined RB and BDII, you should do the following:

> /opt/lcg/yaim/scripts/install_node site-info.def lcg-RB lcg-LCG-BDII
> /opt/lcg/yaim/scripts/configure_node site-info.def RB BDII
Note that one combination known not to work is the CE/RB, due to a conflict between the GridFTP servers.
WARNING: in the CE configuration context (and also in the 'torque' LRMS one),
a file with a list of managed worker nodes needs to be compiled. An example of this
configuration file is given in /opt/lcg/yaim/examples/wn-list.conf.
The path of this file must then be set in the variable WN_LIST in the
Site Configuration File (see 6.1.).
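For illustration, the file is simply a list of worker-node hostnames, one per line (the hostnames below are placeholders), and WN_LIST points to its location:

wn001.example.org
wn002.example.org
wn003.example.org

and, in the Site Configuration File (assuming you keep the file at the example path):

WN_LIST=/opt/lcg/yaim/examples/wn-list.conf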
The Maui scheduler configuration provided with the script is currently very
basic.
More advanced configuration examples, to be implemented manually by Site Administrators, can be found in [5].
Once you have the middleware directory available, you must edit the site-info.def file as usual, putting the location of the middleware into the variable INSTALL_ROOT.
If you are sharing the distribution to a number of nodes, commonly WNs, then they should all mount the tree at INSTALL_ROOT. You should configure the middleware on one node (remember you'll need to mount with appropriate privileges) and then it should work for all the others if you set up your batch system and the CA certificates in the usual way. If you'd rather have the CAs on your share, the yaim function install_certs_userland may be of interest. You may want to mount your share ro after the configuration has been done.
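As an illustration of the shared setup (a sketch only; server name, export path and mount point are placeholders), each WN could mount the shared tree read-only at INSTALL_ROOT:

> mount -o ro nfsserver.example.org:/export/lcg /opt/LCG

with INSTALL_ROOT=/opt/LCG set accordingly in the Site Configuration File.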
The middleware in the relocatable distribution has certain dependencies.
We've made this software available as a second tar file which you can download and untar under $EDG_LOCATION. This means that if you untarred the main distribution under /opt/LCG, you must untar the supplementary files under /opt/LCG/edg.
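For example, assuming the main tarball LCG-2_5_0-sl3.tar.gz (see the download link below) and a supplementary tarball whose exact name is not given here, the unpacking could look like this (depending on the archive layout you may need to adjust the target directories):

> mkdir -p /opt/LCG
> tar -xzf LCG-2_5_0-sl3.tar.gz -C /opt/LCG
> mkdir -p /opt/LCG/edg
> tar -xzf <supplementary-tarball> -C /opt/LCG/edg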
If you have administrative access to the nodes, you could alternatively use the TAR dependencies rpm.
> /opt/lcg/yaim/scripts/install_node site-info.def lcg-TAR
Run the configure_node script, adding the type of node as an argument;
> /opt/lcg/yaim/scripts/configure_node site-info.def [ TAR_WN | TAR_UI ]
Note that the script will not configure any LRMS. If you're configuring torque for the first time, you may find the config_users and config_torque_client yaim functions useful. These can be invoked like this
/opt/lcg/yaim/scripts/run_function site-info.def config_users
/opt/lcg/yaim/scripts/run_function site-info.def config_torque_client
If you don't have root access, you can use the supplementary tarball mentioned above to ensure that the dependencies of the middleware are satisfied. The middleware requires java (see 3.), which you can install in your home directory if it's not already available. Please make sure you set the JAVA_LOCATION variable in your site-info.def. You'll probably want to alter the OUTPUT_STORAGE variable there too, as it's set to /tmp/jobOutput by default and it may be better to point it at a directory in your home area.
Once the software is all unpacked, you should run

> $INSTALL_ROOT/lcg/yaim/scripts/configure_node site-info.def TAR_UI

to configure it.
Finally, you'll have to set up some way of sourcing the environment necessary to run the grid software. A script will be available under $INSTALL_ROOT/etc/profile.d for this purpose. Source grid_env.sh or grid_env.csh depending upon your choice of shell.
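For example, a bash user could source the script directly or add it to their login profile (the paths shown assume INSTALL_ROOT=/opt/LCG, as in the example above):

> source /opt/LCG/etc/profile.d/grid_env.sh
> echo "source /opt/LCG/etc/profile.d/grid_env.sh" >> $HOME/.bashrc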
Installing a UI this way puts all the CA certificates under $INSTALL_ROOT/etc/grid-security and adds a user cron job to download the CRLs. However, please note that you'll need to keep the CA certificates up to date yourself. You can do this by running
> /opt/lcg/yaim/scripts/run_function site-info.def install_certs_userland
In [3] there is more information on using this form of the distribution, including a description of what the configure script does. You should check this reference if you'd like to customise the relocatable distribution.
This distribution is used at CERN to make its lxplus system available as a UI. You can take a look at the docs for this too [4].
You can download the tar file for each operating system from
http://grid-deployment.web.cern.ch/grid-deployment/download/relocatable/LCG-2_5_0-sl3.tar.gz
You can download supplementary tar files for the userland installation from
There are several extra configuration steps to perform in order to configure a DPM SE, mostly dealing with
the backend systems.
All the relevant information can be found at
http://goc.grid.sinica.edu.tw/gocwiki/How_to_install_the_Disk_Pool_Manager_%28DPM%29
There are several extra configuration steps to perform in order to configure an LCG File Catalog (LFC), mostly dealing with
the backend systems.
All the relevant information can be found at
http://goc.grid.sinica.edu.tw/gocwiki/How_to_set_up_an_LFC_service
version | date | description |
v2.5.0-1 | 17/Jul/05 | Removing Rh 7.3 support completely. |
v2.3.0-2 | 10/Jan/05 | 6.1.: CA_WGET variable added in site configuration file. |
v2.3.0-3 | 2/Feb/05 | Bibliography: Link to Generic Configuration Reference changed. |
" | " | 12.1., 6.1.: Details added on WN and users lists. |
" | " | script ``configure_torque''. no more available: removed from the list. |
v2.3.0-4 | 16/Feb/05 | Configure apt to find your OS rpms. |
v2.3.0-5 | 22/Feb/05 | Remove apt prefs stuff, mention multiple nodes on one box. |
v2.3.0-6 | 03/Mar/05 | Better lcg-CA update advice. |
v2.3.1-1 | 03/Mar/05 | LCG-2_3_1 locations |
v2.3.4-0 | 01/Apr/05 | LCG-2_4_4 locations |
v2.3.4-1 | 08/Apr/05 | external variables section inserted |
v2.3.4-2 | 31/May/05 | 4.: fix in firewall configuration |
" | " | 11.: verbatim line fixed |
v2.5.0-0 | 20/Jun/05 | 6.1.: New variables added |
" | " | 11.1.: New nodes added (dpm) |
" | " | 12.4.: paragraph added |
" | " | 12.5.: paragraph added |