Document identifier: | LCG-GIS-MI |
---|---|
Date: | 15 February 2006 |
Authors: | Guillermo Diez-Andino, Laurence Field, Oliver Keeble, Antonio Retico, Alessandro Usai, Louis Poncet |
Version: | v2.7.0-2 |
New versions of this document will be distributed with each LCG middleware
release and will reflect the current ``state of the art'' of the installation
and configuration procedures.
A companion document with the upgrade procedures to manually update the
configuration of the nodes from the previous LCG version to the current one is
also part of the release.
Since release LCG-2_3_0, the manual installation and configuration
of LCG nodes has been supported by a set of scripts.
Nevertheless, the automatic configuration of some particular node types has
been intentionally left uncovered. This is mostly the case when a particular
possible configuration is not recommended or is obsolete within the LCG-2
production environment (e.g. a Computing Element with Open-PBS).
Two lists, of ``supported'' and ``not recommended'' node
configurations, follow.
The ``supported'' node types are:
The operating system is Scientific Linux 3 (SL3):
http://www.scientificlinux.org
The sources and the CD images (iso) can be found at
ftp://ftp.scientificlinux.org/linux/scientific/30x/iso/
Use the latest ntp version available for your system. If you are using APT, an apt-get install ntp will do the job.
restrict <time_server_IP_address> mask 255.255.255.255 nomodify notrap noquery
server <time_server_name>

Additional time servers can be added for better performance results. For each server, the hostname and IP address are required. Then, for each time server you are using, add a couple of lines similar to the ones shown above into the file /etc/ntp.conf.
For example, the IP addresses of the CERN time servers are:
137.138.16.69 137.138.17.69
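Putting the pieces together, a minimal sketch of the corresponding /etc/ntp.conf fragment; the CERN hostnames below are assumptions used only for illustration, and you should pick time servers close to your site:

# fragment of /etc/ntp.conf -- hostnames and IPs are examples only
restrict 137.138.16.69 mask 255.255.255.255 nomodify notrap noquery
server ip-time-1.cern.ch
restrict 137.138.17.69 mask 255.255.255.255 nomodify notrap noquery
server ip-time-2.cern.ch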
If you are using iptables, you can add the following to /etc/sysconfig/iptables:

-A INPUT -s <NTP-serverIP-1> -p udp --dport 123 -j ACCEPT
-A INPUT -s <NTP-serverIP-2> -p udp --dport 123 -j ACCEPT
Remember that, in the provided examples, rules are parsed in order, so make sure that no matching REJECT lines precede the ones you add. You can then reload the firewall:
> /etc/init.d/iptables restart
> ntpdate <your ntp server name>
> service ntpd start
> chkconfig ntpd on
You can check that the NTP daemon is synchronising correctly with:
> ntpq -p
> wget http://www.cern.ch/grid-deployment/gis/yaim/lcg-yaim-x.x.x-x.noarch.rpm
> rpm -ivh lcg-yaim-x.x.x-x.noarch.rpm
WARNING: The Site Configuration File is sourced by the configuration
scripts. Therefore there must be no spaces around the equal sign.
Example of a wrong configuration:

SITE_NAME = my-site

Example of a correct configuration:

SITE_NAME=my-site

A good syntax test for your Site Configuration File (e.g. my-site-info.def) is to try and source it manually, running the command

> source my-site-info.def

and checking that no error messages are produced.
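A slightly more defensive variant of the same test, sketched here for a Bourne-like shell, sources the file in a subshell so that a broken file cannot pollute your current environment:

> ( set -e; source ./my-site-info.def ) && echo "syntax OK"

If the file contains, for instance, stray spaces around an equal sign, the subshell exits with an error and the message is not printed.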
The complete specification of the configurable variables follows.
We strongly recommend that, if the meaning of a configuration variable is not
clear to you, you report it to us and stick to the values provided in the
examples.
It may also be that, although you understand the meaning of a variable, you
are in doubt about the value to configure.
This may happen, for instance, if you are running a very small site and you
are not configuring the whole set of nodes, and therefore you have to refer
to some ``public'' service (e.g. RB, BDII ...).
In this case, if you have a reference site, please ask them for indications.
Otherwise, send a message to the "lcg-rollout@cclrclsv.rl.ac.uk"
mailing list.
However, if you only need to configure a limited set of nodes, you may be able
to skip the configuration of some of the variables. In that case you might
find useful table 6.1., which shows the correspondence between node types and
the variables needed to configure them.
BDII | BATCH_LOG_DIR, BDII_FCR, BDII_HOST, BDII_HTTP_URL, BDII_REGIONS, BDII_REGION_URL, CE_BATCH_SYS, CE_HOST, CRON_DIR, GRIDICE_SERVER_HOST, INSTALL_ROOT, MON_HOST, MY_DOMAIN, SITE_NAME, USERS_CONF, VOS, |
CE | APEL_DB_PASSWORD, BATCH_LOG_DIR, BDII_FCR, BDII_HOST, BDII_HTTP_URL, BDII_REGIONS, BDII_REGION_URL, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPM_HOST, EDG_WL_SCRATCH, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, MY_DOMAIN, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, |
VOBOX | BATCH_LOG_DIR, BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPM_HOST, EDG_WL_SCRATCH, FTS_SERVER_URL, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, MY_DOMAIN, OUTPUT_STORAGE, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, |
SE_classic | BATCH_LOG_DIR, BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPM_HOST, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, MY_DOMAIN, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, |
SE_dpm_mysql | BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPMFSIZE, DPMMGR, DPMPOOL, DPMPOOL_NODES, DPMUSER_PWD, DPM_HOST, EDG_WL_SCRATCH, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, MYSQL_PASSWORD, MY_DOMAIN, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, |
SE_dpm_oracle | BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPMFSIZE, DPMMGR, DPMPOOL, DPMPOOL_NODES, DPMUSER_PWD, DPM_HOST, EDG_WL_SCRATCH, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, MY_DOMAIN, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, |
SE_dpm_disk | BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPMPOOL, DPMPOOL_NODES, DPM_HOST, EDG_WL_SCRATCH, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, |
MON | APEL_DB_PASSWORD, BATCH_LOG_DIR, BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPM_HOST, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRID_TRUSTED_BROKERS, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, MYSQL_PASSWORD, MY_DOMAIN, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_SW_DIR, |
PX | BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, DCACHE_ADMIN, DPMDATA, DPM_HOST, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRID_TRUSTED_BROKERS, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_SW_DIR, |
RB | BATCH_LOG_DIR, BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPM_HOST, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, MYSQL_PASSWORD, MY_DOMAIN, PX_HOST, QUEUES, RB_HOST, RB_RLS, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, |
SE_dcache | BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DCACHE_POOLS, DCACHE_PORT_RANGE, DPMDATA, DPM_HOST, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, PX_HOST, QUEUES, RB_HOST, REG_HOST, RESET_DCACHE_CONFIGURATION, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, |
UI | BDII_HOST, CE_HOST, DPM_HOST, EDG_WL_SCRATCH, FTS_SERVER_URL, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, MON_HOST, OUTPUT_STORAGE, PX_HOST, RB_HOST, REG_HOST, SE_LIST, SITE_NAME, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_SE, VO_VO-NAME_SW_DIR, VO_VO-NAME_VOMSES, |
WN | BDII_HOST, CE_HOST, DPM_HOST, EDG_WL_SCRATCH, FTS_SERVER_URL, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, MON_HOST, PX_HOST, RB_HOST, REG_HOST, SE_LIST, SITE_NAME, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_SE, VO_VO-NAME_SW_DIR, |
TAR_UI | BDII_HOST, CE_HOST, DPM_HOST, EDG_WL_SCRATCH, FTS_SERVER_URL, GLOBUS_TCP_PORT_RANGE, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, MON_HOST, OUTPUT_STORAGE, PX_HOST, RB_HOST, REG_HOST, SE_LIST, SITE_NAME, VOBOX_HOST, VOS, VO_VO-NAME_SE, VO_VO-NAME_SW_DIR, VO_VO-NAME_VOMSES, |
TAR_WN | BDII_HOST, CE_HOST, DPM_HOST, EDG_WL_SCRATCH, FTS_SERVER_URL, GLOBUS_TCP_PORT_RANGE, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, MON_HOST, PX_HOST, REG_HOST, SE_LIST, SITE_NAME, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_SE, VO_VO-NAME_SW_DIR, |
LFC_mysql | BDII_HOST, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPM_HOST, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_DB_PASSWORD, LFC_HOST, LFC_LOCAL, MON_HOST, MYSQL_PASSWORD, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, |
CE_torque | APEL_DB_PASSWORD, BATCH_LOG_DIR, BDII_FCR, BDII_HOST, BDII_HTTP_URL, BDII_REGIONS, BDII_REGION_URL, CE_BATCH_SYS, CE_CPU_MODEL, CE_CPU_SPEED, CE_CPU_VENDOR, CE_HOST, CE_INBOUNDIP, CE_MINPHYSMEM, CE_MINVIRTMEM, CE_OS, CE_OS_RELEASE, CE_OUTBOUNDIP, CE_RUNTIMEENV, CE_SF00, CE_SI00, CE_SMPSIZE, CLASSIC_HOST, CLASSIC_STORAGE_DIR, CRON_DIR, DCACHE_ADMIN, DPMDATA, DPM_HOST, EDG_WL_SCRATCH, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GRIDMAP_AUTH, GRID_TRUSTED_BROKERS, GROUPS_CONF, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, LFC_CENTRAL, LFC_HOST, LFC_LOCAL, MON_HOST, MY_DOMAIN, PX_HOST, QUEUES, RB_HOST, REG_HOST, SE_LIST, SITE_EMAIL, SITE_LAT, SITE_LOC, SITE_LONG, SITE_NAME, SITE_SUPPORT_SITE, SITE_TIER, SITE_WEB, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_QUEUES, VO_VO-NAME_SE, VO_VO-NAME_SGM, VO_VO-NAME_STORAGE_DIR, VO_VO-NAME_SW_DIR, VO_VO-NAME_USERS, VO_VO-NAME_VOMS_POOL_PATH, VO_VO-NAME_VOMS_SERVERS, VO_SW_DIR, WN_LIST, |
WN_torque | BDII_HOST, CE_HOST, DPM_HOST, EDG_WL_SCRATCH, FTS_SERVER_URL, GLOBUS_TCP_PORT_RANGE, GRIDICE_SERVER_HOST, GSSKLOG, GSSKLOG_SERVER, INSTALL_ROOT, JAVA_LOCATION, JOB_MANAGER, MON_HOST, PX_HOST, RB_HOST, REG_HOST, SE_LIST, SITE_NAME, TORQUE_SERVER, USERS_CONF, VOBOX_HOST, VOBOX_PORT, VOS, VO_VO-NAME_SE, VO_VO-NAME_SW_DIR, |
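As an illustration only, here is a hypothetical fragment of a Site Configuration File covering a subset of the variables listed above; every hostname, path and value is a placeholder to be replaced with your site's real settings, and the VO_<VO-NAME>_* pattern is expanded for the dteam VO as an example:

# hypothetical site-info.def fragment -- all values are placeholders
SITE_NAME=my-site
CE_HOST=ce.my-domain.org
SE_LIST="se.my-domain.org"
BDII_HOST=bdii.my-domain.org
RB_HOST=rb.my-domain.org
PX_HOST=px.my-domain.org
MON_HOST=mon.my-domain.org
INSTALL_ROOT=/opt
JOB_MANAGER=lcgpbs
VOS="dteam"
VO_DTEAM_SE=se.my-domain.org
VO_DTEAM_SW_DIR=/opt/exp_soft/dteam

Note that, as required by the warning above, there are no spaces around the equal signs.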
> wget ftp://ftp.scientificlinux.org/linux/scientific/30x/i386/SL/RPMS/apt-XXX.i386.rpm
> rpm -ivh apt-XXX.i386.rpm
SL3:
LCG_REPOSITORY="rpm http://grid-deployment.web.cern.ch/grid-deployment/gis apt/LCG-2_7_0/sl3/en/i386 lcg_sl3 lcg_sl3.updates lcg_sl3.security"
Please note that for the dependencies of the middleware to be met, you'll have to make sure that apt
can find and download your OS rpms. This typically means you'll have to install an rpm called
'apt-sourceslist', or else create an appropriate file in your /etc/apt/sources.list.d directory.
If you are not using SLC3 but another binary-compatible distribution, it is highly recommended that you configure apt-get to give priority, during the installation, to the packages provided by your distribution.
In order to have all the known dependencies resolved by apt-get, you should have at least the following lists in your /etc/apt/sources.list.d/ directory:
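As an illustration, such a list file (e.g. your-os.list) might look like the following sketch, where the repository URL, path and components are placeholders for your distribution's actual apt repository:

# /etc/apt/sources.list.d/your-os.list -- hypothetical example
rpm http://your-os-mirror.your-domain.org your-os/i386/apt os updates extras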
Since the deployment team is based at CERN and uses the local installation, it is still possible that, with this bare configuration, some dependencies, although handled, cannot be resolved, because the binary-compatible distribution you use does not provide the entire set of packages that CERN SL3 does.
If you prefer not to handle these issues manually, you can add another list (e.g. cern.list) in /etc/apt/sources.list.d/:

### List of apt repositories available from linuxsoft.cern.ch
### suitable for your system.
###
### See http://cern.ch/linux/updates/ for a list of other repositories and mirrors.
### 09.06.2004
# THE default
rpm http://linuxsoft.cern.ch cern/slc30X/i386/apt os updates extras
rpm-src http://linuxsoft.cern.ch cern/slc30X/i386/apt os updates extras
Then you have to configure your apt-get preferences in order to give priority to your OS rather than to CERN SLC3.
An /etc/apt/preferences file like the following gives priority to your OS in all cases, except when the package you need is not present in your OS repository:
Package: swig
Pin: release o=grid-deployment.web.cern.ch
Pin-Priority: 990

Package: *
Pin: release o=your-os.your-domain.org
Pin-Priority: 980

Package: *
Pin: release o=linux.cern.ch
Pin-Priority: 970
The Pin-Priority values give priority to your OS repository, except for the swig package, for which the version we distribute is the one to use. If you are mirroring our apt repository, change the preferences according to your mirror's hostname.
If you are not using apt to install, you can pull the packages directly from SLC3's repository using wget. The address is http://linuxsoft.cern.ch/cern/slc305/i386/apt/.
In order to install the node with the desired middleware packages run the command
> /opt/lcg/yaim/scripts/install_node <site-configuration-file> <meta-package> [ <meta-package> ... ]
The complete list of the meta-packages available with this release is
provided in 8.1. (SL3).
For example, in order to install a CE with Torque, after the configuration of the site-info.def file is done, you have to run:
> /opt/lcg/yaim/scripts/install_node site-info.def lcg-CE_torque
WARNING: There is a known installation conflict between the 'torque-clients'
rpm and the 'postfix' mail server (Savannah bug #5509).
In order to work around the problem you can either uninstall postfix or remove
the file /usr/share/man/man8/qmgr.8.gz from the target node.
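A sketch of the two alternatives (run one or the other); note that rpm will refuse to remove postfix if other installed packages depend on it:

> rpm -e postfix

or

> rm /usr/share/man/man8/qmgr.8.gz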
The ``bare-middleware'' versions of the WN and CE meta-packages are provided in case you have an existing LRMS;
> /opt/lcg/yaim/scripts/install_node site-info.def lcg-CE
You can install multiple node types on one machine
> /opt/lcg/yaim/scripts/install_node site-info.def <meta-package> <meta-package> ...
Node Type | meta-package Name | meta-package Description |
BDII | lcg-BDII | BDII |
Computing Element (middleware only) | lcg-CE | It does not include any LRMS |
Computing Element (with Torque) | lcg-CE_torque | It includes the 'Torque' LRMS |
LCG File Catalog (mysql) | lcg-LFC_mysql | LCG File Catalog |
LCG File Catalog (oracle) | lcg-LFC_oracle | LCG File Catalog |
MON-Box | lcg-MON | RGMA-based monitoring system collector server |
Proxy | lcg-PX | Proxy Server |
Resource Broker | lcg-RB | Resource Broker |
Classic Storage Element | lcg-SE_classic | Storage Element on local disk |
dCache Storage Element | lcg-SE_dcache | Storage Element interfaced to dCache without pnfs dependency |
dCache Storage Element | lcg-SE_dcache_gdbm | Storage Element interfaced to dCache with dependency on pnfs (gdbm) |
DPM Storage Element (mysql) | lcg-SE_dpm_mysql | Storage Element with SRM interface |
DPM Storage Element (Oracle) | lcg-SE_dpm_oracle | Storage Element with SRM interface |
DPM disk | lcg-SE_dpm_disk | Disk server for a DPM SE |
Dependencies for the re-locatable distribution | lcg-TAR | This package can be used to satisfy the dependencies of the relocatable distro |
User Interface | lcg-UI | User Interface |
VO agent box | lcg-VOBOX | Agents and Daemons |
Worker Node (middleware only) | lcg-WN | It does not include any LRMS |
Worker Node (with Torque client) | lcg-WN_torque | It includes the 'Torque' LRMS |
> apt-get update && apt-get -y install lcg-CA
In order to keep the CA configuration up to date on your node, we strongly recommend that Site Administrators set up a periodic upgrade procedure for the CAs on the installed node (e.g. running the above command via a daily cron job).
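A minimal sketch of such a cron job; the schedule, file name and log path are arbitrary choices:

# /etc/cron.d/lcg-ca-update -- hypothetical daily refresh of the CA rpms
30 5 * * * root /usr/bin/apt-get update && /usr/bin/apt-get -y install lcg-CA >> /var/log/lcg-ca-update.log 2>&1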
CE, LFC, MON, PROXY, RB, SE and VOBOX nodes require the host certificate/key files before you
start their installation.
Contact your national Certification Authority (CA) to understand how to
obtain a host certificate if you do not have one already.
Instructions on how to obtain a CA list can be found at
http://grid-deployment.web.cern.ch/grid-deployment/lcg2CAlist.html
From the CA list so obtained, you should choose a CA close to you.
Once you have obtained a valid certificate, i.e. a hostcert.pem file with the
host certificate and a hostkey.pem file with the corresponding private key,
place both files in the directory
/etc/grid-security
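A sketch of the typical placement, assuming the files are named hostcert.pem and hostkey.pem; the private key must be readable by root only:

> cp hostcert.pem /etc/grid-security/hostcert.pem
> cp hostkey.pem /etc/grid-security/hostkey.pem
> chmod 644 /etc/grid-security/hostcert.pem
> chmod 400 /etc/grid-security/hostkey.pem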
The general procedure to configure the middleware packages that have been installed on the node via the procedure described in 8. is to run the command:

> /opt/lcg/yaim/scripts/configure_node <site-configuration-file> <node-type> [ <node-type> ... ]

For example, in order to configure the WN with Torque you had installed before, after the configuration of the site-info.def file is done, you have to run:
> /opt/lcg/yaim/scripts/configure_node site-info.def WN_torque
The following table gives a reference to all the available
configuration targets.
Node Type | Configuration Target | Node Description |
BDII | BDII | A top level BDII |
Computing Element (middleware only) | CE | It does not configure any LRMS |
Computing Element (with Torque) * | CE_torque | It configures also the 'Torque' LRMS client and server (see 12.1. for details) |
LCG File Catalog server * | LFC_mysql | Set up a mysql based LFC server (see 12.4. for details) |
MON-Box | MON | RGMA-based monitoring system collector server |
Proxy | PX | Proxy Server |
Resource Broker | RB | Resource Broker |
Classic Storage Element | SE_classic | Storage Element on local disk |
Disk Pool Manager (mysql) * | SE_dpm_mysql | Storage Element with SRM interface and mysql backend (see 12.3. for details) |
Disk Pool Manager disk * | SE_dpm_disk | Disk server for SE_dpm |
dCache Storage Element | SE_dcache | Storage Element interfaced with dCache |
Re-locatable distribution * | TAR_UI or TAR_WN | It can be used to set up a Worker Node or a UI (see 12.2. for details) |
User Interface | UI | User Interface |
VO agent box | VOBOX | Machine to run VO agents |
Worker Node (middleware only) | WN | It does not configure any LRMS |
Worker Node (with Torque client) | WN_torque | It configures also the 'Torque' LRMS client |
You can use yaim to install more than one node type on a single machine. In this case, you should install all the relevant software first, and then run the configure script. For example, to install a combined RB and BDII, you should do the following:

> /opt/lcg/yaim/scripts/install_node site-info.def lcg-RB lcg-BDII
> /opt/lcg/yaim/scripts/configure_node site-info.def RB BDII
All node-types must be given as arguments to the same invocation of configure_node; do not run this command once for each node type. Note that the combinations known not to work are CE/RB, RB/SE and CE/BDII.
WARNING: in the CE configuration context (and also in the 'torque' LRMS one),
a file with the list of managed worker nodes needs to be compiled. An example of this
configuration file is given in /opt/lcg/yaim/examples/wn-list.conf
The path of this file must then be set in the variable WN_LIST of the
Site Configuration File (see 6.1.).
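The file simply lists the managed worker nodes. A sketch with placeholder hostnames, one fully qualified worker node per line:

wn001.my-domain.org
wn002.my-domain.org
wn003.my-domain.org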
The Maui scheduler configuration provided with the script is currently very
basic.
More advanced configuration examples, to be implemented manually by Site Administrators, can be found in [6].
Once you have the middleware directory available, you must edit the site-info.def file as usual, putting the location of the middleware into the variable INSTALL_ROOT.
If you are sharing the distribution among a number of nodes, commonly WNs, then they should all mount the tree at INSTALL_ROOT. You should configure the middleware on one node (remember you'll need to mount with appropriate privileges), and the result will then work for all the others, provided you set up your batch system and the CA certificates in the usual way. If you'd rather keep the CAs on your share, the yaim function install_certs_userland may be of interest. You may want to remount your share read-only once the configuration has been done.
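For instance, a hypothetical read-only NFS mount on a WN, assuming the tree is exported by nfs-server.my-domain.org and INSTALL_ROOT=/opt/LCG:

> mount -t nfs -o ro nfs-server.my-domain.org:/opt/LCG /opt/LCG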
The middleware in the relocatable distribution has certain dependencies.
We've made this software available as a second tar file which you can download and untar under $EDG_LOCATION. This means that if you untarred the main distribution under /opt/LCG, you must untar the supplementary files under /opt/LCG/edg.
If you have administrative access to the nodes, you could alternatively use the TAR dependencies rpm.
> /opt/lcg/yaim/scripts/install_node site-info.def lcg-TAR
For Debian, here is a list of the packages required for the tarball to work:
perl-modules python2.2 libexpat1 libx11-6 libglib2.0-0 libldap2 libstdc++2.10-glibc2.2 tcl8.3-dev libxml2 termcap-compat libssl0.9.7 tcsh rpm rsync cpp gawk openssl wget
Run the configure_node script, adding the type of node as an argument;
> /opt/lcg/yaim/scripts/configure_node site-info.def [ TAR_WN | TAR_UI ]
Note that the script will not configure any LRMS. If you're configuring torque for the first time, you may find the config_users and config_torque_client yaim functions useful. These can be invoked like this:

> ${INSTALL_ROOT}/lcg/yaim/scripts/run_function site-info.def config_users
> ${INSTALL_ROOT}/lcg/yaim/scripts/run_function site-info.def config_torque_client
You can find a quick guide to this here [5].
If you don't have root access, you can use the supplementary tarball mentioned above to ensure that the dependencies of the middleware are satisfied. The middleware requires java (see 3.), which you can install in your home directory if it's not already available. Please make sure you set the JAVA_LOCATION variable in your site-info.def. You'll probably want to alter the OUTPUT_STORAGE variable there too, as it's set to /tmp/jobOutput by default and it may be better to point it somewhere under your home directory.
Once the software is all unpacked, you should run

> $INSTALL_ROOT/lcg/yaim/scripts/configure_node site-info.def TAR_UI

to configure it.
Finally, you'll have to set up some way of sourcing the environment necessary to run the grid software. A script will be available under $INSTALL_ROOT/etc/profile.d for this purpose. Source grid_env.sh or grid_env.csh depending upon your choice of shell.
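For example, for Bourne-like shells:

> source $INSTALL_ROOT/etc/profile.d/grid_env.sh

or, for csh-like shells:

> source $INSTALL_ROOT/etc/profile.d/grid_env.csh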
Installing a UI this way puts all the CA certificates under $INSTALL_ROOT/etc/grid-security and adds a user cron job to download the CRLs. However, please note that you'll need to keep the CA certificates up to date yourself. You can do this by running
> /opt/lcg/yaim/scripts/run_function site-info.def install_certs_userland
In [3] there is more information on using this form of the distribution. You should check this reference if you'd like to customise the relocatable distribution.
This distribution is used at CERN to make its lxplus system available as a UI. You can take a look at the docs for this too [4].
You can download the tar file for each operating system from
http://grid-deployment.web.cern.ch/grid-deployment/download/relocatable/LCG-2_7_0-sl3.tar.gz
You can download supplementary tar files for the userland installation from
There are several extra configuration steps to perform in order to configure a DPM SE, mostly dealing with
the backend systems.
All the relevant information can be found at
http://goc.grid.sinica.edu.tw/gocwiki/How_to_install_the_Disk_Pool_Manager_%28DPM%29
There are several extra configuration steps to perform in order to configure an LCG File Catalog (LFC), mostly dealing with
the backend systems.
All the relevant information can be found at
http://goc.grid.sinica.edu.tw/gocwiki/How_to_set_up_an_LFC_service
version | date | description |
v2.5.0-1 | 17/Jul/05 | Removing RH 7.3 support completely. |
v2.3.0-2 | 10/Jan/05 | 6.1.: CA_WGET variable added in site configuration file. |
v2.3.0-3 | 2/Feb/05 | Bibliography: Link to Generic Configuration Reference changed. |
" | " | 12.1., 6.1.: Details added on WN and users lists. |
" | " | script ``configure_torque''. no more available: removed from the list. |
v2.3.0-4 | 16/Feb/05 | Configure apt to find your OS rpms. |
v2.3.0-5 | 22/Feb/05 | Remove apt prefs stuff, mention multiple nodes on one box. |
v2.3.0-6 | 03/Mar/05 | Better lcg-CA update advice. |
v2.3.1-1 | 03/Mar/05 | LCG-2_3_1 locations |
v2.3.4-0 | 01/Apr/05 | LCG-2_4_4 locations |
v2.3.4-1 | 08/Apr/05 | external variables section inserted |
v2.3.4-2 | 31/May/05 | 4.: fix in firewall configuration |
" | " | 11.: verbatim line fixed |
v2.5.0-0 | 20/Jun/05 | 6.1.: New variables added |
" | " | 11.1.: New nodes added (dpm) |
" | " | 12.3.: paragraph added |
" | " | 12.4.: paragraph added |
v2.5.0-1 | 28/Jun/05 | 7.: note on apt-get preferences added |
v2.6.0-1 | 23/Sep/05 | 10.: host certificates needed on the VOBOX |
v2.7.0-0 | 11/Jan/06 | 6.1.: new variables GROUPS_CONF RB_RLS BATCH_LOG_DIR CLASSIC_HOST CLASSIC_STORAGE_DIR DPMPOOL_NODES SE_LIST BDII_FCR_URL VO_XXX_VOMSES added. |
v2.7.0-1 | 10/Feb/06 | 11.1.: all the forbidden combinations added |