Document identifier:
Date: 6 April 2005
Author: GRID Deployment Group (<support-lcg-deployment@cern.ch>)
Version: v2.4.0
> export CVS_RSH=ssh
> export CVSROOT=:pserver:anonymous@lcgdeploy.cvs.cern.ch:/cvs/lcgdeploy
Check out the tag from CVS.
> cvs checkout -r LCG-x_y_z -d LCG-x_y_z lcg2
This will create a directory LCG-x_y_z that contains the new configuration files. The files in the LCG-x_y_z/rpmlist and LCG-x_y_z/source directories should be copied to the locations that you use for rpmlist and source.
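For example (the destination paths here are placeholders; substitute the rpmlist and source locations configured for your site):

> cp LCG-x_y_z/rpmlist/* <rpmlist_dir>/
> cp LCG-x_y_z/source/* <source_dir>/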
> cp LCG-x_y_z/tools/updaterep.conf /etc/updaterep.conf
> updaterep
By default all rpms will be copied to the /opt/local/linux/7.3/RPMS area, which is visible from the client nodes as /export/local/linux/7.3/RPMS. You can change this location by editing /etc/updaterep.conf and modifying the REPOSITORY_BASE variable.
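For example, to keep the repository under a different base path (the path below is only an illustrative assumption), change the corresponding line in /etc/updaterep.conf:

REPOSITORY_BASE=/data/local/linux/7.3/RPMS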
IMPORTANT NOTICE: As the list and structure of Certification Authorities (CAs) accepted by the LCG project can change independently of the middleware releases, the rpm list for the CA certificates and URLs has been decoupled from the standard LCG release procedure. This means that the version of the security-rpm.h file contained in the rpmlist directory associated with the current tag might be incomplete or obsolete. Please go to the URL http://markusw.home.cern.ch/markusw/lcg2CAlist.html and follow the instructions there to update the CAs. Changes and updates of the CAs will be announced on the LCG-Rollout mailing list.
> LCG-x_y_z/tools/lcfgng_server_update.pl LCG-x_y_z/rpmlist/lcfgng-server-rpm.h

This script will report which rpms are missing or have the wrong version, and will create a script, /tmp/lcfgng_server_update_script.sh, that updates all the rpms needed on the LCFGng server. Please check that all the commands in this script look reasonable before running it.
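Once you have verified the generated script, run it, for instance with:

> sh /tmp/lcfgng_server_update_script.sh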
You must replace the default root password in the file private-cfg.h with the one you want to use for your site:
+auth.rootpwd <CRYPTED_PWD>
To obtain the password encrypted with MD5 (stronger than the standard crypt method), you can use the following command:
> openssl passwd -1
This command will prompt you to enter the clear-text version of the password and then print the encrypted version.
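A sample session looks like this (placeholders shown instead of real values; the printed string, starting with $1$, is what replaces <CRYPTED_PWD> above):

> openssl passwd -1
Password:
Verifying - Password:
$1$<salt>$<hash>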
> ./addvo.py -i new-vo.conf > new-vo-cfg.h

To include this file, add the following line to the vos-cfg.h file:
#include "new-vo-cfg.h"
> do_mkxprof.sh node1 [node2 node3 ...]
If you get an error status for one or more of the configurations, you can get a detailed report on the nature of the error by looking at the following URL:

http://<LCFGng_Server>/status
Once all node configurations are correctly published, you can proceed and install your nodes following any one of the installation procedures described in the LCFGng Server Installation Guide.
> chmod 400 /etc/grid-security/hostkey.pem
After installing a ResourceBroker, StorageElement, or ComputingElement node you should force a first creation of the grid-mapfile by running
> /opt/edg/sbin/edg-mkgridmap --output=/etc/grid-security/grid-mapfile --safe
Every 6 hours a cron job will repeat this procedure and update grid-mapfile.
No additional configuration steps are currently needed on a UserInterface node.
Log in as root on your RB node, represented by <rb_node> in the examples, and make sure that the MySQL server is up and running:
> /etc/rc.d/init.d/mysql start

If it was already running, you will just be notified of the fact. Now choose a DB management <password> (write it down somewhere!) and configure the server with the following commands:
> mysqladmin password <password>
> mysql --password=<password> \
    --exec "set password for root@<rb_node>=password('<password>')" mysql
> mysqladmin --password=<password> create lbserver20
> mysql --password=<password> lbserver20 < /opt/edg/etc/server.sql
> mysql --password=<password> \
    --exec "grant all on lbserver20.* to lbserver@localhost" lbserver20

Note that the database name "lbserver20" is hardwired in the LB server code and cannot be changed, so use it exactly as shown in the commands. Make sure that /var/lib/mysql has the right permissions set (755).
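As a quick sanity check (a sketch, using the password chosen above), you can list the tables that server.sql just created:

> mysql --password=<password> --exec "show tables" lbserver20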
After upgrading the CE, make sure that the experiment-specific runtime environment tags can still be published. For this, move the /opt/edg/var/info/<VO-NAME>/<VO-NAME>.ldif files to <VO-NAME>.list.
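A minimal sketch of this rename, assuming every directory under /opt/edg/var/info corresponds to a VO:

for dir in /opt/edg/var/info/*/; do
    vo=`basename $dir`
    [ -f $dir/$vo.ldif ] && mv $dir/$vo.ldif $dir/$vo.list
done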
> /opt/edg/sbin/edg-pbs-knownhosts
A cron job will update this file every 6 hours.
HostbasedAuthentication yes
IgnoreUserKnownHosts yes
IgnoreRhosts yes
and then restart the server with
> /etc/rc.d/init.d/sshd restart
> /opt/edg/sbin/edg-pbs-shostsequiv

A cron job will update this file every 6 hours.
Note: every time you add or remove WNs, do not forget to rerun these two scripts (edg-pbs-shostsequiv and edg-pbs-knownhosts) on the CE, or the new WNs will not work correctly until the next time cron runs them for you.
> /etc/obj/nfsmount restart
The MySQL database used by the servlets needs to be configured.
> mysql -u root < /opt/edg/var/edg-rgma/rgma-db-setup.sql
Restart the R-GMA servlets:

> /etc/rc.d/init.d/edg-tomcat4 restart
For the accounting package, APEL, you will need to set up the database.
> mysql -u root --exec "create database accounting"
> mysql -u root accounting < /opt/edg/var/edg-rgma/apel-schema.sql
> mysql -u root --exec "grant all on accounting.* to accounting@$MON_HOST identified by APELDB_PWD"
> mysql -u root --exec "grant all on accounting.* to accounting@localhost identified by APELDB_PWD"
> mysql -u root --exec "grant all on accounting.* to accounting@$CE_HOST identified by APELDB_PWD"
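To check that the grants took effect (a sketch; substitute the actual APELDB_PWD value), connect to the database as the accounting user:

> mysql -u accounting --password=APELDB_PWD --exec "show tables" accounting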
> echo 256000 > /proc/sys/fs/file-max
You can make this setting reboot-proof by adding the following code at the end of your /etc/rc.d/rc.local file:
# Increase max number of open files
if [ -f /proc/sys/fs/file-max ]; then
    echo 256000 > /proc/sys/fs/file-max
fi
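You can verify the current limit at any time with:

> cat /proc/sys/fs/file-max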
To redirect your WNs to use a web proxy, edit the /etc/wgetrc file and add a line like the following (the proxy host and port are placeholders for your local setup):

http_proxy = http://<proxy_host>:<port>/
Note: I could not test this recipe directly as I am not aware of a web proxy at CERN. If you try it and find problems, please post a message on the lcg-rollout list.
Host *
    HostbasedAuthentication yes
Note: the "Host *" line might already exist. In this case, just add the second line after it.
> /opt/edg/sbin/edg-pbs-knownhosts
A cron job will update this file every 6 hours.
No additional configuration steps are currently needed on a PlainGRIS node.
This is the current version of the BDII service, which does not rely on Regional MDSes. If you want to install the new service, you should use the LCG-BDII_node example file from the "examples" directory. After installation the new LCG-BDII service does not need any further configuration: the list of available sites will be automatically downloaded from the default web location defined by SITE_BDII_URL in site-cfg.h, and the initial population of the database will be started. Expect a delay of a couple of minutes between the time the machine comes up and the time the database is fully populated.
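Once populated, you can test the service with an LDAP query (a sketch; port 2170 and the mds-vo-name=local,o=grid base are the conventional LCG-BDII settings):

> ldapsearch -x -H ldap://<BDII_host>:2170 -b mds-vo-name=local,o=grid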
If for some reason you want to use a static list of sites, then you should copy the static configuration file to /opt/lcg/var/bdii/lcg-bdii-update.conf and add this line at the end of your LCG-BDII node configuration file:
+lcgbdii.auto no
If you need a group of BDIIs that are centrally managed and that see a different set of sites from those defined by the URL above, you can set up a web server and publish a web page containing the sites. The URL of this page must then be used as SITE_BDII_URL in site-cfg.h. Leave lcgbdii.auto set to yes.
The structure of this file can be seen in the example at: http://grid-deployment.web.cern.ch/grid-deployment/gis/lcg2-bdii/dteam/lcg2-all-sites.conf
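As a sketch only (the example file above is the authoritative reference for the format), each entry pairs a site name with the LDAP contact string of its site index, along the lines of:

<SITE_NAME> ldap://<site_giis_host>:2135/mds-vo-name=<site>,o=grid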
Make sure that in the site-cfg.h file you have included all Resource Brokers that your users want to use. This is done in the following line:
#define GRID_TRUSTED_BROKERS "/C=CH/O=CERN/OU=GRID/CN=host/BROKER1.Domain.ch \ /C=CH/O=CERN/OU=GRID/CN=host/Broker2.Domain.ch"
Note that queues short, long, and infinite are those defined in the site-cfg.h file and the time limits are those in use at CERN. Feel free to add/remove/modify them to your liking but do not forget to modify site-cfg.h accordingly.
The values given in this example are only reference values. Make sure that the requirements of the experiment as stated here: http://ibird.home.cern.ch/ibird/LCGMinResources.doc are satisfied by your configuration.
@---------------------------------------------------------------------
/usr/bin/qmgr <<EOF
set server scheduling = True
set server acl_host_enable = False
set server managers = root@<CEhostname>
set server operators = root@<CEhostname>
set server default_queue = short
set server log_events = 511
set server mail_from = adm
set server query_other_jobs = True
set server scheduler_iteration = 600
set server default_node = lcgpro
set server node_pack = False
create queue short
set queue short queue_type = Execution
set queue short resources_max.cput = 00:15:00
set queue short resources_max.walltime = 02:00:00
set queue short enabled = True
set queue short started = True
create queue long
set queue long queue_type = Execution
set queue long resources_max.cput = 12:00:00
set queue long resources_max.walltime = 24:00:00
set queue long enabled = True
set queue long started = True
create queue infinite
set queue infinite queue_type = Execution
set queue infinite resources_max.cput = 80:00:00
set queue infinite resources_max.walltime = 100:00:00
set queue infinite enabled = True
set queue infinite started = True
EOF
@---------------------------------------------------------------------
@---------------------------------------------------------------------
lxshare0223.cern.ch np=2 lcgpro
lxshare0224.cern.ch np=2 lcgpro
lxshare0225.cern.ch np=2 lcgpro
lxshare0226.cern.ch np=2 lcgpro
@---------------------------------------------------------------------
where np=2 gives the number of job slots (usually equal to #CPUs) available on the node, and lcgpro is the group name as defined in the default_node parameter in the server configuration.
> /etc/rc.d/init.d/pbs_server restart
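After the restart, you can check that all WNs are visible to the server (pbsnodes is part of the standard PBS client tools):

> pbsnodes -a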