v. 1.0 (rev. 14)
Installation Guide
21 April 2005
2.2. Standard Deployment Model
3. gLite Packages and Downloads
4. The gLite Configuration Model
4.1. The gLite Configuration Scripts
4.2. The gLite Configuration Files
4.2.1. Configuration Parameters Scope
4.2.2. The Local Service Configuration Files
4.2.3. The Global Configuration File
4.2.4. The Site Configuration File
4.2.7. Default Environment Variables
4.2.8. Configuration Overrides
5.2. Installation Pre-requisites
5.3. Security Utilities Installation
5.4. Security Utilities Configuration
5.5. Security Utilities Configuration Walkthrough
6.2. Installation Pre-requisites
6.2.3. Resource Management System
6.3. Computing Element Service Installation
6.4. Computing Element Service Configuration
6.5. Computing Element Configuration Walkthrough
6.7. Starting the CE Services at Boot
6.8. Workspace Service Tech-Preview
7.2. Installation Pre-requisites
7.3. Workload Management System Installation
7.4. Workload Management System Configuration
7.5. Workload Management System Configuration Walkthrough
7.6. Managing the WMS Services
7.7. Starting the WMS Services at Boot
7.8. Publishing WMS Services to R-GMA
8. Logging and Bookkeeping Server
8.2. Installation Pre-requisites
8.3. Logging and Bookkeeping Server Installation
8.4. Logging and Bookkeeping Server Configuration
8.5. Logging and Bookkeeping Configuration Walkthrough
9.2. Installation Pre-requisites
9.2.3. Resource Management System
9.4. Worker Node Configuration
9.5. Worker Node Configuration Walkthrough
10.1.2. Installation pre-requisites
10.1.3. gLite I/O Server installation
10.1.4. gLite I/O Server Configuration
10.1.5. gLite I/O Server Configuration Walkthrough
10.2.2. Installation pre-requisites
10.2.3. gLite I/O Client installation
10.2.4. gLite I/O Client Configuration
11.2. Installation Pre-requisites
11.3. Local Transfer Service Installation
11.4. Local Transfer Service Configuration
11.5. Local Transfer Service Configuration Walkthrough
12.2. Installation Pre-requisites
12.2.3. Oracle JDBC Drivers, SQLJ translator and InstantClient
12.3. Single Catalog Installation
12.4. Single Catalog Configuration
12.5. Single Catalog Configuration Walkthrough
13. Information and Monitoring System (R-GMA)
13.2. R-GMA deployment modules
13.2.1. R-GMA Deployment strategy
13.2.2. R-GMA Server deployment module
13.2.3. R-GMA Client deployment module
13.2.4. R-GMA servicetool deployment module
13.2.5. R-GMA GadgetIN (GIN) deployment module
14. VOMS Server and Administration Tools
14.2. Installation Pre-requisites
14.3. VOMS Server Installation
14.4. VOMS Server Configuration
14.5. VOMS Server Configuration Walkthrough
15.2. Installation Pre-requisites
15.5. UI Configuration Walkthrough
15.6. Configuration for the UI users
16.1.1. TORQUE Server Overview
16.1.2. TORQUE Client Overview
16.2. Installation Pre-requisites
16.3.1. TORQUE Server Installation
16.3.2. TORQUE Server Service Configuration
16.3.3. TORQUE Server Configuration Walkthrough
16.3.4. Managing the TORQUE Server Service
16.4.1. TORQUE Client Installation
16.4.2. TORQUE Client Configuration
16.4.3. TORQUE Client Configuration Walkthrough
16.4.4. Managing the TORQUE Client
17. The gLite Functional Test Suites
17.2.1. Installation Pre-requisites
17.3.1. Installation Pre-requisites
17.4.1. Installation Pre-requisites
18. Appendix A: Service Configuration File Example
19. Appendix B: Site Configuration File Example
This document describes how to install and configure the EGEE middleware known as gLite. The objective is to provide clear instructions for administrators on how to deploy gLite components on machines at their site.
Glossary
CE | Computing Element
R-GMA | Relational Grid Monitoring Architecture
WMS | Workload Management System
WN | Worker Node
LTS | Local Transfer Service
LB | Logging and Bookkeeping
SC | Single Catalog
UI | User Interface
Definitions
Service | A single high-level unit of functionality
Node | A computer where one or more services are deployed
The gLite middleware is a Service Oriented Grid middleware providing services for managing distributed computing and storage resources and the required security, auditing and information services.
The gLite system is composed of a number of high level services that can be installed on individual dedicated computers (nodes) or combined in various ways to satisfy site requirements. This installation guide follows a standard deployment model whereby most of the services are installed on dedicated computers. However, other examples of valid node configuration are also shown.
The following high-level services are part of this release of the gLite middleware:
Figure 1: gLite Service Deployment Scenario shows the standard deployment model for these services.
Each site has to provide the local services for job and data management as well as information and monitoring:
Figure 1: gLite Service Deployment Scenario
The figure shows the proposed mapping of services onto physical machines. This mapping will give the best performance and service resilience. Smaller sites may however consider mapping multiple services onto the same machine. This is in particular true for the CE and package manager and for the SC and the LTS.
Instead of the distributed deployment of the catalogs (a local catalog and a global catalog), a centralized deployment of just a global catalog can also be considered. This is actually the configuration supported in gLite 1.0.
The VO services act at the Grid level and comprise the Security services, Workload Management services, and Information and Monitoring services. Each VO should have an instance of these services, although physical service instances can mostly be shared among VOs. For some services, even multiple instances per VO can be provided, as indicated below:
· Security services
o The Virtual Organization Membership Service (VOMS) is used for managing the membership and member rights within a VO. VOMS also acts as attribute authority.
o MyProxy is used as a secure proxy store
· Workload Management services
o The Workload Management Service (WMS) is used to submit jobs to the Grid.
o The Logging and Bookkeeping service (LB) keeps track of the job status information.
The WMS and the LB can be deployed independently but due to their tight interactions it is recommended to deploy them together. Multiple instances of these services may be provided for a VO.
· Information and Monitoring services
o The R-GMA Registry Servers and Schema Server are used for binding information consumers and producers. There can be more than one Registry Server that can be replicated for resilience reasons.
· Single Catalog (SC)
o The Single Catalog is used for browsing the LFN space and for finding out the location (sites) where files are stored. This is in particular needed by the WMS.
· User Interface
o The User Interface (UI) combines all the clients that allow the user to directly interact with the Grid services.
The gLite middleware is currently published in the form of RPM packages and installation scripts from the gLite web site at:
../../../../../../glite-web/egee/packages
Required external dependencies in RPM format can also be obtained from the gLite project web site at:
../../../../../../glite-web/egee/packages/externals/bin/rhel30/RPMS
Deployment modules for each high-level gLite component are provided on the web site and are a straightforward way of downloading and installing all the RPMs for a given component. A configuration script is provided with each module to configure, deploy and start the service or services in each high-level module.
Installation and configuration of the gLite services are kept well separated. Therefore the RPMS required to install each service or node can be deployed on the target computers in any suitable way. The use of dedicated RPMS management tools is actually recommended for production environments. Once the RPMS are installed, it is possible to run the configuration scripts to initialize the environment and the services.
gLite is also distributed using the apt package manager. More details on the apt cache address and the required list entries can be found on the main packages page of the gLite web site (../../../../../../glite-web/egee/packages).
gLite is also available in the form of source and binary tarballs from the gLite web site and from the EGEE CVS server at:
jra1mw.cvs.cern.ch:/cvs/jra1mw
The server supports authenticated SSH (protocol 1) and Kerberos 4 access, as well as anonymous pserver access (username: anonymous).
Each gLite deployment module contains a number of RPMS for the necessary internal and external components that make up a service or node (RPMS that are normally part of standard Linux distributions are not included in the gLite installer scripts). In addition, each module contains one or more configuration RPMS providing configuration scripts and files.
Each module contains at least the following configuration RPMS:
Name |
Definition |
glite-config-x.y.z-r.noarch.rpm |
The glite-config RPM contains the global configuration files and scripts required by all gLite modules |
glite-<service>-config-x.y.z-r.noarch.rpm |
The glite-<service>-config RPM contains the configuration files and scripts required by a particular service, such as ce, wms or rgma |
In addition, a mechanism to load remote configuration files from URLs is provided. Refer to the Site Configuration File section later in this chapter (§4.2.4).
All configuration scripts are installed in:
$GLITE_LOCATION/etc/config/scripts
where $GLITE_LOCATION is the root of the gLite packages installation. By default $GLITE_LOCATION = /opt/glite.
The scripts are written in Python and follow a naming convention. Each file is called:
glite-<service>-config.py
where <service> is the name of the service they can configure.
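For example, on a Computing Element node the scripts directory could contain entries such as the following (an illustrative listing; the exact content depends on the installed modules):
ls $GLITE_LOCATION/etc/config/scripts
gLiteInstallerLib.py  glite-ce-config.py  globus.py  mysql.py  tomcat.py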
In addition, the same scripts directory contains the gLite Installer library (gLiteInstallerLib.py) and a number of helper scripts used to configure various applications required by the gLite services (globus.py, mysql.py, tomcat.py, etc).
The gLite Installer library and the helper scripts are contained in the glite-config RPM. All service scripts are contained in the respective glite-<service>-config RPM.
All parameters in the gLite configuration files are categorised in one of three categories: User-defined Parameters, Advanced Parameters and System Parameters (the same grouping used in the parameter tables throughout this guide):
The gLite configuration files are XML-encoded files containing all the parameters required to configure the gLite services. The configuration files are distributed as templates and are installed in the $GLITE_LOCATION/etc/config/templates directory.
The configuration files follow a similar naming convention as the scripts. Each file is called:
glite-<service>.cfg.xml
Each gLite configuration file contains a global section called <parameters/> and may contain one or more <instance/> sections in case multiple instances of the same service or client can be configured and started on the same node (see the configuration file example in Appendix A). In case multiple instances can be defined for a service, the global <parameters/> section applies to all instances of the service or client, while the parameters in each <instance/> section are specific to a particular named instance and can override the values in the <parameters/> section.
The configuration files support variable substitution. The values can be expressed in terms of other configuration parameters or environment variables by using the ${} notation (for example ${GLITE_LOCATION}).
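For instance, a value can reference the installation root as in the following fragment (illustrative only; the parameter name log.file is hypothetical and the exact element layout is the one used by the installed templates, see Appendix A):
<log.file description="Location of the service log file" value="${GLITE_LOCATION_LOG}/glite-service.log"/>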
The templates directory can also contain additional service templates used by the configuration scripts during their execution (like for example the gLite I/O service templates).
Note: When using a local configuration model, before running the configuration scripts the corresponding configuration files must be copied from the templates directory to $GLITE_LOCATION/etc/config and all the user-defined parameters must be correctly instantiated (refer also to the Configuration Parameters Scope paragraph in this section). This is not necessary when using the site configuration model (see below).
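In practice, the local configuration sequence for a generic service is therefore (a sketch; the CE module is used as the example service):
cd $GLITE_LOCATION/etc/config
cp templates/glite-ce.cfg.xml .
# edit glite-ce.cfg.xml and replace all 'changeme' tokens with site-specific values
scripts/glite-ce-config.py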
The global configuration file glite-global.cfg.xml contains all parameters that have gLite-wide scope and are applicable to all gLite services. The parameters in this file are loaded first by the configuration scripts and cannot be overridden by individual service configuration files.
Currently the global configuration file defines the following parameters:
Parameter |
Default value |
Description |
User-defined Parameters |
||
site.config.url |
|
The URL of the Site Configuration file for this node. The values defined in the Site Configuration file are applied first and can be overridden by values specified in the local configuration files. Leave this parameter empty or remove it to use local configuration only. |
Advanced Parameters |
||
GLITE_LOCATION |
/opt/glite |
|
GLITE_LOCATION_VAR |
/var/glite |
|
GLITE_LOCATION_LOG |
/var/log/glite |
|
GLITE_LOCATION_TMP |
/tmp/glite |
|
GLOBUS_LOCATION |
/opt/globus |
Environment variable pointing to the Globus package. |
GPT_LOCATION |
/opt/gpt |
Environment variable pointing to the GPT package. |
JAVA_HOME |
/usr/java/j2sdk1.4.2_04 |
Environment variable pointing to the SUN Java JRE or J2SE package. |
CATALINA_HOME |
/var/lib/tomcat5 |
Environment variable pointing to the Jakarta Tomcat package |
host.certificate.file |
/etc/grid-security/hostcert.pem |
The host certificate (public key) file location |
host.key.file |
/etc/grid-security/hostkey.pem |
The host certificate (private key) file location |
ca.certificates.dir |
/etc/grid-security/certificates |
The location where CA certificates are stored |
user.certificate.path |
.certs |
The location of the user certificates relative to the user home directory |
host.gridmapfile |
/etc/grid-security/grid-mapfile |
Location of the grid mapfile |
host.gridmap.dir |
/etc/grid-security/gridmapdir |
The location of the account lease information for dynamic allocation |
System Parameters |
||
installer.export.filename |
/etc/glite/profile.d/glite_setenv.sh |
Full path of the script containing environment definitions. This file is automatically generated by the configuration scripts. If it exists, the new values are appended |
tomcat.user.name |
tomcat4 |
Name of the user account used to run tomcat. |
tomcat.user.group |
tomcat4 |
Group of the user specified in the parameter ‘tomcat.user.name’ |
Table 1: Global Configuration Parameters
All gLite configuration scripts implement a mechanism to load configuration information from a remote URL. This mechanism can be used to configure the services from a central location for example to propagate site-wide configuration.
The URL of the configuration file can be specified as the site.config.url parameter in the global configuration file of each node or as a command-line parameter when launching a configuration script, for example:
glite-ce-config.py --siteconfig=http://server.domain.com/sitename/siteconfig.xml
In the latter case, the site configuration file is only used for that one run of the configuration scripts and all values are discarded afterwards. For normal operations it is necessary to specify the site configuration URL in the glite-global.cfg.xml file.
The site configuration file can contain a global section called <parameters/> and one <node/> section for each node to be remotely configured (see the configuration file example in Appendix B). Each <node/> section must be qualified with the host name of the target node, for example:
<node name="lxb1428.cern.ch">
…
</node>
where the host name must be the value of the $HOSTNAME environment variable on the node. The <parameters/> section contains parameters that apply to all nodes referencing the site configuration file.
The <node/> sections can contain the same parameters that are defined in the local configuration files. If more than one service is installed on a node, the corresponding <node/> section can contain a combination of all parameters of the individual configuration files. For example if a node runs both the WMS and the LB Server services, then the corresponding <node/> section in the site configuration file may contain a combination of the parameters contained in the local configuration files for the WMS and the LB Server modules.
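Schematically, the site configuration file for such a combined WMS/LB node would therefore contain sections like these (a sketch; see Appendix B for a complete example):
<parameters>
  <!-- parameters applying to all nodes referencing this file -->
</parameters>
<node name="lxb1428.cern.ch">
  <!-- WMS parameters -->
  <!-- LB Server parameters -->
</node>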
If a user-defined parameter (see the definition of parameter scope in §4.2.1) is defined in the site configuration file, the same parameter doesn’t need to be defined in the local file (it can therefore keep the token value ‘changeme’ or be removed altogether). However, if a parameter is defined in the local configuration file, it overrides whatever value is specified in the site configuration file. If a site configuration file contains all the values necessary to configure a node, it is not necessary to create the local configuration files. The only configuration file that must always be present locally in the /opt/glite/etc/config/ directory is the glite-global.cfg.xml file, since it contains the parameter that specifies the URL of the site configuration file.
This mechanism allows distributing a site configuration for all nodes and at the same time gives the possibility of overriding some or all parameters locally in case of need.
New configuration information can be easily propagated simply by publishing a new configuration file and rerunning the service configuration scripts.
In addition, several different models are possible. Instead of having a single configuration file containing all parameters for all nodes, it is possible, for example, to split the parameters into several files according to specific criteria and point different services to different files. For example, it is possible to put all parameters required to configure the Worker Nodes in one file and all parameters for the servers in a separate file, or to have a separate file for each node, and so on.
The configuration scripts and files described above represent the common configuration interfaces of all gLite services. However, since the gLite middleware is a combination of various old and new services, not all services can natively use the common configuration model. Many services come with their own configuration files and formats. Extensive work is being done to make all services use the same model, but until the migration is completed, the common configuration files must be considered as the public configuration interfaces for the system. The configuration scripts do all the necessary work to map the parameters in the public configuration files to parameters in service-specific configuration files. In addition, many of the internal configuration files are dynamically created or modified by the public configuration scripts.
The goal is to provide users with a consistent set of files and scripts that will not change in the future even if the internal behaviour may change. It is therefore recommended, whenever possible, to use only the common configuration files and scripts and not to modify the internal service-specific configuration files directly.
When any of the gLite configuration scripts is run, it creates or modifies a general configuration file called glite_setenv.sh in /etc/glite/profile.d (the location can be changed using a system-level parameter in the global configuration file).
This file contains all the environment definitions needed to run the gLite services. It is automatically added to the .bashrc file of users under direct control of the middleware, such as service accounts and pool accounts. In addition, if needed, the .bash_profile file of the accounts is modified to source the .bashrc file and to set BASH_ENV=.bashrc. The proper environment is therefore created every time an account logs in, whether interactively, non-interactively or via script.
Other users not under control of the middleware can manually source the glite_setenv.sh file as required.
In case a gLite service or client is installed using a non-privileged user (if foreseen by the service or client installation), the glite_setenv.sh file is created in $GLITE_LOCATION/etc/profile.d.
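In either case, the environment can be loaded manually in the current shell, for example:
source /etc/glite/profile.d/glite_setenv.sh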
By default the gLite configuration files and scripts define the following environment variables:
GLITE_LOCATION |
/opt/glite |
GLITE_LOCATION_VAR |
/var/glite |
GLITE_LOCATION_LOG |
/var/log/glite |
GLITE_LOCATION_TMP |
/tmp/glite |
PATH |
/opt/glite/bin:/opt/glite/externals/bin:$PATH |
LD_LIBRARY_PATH |
/opt/glite/lib:/opt/glite/externals/lib:$LD_LIBRARY_PATH |
The first four variables can be modified in the global configuration file or exported manually before running the configuration scripts. If these variables are already defined in the environment, they take priority over the values defined in the configuration files.
It is possible to override the values of the parameters in the gLite configuration files by setting appropriate key/value pairs in the following files:
/etc/glite/glite.conf
~/.glite/glite.conf
The first file has system-wide scope, while the second has user scope. These files are read by the configuration scripts before the common configuration files and their values take priority over the values defined in the common configuration files.
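A minimal override file could look like the following sketch (the key/value syntax shown is an assumption; check the installed configuration scripts for the exact format they accept):
# ~/.glite/glite.conf
GLITE_LOCATION_TMP = /scratch/glite
glite.installer.verbose = false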
The gLite Security Utilities module contains the CA Certificates distributed by the EU Grid PMA. In addition, it contains a number of utilities scripts needed to create or update the local grid mapfile from a VOMS server and periodically update the CA Certificate Revocation Lists.
The CA certificates are installed in the default directory
/etc/grid-security/certificates
This is not configurable at the moment. The installation script downloads the latest available version of the CA RPMS from the gLite software repository.
The glite-mkgridmap script, used to update the local grid mapfile, and its configuration file glite-mkgridmap.conf are installed respectively in
$GLITE_LOCATION/sbin
and
$GLITE_LOCATION/etc
The script can be run manually (after customizing its configuration file). Running glite-mkgridmap doesn’t preserve the existing grid-mapfile. However, a wrapper script is provided in $GLITE_LOCATION/etc/config/scripts/mkgridmap.py to update the grid-mapfile while preserving any additional entries in the file not downloaded by glite-mkgridmap.
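For example, to refresh the grid-mapfile by hand while preserving locally added entries:
python $GLITE_LOCATION/etc/config/scripts/mkgridmap.py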
The Security Utilities module configuration script also installs a crontab file in /etc/cron.d that executes the mkgridmap.py script every night at 02:00. The installation of this cron job and the execution of the mkgridmap.py script during the configuration are optional and can be enabled using a configuration parameter (see the configuration walkthrough for more information).
Some services need to run the mkgridmap.py script as part of their initial configuration (this is currently the case, for example, for the WMS). In this case the installation of the cron job and the execution of the script at configuration time must be enabled. This is indicated in each case in the appropriate chapter.
The fetch-crl script is used to update the CA Certificate Revocation Lists. This script is provided by the EU GridPMA organization. It is installed in:
/usr/bin
The Security Utilities module configuration script installs a crontab file in /etc/cron.d that executes the glite-fetch-crl script every four hours. The CRLs are installed in the same directory as the CA certificates, /etc/grid-security/certificates. The module configuration file (glite-security-utils.cfg.xml) allows specifying an e-mail address to which the errors generated when running the cron job are sent.
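With the default fetch-crl.cron.tab value shown in Table 2, the installed entry is equivalent to a crontab line like the following sketch (the exact command path and crontab file name may differ in your release):
# /etc/cron.d/glite-fetch-crl
00 */4 * * * root /usr/bin/fetch-crl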
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Enterprise Linux 3.0 or any binary-compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed that it can be found in the list of RPMS of the original OS distribution.
The gLite Security Utilities module is installed as follows:
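In short, the installation follows the same download-and-run pattern used by all gLite modules (a sketch; first download glite-security-utils_installer.sh from the gLite packages page referenced in Chapter 3):
chmod u+x glite-security-utils_installer.sh
./glite-security-utils_installer.sh     # run as root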
Parameter |
Default value |
Description |
User-defined Parameters |
||
cron.mailto |
|
E-mail address to which the stderr of the installed cron jobs is sent |
Advanced Parameters |
||
glite.installer.verbose |
true |
Produce verbose output when running the script |
glite.installer.checkcerts |
true |
Activate a check for host certificates and stop the script if not available. The certificates are looked for in the location specified by the global parameters host.certificate.file and host.key.file |
fetch-crl.cron.tab |
00 */4 * * * |
The cron tab to use for the fetch-crl cron job. |
install.fetch-crl.cron |
true |
Install the glite-fetch-crl cron job. Possible values are 'true' (install the cron job) or 'false' (do not install the cron job) |
install.mkgridmap.cron |
false |
Install the glite-mkgridmap cron job. Possible values are 'true' (install the cron job) or 'false' (do not install the cron job) |
System Parameters |
Table 2: Security Utilities Configuration Parameters
The Security Utilities configuration script performs the following steps:
The Computing Element (CE) is the service representing a computing resource. Its main functionality is job management (job submission, job control, etc.). The CE may be used by a generic client: an end user interacting directly with the Computing Element, or the Workload Manager, which submits a given job to an appropriate CE found by a matchmaking process. For job submission, the CE can work in push model (where the job is pushed to the CE for its execution) or in pull model (where the CE asks the Workload Management Service for jobs). Besides job management capabilities, a CE must also provide information describing itself. In the push model this information is published in the Information Service and is used by the matchmaking engine, which matches available resources to queued jobs. In the pull model the CE information is embedded in a "CE availability" message, which is sent by the CE to a Workload Management Service. The matchmaker then uses this information to find a suitable job for the CE.
The CE uses the R-GMA servicetool to publish information about its services and their states to the R-GMA information system. See chapter 13 for more details about R-GMA and the R-GMA servicetool.
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Enterprise Linux 3.0 or any binary-compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed that it can be found in the list of RPMS of the original OS distribution.
A Java JRE or JDK is required to run the CE Monitor. This release requires v. 1.4.2 (revision 04 or greater). The JDK version to be used is a configuration parameter in the glite-global.cfg.xml file. Please change it according to your version and location (see also sections 4.2.3 and 6.4 for more details).
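For example, if a different JDK revision is installed locally, adapt the JAVA_HOME value in glite-global.cfg.xml along these lines (an indicative fragment; the element layout is the one of the installed file and the path shown is an example):
<JAVA_HOME description="Environment variable pointing to the SUN Java JRE or J2SE package" value="/usr/java/j2sdk1.4.2_08"/>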
The Resource Management System must be installed on the CE node or on a separate dedicated node before installing and configuring the CE module. This release of the CE module supports PBS, Torque and LSF.
Parameter |
Default value |
Description |
User-defined Parameters |
||
voms.voname |
|
The names of the VOs that this CE node can serve |
voms.vomsnode |
|
The full hostname of the VOMS server responsible for each VO. Even if the same server is responsible for more than one VO, there must be exactly one entry for each VO listed in the 'voms.voname' parameter. For example: 'host.domain.org' |
voms.vomsport |
|
The port on the VOMS server listening for request for each VO. This is used in the vomses configuration file. For example: '15000' |
voms.vomscertsubj |
|
The subject of the host certificate of the VOMS server for each VO. For example: ‘/C=ORG/O=DOMAIN/OU=GRID/CN=host.domain.org' |
pool.account.basename |
|
The prefix of the set of pool accounts to be created for each VO. Existing pool accounts with this prefix are not recreated |
pool.account.group |
|
The group name of the pool accounts to be used for each VO. For some batch systems like LSF, this group may need a specific gid. The gid can be set using the pool.lsfgid parameter in the LSF configuration section |
pool.account.number |
|
The number of pool accounts to create for each VO. Each account will be created with a username of the form prefixXXX where prefix is the value of the pool.account.basename parameter. If matching pool accounts already exist, they are not recreated. The range of values for this parameter is from 1 to 999 |
cemon.wms.host |
|
The hostname of the WMS server(s) that receives notifications from this CE |
cemon.wms.port |
|
The port number on which the WMS server(s) receiving notifications from this CE is listening |
cemon.lrms |
|
The type of Local Resource Management System. It can be 'lsf' or 'pbs'. If this parameter is absent or empty, the default type is 'pbs' |
cemon.cetype |
|
The type of Computing Element. It can be 'condorc' or 'gram'. If this parameter is absent or empty, the default type is 'condorc' |
cemon.cluster |
|
The cluster entry point host name. Normally this is the CE host itself |
cemon.cluster-batch-system-bin-path |
|
The path of the lrms commands. For example: '/usr/pbs/bin' or '/usr/local/lsf/bin'. This value is also used to set the PBS_BIN_PATH or LSF_BIN_PATH variables depending on the value of the 'cemon.lrms' parameter |
cemon.cesebinds |
|
The CE-SE bindings for this CE node. There are three possible formats: 'configfile', 'queue[|queue] se' or 'queue[|queue] se se_entry_point'. A '.' character for the queue list means all queues. Example: '.' EGEE::SE::Castor /tmp |
cemon.queues |
|
A list of queues defined on this CE node. Examples are: long, short, infinite, etc. |
pool.lsfgid |
|
The gid of the groups to be used for the pool accounts on some LSF installations, one per pool account group. This parameter is an array of values containing one value for each VO served by this CE node. The list must match the corresponding lists in the VOMS configuration section. If this is not required by your local LSF system, remove this parameter or leave the values empty |
condor.wms.user |
|
The username of the condor user under which the Condor daemons run on the WMS nodes that this CE serves |
lb.user |
|
The account name of the user that runs the local logger daemon. If the user doesn't exist it is created. In the current version, the host certificate and key are used as service certificate and key and are copied in this user's home in the directory specified by the global parameter 'user.certificate.path' in the glite-global.cfg.xml file |
iptables.chain |
|
The name of the chain to be used for configuring the local firewall. If the chain doesn't exist, it is created and the rules are assigned to this chain. If the chain exists, the rules are appended to the existing chain |
Advanced Parameters |
||
glite.installer.verbose |
True |
Enable verbose output |
glite.installer.checkcerts |
True |
Enable check of host certificates |
PBS_SPOOL_DIR |
/usr/spool/PBS |
The PBS spool directory |
LSF_CONF_PATH |
/etc |
The directory where the LSF configuration file is located |
globus.osversion |
<empty> |
The kernel id string identifying the system installed on this node. For example: '2.4.21-20.ELsmp'. This parameter is normally automatically detected, but it can be set here |
globus.hostdn |
<empty> |
The host distinguished name (DN) of this node. This is normally automatically read from the server host certificate. However it can be set here. For example: 'C=ORG, O=DOMAIN, OU=GRID, CN=host/server.domain.org' |
condor.version |
6.7.6 |
The version of the installed Condor-C libraries |
condor.user |
condor |
The username of the condor user under which the Condor daemons must run |
condor.releasedir |
/opt/condor-6.7.6 |
The location of the Condor package. This path is internally symlinked to /opt/condor-c. This is currently needed by the Condor-C software |
CONDOR_CONFIG |
${condor.releasedir}/etc/condor_config |
Environment variable pointing to the Condor configuration file |
condor.scheddinterval |
10 |
How often should the schedd send an update to the central manager? |
condor.localdir |
/var/local/condor |
Where is the local condor directory for each host? This is where the local config file(s), logs and spool/execute directories are located |
condor.blahgahp |
${GLITE_LOCATION}/bin/blahpd |
The path of the gLite blahp daemon |
condor.daemonlist |
MASTER, SCHEDD |
The Condor daemons to configure and monitor |
condor.blahpollinterval |
120 |
How often should blahp poll for new jobs? |
gatekeeper.port |
2119 |
The gatekeeper listen port |
lcg.providers.location |
/opt/lcg |
The location where the LCG providers are installed. |
System Parameters |
Table 3: CE Configuration Parameters
i. Local Logger
ii. Gatekeeper
iii. CE Monitor
Again, you find the necessary steps described in section 13.2.4.6.
Note: Steps 1, 2 and 3 can also be performed by means of the remote site configuration file or a combination of local and remote configuration files.
The CE configuration script performs the following steps:
The CE configuration script can be run with the following command-line parameters to manage the services:
glite-ce-config.py --start |
Starts all CE services (or restart them if they are already running) |
glite-ce-config.py --stop |
Stops all CE services |
glite-ce-config.py --status |
Verifies the status of all services. The exit code is 0 if all services are running, 1 in all other cases |
When the CE configuration script is run, it installs the gLite script in the /etc/init.d directory and activates it to run at boot. The gLite script runs the glite-ce-config.py --start command and makes sure that all necessary services are started in the correct order.
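Assuming a standard Red Hat style init setup (the registration details may vary), the boot-time behaviour can be checked and the services managed with, for example:
chkconfig --list gLite
/etc/init.d/gLite start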
This release of the gLite Computing Element module contains a tech-preview of the Workspace Service developed in collaboration with the Globus GT4 team. This service allows a more dynamic usage of the pool accounts with the possibility of leasing an account and releasing it when it’s not needed anymore.
To use this service, an alternative configuration script has been provided:
/opt/glite/etc/config/scripts/glite-ce-wss-config.py
It requires Ant to be properly installed and configured on the server.
No specific usage instructions are provided for the time being. More information about the Workspace Service and its usage can be found at the bottom of the following page, from point 8 onwards (the installation and configuration part is done by the glite-ce module):
http://www.nikhef.nl/grid/lcaslcmaps/install_wss_lcmaps_on_lxb2022
The Workload Management System (WMS) comprises a set of grid middleware components responsible for the distribution and management of tasks across grid resources, in such a way that applications are conveniently, efficiently and effectively executed.
The core component of the Workload Management System is the Workload Manager (WM), whose purpose is to accept and satisfy requests for job management coming from its clients. For a computation job there are two main types of request: submission and cancellation.
In particular the meaning of the submission request is to pass the responsibility of the job to the WM. The WM will then pass the job to an appropriate Computing Element for execution, taking into account the requirements and the preferences expressed in the job description. The decision of which resource should be used is the outcome of a matchmaking process between submission requests and available resources.
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Enterprise Linux 3.0 or any binary-compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed that it can be found in the list of RPMS of the original OS distribution.
Parameter |
Default value |
Description |
User-defined Parameters |
||
glite.user.name |
|
Name of the user account used to run the gLite services on this WMS node |
glite.user.group |
|
Group of the user specified in the 'glite.user.name' parameter. This group must be different from the pool account group specified by the parameter ‘pool.account.group’. |
pool.account.basename |
|
The prefix of the set of pool account to be created. Existing pool accounts with this prefix are not recreated |
pool.account.group |
|
The group name of the pool accounts to be used. This group must be different from the WMS service account group specified by the parameter ‘glite.user.group’. |
pool.account.number |
|
The number of pool accounts to create. Each account will be created with a username of the form prefixXXX where prefix is the value of the pool.account.basename parameter. If matching pool accounts already exist, they are not recreated. The range of values for this parameter is 1-999 |
wms.cemon.port |
|
The port number on which this WMS server is listening for notifications from CEs when working in pull mode. Leave this parameter empty or comment it out if you don't want to activate pull mode for this WMS node. Example: 5120 |
wms.cemon.endpoints |
|
The endpoint(s) of the CE(s) that this WMS node should query when working in push mode. Leave this parameter empty or comment it out if you don't want to activate push mode for this WMS node. Example: 'http://lxb0001.cern.ch:8080/ce-monitor/services/CEMonitor' |
information.index.host |
|
Host name of the Information Index node. Leave this parameter empty or comment it out if you don't want to use a BD-II for this WMS node |
cron.mailto |
|
E-mail address for sending cron job notifications |
condor.condoradmin |
|
E-mail address of the condor administrator |
Advanced Parameters |
||
glite.installer.verbose |
true |
Sets the verbosity of the configuration script output |
glite.installer.checkcerts |
true |
Switch on/off the checking of the existence of the host certificate files |
GSIWUFTPPORT |
2811 |
Port where the globus ftp server is listening |
GSIWUFTPDLOG |
${GLITE_LOCATION_LOG}/gsiwuftpd.log |
Location of the globus ftp server log file |
GLOBUS_FLAVOR_NAME |
gcc32dbg |
The Globus libraries flavour to be used |
condor.scheddinterval |
10 |
Condor scheduling interval |
condor.releasedir |
/opt/condor-6.7.6 |
Condor installation directory |
CONDOR_CONFIG |
${condor.releasedir}/etc/condor_config |
Condor global configuration file |
condor.blahpollinterval |
10 |
How often should blahp poll for new jobs? |
information.index.port |
2170 |
Port number of the Information Index |
information.index.base_dn |
mds-vo-name=local, o=grid |
Base DN of the information index LDAP server |
wms.config.file |
${GLITE_LOCATION}/etc/glite_wms.conf |
Location of the wms configuration file |
System Parameters |
||
condor.localdir |
/var/local/condor |
Condor local directory |
condor.daemonlist |
MASTER, SCHEDD, COLLECTOR, NEGOTIATOR |
List of the condor daemons to start. This must be a comma-separated list of services as it would appear in the Condor configuration file |
Table 4: WMS Configuration Parameters
i. Local Logger
ii. Proxy Renewal Service
iii. Log Monitor Service
iv. Job Controller Service
v. Network Server
vi. Workload Manager
Again, you find the necessary steps described in section 13.2.4.6.
Note: Steps 1, 2 and 3 can also be performed by means of the remote site configuration file or a combination of local and remote configuration files.
The WMS configuration script performs the following steps:
The WMS configuration script can be run with the following command-line parameters to manage the services:
glite-wms-config.py --start |
Starts all WMS services (or restarts them if they are already running) |
glite-wms-config.py --stop |
Stops all WMS services |
glite-wms-config.py --status |
Verifies the status of all services. The exit code is 0 if all services are running, 1 in all other cases |
glite-wms-config.py --startservice=xxx |
Starts the WMS xxx subservice. xxx can be one of the following: condor = the Condor master and daemons, ftpd = the Grid FTP daemon, lm = the gLite WMS Log Monitor daemon, wm = the gLite WMS Workload Manager daemon, ns = the gLite WMS Network Server daemon, jc = the gLite WMS Job Controller daemon, pr = the gLite WMS Proxy Renewal daemon, lb = the gLite WMS Logging & Bookkeeping client |
glite-wms-config.py --stopservice=xxx |
Stops the WMS xxx subservice. xxx takes the same values as for --startservice |
When the WMS configuration script is run, it installs the gLite script in the /etc/init.d directory and activates it to run at boot. The gLite script runs the glite-wms-config.py --start command and makes sure that all necessary services are started in the correct order.
The WMS services are published to R-GMA using the R-GMA servicetool. The servicetool is automatically installed and configured when installing and configuring the WMS module. The WMS configuration file contains a separate configuration section (an <instance/>) for each WMS sub-service. The required values must be filled in the configuration file before running the configuration script.
For more details about the R-GMA servicetool refer to section 13.2.4 later in this guide.
The Logging and Bookkeeping service (LB) tracks jobs in terms of events (important points of job life, e.g. submission, finding a matching CE, starting execution etc.) gathered from various WMS components as well as CEs (all those have to be instrumented with LB calls).
The events are passed to a physically close component of the LB infrastructure (locallogger) in order to avoid network problems. This component stores them in a local disk file and takes over the responsibility to deliver them further.
The destination of an event is one of the Bookkeeping Servers (assigned statically to a job upon its submission). The server processes the incoming events to give a higher-level view of the job states (e.g. Submitted, Running, Done), which also contains various recorded attributes (e.g. JDL, destination CE name, job exit code, etc.).
Retrieval of both job states and raw events is available via legacy (EDG) and WS querying interfaces.
Besides querying the job state actively, the user may also register to receive notifications on particular job state changes (e.g. when a job terminates). The notifications are delivered using an appropriate infrastructure. Within the EDG WMS, upon creation each job is assigned a unique, virtually non-recyclable job identifier (JobId) in URL form.
The server part of the URL designates the bookkeeping server which gathers and provides information on the job for its whole life.
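A JobId therefore looks like the following purely illustrative value, where the server part identifies the bookkeeping server (host name, port and unique string are examples only):
https://lxb0001.cern.ch:9000/x2vgLcnWbc9QJFVzqvEAsw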
LB tracks jobs in terms of events (e.g. Transfer from a WMS component to another one, Run and Done when the jobs starts and stops execution). Each event type carries its specific attributes. The entire architecture is specialized for this purpose and is job-centric: any event is assigned to a unique Grid job. The events are gathered from various WMS components by the LB producer library, and passed on to the locallogger daemon, running physically close to avoid any sort of network problems.
The locallogger's task is storing the accepted event in a local disk file. Once it's done, confirmation is sent back and the logging library call returns, reporting success.
Consequently, logging calls have local, virtually non-blocking semantics. Further on, event delivery is managed by the interlogger daemon. It takes the events from the locallogger (or from the disk files on crash recovery) and repeatedly tries to deliver them to the destination bookkeeping server (known from the JobId) until it finally succeeds.
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Enterprise Linux 3.0 or any binary-compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed that it can be found in the list of RPMS of the original OS distribution.
Parameter |
Default value |
Description |
User-defined Parameters |
||
glite.user.name |
|
The account used to run the LB daemons |
glite.user.group |
|
Group of the user specified in the 'glite.user.name' parameter. Leave it empty or comment it out to use the same as 'glite.user.name' |
Advanced Parameters |
||
glite.installer.verbose |
true |
Enable verbose output |
glite.installer.checkcerts |
true |
Enable check of host certificates |
lb.database.name |
lbserver20 |
The MySQL database name to create for storing LB data. In this version it must be set to the given value. |
lb.database.username |
lbserver |
The username to be used to access the local MySQL server. Currently it must be set to the default value |
lb.index.list |
owner location destination |
Definitions of indices on all the currently supported indexed system attributes |
System Parameters |
Table 5: LB Configuration Parameters
i. Log Server
Again, you find the necessary steps described in section 13.2.4.6.
Note: Steps 1, 2 and 3 can also be performed by means of the remote site configuration file or a combination of local and remote configuration files.
The LB configuration script performs the following steps:
The gLite Standard Worker Node is a set of clients required to run jobs sent by the gLite Computing Element via the Local Resource Management System. It currently includes the gLite I/O Client, the Logging and Bookkeeping Client, the R-GMA Client and the WMS Checkpointing library.
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Enterprise Linux 3.0 or any binary-compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed that it can be found in the list of RPMS of the original OS distribution.
Install one or more Certificate Authority (CA) certificates in /etc/grid-security/certificates. The complete list of CA certificates can be downloaded in RPMS format from the Grid Policy Management Authority web site (http://www.gridpma.org/). Alternatively, the special security module glite-security-utils can be installed by downloading the script glite-security-utils_installer.sh from the gLite web site (http://www.glite.org) and running it (Chapter 5). The module contains the latest version of the CA certificates plus a number of certificate and security utilities. In particular this module installs the glite-fetch-crl script and sets up a crontab that periodically checks for updated revocation lists.
A Java JRE or JDK is required to run the R-GMA Client on the Worker Node. This release requires v. 1.4.2 (revision 04 or greater). The JDK version to be used is a configuration parameter in the glite-global.cfg.xml file. Please change it according to your version and location (see also sections 4.2.3 and 6.4 for more details).
The Resource Management System client must be installed on the WN before installing and configuring the WN module. This release of the WN module supports PBS, Torque and LSF.
Parameter |
Default value |
Description |
User-defined Parameters |
||
voms.voname |
|
The names of the VOs that this WN node can serve |
pool.account.basename |
|
The prefix of the set of pool account to be created. Existing pool accounts with this prefix are not recreated |
pool.account.group |
|
The group name of the pool accounts to be used |
pool.account.number |
|
The number of pool accounts to create. Each account will be created with a username of the form prefixXXX where prefix is the value of the pool.account.basename parameter. If matching pool accounts already exist, they are not recreated. The range of values for this parameter is 1-999 |
data.services |
|
Information used for the creation of the services.xml (ServiceDiscovery replacement) file. This file is used by the Data CLI tools. The format is: name;URL;serviceType where name is the unique name of the service (used on the command line if specific services need to be addressed), URL is the service endpoint and serviceType is the Java class defining the type of the service (an illustrative entry is shown after this table). |
Advanced Parameters |
||
glite.installer.verbose |
true |
Enable verbose output |
System Parameters |
||
wn.serviceList |
glite-io-client, glite-rgma-client |
The gLite services, clients or applications that compose this worker node. This parameters takes a comma-separated list of service names |
Table 6: WN Configuration Parameters
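For example, a single data.services entry could take a form like the following sketch (all three fields are purely illustrative; consult the Data CLI documentation for the actual service type class names):
GliteIO;io://lxb0001.cern.ch:9999;org.glite.data.io.client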
The WN configuration script performs the following steps:
The gLite I/O server is basically the server of the AliEn aiod project, modified to support GSI authentication, authorization and name resolution plug-ins, together with other small features and bug fixes.
It includes plug-ins to access remote files using the dcap or the rfio client library.
It can interact with the FiReMan Catalog, the Replica Metadata Catalog and Replica Location Service, with the File and Replica Catalogs or with the Alien file catalog.
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Enterprise Linux 3.0 or any binary-compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed that it can be found in the list of RPMS of the original OS distribution.
1. Install one or more Certificate Authority (CA) certificates in /etc/grid-security/certificates. The complete list of CA certificates can be downloaded in RPMS format from the Grid Policy Management Authority web site (http://www.gridpma.org/). Alternatively, the special security module glite-security-utils (gLite Security Utilities) can be installed by downloading the script glite-security-utils_installer.sh from the gLite web site (http://www.glite.org) and running it (Chapter 5). The module contains the latest version of the CA certificates plus a number of certificate and security utilities. In particular this module installs the glite-fetch-crl, glite-mkgridmap and mkgridmap.py scripts and sets up cron jobs that periodically check for updated revocation lists and grid-mapfile entries
2. Customize the mkgridmap configuration file $GLITE_LOCATION/etc/glite-mkgridmap.conf by adding the required VOMS server groups (an illustrative entry is shown after this list). The information in this file is used to run the glite-mkgridmap script during the Security Utilities configuration to produce the /etc/grid-security/grid-mapfile
3. Install the server host certificate hostcert.pem and key hostkey.pem in /etc/grid-security
With some configurations of the Castor SRM, it is necessary to register the host DN of the gLite I/O Server in the Castor SRM server gridmap-file.
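An illustrative glite-mkgridmap.conf entry, assuming the group/URI syntax of the mkgridmap family of tools (the installed file contains commented examples with the authoritative syntax; VO and host names are placeholders):
group vomss://voms.domain.org:8443/voms/myvo .myvo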
1. Download from the gLite web site the latest version of the gLite I/O server installation script glite-io-server_installer.sh. It is recommended to download the script into a clean directory
2. Make the script executable (chmod u+x glite-io-server_installer.sh) and execute it, or run it with sh glite-io-server_installer.sh
3. Run the script as root. All the required RPMS are downloaded from the gLite software repository into the directory glite-io-server next to the installation script and the installation procedure is started. If some RPMS are already installed, they are upgraded if necessary. Check the screen output for errors or warnings. (Steps 1 to 3 are summarized in the command sketch after this list.)
4. If the installation is performed successfully, the following components are installed:
gLite I/O Server in /opt/glite
Globus in /opt/globus
5. The gLite I/O server configuration script is installed in $GLITE_LOCATION/etc/config/scripts/glite-io-server-config.py. A template configuration file is installed in $GLITE_LOCATION/etc/config/templates/glite-io-server.cfg.xml
6. The gLite I/O server installs the R-GMA servicetool to publish its information to the information system R-GMA. The details of the installation of the R-GMA servicetool are described in section 13.2.4.5.
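In short, steps 1 to 3 above correspond to the following commands (a sketch; download the installer from the gLite packages page referenced in Chapter 3 first):
chmod u+x glite-io-server_installer.sh
./glite-io-server_installer.sh     # run as root; downloads and installs all required RPMS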
Common parameters
All parameters defined in this table are common to all instances.
|
||
Parameter |
Default value |
Description |
User-defined Parameters |
||
I/O Daemon initialization parameters |
||
init.username |
|
The username of the user running the I/O Daemon. If using a Castor SRM, in some configurations this user must be a valid user on the Castor server. If the user doesn't exist on this I/O Server, it will be created. The uid specified in the 'init.uid' parameter may be used. |
init.groupname |
|
The groupname of the user running the I/O Daemon. If using a Castor SRM, in some configurations this group must be a valid group on the Castor server. If the group doesn't exist on this I/O Server, it will be created. The gid specified in the 'init.gid' parameter may be used. |
init.uid |
|
The userid of the user running the I/O Daemon. If using a Castor SRM, in some configurations the same uid as the Castor user specified in the 'init.username' parameter must be set. Leave this parameter empty or comment it out to use a system-assigned uid. |
init.gid |
|
The gid of the user running the I/O Daemon. If using a Castor SRM, in some configurations the same gid as the Castor group specified in the 'init.groupname' parameter must be set. Leave this parameter empty or comment it out to use a system-assigned gid. |
Advanced Parameters |
||
General gLite initialization parameters |
||
glite.installer.verbose |
true |
Enable verbose output |
glite.installer.checkcerts |
true |
Enable check for host certificate |
SSL Configuration parameters |
||
service.certificates.type |
host |
This parameter is used to specify if service or host certificates should be used for the services. If this value is 'host', the existing host certificates are copied to the service user home in the directory specified by the 'user.certificate.path' parameter; the 'service.certificate.file' and 'service.key.file' parameters are ignored. If the value is 'service' the service certificates must exist in the location specified by the 'service.certificate.file' and 'service.key.file' parameters |
service.certificate.file |
<empty> |
The service certificate (public key) file location. |
I/O Daemon parameters |
||
io-daemon.MaxTransfers |
20 |
The maximum number of concurrent transfers |
io-resolve-common.SePort |
8443 |
The port of the remote file operation server |
io-resolve-common.SeProtocol |
rfio |
The protocol to be used to contact the remote file operation server. Currently the supported values are: * rfio: use the remote file io (rfio) protocol to access remotely the file * file: use normal posix operations to access a local file (useful only for testing purposes) |
io-resolve-common.RootPathRule |
abs_dir |
The rule to be applied to define the path for creating new files. Allowed values are: * abs_dir: the file name will be created by appending the file name to the path specified by the RootPath configuration parameter * user_home_dir: the file name will be created by appending to the path specified by the RootPath configuration parameter a directory named after the first letter of the user name and then the complete user name. [Note: since at the moment the user name that is retrieved is the distinguished name, using this option is not suggested] |
io-authz-fas.FileOwner |
<empty> |
When checking the credentials, perform an additional check on that name to verify it was the user's name. Default value is an empty string, that means that this additional test is not performed |
io-authz-fas.FileGroup |
<empty> |
When checking the credentials, perform an additional check on that name to verify it was one of the user's groups. Default value is an empty string, that means that this additional test is not performed |
io-resolve-fireman.OverwriteOwnership |
false |
Overwrite the ownership of the file when creating it. If set to true, the newly created file will have as owner the values set by the FileOwner and FileGroup configuration parameters. |
io-resolve-fireman.FileOwner |
<empty> |
The name of the group that will own any newly created file. This parameter is meaningful only if OverwriteOwnership is set to true. In case this parameter is not set, the Replica Catalog default will apply. Default value is an empty string. |
io-resolve-fireman.FileGroup |
<empty> |
The name of the group of any newly created file. This parameters is meaningful only if OverwriteOwnership is set to true. In case this parameter is not set, the Replica Catalog default will apply. Default value is an empty string. |
io-resolve-fr.OverwriteOwnership |
false |
Overwrite the ownership of the file when creating it. If set to true, the newly created file will have as owner the values set by the FileOwner and FileGroup configuration parameters. Default value is false. |
io-resolve-fr.FileOwner |
|
The name of the user that will own any newly created file. This parameter is meaningful only if OverwriteOwnership is set to true. In case this parameter is not set, the Replica Catalog default will apply. Default value is an empty string. |
io-resolve-fr.FileGroup |
|
The name of the group of any newly created file. This parameter is meaningful only if OverwriteOwnership is set to true. In case this parameter is not set, the Replica Catalog default will apply. Default value is an empty string |
System Parameters |
||
I/O Daemon parameters |
||
io-daemon.EnablePerfMonitor |
false |
Enable the Performance Monitor. If set to true, a process is spawned to monitor the performance of the server and collect statistics. |
io-daemon.PerfMonitorPort |
9998 |
The Performance Monitor port |
io-daemon.CacheDir |
<empty> |
The directory where cached files should be stored |
io-daemon.CacheDirSize |
0 |
The maximum size of the directory where cached files should be stored |
io-daemon.PreloadCacheSize |
5000000 |
The size of the preloaded cache |
io-daemon.CacheLevel |
0 |
The gLite I/O Cache Level |
io-daemon.ResyncCache |
false |
Resynchronize the cache when the daemon starts |
io-daemon.TransferLimit |
100000000 |
The maximum bitrate expressed in b/s that should be used |
io-daemon.CacheCleanupThreshold |
90 |
When a cache clean-up is performed, the cache is cleaned up to this value. It should be interpreted as a percentage, i.e. a value of 70 means that after a clean-up the cache is filled up to at most 70% of its maximum size |
io-daemon.CacheCleanupLimit |
90 |
Represents the limit that, when reached, triggers a cache clean-up. It should be interpreted as a percentage, i.e. a value of 90 means that when 90% of the cache is filled, the cache is cleaned up to the value specified by the CacheCleanupThreshold configuration parameter |
io-daemon.RedirectionList |
<empty> |
The redirection list that should be used in the Cross-Link Cache Architecture |
io-resolve-common.DisableDelegation |
true |
Don't use client's delegated credentials to contact the Web Services |
io-authz-catalogs.DisableDelegation |
true |
Don't use client's delegated credentials to contact the RMC Service |
io-authz-fas.DisableDelegation |
true |
Don't use client's delegated credentials to contact the FAS service |
io-resolve-fr.DisableDelegation |
true |
Don't use client's delegated credentials to contact the RMC Service |
VO dependant gLite I/O Server instances
A separate gLite I/O Server instance can be installed for each VO that this server must support. The values in this table (‘<instance>’ section in the configuration file) are specific to that instance. At least one instance must be defined. Create additional instance sections for each additional VO you want to support on this node. |
||
Parameter |
Default value |
Description |
User-defined Parameters |
||
vo.name |
|
The name of the VO served by this instance. |
io-daemon.Port |
|
The port to be used to contact the server. Please note that this port is only used for authentication and session establishment messages. When the actual data transfer is performed using QUANTA parallel TCP streams, a pool of sockets is opened on the server side, binding available ports from the 50000 to 51000 range. This port should not be higher than 9999 and different I/O server instances should not run on contiguous ports (for example, set one to 9999 and another one to 9998) |
init.CatalogType |
|
The type of catalog to use: 'catalogs' (EDG Replica Location Service and Replica Metadata Catalog), 'fireman' (gLite Fireman Catalog) or 'fr' (File and Replica Catalog). The parameters not used by the chosen catalog type can be removed or left empty |
io-resolve-common parameters are required by all types of catalogues |
||
io-resolve-common.SrmEndPoint |
|
The endpoint of the SRM Server. If that value starts with httpg://, the GSI authentication will be used (using the CGSI GSOAP plugin), otherwise no authentication is requested. Example: httpg://gridftp05.cern.ch:8443/srm/managerV1 |
io-resolve-common.SeHostname |
|
The name of the Storage Element where the files are staged. It's the hostname of the remote file operation server. Example: gridftp05.cern.ch |
io-resolve-common.RootPath |
|
The path that should be prefixed to the filename when creating new files. Example: /castor/cern.ch/user/g/glite/VO-NAME/SE/ |
EDG RLS/RM parameters The parameters are only required when using the EDG catalogs. Leave them empty or comment them if not used. |
||
io-authz-catalogs.RmcEndPoint |
|
The endpoint of the RMC catalog. If that value starts with httpg://, the GSI authentication will be used (using the CGSI GSOAP plugin); if it starts with https://, the SSL authentication will be used (using the CGSI GSOAP plugin in SSL-compatible mode); otherwise no authentication is requested. This is also the value of the 'io-resolve-catalogs.RmcEndpoint' parameter. Example: https://lxb2028:8443/VO-NAME/edg-replica-metadata-catalog/services/edg-replica-metadata-catalog |
io-resolve-catalogs.RlsEndpoint |
|
The endpoint of the RLS catalog. If that value starts with httpg://, the GSI authentication will be used (using the CGSI GSOAP plugin); if it starts with https://, the SSL authentication will be used (using the CGSI GSOAP plugin in SSL-compatible mode); otherwise no authentication is requested. Example: https://lxb2028:8443/VO-NAME/edg-local-replica-catalog/services/edg-local-replica-catalog |
Parameters required by the Fireman and FR catalogs. |
||
io-authz-fas.FasEndpoint |
|
The endpoint of the FAS catalog. If that value starts with httpg://, the GSI authentication will be used (using the CGSI GSOAP plugin); if it starts with https://, the SSL authentication will be used (using the CGSI GSOAP plugin in SSL-compatible mode); otherwise no authentication is requested. Examples: http://lxb2024.cern.ch:8080/glite-data-catalog-service-fr/services/FAS (for FR) http://lxb2024.cern.ch:8080/glite-data-catalog-service-fr/services/FiremanCatalog (for Fireman) |
Fireman parameters |
||
io-resolve-fireman.FiremanEndpoint |
|
The endpoint of the FiReMan catalog. If that value starts with httpg://, the GSI authentication will be used (using the CGSI GSOAP plugin); if it starts with https://, the SSL authentication will be used (using the CGSI GSOAP plugin in SSL-compatible mode); otherwise no authentication is requested. Example: http://lxb2024.cern.ch:8080/glite-data-catalog-service-fr/services/FiremanCatalog |
FR parameters |
||
io-resolve-fr.ReplicaEndPoint |
|
The endpoint of the Replica catalog. If that value starts with httpg://, the GSI authentication will be used (using the CGSI GSOAP plugin); if it starts with https://, the SSL authentication will be used (using the CGSI GSOAP plugin in SSL-compatible mode); otherwise no authentication is requested. Example: http://lxb2024.cern.ch:8080/glite-data-catalog-service-fr/services/ReplicaCatalog |
io-resolve-fr.FileEndPoint |
|
The endpoint of the File catalog. If that value starts with httpg://, the GSI authentication will be used (using the CGSI GSOAP plugin); if it starts with https://, the SSL authentication will be used (using the CGSI GSOAP plugin in SSL-compatible mode); otherwise no authentication is requested. If that value is not set, the File Catalog will not be contacted and the io-resolve-fr plug-in will manage only GUIDs. Example: http://lxb2024.cern.ch:8080/glite-data-catalog-service-fr/services/FileCatalog |
Advanced Parameters |
||
Logging parameters |
||
log.Priority |
DEBUG |
The log4cpp log level. Possible values are: DEBUG, INFO, WARNING, ERROR, CRITICAL, ALERT, FATAL |
log.FileName |
$GLITE_LOCATION_LOG/glite-io-server.log |
The location of the log file |
Table 7: gLite I/O Server Configuration Parameters
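In practice, the per-VO configuration described by Table 7 amounts to copying the template file, duplicating the ‘<instance>’ section once per supported VO and running the configuration script. A minimal sketch follows; the template and script names are assumed to follow the usual gLite naming pattern and should be verified against your installation:
cp $GLITE_LOCATION/etc/config/templates/glite-io-server.cfg.xml $GLITE_LOCATION/etc/config/glite-io-server.cfg.xml
# edit the copied file: one <instance> section per VO, each with its own
# vo.name, io-daemon.Port and catalog endpoints (see Table 7)
$GLITE_LOCATION/etc/config/scripts/glite-io-server-config.py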
The gLite I/O server configuration script performs the following steps:
GLOBUS_LOCATION [default is /opt/globus]
Note: Steps 1, 2 and 3 can also be performed by means of the remote site configuration file or a combination of local and remote configuration files
The gLite I/O Client provides some APIs (both POSIX and not) for accessing remote files using gLite I/O. It consists basically of a C wrapper around the AlienIOclient class provided by the org.glite.data.io-base module.
Install one or more Certificate Authorities certificates in /etc/grid-security/certificates. The complete list of CA certificates can be downloaded in RPMS format from the Grid Policy Management Authority web site (http://www.gridpma.org/). A special security module called glite-security-utils can be installed by downloading the script glite-security-utils_installer.sh from the gLite web site (http://www.glite.org) and running it (Chapter 5). The module contains the latest version of the CA certificates plus a number of certificate and security utilities. In particular this module installs the glite-fetch-crl script and sets up a crontab that periodically checks for updated revocation lists
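For example, the whole sequence reduces to downloading and running the installer as root; the exact download path on the gLite web site is an assumption here and should be taken from the site itself:
wget http://www.glite.org/<download-area>/glite-security-utils_installer.sh
chmod u+x glite-security-utils_installer.sh
./glite-security-utils_installer.sh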
Parameter |
Default value |
Description |
User-defined Parameters |
||
io-client.Server |
changeme |
The hostname where the gLite I/O Server is running |
Advanced Parameters |
||
glite.installer.verbose |
true |
Enable configuration script verbose output |
io-client.ServerPort |
9999 |
The port the gLite I/O Server is listening on |
io-client.EncryptName |
true |
Enable encryption of the file name when sending a remote open request |
io-client.EncryptData |
false |
Enable encryption of the data blocks sent and received |
log.Priority |
DEBUG |
The log4cpp log level. Possible values are: 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL', 'ALERT', 'FATAL' |
log.FileName |
$GLITE_LOCATION_LOG/glite-io-client.log |
The location of the log file |
System Parameters |
||
io-client.CacheLevel |
7 |
The AliEn aiod Cache Level value |
io-client.NumberOfStreams |
1 |
Number of QUANTA tcp parallel streams |
Table 8: gLite I/O Client configuration parameters
The data movement services of gLite are responsible for securely transferring files between Grid sites. The transfer is always performed between two gLite Storage Elements that have a common transfer protocol available to them (usually gsiftp). The gLite Local Transfer Service is composed of two separate services, the File Transfer Service and the File Placement Service, plus a number of transfer agents.
The File Transfer Service is responsible for the actual transfer of the file between the SEs. It takes the source and destination names as arguments and performs the transfer. The FTS is managed by the site administrator, i.e. there is usually only one such service serving all VOs. The File Placement Service performs the catalog registration in addition to the copy. It makes sure that the catalog is only updated if the copy through the FTS was successful. The user will see this as a single atomic operation. The FPS is instantiated per VO. If a single node must support multiple VOs, then multiple instances of the FPS can be installed and configured.
The Data Transfer Agents perform data validation and scheduling operations. There are currently three agents: the Checker, the Fetcher and the Data Integrity Validator. They are instantiated per VO.
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Linux 3.0 or any binary compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed it can be found in the list of RPMS of the original OS distribution.
The Java JRE or JDK are required to run the R-GMA Server. This release requires v. 1.4.2 (revision 04 or greater). The JDK/JRE version to be used is a parameter in the configuration file. Please change it according to your version and location.
Due to license reasons, we cannot redistribute Java. Please download it from http://java.sun.com/ and install it if you have not yet installed it.
Parameter |
Default value |
Description |
User-defined Parameters |
||
transfer-fps.vo.name |
|
Name of the VO for a given instance |
transfer-fps.db.name |
|
Database name for File Placement Service. Ex. 'transfer' |
transfer-fps.db.user |
|
Name of the database user owning the fps database |
transfer-fps.db.password |
|
Password for accessing the fps database |
transfer-fps.catalog.url |
|
URL of the catalog to connect to. Ex.: http://lxb1427.cern.ch:8080/glite-data-catalog-service-fr-mysql/services/FiremanCatalog |
transfer-agent.user.name |
|
User which will own the transfer-agent in the crontab |
Advanced Parameters |
||
glite.installer.verbose |
true |
Enable verbose output |
glite.installer.checkcerts |
true |
Enable check of host certificates |
glite-data-transfer-fps.DBRESOURCENAME |
transfer |
|
transfer-fps.catalog.update.interval |
10 |
Interval between checks for files in Done state by the CatalogUpdater thread (in milliseconds) |
transfer-fps.catalog.query.attribute |
<empty> |
If this attribute is not set for the service, every client can see every job. If this attribute is set for the service, then only clients exposing this attribute can see all the jobs. Other clients can only see jobs which correspond to their DN |
transfer-fps.catalog.submit.attribute |
<empty> |
If this attribute is not set for the service, every client can submit new jobs. If this attribute is set for the service, only clients exposing this attribute are able to submit new jobs |
transfer-fps.catalog.cancel.attribute |
<empty> |
If this attribute is not set for the service, any client can cancel any job. If this attribute is set for the service, clients not exposing this attribute can only cancel their jobs (the ones with the same DN), while clients exposing this attribute are allowed to cancel any job |
transfer-agent-vo.Quota |
70 |
The percentage of the maximum concurrent transfers that the VO is allowed to submit. For example, if the Transfer Service is able to process 1000 requests at the same time, a quota of 60 means that the VO can run up to 600 (1000 * 0.6) transfers simultaneously. |
transfer-agent-fts-urlcopy.MaxTransfers |
10 |
The maximum number of transfers the gLite UrlCopy can process simultaneously. This limit is per VO. |
transfer-agent-fts-urlcopy.Streams |
1 |
The number of parallel streams that should be used during the transfer. |
glite-data-transfer-fps.CATALOG_DISABLE |
false |
Disable catalog lookup |
transfer-agent.check.frequency |
2 |
Delay (in minutes) between two transfer-agent Check actions |
transfer-agent.fetch.frequency |
3 |
Delay (in minutes) between two transfer-agent Fetch actions |
System Parameters |
||
transfer-fps.db.driver-class |
org.gjt.mm.mysql.Driver |
JDBC driver classname |
transfer-fps.name |
glite-data-transfer-fps |
FPS name |
transfer-fps.docBase |
${GLITE_LOCATION}/share/java/glite-data-transfer-fps.war |
Location of the transfer-fps.war file |
transfer-fps.db.url |
jdbc:mysql://localhost:3306/{$transfer-fps.db.name} |
Used DB URL |
transfer-agent-dao-mysql.HostName |
localhost |
Host where the MySQL database is running |
transfer-agent-dao-mysql.DBName |
${transfer-fps.db.name} |
Must match the transfer-fps.db.name |
transfer-agent-dao-mysql.User |
${transfer-fps.db.user} |
Must match the transfer-fps.db.user |
transfer-agent-dao-mysql.Password |
${transfer-fps.db.password} |
Must match the transfer-fps.db.password |
transfer-agent-vo.Name |
${transfer-fps.vo.name} |
Must match the transfer-fps.vo.name |
transfer-agent-dao-mysql.SocketName |
/var/log/mysql/mysql.sock |
MySQL sock file |
transfer-fps.CHANNEL_NAME |
|
In the current release should not be changed |
transfer-fps.CHANNEL_DOMAIN_A |
|
In the current release should not be changed |
transfer-fps.CHANNEL_DOMAIN_B |
|
In the current release should not be changed |
transfer-fps.CHANNEL_CONTACT |
|
In the current release should not be changed |
transfer-fps.CHANNEL_BANDWIDTH |
|
In the current release should not be changed |
Table 9: Local Transfer Service Configuration Parameters
The Local Transfer Service configuration script performs the following steps:
On the Grid, the user identifies files using Logical File Names (LFN).
The LFN is the key by which the users refer to their data. Each file may have several replicas, i.e. managed copies. The management in this case is the responsibility of the File and Replica Catalog.
The replicas are identified by Site URLs (SURLs). Each replica has its own SURL, specifying implicitly which Storage Element needs to be contacted to extract the data. The SURL is a valid URL that can be used as an argument in an SRM interface. Usually, users are not directly exposed to SURLs, but only to the logical namespace defined by LFNs. The Grid Catalogs provide the mappings needed for the services to actually locate the files, giving the user the illusion of a single file system.
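As a purely illustrative example (all names below are invented), a single logical file with two managed copies might be mapped as follows:
lfn:/grid/myvo/results/run42.dat
-> srm://se1.example.org/myvo/run42.dat (replica at site 1)
-> srm://se2.example.org/myvo/run42.dat (replica at site 2)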
Currently gLite provides two different modules for installing the catalog on MySQL or on Oracle. The names of the modules are:
glite-data-single-catalog |
MySQL version |
glite-data-single-catalog-oracle |
Oracle version |
In what follows the installation instructions are given for a generic single catalog version. Whenever the steps or requirements differ for MySQL and Oracle it will be noted. Replace glite-data-single-catalog with glite-data-single-catalog-oracle to use the Oracle version.
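In practice this means running the installer script that corresponds to the chosen database back-end; the script names below are assumed to follow the naming pattern of the other gLite modules and should be verified on the gLite web site:
sh glite-data-single-catalog_installer.sh # MySQL version
sh glite-data-single-catalog-oracle_installer.sh # Oracle version (assumed name)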
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Linux 3.0 or any binary compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed it can be found in the list of RPMS of the original OS distribution.
The Java JDK is required to run the Single Catalog Server. This release requires v. 1.4.2 (revision 04 or greater). The JDK version to be used is a parameter in the configuration file. Please change it according to your version and location.
Due to license reasons, we cannot redistribute Java. Please download it from http://java.sun.com/ and install it if you have not yet installed it.
The Oracle version requires the JDBC drivers (ocrs12.jar, ojdbc14.jar, orai18n.jar) to be installed on the server before running the installation script. These packages cannot be redistributed and are subject to export restrictions. Please download them from the Oracle web site (http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc101040.html) and install them in ${CATALINA_HOME}/common/lib.
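Assuming the three jar files have been downloaded to the current directory, installing them is a plain copy:
cp ocrs12.jar ojdbc14.jar orai18n.jar ${CATALINA_HOME}/common/lib/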
Parameter |
Default value |
Description |
User-defined Parameters |
||
catalog-service-fr-mysql.VONAME |
|
Name of the Virtual Organisation which is served by the catalog instance |
catalog-service-fr-mysql.DBNAME |
|
Database name used for a catalog service |
catalog-service-fr-mysql.DBUSER |
|
Database user name owning the catalog database |
catalog-service-fr-mysql.DBPASSWORD |
|
Password for accessing the catalog database |
Advanced Parameters |
||
glite.installer.verbose |
true |
Enable verbose output |
glite.installer.checkcerts |
true |
Enable check of host certificates |
System Parameters |
||
catalog-service-fr-mysql.DBURL |
MySQL: jdbc:mysql://localhost:3306/${catalog-service-fr-mysql.DBNAME} |
URL of the database |
Table 10: Single Catalog Configuration Parameters
The Single Catalog configuration script performs the following steps:
The R-GMA (Relational Grid Monitoring Architecture) is the Information and Monitoring Service of gLite. It is based on the Grid Monitoring Architecture (GMA) from the Global Grid Forum (GGF), which is a simple consumer-producer model that models the information infrastructure of a Grid as a set of consumers (that request information), producers (that provide information) and a central registry which mediates the communication between producers and consumers. R-GMA offers a global view of the information as if each Virtual Organisation had one large relational database.
Producers contact the registry to announce their intention to publish data, and consumers contact the registry to identify producers, which can provide the data they require. The data itself passes directly from the producer to the consumer: it does not pass through the registry.
R-GMA adds a standard query language (a subset of SQL) to the GMA model, so consumers issue SQL queries and receive in reply tuples (database rows) published by producers. R-GMA also ensures that all tuples carry a time-stamp, so that monitoring systems, which require time-sequenced data, are inherently supported.
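For example, a monitoring consumer could issue a query such as SELECT * FROM ServiceStatus (the table filled by the R-GMA servicetool described later in this chapter) and receive in reply one time-stamped tuple per published service.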
The R-GMA server part is divided into four components:
The client part of R-GMA contains the producer and consumers of information. There is one generic client and a set of four specialized clients to deal with a certain type of information:
The data archiver client makes the data from the R-GMA site-publisher, servicetool and GIN constantly available. By default the glue and service tables are archived; however, this can be configured.
Figure 2 gives an overview of the R-GMA architecture and the distribution of the different R-GMA components.
Figure 2: R-GMA components
In order to facilitate the installation of the information system R-GMA, the different components of the server and clients have been combined into one R-GMA server deployment module and several client sub-deployment modules that are automatically installed together with the corresponding gLite deployment modules that use them. Table 11 gives a list of R-GMA deployment modules, their content and/or the list of gLite deployment modules that install/use them.
In order to use the information system R-GMA, you first have to install the R-GMA server on one node. If you want, you can install further R-GMA servers on other nodes.
Deployment module |
Contains |
Used / included by |
R-GMA server |
R-GMA server, R-GMA registry server, R-GMA schema server, R-GMA browser, R-GMA site publisher, R-GMA data archiver, R-GMA servicetool |
|
R-GMA client |
R-GMA client APIs |
User Interface (UI), Worker Node (WN) |
R-GMA servicetool |
R-GMA servicetool |
Computing Element (CE), Data Local Transfer Service, Data Single Catalog (MySQL), Data Single Catalog (Oracle), I/O Server, Logging & Bookkeeping (LB), R-GMA server, Torque Server, VOMS Server, Workload Management System (WMS) |
R-GMA GIN |
R-GMA GadgetIN |
Computing Element (CE) |
Table 11: R-GMA deployment modules
The following rules have to be taken into account when installing a single or multiple servers and enabling/disabling the different options of the server(s):
Next, you can install the different services, e.g. the Computing Element. All necessary R-GMA components needed by a service are automatically downloaded and installed together with the service. You will only need to configure the corresponding parts of R-GMA by modifying the corresponding configuration files accordingly.
There is one common R-GMA configuration file (glite-rgma-common.cfg.xml) that is used by all R-GMA components to handle common R-GMA settings and that is shipped with each R-GMA component. In addition, each R-GMA component comes with its own configuration file (see the following sections for details).
The R-GMA server is the central server of the R-GMA service infrastructure. It contains the four R-GMA server parts – server, schema, registry and browser (see section 13.1.1) as well as the R-GMA clients – R-GMA servicetool, site publisher and data archiver (see section 13.1.2):
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Linux 3.0 or any binary compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed it can be found in the list of RPMS of the original OS distribution.
The Java JRE or JDK are required to run the R-GMA Server. This release requires v. 1.4.2 (revision 04 or greater). The JDK/JRE version to be used is a parameter in the configuration file. Please change it according to your version and location.
Due to license reasons, we cannot redistribute Java. Please download it from http://java.sun.com/ and install it if you have not yet installed it.
Parameter |
Default value |
Description |
User-defined parameters |
||
rgma.server.hostname |
|
Hostname of the R-GMA server. |
rgma.schema.hostname |
|
Host name of the R-GMA schema service. Example: lxb1420.cern.ch (See also configuration parameter ‘rgma.server.run_schema_service’ in the R-GMA server configuration file in case you install a server) |
rgma.registry.hostname |
|
Host name of the R-GMA registry service. You must specify at least one and you can specify several if you want to use several registries. Example: lxb1420.cern.ch (See also configuration parameter ‘rgma.server.run_registry_service’ in the R-GMA server configuration file in case you install a server). |
Advanced Parameters |
||
System Parameters |
||
rgma.user.name |
rgma |
Name of the user account used to run the R-GMA gLite services. Example: rgma |
rgma.user.group |
rgma |
Group of the user specified in the parameter ‘rgma.user.name’. Example: rgma |
Table 12: R-GMA common configuration parameters
Parameter |
Default value |
Description |
User-defined Parameters |
||
rgma.server. |
|
MySQL root password. Example: verySecret |
rgma.server. |
no |
Run a schema server by yourself (yes|no). If you want to run it on your machine set ‘yes’ and set the parameter ‘rgma.schema.hostname’ to the hostname of your machine otherwise set it to ‘no’ and set the ‘rgma.schema.hostname’ to the host name of the schema server you want to use. Example: yes |
rgma.server. |
no |
Run a registry server by yourself (yes|no). If you want to run it on your machine set ‘yes’ and add your hostname to the parameter list ‘rgma.registry.hostname’ otherwise set it to ‘no’. Example: yes |
rgma.server. |
yes |
Run an R-GMA browser (yes|no). Running a browser is optional but useful. Example: yes |
rgma.server. |
no |
Run the R-GMA data archiver (yes|no). Running an archiver makes the data from the site-publisher, servicetool and GadgetIN constantly available. If you turn on this option, by default the glue and service tables are archived. To change the archiving behaviour, you have to create/change the archiver configuration file and point the parameter ‘rgma.server. Example: yes |
rgma.server.run_site_publisher |
yes |
Run the R-GMA site-publisher (yes|no). Running the site-publisher publishes your site to R-GMA. Example: yes |
rgma.site-publisher.site-name |
|
Hostname of the site. It has to be a DNS entry owned by the site and must not be shared with another site (i.e. it uniquely identifies the site). Example: lxb1420.cern.ch |
rgma.site-publisher.contact.system_administrator |
|
Contact email address of the site system administrator. Example: systemAdministrator@mysite.com |
rgma.site-publisher.contact.user_support |
|
Contact email address of the user support.
Example: userSupport@mysite.com |
rgma.site-publisher.contact.site_security |
|
Contact email address of the site security responsible. Example: security@mysite.com |
rgma.site-publisher.location.latitude |
|
Latitude of your site. Please go to 'http://www.multimap.com/' to find the correct value for your site. Example: 46.2341 |
rgma.site-publisher.location.longitude |
|
Longitude of your site. Please go to 'http://www.multimap.com/' to find the correct value for your site. Example: 6.0447 |
Advanced Parameters |
||
glite.installer.verbose |
true |
Enable verbose output. Example : true |
rgma.server. |
1000 |
Maximum number of threads that are created for the tomcat http connector to process requests. This, in turn, specifies the maximum number of concurrent requests that the connector can handle. Example: 1000 |
rgma.server.archiver_configuration_file |
${GLITE_LOCATION}/etc/rgma-flexible-archiver/glue-config.props |
Configuration file to be used to set up the flexible-archiver database and to select which tables are to be backed up. By default, the glue and service tables are selected. (See also parameter ‘rgma.server. Example: '/my/path/my_config_file.props' |
System Parameters |
||
rgma.server. |
R-GMA |
Path under which R-GMA server should be deployed. Example: R-GMA |
rgma.server. |
R-GMA.war |
Name of war file for R-GMA server. Example: R-GMA.war |
Table 13: R-GMA server Configuration Parameters
The R-GMA configuration script performs the following steps:
1. Reads the following environment variables if set in the environment or in the global gLite configuration file $GLITE_LOCATION/etc/config/glite-global.cfg.xml:
GLITE_LOCATION_VAR [default is /var/glite]
GLITE_LOCATION_LOG [default is /var/log/glite]
GLITE_LOCATION_TMP [default is /tmp/glite]
2. Sets the following environment variables if not already set, using the values set in the global and R-GMA configuration files:
GLITE_LOCATION [=/opt/glite if not set anywhere]
CATALINA_HOME to the location specified in the global configuration file [default is /var/lib/tomcat5/]
JAVA_HOME to the location specified in the global configuration file
3. Configures the gLite Security Utilities module
4. Checks the directory structure of $GLITE_LOCATION.
5. Loads the R-GMA server configuration file $GLITE_LOCATION/etc/config/glite-rgma-server.cfg.xml and the R-GMA common configuration file $GLITE_LOCATION/etc/config/glite-rgma-common.cfg.xml and checks the configuration values.
6. Prepares the tomcat environment by:
a. setting the CATALINA_OPTS maximum Java heap size ‘-Xmx’ to half the memory size of your machine (a minimal illustration of this computation follows the list).
b. setting the maximum number of threads for the http connector using the configuration value.
c. deploying the R-GMA server application by creating the corresponding context file in $CATALINA_HOME/conf/Catalina/localhost/XXX.xml where XXX is the deploy path name of the R-GMA server specified in the configuration file (the default is R-GMA).
7. Configures the general R-GMA setup by running the R-GMA setup script $GLITE_LOCATION/share/rgma/scripts/rgma-setup.py using the configuration values from the configuration file for server, schema and registry hostname.
8. Exports the environment variable RGMA_HOME to $GLITE_LOCATION
9. Starts the MySQL server.
10. Sets the MySQL root password using the configuration value. If the password is already set, the script verifies if the present password and the one specified by the configuration file are the same.
11. Configures the R-GMA server by running the R-GMA server setup script $GLITE_LOCATION/share/rgma/scripts/rgma-server-setup.py using the options to run/not run a schema, registry and browser from the configuration file.
12. Fills the MySQL DB with the R-GMA configuration.
13. Configures the R-GMA server security by updating the file $GLITE_LOCATION/etc/rgma-server/ServletAuthentication.props with the location of the key and cert files for tomcat.
14. If the site publisher or data archiver (flexible-archiver) are turned on in the configuration, the R-GMA client security is configured:
a. The hostkey and certificates are copied to the .cert subdirectory of the R-GMA user home directory.
b. The security configuration file for the client, $GLITE_LOCATION/etc/rgma/ClientAuthentication.props, is updated with the location of the cert and key files.
15. If the site publisher is turned on in the configuration, the site publisher will be configured:
a. The configuration file $GLITE_LOCATION/etc/rgma-publish-site/site.props is updated with the site name and the corresponding contact addresses.
16. If the data archiver (flexible-archiver) is turned on in the configuration, the flexible archiver is configured:
a. The configuration file for the archiver properties, specified in the configuration parameter ‘rgma.server.archiver_configuration_file’, is copied to $GLITE_LOCATION/etc/rgma-flexible-archiver/flexy.props.
b. The flexible-archiver database is set up via $GLITE_LOCATION/bin/rgma-flexible-archiver-db-setup $GLITE_LOCATION/etc/rgma-flexible-archiver/flexy.props.
17. Starts the MySQL server.
18. Starts the tomcat server and waits for it to come up to full speed before continuing.
19. Starts the data archiver if enabled.
20. Starts the site publisher if enabled.
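As a minimal illustration of step 6a (a sketch only, not the actual code of the configuration script), half of the machine's physical memory can be derived from /proc/meminfo and handed to the JVM as follows:
# read total memory in kB and pass half of it to the JVM as -Xmx
MEM_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
export CATALINA_OPTS="-Xmx$((MEM_KB / 2 / 1024))m"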
The R-GMA Client module is a set of client APIs in C, C++, Java and Python to access the information and monitoring functionality of the R-GMA system. The client is automatically installed as part of the User Interface and Worker Node.
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Linux 3.0 or any binary compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed it can be found in the list of RPMS of the original OS distribution.
Install one or more Certificate Authorities certificates in /etc/grid-security/certificates. The complete list of CA certificates can be downloaded in RPMS format from the Grid Policy Management Authority web site (http://www.gridpma.org/). A special security module called glite-security-utils (gLite Security Utilities) is installed and configured automatically when installing and configuring the R-GMA Client (refer to Chapter 5 for more information about the Security Utilities module). The module contains the latest version of the CA certificates plus a number of certificate and security utilities. In particular this module installs the glite-fetch-crl, glite-mkgridmap and mkgridmap.py scripts and sets up cron jobs that periodically check for updated revocation lists and grid-mapfile entries, if required.
The Java JRE or JDK are required to run the R-GMA client java API. This release requires v. 1.4.2 (revision 04 or greater). The JDK/JRE version to be used is a parameter in the configuration file. Please change it according to your version and location.
Due to license reasons, we cannot redistribute Java. Please download it from http://java.sun.com/ and install it if you have not yet installed it.
If you install the client as part of another deployment module (e.g. the UI), the R-GMA client is installed automatically and you can continue with the configuration description in the next section. Otherwise, the installation steps are:
Parameter |
Default value |
Description |
User-defined Parameters |
||
Advanced Parameters |
||
glite.installer.verbose |
true |
Enable verbose output |
System Parameters |
Table 14: R-GMA Client Configuration Parameters
If you use the R-GMA client as a sub-deployment module that is downloaded and used by another deployment module, the configuration script is run automatically by the configuration script of the other deployment module and you can skip the following steps. Otherwise:
The R-GMA Client configuration script performs the following steps:
The R-GMA servicetool is an R-GMA client tool to publish information about the services it knows about and their current status. The tool is divided into three parts:
A daemon monitors regularly configuration files containing information about the services a site has installed. At regular intervals, this information is published to the ServiceTable. Each service specifies a script that needs to be run to obtain status information. The scripts are run by the daemon at the specified frequency and the results are inserted into the ServiceStatus table.
The second part of the tool is a command line program that modifies the configuration files to add, delete and modify services. It does not communicate with the daemon directly, but the next time the daemon scans the configuration files the changes will be published.
The third part of the tool is a command line program to query the service tables for status information.
This service is normally installed automatically with other modules and doesn’t need to be installed independently.
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Linux 3.0 or any binary compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed it can be found in the list of RPMS of the original OS distribution.
Install one or more Certificate Authorities certificates in /etc/grid-security/certificates. The complete list of CA certificates can be downloaded in RPMS format from the Grid Policy Management Authority web site (http://www.gridpma.org/). A special security module called glite-security-utils (gLite Security Utilities) is installed and configured automatically when installing and configuring the R-GMA Servicetool (refer to Chapter 5 for more information about the Security Utilities module). The module contains the latest version of the CA certificates plus a number of certificate and security utilities. In particular this module installs the glite-fetch-crl, glite-mkgridmap and mkgridmap.py scripts and sets up cron jobs that periodically check for updated revocation lists and grid-mapfile entries, if required.
The Java JRE or JDK are required to run the R-GMA servicetool. This release requires v. 1.4.2 (revision 04 or greater). The JDK/JRE version to be used is a parameter in the configuration file. Please change it according to your version and location.
Due to license reasons, we cannot redistribute Java. Please download it from http://java.sun.com/ and install it if you have not yet installed it.
If you install the servicetool as part of another deployment module (e.g. the Computing element), the R-GMA servicetool is installed automatically and you can continue with the configuration description in the next section. Otherwise, the installation steps are:
Parameter |
Default value |
Description |
User-defined Parameters |
||
rgma.servicetool.sitename |
|
DNS name of the site for the published services (in general the hostname). Example: lxb2029.cern.ch |
Advanced Parameters |
||
glite.installer.verbose |
True |
Enable verbose output. Example : true |
System Parameters |
Table 15: R-GMA servicetool configuration parameters
Parameter |
Default value |
Description |
User-defined Parameters |
||
rgma.servicetool.enable |
true |
Publish this service via the R-GMA servicetool. If this variable is set to false, the other values below are not taken into account. Example: true |
rgma.servicetool.name |
|
Name of the service. This should be globally unique. Example: host.name.service_name |
rgma.servicetool. |
|
URL to contact the service at. This should be unique for each service. Example: http://your.host.name:port/serviceGroup/ServiceName |
rgma.servicetool. |
|
The service type. This should be uniquely defined for each service type. Currently two methods of type naming are recommended: · The targetNamespace from the WSDL document followed by a space and then the service name · A URL owned by the body or individual who defines the service type Example: Name of service type |
rgma.servicetool. |
|
Service version in the form ‘major.minor.patch’ Example: 1.2.3 |
rgma.servicetool. |
|
How often to publish the service details (like endpoint, version etc). (Unit: seconds) Example: 600 |
rgma.servicetool. |
|
Script to run to determine the service status. This script should return an exit code of 0 to indicate the service is OK; other values should indicate an error. The first line of the standard output should be a brief message describing the service status (e.g. ‘Accepting connections’). Example: /opt/glite/bin/myService/serviceStatus |
rgma.servicetool. |
|
How often to check and publish the service status. (Unit: seconds) Example: 60 |
rgma.servicetool.url_wsdl |
|
URL of a WSDL document for the service (leave blank if the service has no WSDL). |
rgma.servicetool. |
|
URL of a document containing a detailed description of the service and how it should be used. Example: http://your.host.name/service/semantics.html |
Advanced Parameters |
||
System Parameters |
Table 16: R-GMA servicetool configuration parameters for a service to be published via the R-GMA servicetool
The R-GMA configuration script performs the following steps:
1. Sets the following environment variables if not already set, using the values set in the global and R-GMA configuration files:
GLITE_LOCATION [=/opt/glite if not set anywhere]
JAVA_HOME to the location specified in the global configuration file
2. Reads the following environment variables if set in the environment or in the global gLite configuration file $GLITE_LOCATION/etc/config/glite-global.cfg.xml:
GLITE_LOCATION_VAR [default is /var/glite]
GLITE_LOCATION_LOG [default is /var/log/glite]
GLITE_LOCATION_TMP [default is /tmp/glite]
3. Checks the directory structure of $GLITE_LOCATION.
4. Configures the R-GMA servicetool by creating the servicetool configuration file at $GLITE_LOCATION/etc/rgma-servicetool/rgma-servicetool.conf, which specifies the sitename.
5. Takes the set of parameters for the R-GMA servicetool from each instance in the service configuration files; for each of these instances a configuration file is created at $GLITE_LOCATION/etc/rgma-servicetool/services/XXXX.service, where XXXX is the name of the service.
6. Starts the R-GMA servicetool via /etc/init.d/rgma-servicetool start
The R-GMA GadgetIN (GIN) is an R-GMA client to extract information from MDS and to republish it to R-GMA. The R-GMA GadgetIN is installed and used by the Computing Element (CE) to publish its information and does not need to be installed independently.
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Linux 3.0 or any binary compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed it can be found in the list of RPMS of the original OS distribution.
Install one or more Certificate Authorities certificates in /etc/grid-security/certificates. The complete list of CA certificates can be downloaded in RPMS format from the Grid Policy Management Authority web site (http://www.gridpma.org/). A special security module called glite-security-utils (gLite Security Utilities) is installed and configured automatically when installing and configuring the R-GMA GadgetIN (refer to Chapter 5 for more information about the Security Utilities module). The module contains the latest version of the CA certificates plus a number of certificate and security utilities. In particular this module installs the glite-fetch-crl, glite-mkgridmap and mkgridmap.py scripts and sets up cron jobs that periodically check for updated revocation lists and grid-mapfile entries, if required.
The Java JRE or JDK are required to run the R-GMA GadgetIN. This release requires v. 1.4.2 (revision 04 or greater). The JDK/JRE version to be used is a parameter in the configuration file. Please change it according to your version and location.
Due to license reasons, we cannot redistribute Java. Please download it from http://java.sun.com/ and install it if you have not yet installed it.
If you install the R-GMA GadgetIN as part of another deployment module (e.g. the Computing element), the R-GMA GadgetIN is installed automatically and you can continue with the configuration description in the next section. Otherwise, the installation steps are:
1. Download the latest version of the R-GMA GadgetIN installation script glite-rgma-gin_installer.sh from the gLite web site. It is recommended to download the script in a clean directory.
2. Make the script executable (chmod u+x glite-rgma-gin_installer.sh) and execute it, or run it with sh glite-rgma-gin_installer.sh
3. Run the script as root. All the required RPMS are downloaded from the gLite software repository in the directory glite-rgma-gin next to the installation script and the installation procedure is started. If some RPM is already installed, it is upgraded if necessary. Check the screen output for errors or warnings.
4. If the installation is performed successfully, the following components are installed:
gLite in /opt/glite ($GLITE_LOCATION)
gLite-essentials java in $GLITE_LOCATION/externals/share
5. The gLite R-GMA GIN configuration script is installed in $GLITE_LOCATION/etc/config/scripts/glite-rgma-gin-config.py. All the necessary template configuration files are installed into $GLITE_LOCATION/etc/config/templates/. The next section will guide you through the different files and the necessary steps for the configuration.
If you use the R-GMA GadgetIN as a sub-deployment module that is downloaded and used by another deployment module (e.g. the CE), the configuration script is run automatically by the configuration script of the other deployment module and you can skip step 3. Otherwise:
Parameter |
Default value |
Description |
User-defined Parameters |
||
rgma.gin.run_generic_info_provider |
|
Run the generic information provider (gip) backend (yes|no). Within LCG this comes with the CE and SE. Example: no |
rgma.gin.run_fmon_provider |
|
Run the fmon backend (yes|no). This is used by LCG for GridICE. Example: no |
rgma.gin.run_ce_provider |
|
Run the CE backend (yes|no). |
Advanced Parameters |
||
glite.installer.verbose |
true |
Enable verbose output. Example : true |
System Parameters |
Table 17: R-GMA GadgetIN configuration parameters
The R-GMA GadgetIN configuration script performs the following steps:
[To Be Added]
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Linux 3.0 or any binary compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed it can be found in the list of RPMS of the original OS distribution.
1. Install one or more Certificate Authorities certificates in /etc/grid-security/certificates. The complete list of CA certificates can be downloaded in RPMS format from the Grid Policy Management Authority web site (http://www.gridpma.org/). A special security module called glite-security-utils can be installed by downloading the script glite-security-utils_installer.sh from the gLite web site (http://www.glite.org) and running it (Chapter 5). The module contains the latest version of the CA certificates plus a number of certificate and security utilities. In particular this module installs the glite-fetch-crl script and sets up a crontab that periodically checks for updated revocation lists
2. Install the server host certificate hostcert.pem and key hostkey.pem in /etc/grid-security
The Java JRE or JDK are required to run the R-GMA Server. This release requires v. 1.4.2 (revision 04 or greater). The JDK/JRE version to be used is a parameter in the configuration file. Please change it according to your version and location.
Due to license reasons, we cannot redistribute Java. Please download it from http://java.sun.com/ and install it if you have not yet installed it.
1. Download from the gLite web site the latest version of the VOMS Server installation script glite-voms-server_installer.sh. It is recommended to download the script into a clean directory
2. Make the script executable (chmod u+x glite-voms-server_installer.sh) and execute it
3. Run the script as root. All the required RPMS are downloaded from the gLite software repository in the directory glite-voms-server next to the installation script and the installation procedure is started. If some RPM is already installed, it is upgraded if necessary. Check the screen output for errors or warnings.
4. If the installation is performed successfully, the following components are installed:
gLite in /opt/glite
Tomcat in /var/lib/tomcat5
5. The gLite VOMS Server and VOMS Administration configuration script is installed in $GLITE_LOCATION/etc/config/scripts/glite-voms-server-config.py. A template configuration file is installed in $GLITE_LOCATION/etc/config/templates/glite-voms-server.cfg.xml
1. Copy the global configuration file template $GLITE_LOCATION/etc/config/template/glite-global.cfg.xml to $GLITE_LOCATION/etc/config, open it and modify the parameters if required (Table 1)
2. Copy the configuration file template from $GLITE_LOCATION/etc/config/templates/glite-voms-server.cfg.xml to $GLITE_LOCATION/etc/config/glite-voms-server.cfg.xml and modify the parameter values as necessary (Table 18)
3. Some parameters have default values; others must be changed by the user. All parameters that must be changed have a token value of changeme. Since multiple instances of the VOMS Server can be installed on the same node (one per VO), some of the parameters refer to individual instances. Each instance is contained in a separate named <instance/> tag. A default instance is already defined and can be directly configured. Additional instances can be added by simply copying and pasting the <instance/> section, assigning a name and changing the parameter values as desired. The following parameters can be set:
Parameter |
Default value |
Description |
User-defined Parameters |
||
voms.vo.name |
|
Name of the VO associated with the VOMS instance |
voms.port.number |
|
Port number of the VOMS instance |
voms.code |
|
VOMS code. In the multi-instance scenario this parameter MUST have a unique value for each VOMS instance |
vo.admin.dn |
|
Certificate DN of the VO admin |
vo.admin.ca |
|
CA of the VO admin |
vo.admin.cn |
|
Common name of the VO admin |
vo.admin.e-mail |
|
E-mail address of the VO admin |
vo.ca.URI |
|
URI from which the CRLs are downloaded |
voms.mysql.passwd |
|
Password (in clear) of the root user of the MySQL server used for VOMS databases |
voms.db.user |
|
Name of the VOMS database owner. In multi-instance scenario this value CAN be redefined in each 'instance' section |
voms.db.passwd |
|
Password (clear) of the VOMS database owner. In multi-instance scenario this value CAN be redefined in the 'instance' section |
Advanced Parameters |
||
glite.installer.verbose |
true |
Enable verbose output |
glite.installer.checkcerts |
true |
Enable check of host certificates |
voms-admin.install |
true |
Install the VOMS Admin interface on this server |
System Parameters |
Table 18: VOMS Configuration Parameters
4. As root, run the VOMS Server configuration script $GLITE_LOCATION/etc/config/scripts/glite-voms-server-config.py
5. The VOMS Server is now ready.
The VOMS Server configuration script performs the following steps:
1. Sets the following environment variables if not already set, using the values defined in the global and VOMS configuration files:
GLITE_LOCATION [default is /opt/glite]
CATALINA_HOME [default is /var/lib/tomcat5]
2. Reads the following environment variables if set in the environment or in the global gLite configuration file $GLITE_LOCATION/etc/config/glite-global.cfg.xml:
GLITE_LOCATION_VAR
GLITE_LOCATION_LOG
GLITE_LOCATION_TMP
3. Loads the VOMS Server configuration file $GLITE_LOCATION/etc/config/glite-voms-server.cfg.xml
4. Sets the following additional environment variables needed internally by the services (this requirement should disappear in the future):
PATH=$GLITE_LOCATION/bin:$GLITE_LOCATION/externals/bin:$GLOBUS_LOCATION/bin:$PATH
LD_LIBRARY_PATH=$GLITE_LOCATION/lib:$GLITE_LOCATION/externals/lib:$GLOBUS_LOCATION/lib:$LD_LIBRARY_PATH
The gLite User Interface is a suite of clients and APIs that users and applications can use to access the gLite services. The gLite User Interface includes the following components:
· Data Catalog command-line clients and APIs
· Data Transfer command-line clients and APIs
· gLite I/O Client and APIs
· R-GMA Client and APIs
· VOMS command-line tools
· Workload Management System clients and APIs
· Logging and bookkeeping clients and APIs
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Linux 3.0 or any binary compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed it can be found in the list of RPMS of the original OS distribution.
1. Install one or more Certificate Authorities certificates in /etc/grid-security/certificates. The complete list of CA certificates can be downloaded in RPMS format from the Grid Policy Management Authority web site (http://www.gridpma.org/). A security module called glite-security-utils is installed and configured automatically by the UI installer. The module contains the latest version of the CA certificates plus a number of certificate and security utilities. In particular this module installs (for the root installation) the glite-fetch-crl script and sets up a crontab that periodically checks for updated revocation lists. In the case of a non-privileged user installation, the CRL update is left to the user, and adding it to the user's crontab is a manual step.
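For a non-privileged installation, a crontab entry along the following lines keeps the revocation lists current; the location of glite-fetch-crl under the user's installation prefix is an assumption and must be adapted:
(crontab -l 2>/dev/null; echo "0 */6 * * * $HOME/glite_ui/sbin/glite-fetch-crl") | crontab -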
The Java JRE or JDK are required to run the R-GMA Server. This release requires v. 1.4.2 (revision 04 or greater). The JDK/JRE version to be used is a parameter in the configuration file. Please change it according to your version and location.
Due to license reasons, we cannot redistribute Java. Please download it from http://java.sun.com/ and install it if you have not yet installed it.
The gLite User Interface can be installed as root or as non-privileged user. The installation procedure is virtually identical. The root installation installs by default the UI RPMS in the standard location /opt/glite (the location of the gLite RPMS can be changed by means of the prefix command line switch).
The non-privileged user installation does not differ from the root one. The user installation is still based on the services provided by the rpm program (dependency checking, package removal and upgrade), but uses a copy of the system RPM database created in user space and used for the local user installation. This approach allows performing a non-privileged user installation and still keeping the advantages of using a package manager.
The location of the gLite UI installed by a non-privileged user is by default `pwd`/glite_ui (the glite_ui directory in the current working directory).
The destination directory of both root and user installations can be modified by using the --basedir <path> option of the UI installer script, where <path> MUST be an absolute path.
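For example (the target path here is purely illustrative):
./glite-ui_installer.sh --basedir /home/joe/glite_ui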
The installation steps are the same in both the root and non-root installation cases:
1. Download from the gLite web site the latest version of the UI installation script glite-ui_installer.sh. It is recommended to download the script into a clean directory
2. Make the script executable (chmod u+x glite-ui_installer.sh) and execute it. If needed, pass the --basedir <path> option to specify the target installation directory.
3. Run the script as root or as a normal user. All the required RPMS are downloaded from the gLite software repository in the directory glite-ui next to the installation script and the installation procedure is started. If some RPMs are already installed, they are upgraded if necessary. Check the screen output for errors or warnings. This step can fail if some of the OS RPMs are missing. These RPMs MUST be installed manually from the OS distribution CD or with the apt/yum tools
4. If the installation is performed successfully, the following components are installed:
a) root installation:
gLite in /opt/glite (= GLITE_LOCATION)
Globus in /opt/globus (= GLOBUS_LOCATION)
GPT in /opt/gpt (= GPT_LOCATION)
b) user installation:
gLite, Globus and GPT are installed in the tree under `pwd`/glite_ui, with the /opt/[glite, globus, gpt] part removed. The GLITE_LOCATION, GLOBUS_LOCATION and GPT_LOCATION variables are set to the `pwd`/glite_ui value.
5. The gLite UI configuration script is installed in $GLITE_LOCATION/etc/config/scripts/glite-ui-config.py. A template configuration file is installed in $GLITE_LOCATION/etc/config/templates/glite-ui.cfg.xml
1. Copy the global configuration file template $GLITE_LOCATION/etc/config/template/glite-global.cfg.xml to $GLITE_LOCATION/etc/config, open it and modify the parameters if required (Table 1)
2. Copy the configuration file templates from $GLITE_LOCATION/etc/config/templates/glite-ui.cfg.xml to $GLITE_LOCATION/etc/config/glite-ui-service.cfg.xml
$GLITE_LOCATION/etc/config/templates/glite-io-client.cfg.xml to $GLITE_LOCATION/etc/config/glite-io-client.cfg.xml
$GLITE_LOCATION/etc/config/templates/glite-rgma-client.cfg.xml to $GLITE_LOCATION/etc/config/glite-rgma-client.cfg.xml
$GLITE_LOCATION/etc/config/templates/glite-rgma-common.cfg.xml to $GLITE_LOCATION/etc/config/glite-rgma-common.cfg.xml
$GLITE_LOCATION/etc/config/templates/glite-security-utils.cfg.xml to $GLITE_LOCATION/etc/config/glite-security-utils.cfg.xml
and modify the parameter values as necessary (Table 19). For the glite-io-client, glite-rgma-client, glite-rgma-common and glite-security-utils configuration files please refer to the corresponding chapters of this guide. Alternatively, a site configuration file can be used (refer to section 4.2.4 for more information)
3. Some parameters have default values; others must be changed by the user. All parameters that must be changed have a token value of changeme. The configuration file contains one or more <set> sections, one for each VO that the UI must be configured for, plus a global <parameters> section. The following parameters can be set:
For the <set/> section:

Parameter | Default value | Description
User-defined Parameters
name | | Name of the set
ui.VirtualOrganisation | | Name of the VO corresponding to this set
ui.NSAddresses | | Array of the WMS Network Servers for a given VO
ui.LBAddresses | | Array of Logging and Bookkeeping servers corresponding to each NS server
ui.MyProxyServer | | MyProxy server to use
ui.voms.server | | VOMS server name for the given VO
ui.voms.port | | VOMS server port number
ui.voms.cert.subject | | DN of the VOMS server's certificate
py-ui.requirements | | Requirements for job matchmaking

For the global <parameters> section:

Parameter | Default value | Description
User-defined Parameters
py-ui.DefaultVo | | Default VO to connect to
Advanced Parameters
glite.installer.verbose | true | Enable verbose output
glite.installer.checkcerts | false | Switch on/off the check for the existence of the host certificate files
py-ui.rank | - other.GlueCEStateEstimatedResponseTime |
py-ui.RetryCount | 3 |
py-ui.OutputStorage | "/tmp" |
py-ui.ListenerStorage | "/tmp" |
py-ui.LoggingTimeout | 10 |
py-ui.LoggingSyncTimeout | 10 |
py-ui.NSLoggerLevel | 1 |
py-ui.DefaultStatusLevel | 1 |
py-ui.DefaultLogInfoLevel | 1 |
System Parameters
Table 19: UI Configuration Parameters
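Before running the configuration script it is worth making sure that no mandatory parameter has been left at its token value. A quick check could be:

  # list any parameters still set to the 'changeme' token
  grep -n changeme $GLITE_LOCATION/etc/config/glite-ui.cfg.xml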
4. Run the UI configuration script $GLITE_LOCATION/etc/config/scripts/glite-ui-config.py.
5. The gLite User Interface is now ready.
The UI configuration script performs the following steps:
1. To avoid unnecessary editing of the global configuration file, the installer tries to modify the GLITE_LOCATION* and GLOBUS_LOCATION variables to point to the currently installed UI instance. All modifications are reported in the configuration script output.
2. Set the following environment variables, if not already set, using the values defined in the global and UI configuration files:
GLITE_LOCATION [default is /opt/glite or `pwd`/glite_ui]
GLOBUS_LOCATION [default is /opt/globus or `pwd`/glite_ui]
3. Read the following environment variables, if set in the environment or in the global gLite configuration file $GLITE_LOCATION/etc/config/glite-global.cfg.xml:
GLITE_LOCATION_VAR
GLITE_LOCATION_LOG
GLITE_LOCATION_TMP
4. Load the UI configuration file $GLITE_LOCATION/etc/config/glite-ui.cfg.xml or a site configuration file from the specified URL
5. Set the following additional environment variables:
PATH=$GLITE_LOCATION/bin:$GLITE_LOCATION/externals/bin:$GLOBUS_LOCATION/bin:$GPT_LOCATION/bin:$PATH
LD_LIBRARY_PATH=$GLITE_LOCATION/lib:$GLITE_LOCATION/externals/lib:$GLOBUS_LOCATION/lib:$GPT_LOCATION/lib:$LD_LIBRARY_PATH
6. Save the necessary configuration variables into the /etc/glite/profile.d/ directory for the root installation, or into the $HOME/.glite directory for the user installation. For the user installation the script also modifies the .bashrc and .cshrc files so that these files are automatically sourced at login.
To get the environment configured correctly, each gLite UI user MUST run the $GLITE_LOCATION/etc/config/scripts/glite-ui-config.py configuration script before using the gLite UI for the first time. The value of the GLITE_LOCATION variable MUST be communicated beforehand by the administrator of the UI installation. In this case the script creates a copy of the $GLITE_LOCATION/etc/vomses file in $HOME/.vomses (required by the VOMS client) and sets up the automatic sourcing of the UI instance parameters.
To ensure the correct functioning of the gLite UI after the execution of the glite-ui-config.py script, it is necessary to either:
1) source the corresponding file with the UI environment parameters in the /etc/glite/profile.d/ or ~<ui_manager>/.glite directory, or
2) log out and log back in; the file with the UI environment variables will then be sourced automatically.
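For a user of a root-installed UI, the first-time set-up could therefore look like the following sketch (the GLITE_LOCATION value and the name of the environment file are illustrative; use the ones communicated by your UI administrator):

  export GLITE_LOCATION=/opt/glite                       # value communicated by the administrator
  $GLITE_LOCATION/etc/config/scripts/glite-ui-config.py  # creates $HOME/.vomses and the login hooks
  source /etc/glite/profile.d/glite_setenv.sh            # or log out and back in (file name is illustrative)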
TORQUE (Tera-scale Open-source Resource and QUEue manager) is a resource manager providing control over batch jobs and distributed compute nodes. It is a community effort based on the original PBS project and has incorporated significant advances in the areas of scalability and fault tolerance.
The TORQUE system is composed of a pbs_server, which provides the basic batch services such as receiving/creating a batch job and protecting the job against system crashes, and a second service, the pbs_mom, which places the job into execution when it receives a copy of the job from the server. The pbs_mom creates a new session as identical to a user login session as possible; it is also responsible for returning the job's output to the user when directed to do so by the pbs_server. The job scheduler is another daemon; it contains the site's policy controlling which job is run, and where and when it is run. The scheduler appears as a batch manager to the server. The scheduler used by the TORQUE module is Maui.
This deployment module contains and configures the pbs_server (server configuration, queue creation, etc.) and Maui services. It is also responsible for registering both services into R-GMA via the servicetool deployment module.
The sshd configuration required for the torque clients to copy their output back to the torque server is also carried out in this module.
A Torque Server (the Computing Element node) can easily also work as a Torque Client (a Worker Node) by including and configuring the pbs_mom service. By design, the Torque Server deployment module does not include the RPMS and configuration necessary to make it work as a Torque Client. The only additional task needed to make a Torque Server also act as a Torque Client is the installation and configuration of the Torque Client deployment module.
This deployment module configures the pbs_mom service and is aimed at being installed on the worker nodes. It is also responsible for the ssh configuration that allows copying the job output back to the Torque Server (Computing Element).
These installation instructions are based on the RPMS distribution of gLite. It is also assumed that the target server platform is Red Hat Enterprise Linux 3.0 or any binary-compatible distribution, such as Scientific Linux or CentOS. Whenever a package needed by gLite is not distributed as part of gLite itself, it is assumed that it can be found in the list of RPMS of the original OS distribution.
1. Download from the gLite web site the latest version of the Torque Server installation script glite-torque-server_installer.sh. It is recommended to download the script into a clean directory.
2. Make the script executable (chmod u+x glite-torque-server_installer.sh).
3. Run the script as root. All the required RPMS are downloaded from the gLite software repository into the directory glite-torque-server next to the installation script and the installation procedure is started. If some RPM is already installed, it is upgraded if necessary. Check the screen output for errors or warnings.
4. If the installation is performed successfully, the following components are installed (a quick verification sketch is shown after this list):
gLite in /opt/glite ($GLITE_LOCATION)
torque in /var/spool/pbs
5. The gLite torque-server configuration script is installed in $GLITE_LOCATION/etc/config/scripts/glite-torque-server-config.py. A template configuration file is installed in $GLITE_LOCATION/etc/config/templates/glite-torque-server.cfg.xml
6. The gLite torque-server module installs the R-GMA servicetool to publish its information to the R-GMA information system. The details of the installation of the R-GMA servicetool are described in section 13.2.4.5.
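A quick way to verify the outcome of the installation is to check the installed packages and the spool area (a sketch; exact package names may differ between releases):

  rpm -qa | grep -i torque   # the torque RPMS should be listed
  ls /var/spool/pbs          # the torque spool area should exist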
1. Copy the global configuration file template $GLITE_LOCATION/etc/config/templates/glite-global.cfg.xml to $GLITE_LOCATION/etc/config, open it and modify the parameters if required (see Table 1).
2. Copy the configuration file template $GLITE_LOCATION/etc/config/templates/glite-torque-server.cfg.xml to $GLITE_LOCATION/etc/config/glite-torque-server.cfg.xml and modify the parameter values as necessary. Some parameters have default values; others must be changed by the user. All parameters that must be changed have a token value of changeme. The parameters that can be set are listed in Table 20. The R-GMA servicetool related parameters can be found in Table 15.
The parameters in the file can be divided into three categories:
a. Common parameters (first part of Table 20).
b. Worker node instances (second part of Table 20). Each Torque Client is described by a set of parameters grouped by the tag
<instance name="changeme" service="wn-torque">
...
</instance>
At least one worker node instance must be defined. If you want to use multiple clients, create a separate instance for each client by copying/pasting the <instance> section in this file. Next, change the name of each client instance from 'changeme' to the client name and adapt the parameters of each instance accordingly.
c. Queues (third part of Table 20). For every queue to be created in the Torque Server, the configuration file contains a list of parameters grouped by the tag
<instance name="xxxx" service="pbs-queue">
...
</instance>
where xxxx is the name of the queue. Adapt the parameters of each instance accordingly. If you want to configure more queues, add a separate instance for each queue by copying/pasting the <instance> section in this file.
By default, the configuration file defines three queues (short, long and infinite) with different values and with acl_groups disabled. It is up to the users to customise their queues depending on their requirements. The queues actually created can be checked as shown in the sketch below.
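Once the configuration script has been run (step 4 below), the resulting server and queue set-up can be inspected with the standard TORQUE commands (a sketch; it assumes the TORQUE binaries are in the PATH):

  qmgr -c 'print server'   # dump the pbs_server attributes and queue definitions
  qstat -Q                 # one summary line per queue (short, long and infinite by default)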
Common parameters

Parameter | Default value | Description
User-defined Parameters
torque-server.name | | Name of the machine where the job server is running; it usually corresponds to the Computing Element. Example: ${HOSTNAME}.
torque-server.force | False | Specifies the behaviour of the pbs_server parameter setting and queue creation. If True, the script takes full control of queue creation/deletion: a queue that is specified in the configuration file and later removed from it is also removed from the pbs_server configuration. If False, no queue removal is performed.
se.name | | Storage Element name (if necessary).
Advanced Parameters
glite.installer.verbose | True | Enable verbose output.
torque-server.scheduling | True | When set to true, the server calls the job scheduler; if false, the job scheduler is not called. The value of scheduling may also be specified on the pbs_server command line with the -a option.
torque-server.acl-host.enable | False | Enables the server host access control list. Values: True, False.
torque-server.default.queue | short | The target queue when a request does not specify a queue name. Must be set to an existing queue.
torque-server.log.events | 511 | A bit string that specifies the types of events that are logged. The default value 511 logs all events.
torque-server.query.other-jobs | True | Controls whether general users, other than the job owner, are allowed to query the status of or select the job.
torque-server.scheduler.interaction | | The time, in seconds, between iterations of attempts by the batch server to schedule jobs. On each iteration, the server examines the available resources and runnable jobs to see if a job can be initiated. This examination also occurs whenever a running batch job terminates or a new job is placed in the queued state in an execution queue.
torque-server.default.node | glite | A node specification to use if there is no other supplied specification. This attribute is only used by servers where a nodes file exists in the server_priv directory, providing a list of nodes to the server. If the nodes file does not exist, a single node is assumed.
torque-server.node.pack | False | Controls how multiple-processor nodes are allocated to jobs. If set to true, jobs are assigned to the multiple-processor nodes with the fewest free processors; this packs jobs into the fewest possible nodes, leaving multiple-processor nodes free for jobs that need many processors on a node. If set to false, jobs are scattered across nodes, reducing conflicts over memory between jobs. If unset, jobs are packed on nodes in the order that the nodes are declared to the server (in the nodes file).
maui.server.port | 40559 | Port on which the Maui server listens for client connections.
maui.server.mode | NORMAL | Specifies how Maui interacts with the outside world. Possible values: NORMAL, TEST and SIMULATION.
maui.defer.time | 00:01:00 | Amount of time a job is held in the deferred state before being released back to the idle job queue. Format: [[[DD:]HH:]MM:]SS.
maui.rm.poll.interval | 00:00:10 | Global poll interval for all resource managers; with the default value, Maui refreshes its resource manager information every 10 seconds.
maui.log.filename | ${GLITE_LOCATION_LOG}/maui.log | Name of the Maui log file.
maui.log.max.size | 10000000 | Maximum allowed size (in bytes) of the log file before it is rolled.
maui.log.level | 1 | Verbosity of Maui logging, where 9 is the most verbose. Note: each logging level is approximately an order of magnitude more verbose than the previous one. Values: [0..9].
System Parameters

Worker node instances (one per Torque Client)

Parameter | Default value | Description
torque-wn.name | | Worker Node name to be used by the torque server. It can also be the CE itself. Example: lxb1426.cern.ch. [Type: string]
torque-wn.number.processors | | Number of processors of the machine. Example: 1, 2, ... [Type: string]
torque-wn.attribute | | Attribute that can be used by the server for different purposes (for example to establish a default node). [Type: string]

Queue instances (one per queue)

Parameter | Default value | Description
queue.name | | Queue name.
queue.type | | Must be set to either Execution or Routing. If a queue is of the Routing type, its jobs are routed to another server (route_destinations attribute).
queue.resources.max.cpu.time | | Maximum amount of CPU time used by all processes in the job. Format: seconds, or [[HH:]MM:]SS.
queue.max.wall.time | | Maximum amount of real time during which the job can be in the running state. Format: seconds, or [[HH:]MM:]SS.
queue.enabled | | Defines whether the queue accepts new jobs. When false, the queue is disabled and does not accept jobs.
queue.started | | If set to true, jobs in the queue are processed: routed by the server if the queue is a routing queue, or scheduled by the job scheduler if it is an execution queue. When false, the queue is considered stopped.
queue.acl.group.enable | | When true, directs the server to use the queue group access control list acl_groups.
queue.acl.groups | | List that allows or denies enqueuing of jobs owned by members of the listed groups. The groups in the list are groups on the server host, not on submitting hosts. Syntax: [+|-]group_name[,...]. Example: +test authorises users of the test group to submit jobs to this queue.
Table 20: TORQUE Server configuration parameters
3. Configure the R-GMA servicetool. For this you have to configure the servicetool itself as well as the sub-services of the Torque server that are published via the R-GMA servicetool:
a) R-GMA servicetool configuration:
Copy the R-GMA servicetool configuration file template $GLITE_LOCATION/etc/config/templates/glite-rgma-servicetool.cfg.xml to $GLITE_LOCATION/etc/config and modify the parameter values as necessary. Some parameters have default values; others must be changed by the user. All parameters that must be changed have a token value of changeme. Table 15 shows the list of parameters that can be set. More details can be found in section 13.2.4.6.
b) Service configuration for the R-GMA servicetool:
Modify the R-GMA servicetool related configuration values located in the Torque configuration file glite-torque-server.cfg.xml mentioned before. In this file you will find, for each service that should be published via the R-GMA servicetool, one instance of a set of parameters grouped by the tag
<instance name="xxxx" service="rgma-servicetool">
where xxxx is the name of the corresponding sub-service. Table 16 on page 85 in section 13.2.4 about the R-GMA servicetool shows the general list of parameters for each service published via the R-GMA servicetool. For the Torque server the following sub-services are published via the R-GMA servicetool and need to be updated accordingly:
- Torque PBS server
- Torque Maui
Again, the necessary steps are described in section 13.2.4.6.
Note: Steps 1, 2 and 3 can also be performed by means of the remote site configuration file, or a combination of local and remote configuration files.
4. As root, run the Torque Server configuration script /opt/glite/etc/config/scripts/glite-torque-server-config.py.
At this point the Torque Server service is ready; the Torque Clients now have to be properly installed and configured.
The Torque Server configuration script performs the following steps:
1. Load the Torque Server configuration file $GLITE_LOCATION/etc/config/glite-torque-server.cfg.xml.
2. Add the torque and maui ports to /etc/services.
3. Create the /var/spool/pbs/server_name file containing the torque server hostname.
4. Create the list of torque clients in /var/spool/pbs/server_priv/nodes.
5. Create the pbs_server configuration.
6. Start the pbs_server.
7. Look for changes in the pbs_server configuration since the last time the Torque Server was configured.
8. Establish the server configuration, performing the necessary updates.
9. Create the queues configuration. The script checks whether any new queue has been defined in the configuration file and whether any queue has been removed; depending on the value of torque-server.force it behaves differently (see the torque-server.force parameter description).
10. Execute the defined queues configuration.
11. Create the /opt/edg/etc/edg-pbs-shostsequiv.conf file used by the edg-pbs-shostsequiv script. This file includes the list of nodes that will be included in the /etc/ssh/shosts.equiv file to allow HostbasedAuthentication.
12. Create the edg-pbs-shostsequiv cron file. This file contains a crontab entry that periodically calls the /opt/edg/sbin/edg-pbs-shostsequiv script; it is added to the /etc/cron.d/ directory.
13. Run the /opt/edg/sbin/edg-pbs-shostsequiv script.
14. Look for duplicated key entries in /etc/ssh/ssh_known_hosts.
15. Create the configuration file /opt/edg/etc/edg-pbs-knownhosts.conf. This file contains the nodes whose keys will be added to the /etc/ssh/ssh_known_hosts file in addition to the torque client nodes (which are taken directly from the torque server via the pbsnodes -a command).
16. Create the edg-pbs-knownhosts cron file. This file contains a crontab entry that periodically calls the /opt/edg/sbin/edg-pbs-knownhosts script; it is added to the /etc/cron.d/ directory.
17. Run /opt/edg/sbin/edg-pbs-knownhosts to add the keys to /etc/ssh/ssh_known_hosts.
18. Create the required sshd configuration (modifying the /etc/ssh/sshd_config file) to allow the torque clients (Worker Nodes) to copy their output directly to the Torque Server via HostbasedAuthentication.
19. Restart the sshd daemon to take the changes into account.
20. Restart the pbs_server.
21. Create the maui configuration file in /var/spool/maui/maui.cfg.
22. Start the maui service.
23. Configure the servicetool to register the torque services defined in the configuration file.
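After the script completes, a few quick checks can confirm the outcome (a sketch; the exact cron file names may differ):

  ls /etc/cron.d/                                       # should list the edg-pbs-* cron entries
  grep -i hostbasedauthentication /etc/ssh/sshd_config  # sshd must allow host-based authentication
  pbsnodes -a                                           # lists the worker nodes known to the pbs_server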
The TORQUE Server configuration script can be run with the following command-line parameters to manage the services:

glite-torque-server-config.py --start | Starts all TORQUE Server services (pbs_server, maui and servicetool), or restarts them if they are already running
glite-torque-server-config.py --stop | Stops all TORQUE Server services (pbs_server, maui and servicetool)
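For example, after changing a queue definition in glite-torque-server.cfg.xml, the new configuration can be applied and the services restarted as follows (a sketch):

  /opt/glite/etc/config/scripts/glite-torque-server-config.py          # re-apply the configuration
  /opt/glite/etc/config/scripts/glite-torque-server-config.py --start  # restart pbs_server, maui and servicetool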
1. Download from the gLite web site the latest version of the Torque Client installation script glite-torque-client_installer.sh. It is recommended to download the script into a clean directory.
2. Make the script executable (chmod u+x glite-torque-client_installer.sh).
3. Run the script as root. All the required RPMS are downloaded from the gLite software repository into the directory glite-torque-client next to the installation script and the installation procedure is started. If some RPM is already installed, it is upgraded if necessary. Check the screen output for errors or warnings.
4. If the installation is performed successfully, the following components are installed:
gLite in /opt/glite ($GLITE_LOCATION)
Torque client in /var/spool/pbs
5. The gLite torque-client configuration script is installed in $GLITE_LOCATION/etc/config/scripts/glite-torque-client-config.py. A template configuration file is installed in $GLITE_LOCATION/etc/config/templates/glite-torque-client.cfg.xml.
1. Copy the global configuration file template $GLITE_LOCATION/etc/config/templates/glite-global.cfg.xml to $GLITE_LOCATION/etc/config, open it and modify the parameters if required (see Table 1).
2. Copy the configuration file template from $GLITE_LOCATION/etc/config/templates/glite-torque-client.cfg.xml to $GLITE_LOCATION/etc/config/glite-torque-client.cfg.xml and modify the parameter values as necessary. Some parameters have default values; others must be changed by the user. All parameters that must be changed have a token value of changeme. The following parameters can be set:
Note: Steps 1 and 2 can also be performed by means of the remote site configuration file, or a combination of local and remote configuration files.
Parameter | Default value | Description
User-defined Parameters
torque-server.name | | Name of the machine where the job server is running; it usually corresponds to the Computing Element. Example: ${HOSTNAME}.
se.name | | Storage Element name (if necessary).
Advanced Parameters
glite.installer.verbose | True | Enable verbose output.
mom-server.logevent | 255 | Sets the mask that determines which event types are logged by pbs_mom.
System Parameters
3. As root, run the Torque Client configuration script /opt/glite/etc/config/scripts/glite-torque-client-config.py.
The Torque Client configuration script performs the following steps:
The TORQUE Client configuration script can be run with the following command-line parameters to manage the services:

glite-torque-client-config.py --start | Starts all TORQUE Client services (pbs_mom), or restarts them if they are already running
glite-torque-client-config.py --stop | Stops all TORQUE Client services (pbs_mom)
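To verify that a newly configured client is working, two quick checks can help (a sketch):

  ps -ef | grep pbs_mom   # on the worker node: the pbs_mom daemon should be running
  pbsnodes -a             # on the torque server: the new client should appear in the output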
There are three test suites described in this section: gLite I/O, Catalog and WMS.
The I/O test suite covers basic gLite I/O functionality (open file, create a file, read a file, write to a file, get info associated with a handle, close a file), some regression tests and cycles of glite-put and glite-get of several files.
The Catalog test suite covers the creation and removal of directories, the listing of entries in a directory, and the creation of entries in a directory through single and bulk operations.
The WMS test suite contains 9 tests.
The gLite I/O test suite depends on the glite-data-io-client RPM, so it is recommended to install and execute the I/O tests from a UI machine. The I/O test suite also depends on CppUnit, which must be installed on the machine; it can be downloaded from the gLite external dependencies web page.
This test suite is installed using the glite-testsuites-data-io-server-1.0.5 rpm that can be obtained from the gLite web site using wget plus the URL of the rpm. The installation of the rpm deploys the tests under the $GLITE_LOCATION/test/glite-io-server directory.
Before running the test suite, check the following points:
· The user account that runs the tests must have these environment variables set (a minimal sketch is shown after this list):
GLITE_LOCATION (usually under /opt/glite)
GLOBUS_LOCATION (usually under /opt/globus)
LD_LIBRARY_PATH (including: $GLITE_LOCATION/lib:$GLOBUS_LOCATION/lib)
PATH (including: $GLITE_LOCATION/bin:$GLOBUS_LOCATION/bin)
· The distinguished name (DN) of the user that runs the tests must be included in the '/etc/grid-security/grid-mapfile' file of the gLite I/O server machine. This should already be the case if the configuration of your io-client points to a valid io-server.
· Also, the user must have a VOMS proxy before running the tests, created by typing: voms-proxy-init --voms your_vo_name
Note: if all the tests that you try to run fail, check whether the problem is in the configuration of your io-client, io-server or catalog. If everything is correctly configured, you should be able to put a file in an SE using the glite-put command.
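For the checklist above, a minimal environment setup for a default root installation could look like this sketch:

  export GLITE_LOCATION=/opt/glite
  export GLOBUS_LOCATION=/opt/globus
  export LD_LIBRARY_PATH=$GLITE_LOCATION/lib:$GLOBUS_LOCATION/lib:$LD_LIBRARY_PATH
  export PATH=$GLITE_LOCATION/bin:$GLOBUS_LOCATION/bin:$PATH
  voms-proxy-init --voms your_vo_name   # create the VOMS proxy required by the tests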
You can run the tests from the command line or using QMTest:
a) From the command line, you can execute 12 different binaries located in $GLITE_LOCATION/test/glite-io-server/bin, by running: $GLITE_LOCATION/test/glite-io-server/bin/gLite-io-****
These tests check the basic I/O functionality: open a remote file, create a remote file, read a file, write to a file, set a file read/write pointer, get information about the file associated with a given handle and close a file. There are also 5 regression tests that check some of the bugs reported in Savannah.
Apart from those tests, you can also run a Perl test located at $GLITE_LOCATION/test/glite-io-server/scripts/run_gliteIO_test.pl that performs cycles of glite-put and glite-get of several files. As an example, to do a glite-put and glite-get of 1000 files of a maximum size of 1 MB in 1000 cycles (only one file per cycle), you would type: $GLITE_LOCATION/test/glite-io-server/scripts/run_gliteIO_test.pl -l /tmp -c 1 -f 1M -n 1 -s 1000M -o your_vo_name
where -l specifies the log directory, -c the number of cycles to run, -f the maximum file size, -n the number of files to be transferred in each cycle, -s the maximum total file size and -o the VO name.
b) Using QMTest:
- If you don't have QMTest installed on your machine, you can download it from http://www.codesourcery.com/qmtest/download.html
- Make sure that 'qmtest' is in your PATH, then start the QMTest GUI:
QMTEST_CLASS_PATH=$GLITE_LOCATION/test/config/qmtest/test-classes/
qmtest -D $GLITE_LOCATION/test/config/qmtest gui --address "<your hostname>" --port <port number> --no-browser &
a) From the command line:
The test results can be visualised on stdout or in an XML file named tests.xml, generated in the directory from which the tests are run.
b) Using QMTest:
Using QMTest you can see each test's output by clicking on "Details" on the right side of the test entry.
The gLite Catalog test suite depends on the glite-data-catalog-interface and glite-data-catalog-fireman-api-c RPMs, so it is recommended to install and execute the tests from a UI machine.
This test suite is installed using the glite-testsuites-data-catalog-fireman-1.0.5-1 rpm that can be obtained from the gLite web site using wget plus the URL of the rpm. The installation of the rpm deploys the tests under the $GLITE_LOCATION/test/glite-data-catalog-fireman directory.
Before running the test suite, check the following points:
· The user account that runs the tests must have these environment variables set:
GLITE_LOCATION (usually under /opt/glite)
GLOBUS_LOCATION (usually under /opt/globus)
LD_LIBRARY_PATH (including: $GLITE_LOCATION/lib:$GLOBUS_LOCATION/lib)
PATH (including: $GLITE_LOCATION/bin:$GLOBUS_LOCATION/bin)
· The user must have a VOMS proxy before running the tests, created by typing: voms-proxy-init --voms your_vo_name
You can run the tests from the command line or using QMTest:
a) From the command line, you can execute the binaries that are located at $GLITE_LOCATION/test/glite-data-catalog-fireman/bin
The gLite-fireman-create-test creates a number of entries in the catalog in one single operation. This binary accepts the following parameters:
An example of calling this test may be:
$GLITE_LOCATION/test/glite-data-catalog-fireman/bin/gLite-fireman-create-test -e "http://lxb2081.cern.ch:8080/egtest/glite-data-catalog-service-fr-mysql/services/FiremanCatalog" -n 1000 -p "/TestsDir/02_"
On the other hand, the gLite-fireman-create-bulk-test creates entries in bulk operations. The parameters accepted are:
As an example, we could execute:
$GLITE_LOCATION/test/glite-data-catalog-fireman/bin/gLite-fireman-create-bulk-test -l -e "http://lxb2081.cern.ch:8080/egtest/glite-data-catalog-service-fr-mysql/services/FiremanCatalog" -n 1000 -s 100 -p "/TestsDir/01_"
Note: for both tests, it is assumed that the 'TestsDir' directory already exists in the catalog.
b) Using QMTest:
- If you don't have QMTest installed on your machine, you can download it from http://www.codesourcery.com/qmtest/download.html
- Make sure that 'qmtest' is in your PATH, then start the QMTest GUI:
QMTEST_CLASS_PATH=$GLITE_LOCATION/test/config/qmtest/test-classes/
qmtest -D $GLITE_LOCATION/test/config/qmtest gui --address "<your hostname>" --port <port number> --no-browser &
Note that for this release you must execute these tests in a particular order: for example, to run the remove-directory test you must first have run the create-directory test. The recommended order to run these tests in QMTest is the following:
1) glite-fireman-getversion-fireman-mysql
2) glite-fireman-getversion-fireman-oracle
3) glite-fireman-mkdir-fireman-mysql
4) glite-fireman-mkdir-fireman-oracle
5) glite-fireman-rmdir-fireman-mysql
6) glite-fireman-rmdir-fireman-oracle
7) glite-fireman-mkdir-fireman-mysql
8) glite-fireman-mkdir-fireman-oracle
9) arda-create-1000-entries-bulk-100-fireman-oracle
10) arda-create-1000-entries-per-call-fireman-oracle
11) arda-create-1000-entries-per-call
12) glite-fireman-put-lfn-800-chars
13) glite-fireman-readdir-fireman-mysql
14) glite-fireman-readdir-fireman-oracle
This situation will be improved in a future release.
a) From the command line:
The test results can be visualized in stdout.
b) Using QMTest:
Using QMTest you can see the output of each test by clicking on "Details".
You need access to a gLite UI in order to install the test suite RPM.
This test suite is installed using the glite-testsuites-wms-2.0.1 rpm that can be obtained from the gLite web site (e.g. ../../../../../../glite-web/egee/packages/**release**/bin/rhel30/i386/RPMS).
The installation of the rpm deploys the tests under the $GLITE_LOCATION/test/glite-wms directory.
This test suite should be run from the UI.
Before running the test suite, check the following points:
· Export the variable GSI_PASSWORD, set to the actual password of your proxy (required during the creation of the proxy):
Bash: export GSI_PASSWORD=myPerSonalSecreForProxy1243
Tcsh: setenv GSI_PASSWORD myPerSonalSecreForProxy1243
· Export the variable REFVO, set to the name of the reference VO you want to use for the test:
Bash: export REFVO=egtest
Tcsh: setenv REFVO egtest
· Define the regression test file (regressionTest.file), writing in it all the single commands for the tests you want to be executed.
Run the set of tests by launching the MainScript (located at $GLITE_LOCATION/test/glite-wms/opt/edg/bin/MainScript) with the following options:
opt/edg/bin/MainScript --forcingVO=egtest --verbose --regFile=regressionTest.file RTest
To keep the log in a file you can also do:
opt/edg/bin/MainScript --forcingVO=egtest --verbose --regFile=regressionTest.file RTest | tee MyLogFile
The output of the test suite is written under /tmp/<username> in a file specified by the suite itself.
The names of the actual index.html and of the tar-gzipped file with all the required HTML for all tests are stated at the end of the test execution on the standard output.
For example the suite shows the following 2 lines at the end of its execution:
HTML in: /tmp/reale/050401-003320_LocalTB/index.html
TarBall in: lxb1409.cern.ch /tmp/reale/050401-003320_LocalTB/tarex.tgz
Normally these files need to be put in the document root of your web server, and be unzipped and untarred there.
The log file of the execution should normally be copied to the "annex" subdirectory of the directory structure you get by unzipping and untarring tarex.tgz, and be renamed there as "MainLog".
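Using the example output above, deploying the report could look like the following sketch (the web server document root /var/www/html is illustrative):

  cd /var/www/html                                             # document root of your web server
  scp lxb1409.cern.ch:/tmp/reale/050401-003320_LocalTB/tarex.tgz .
  tar xzf tarex.tgz                                            # unpack index.html and the helper pages
  cp /path/to/MyLogFile annex/MainLog                          # store the execution log as MainLog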
The HTML output allows monitoring the test execution and examining the test log files; it contains a detailed description of each test performed and displays the time required for the execution of each test.
This is an example of a local service configuration file for a Computing Element node using PBS as batch system.
<!-- Default configuration parameters for the gLite CE Service -->
<config>
<parameters>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- User-defined parameters - Please change them -->
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- VOs configuration
These parameters are matching arrays of values containing one value
for each VO served by this CE node -->
<voms.voname
description="The names of the VOs that this CE node can serve">
<value>EGEE</value>
</voms.voname>
<voms.vomsnode
description="The full hostname of the VOMS server responsible for each VO.
Even if the same server is responsible for more than one VO, there must
be exactly one entry for each VO listed in the 'voms.voname' parameter.
For example: 'host.domain.org'">
<value>lxb000.cern.ch</value>
</voms.vomsnode>
<voms.vomsport
description="The port on the VOMS server listening for request for each VO
This is used in the vomses configuration file
For example: '15000'">
<value>17001</value>
</voms.vomsport>
<voms.vomscertsubj
description="The subject of the host certificate of the VOMS
server for each VO. For example: '/C=ORG/O=DOMAIN/OU=GRID/CN=host.domain.org'">
<value>/C=CH/O=CERN/OU=GRID/CN=lxb000.cern.ch</value>
</voms.vomscertsubj>
<!-- Pool accounts configuration
The following parameters must be set for both LSF and PBS/Torque systems
The pool accounts are created and configured by default if these parameters
are defined. You can remove these parameters to skip pool accounts configuration,
however it is better to configure the parameters and let the script verify
the correctness of the installation.
These parameters are matching arrays of values containing one value
for each VO served by this CE node. The list must match
the corresponding lists in the VO configuration section -->
<pool.account.basename
description="The prefix of the set of pool accounts to be created for each VO.
Existing pool accounts with this prefix are not recreated">
<value>egee</value>
</pool.account.basename>
<pool.account.group
description="The group name of the pool accounts to be used for each VO.
For some batch systems like LSF, this group may need a specific gid. The gid can be
set using the pool.lsfgid parameter in the LSF configuration section">
<value>egeegr</value>
</pool.account.group>
<pool.account.number
description="The number of pool accounts to create for each VO. Each account
will be created with a username of the form prefixXXX where prefix
is the value of the pool.account.basename parameter. If matching pool accounts already
exist, they are not recreated.
The range of values for this parameter is from 1 to 999">
<value>40</value>
</pool.account.number>
<!-- CE Monitor configuration
These parameters are required to configure the CE Plugin for the
CE Monitor web service. More information about the following
parameters can be found in $GLITE_LOCATION/share/doc/glite-ce-ce-plugin/ce-info-readme.txt
or in the CE chapter of the gLite User Manual -->
<cemon.wms.host
description="The hostname of the WMS server that receives notifications from this CE"
value="lxb0001.cern.ch"/>
<cemon.wms.port
description="The port number on which the WMS server receiving notifications from this CE
is listening"
value="8500"/>
<cemon.lrms
description="The type of Local Resource Managment System. It can be 'lsf' or 'pbs'
If this parameter is absent or empty, the default type is 'pbs'"
value="pbs"/>
<cemon.cetype
description="The type of Computing Element. It can be 'condorc' or 'gram'
If this parameter is absent or empty, the default type is 'condorc'"
value="condorc"/>
<cemon.cluster
description="The cluster entry point host name. Normally this is the CE host itself"
value="lxb0002.cern.ch"/>
<cemon.static
description="The name of the configuration file containing static information"
value="${GLITE_LOCATION}/etc/glite-ce-ce-plugin/ce-static.ldif"/>
<cemon.cluster-batch-system-bin-path
description="The path of the lrms commands. For example: '/usr/pbs/bin' or '/usr/local/lsf/bin'
This value is also used to set the PBS_BIN_PATH or LSF_BIN_PATH variables depending on the value
of the 'cemon.lrms' parameter"
value="/usr/pbs/bin"/>
<cemon.cesebinds
description="The CE-SE bindings for this CE node. There are three possible format:
configfile
'queue[|queue]' se
'queue[|queue]' se se entry point
A . character for the queue list means all queues
Example: '.' EGEE::SE::Castor /tmp">
<value>'.' EGEE::SE::Castor /tmp</value>
</cemon.cesebinds>
<cemon.queues
description="A space-separated list of the queues defined on this CE node
Example: blah-pbs-egee-high"
value=" blah-pbs-egee-high "/>
<!-- LSF configuration
The following parameters are specific to LSF. They may have to be set
depending on your local LSF configuration.
If LSF is not used, remove this section -->
<pool.lsfgid
description="The gid of the groups to be used for the pool accounts on some LSF installations,
on per each pool account group. This parameter is an array of values containing one value
for each VO served by this CE node. The list must match
the corresponding lists in the VOMS configuration section
If this is not required by your local LSF system remove this parameter or leave the values empty">
<value>changeme</value>
</pool.lsfgid>
<!-- Condor configuration -->
<condor.wms.user
description="The username of the condor user under which
the Condor daemons run on the WMS nodes that this CE serves"
value="wmsegee"/>
<!-- Logging and Bookkeeping -->
<lb.user
description="The account name of the user that runs the local logger daemon
If the user doesn't exist it is created. In the current version, the
host certificate and key are used as service certificate and key and are
copied in this user's home in the directory specified by the global
parameter 'user.certificate.path' in the glite-global.cfg.xml file"
value="lbegee"/>
<!-- Firewall configuration -->
<iptables.chain
description="The name of the chain to be used for configuring the local firewall.
If the chain doesn't exist, it is created and the rules are assigned to this chain.
If the chain exists, the rules are appended to the existing chain"
value="EGEE-DEFAULT-INPUT"/>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- Advanced parameters - Change them if you know what you're doing -->
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- gLite configuration -->
<glite.installer.verbose
description="Enable verbose output"
value="true"/>
<glite.installer.checkcerts
description="Enable check of host certificates"
value="true"/>
<!-- PBS configuration
The following parameters are specific to PBS. They may have to be set
depending on your local PBS configuration.
If PBS is not used, remove this section -->
<PBS_SPOOL_DIR
description="The PBS spool directory"
value="/usr/spool/PBS"/>
<!-- LSF configuration
The following parameters are specific to LSF. They may have to be set
depending on your local LSF configuration.
If LSF is not used, remove this section -->
<LSF_CONF_PATH
description="The directory where the LSF configuration file is located"
value="/etc"/>
<!-- Globus configuration -->
<globus.osversion
description="The kernel id string identifying the system installed on this node.
For example: '2.4.21-20.ELsmp'. This parameter is normally automatically detected,
but it can be set here"
value=""/>
<globus.hostdn
description="The host distinguished name (DN) of this node. This is mormally automatically
read from the server host certificate. However it can be set here. For example:
'C=ORG, O=DOMAIN, OU=GRID, CN=host/server.domain.org'"
value=""/>
<!-- Condor configuration -->
<condor.version
description="The version of the installed Condor-C libraries"
value="6.7.3"/>
<condor.user
description="The username of the condor user under which
the Condor daemons must run"
value="condor"/>
<condor.releasedir
description="The location of the Condor package. This path is internally simlinked
to /opt/condor-c. This is currently needed by the Condor-C software"
value="/opt/condor-6.7.3"/>
<CONDOR_CONFIG
description="Environment variable pointing to the Condor
configuration file"
value="${condor.releasedir}/etc/condor_config"/>
<condor.scheddinterval
description="How often should the schedd send an update to the central manager?"
value="10"/>
<condor.localdir
description="Where is the local condor directory for each host?
This is where the local config file(s), logs and
spool/execute directories are located"
value="/var/local/condor"/>
<condor.blahgahp
description="The path of the gLite blahp daemon"
value="$GLITE_LOCATION/bin/blahpd"/>
<condor.daemonlist
description="The Condor daemons to configure and monitor"
value="MASTER, SCHEDD"/>
<condor.blahpollinterval
description="How often should blahp poll for new jobs?"
value="120"/>
<gatekeeper.port
description="The gatekeeper listen port"
value="2119"/>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- System parameters - You should leave these alone -->
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
</parameters>
</config>
This is an example of a site configuration file for the same CE node as in Appendix A. In order to propagate the full configuration from the central configuration server, the configuration file in Appendix A can simply be replaced with the following single line:
<config/>
Alternatively, any parameter left in the local service file (and properly defined, in the case of user-defined parameters) overrides the value set in the site configuration file. The following file also contains a default parameters section with the parameters required by the gLite Security Utilities module. This default section is inherited by all nodes.
<!-- Default configuration parameters for the gLite CE Service -->
<siteconfig>
<parameters>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- User-defined parameters - Please change them -->
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<cron.mailto
description="E-mail address for sending cron job notifications"
value="egee-admin@cern.ch"/>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- Advanced parameters - Change them if you know what you're doing -->
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- Installer configuration -->
<glite.installer.verbose
description="Enable verbose output"
value="true"/>
<install.fetch-crl.cron
description="Install the glite-fetch-crl cron job. Possible values are
'true' (install the cron job) or 'false' (do not install the cron job)"
value="true"/>
<install.mkgridmap.cron
description="Install the glite-mkgridmap cron job and run it once.
Possible values are 'true' (install the cron job) or 'false' (do
not install the cron job)"
value="false"/>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- System parameters - You should leave these alone -->
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
</parameters>
<node name="lxb0002.cern.ch">
<parameters>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- User-defined parameters - Please change them -->
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- VOs configuration
These parameters are matching arrays of values containing one value
for each VO served by this CE node -->
<voms.voname
description="The names of the VOs that this CE node can serve">
<value>EGEE</value>
</voms.voname>
<voms.vomsnode
description="The full hostname of the VOMS server responsible for each VO.
Even if the same server is responsible for more than one VO, there must
be exactly one entry for each VO listed in the 'voms.voname' parameter.
For example: 'host.domain.org'">
<value>lxb0000.cern.ch</value>
</voms.vomsnode>
<voms.vomsport
description="The port on the VOMS server listening for request for each VO
This is used in the vomses configuration file
For example: '17001'">
<value>15001</value>
</voms.vomsport>
<voms.vomscertsubj
description="The subject of the host certificate of the VOMS
server for each VO. For example: '/C=ORG/O=DOMAIN/OU=GRID/CN=host.domain.org'">
<value>/C=CH/O=CERN/OU=GRID/CN=lxb0000.cern.ch</value>
</voms.vomscertsubj>
<!-- Pool accounts configuration
The following parameters must be set for both LSF and PBS/Torque systems
The pool accounts are created and configured by default if these parameters
are defined. You can remove these parameters to skip pool accounts configuration,
however it is better to configure the parameters and let the script verify
the correctness of the installation.
These parameters are matching arrays of values containing one value
for each VO served by this CE node. The list must match
the corresponding lists in the VO configuration section -->
<pool.account.basename
description="The prefix of the set of pool accounts to be created for each VO.
Existing pool accounts with this prefix are not recreated">
<value>egee</value>
</pool.account.basename>
<pool.account.group
description="The group name of the pool accounts to be used for each VO.
For some batch systems like LSF, this group may need a specific gid. The gid can be
set using the pool.lsfgid parameter in the LSF configuration section">
<value>egeegr</value>
</pool.account.group>
<pool.account.number
description="The number of pool accounts to create for each VO. Each account
will be created with a username of the form prefixXXX where prefix
is the value of the pool.account.basename parameter. If matching pool accounts already
exist, they are not recreated.
The range of values for this parameter is from 1 to 999">
<value>40</value>
</pool.account.number>
<!-- CE Monitor configuration
These parameters are required to configure the CE Plugin for the
CE Monitor web service. More information about the following
parameters can be found in $GLITE_LOCATION/share/doc/glite-ce-ce-plugin/ce-info-readme.txt
or in the CE chapter of the gLite User Manual -->
<cemon.wms.host
description="The hostname of the WMS server that receives notifications from this CE"
value="lxb0001.cern.ch"/>
<cemon.wms.port
description="The port number on which the WMS server receiving notifications from this CE
is listening"
value="8500"/>
<cemon.lrms
description="The type of Local Resource Managment System. It can be 'lsf' or 'pbs'
If this parameter is absent or empty, the default type is 'pbs'"
value="pbs"/>
<cemon.cetype
description="The type of Computing Element. It can be 'condorc' or 'gram'
If this parameter is absent or empty, the default type is 'condorc'"
value="condorc"/>
<cemon.cluster
description="The cluster entry point host name. Normally this is the CE host itself"
value="lxb0002.cern.ch"/>
<cemon.static
description="The name of the configuration file containing static information"
value="${GLITE_LOCATION}/etc/glite-ce-ce-plugin/ce-static.ldif"/>
<cemon.cluster-batch-system-bin-path
description="The path of the lrms commands. For example: '/usr/pbs/bin' or '/usr/local/lsf/bin'
This value is also used to set the PBS_BIN_PATH or LSF_BIN_PATH variables depending on the value
of the 'cemon.lrms' parameter"
value="/usr/pbs/bin"/>
<cemon.cesebinds
description="The CE-SE bindings for this CE node. There are three possible format:
configfile
'queue[|queue]' se
'queue[|queue]' se se entry point
A . character for the queue list means all queues
Example: '.' EGEE::SE::Castor /tmp">
<value>'.' EGEE::SE::Castor /tmp</value>
</cemon.cesebinds>
<cemon.queues
description="A space-separated list of the queues defined on this CE node
Example: blah-pbs-egee-high"
value="blah-pbs-egee-high"/>
<!-- LSF configuration
The following parameters are specific to LSF. They may have to be set
depending on your local LSF configuration.
If LSF is not used, remove this section -->
<!-- <pool.lsfgid
description="The gid of the groups to be used for the pool accounts on some LSF installations,
on per each pool account group. This parameter is an array of values containing one value
for each VO served by this CE node. The list must match
the corresponding lists in the VOMS configuration section
If this is not required by your local LSF system remove this parameter or leave the values empty">
<value></value>
</pool.lsfgid>
-->
<!-- Condor configuration -->
<condor.wms.user
description="The username of the condor user under which
the Condor daemons run on the WMS nodes that this CE serves"
value="wmsegee"/>
<!-- Logging and Bookkeeping -->
<lb.user
description="The account name of the user that runs the local logger daemon
If the user doesn't exist it is created. In the current version, the
host certificate and key are used as service certificate and key and are
copied in this user's home in the directory specified by the global
parameter 'user.certificate.path' in the glite-global.cfg.xml file"
value="lbegee"/>
<!-- Firewall configuration -->
<iptables.chain
description="The name of the chain to be used for configuring the local firewall.
If the chain doesn't exist, it is created and the rules are assigned to this chain.
If the chain exists, the rules are appended to the existing chain"
value="EGEE-DEFAULT-INPUT"/>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- Advanced parameters - Change them if you know what you're doing -->
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- gLite configuration -->
<glite.installer.verbose
description="Enable verbose output"
value="true"/>
<glite.installer.checkcerts
description="Enable check of host certificates"
value="true"/>
<!-- PBS configuration
The following parameters are specific to PBS. They may have to be set
depending on your local PBS configuration.
If PBS is not used, remove this section -->
<PBS_SPOOL_DIR
description="The PBS spool directory"
value="/usr/spool/PBS"/>
<!-- LSF configuration
The following parameters are specific to LSF. They may have to be set
depending on your local LSF configuration.
If LSF is not used, remove this section -->
<LSF_CONF_PATH
description="The directory where the LSF configuration file is located"
value="/etc"/>
<!-- Globus configuration -->
<globus.osversion
description="The kernel id string identifying the system installed on this node.
For example: '2.4.21-20.ELsmp'. This parameter is normally automatically detected,
but it can be set here"
value=""/>
<!-- Condor configuration -->
<condor.version
description="The version of the installed Condor-C libraries"
value="6.7.3"/>
<condor.user
description="The username of the condor user under which
the Condor daemons must run"
value="condor"/>
<condor.releasedir
description="The location of the Condor package. This path is internally simlinked
to /opt/condor-c. This is currently needed by the Condor-C software"
value="/opt/condor-6.7.3"/>
<CONDOR_CONFIG
description="Environment variable pointing to the Condor
configuration file"
value="${condor.releasedir}/etc/condor_config"/>
<condor.scheddinterval
description="How often should the schedd send an update to the central manager?"
value="10"/>
<condor.localdir
description="Where is the local condor directory for each host?
This is where the local config file(s), logs and
spool/execute directories are located"
value="/var/local/condor"/>
<condor.blahgahp
description="The path of the gLite blahp daemon"
value="$GLITE_LOCATION/bin/blahpd"/>
<condor.daemonlist
description="The Condor daemons to configure and monitor"
value="MASTER, SCHEDD"/>
<condor.blahpollinterval
description="How often should blahp poll for new jobs?"
value="10"/>
<gatekeeper.port
description="The gatekeeper listen port"
value="2119"/>
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
<!-- System parameters - You should leave these alone -->
<!-- xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
</parameters>
</node>
</siteconfig>