| Document identifier: | LCG-GIS-CR-SE_dpm_mysql |
|---|---|
| Date: | 16 January 2006 |
| Author: | LCG Deployment - GIS team; Retico, Antonio; Vidic, Valentin |
| Version: | |
The configuration has been tested on a standard Scientific Linux 3.0 installation.
Link to this document:
This document is available on the Grid Deployment web site
http://www.cern.ch/grid-deployment/gis/lcg-GCR/index.html
This chapter describes the configuration steps done by the yaim
function 'config_ldconf'.
In order to allow the middleware libraries to be looked up and dynamically linked, the relevant paths need to be configured.
The following directories are added to /etc/ld.so.conf:
<INSTALL_ROOT>/globus/lib
<INSTALL_ROOT>/edg/lib
<INSTALL_ROOT>/lcg/lib
/usr/local/lib
/usr/kerberos/lib
/usr/X11R6/lib
/usr/lib/qt-3.1/lib
/opt/gcc-3.2.2/lib
where <INSTALL_ROOT> is the installation root of the lcg middleware (/opt by default).
The dynamic linker cache is then rebuilt:
> /sbin/ldconfig -v
(this command produces a huge amount of output)
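Once the cache has been rebuilt, you can check that a middleware library is now resolvable; a quick illustrative check (not part of the yaim function):
> /sbin/ldconfig -p | grep globus
ldconfig -p prints the contents of the linker cache, so any Globus libraries picked up from <INSTALL_ROOT>/globus/lib should appear in the output.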
The function 'config_ldconf' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_ldconf
The code is also reproduced in 25.1.
Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_sysconfig_edg'.
The EDG configuration file is parsed by EDG daemons to locate the EDG root
directory and various other global properties.
Create and edit the file /etc/sysconfig/edg as follows:
EDG_LOCATION=<INSTALL_ROOT>/edg
EDG_LOCATION_VAR=<INSTALL_ROOT>/edg/var
EDG_TMP=/tmp
X509_USER_CERT=/etc/grid-security/hostcert.pem
X509_USER_KEY=/etc/grid-security/hostkey.pem
GRIDMAP=/etc/grid-security/grid-mapfile
GRIDMAPDIR=/etc/grid-security/gridmapdir/
where <INSTALL_ROOT> is the installation root of the lcg middleware (/opt by default).
NOTE: some of the variables listed above, which deal with GSI (Grid Security Infrastructure), are needed only on service nodes (e.g. CE, RB) and not on others. Nevertheless, for the sake of simplicity, yaim uses the same definitions on all node types, which has proven to do no harm.
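As an illustration of how such a file is consumed, a daemon startup script would typically source it before starting; a minimal sketch (not taken from the actual init scripts):
# sketch: an init script picking up the EDG settings
[ -f /etc/sysconfig/edg ] && . /etc/sysconfig/edg
echo "EDG root is $EDG_LOCATION"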
The function 'config_sysconfig_edg' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_sysconfig_edg
The code is also reproduced in 25.2.
Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_sysconfig_globus'.
Create and edit the file /etc/sysconfig/globus as follows:
GLOBUS_LOCATION=<INSTALL_ROOT>/globus
GLOBUS_CONFIG=/etc/globus.conf
GLOBUS_TCP_PORT_RANGE="20000 25000"
export LANG=C
where <INSTALL_ROOT> is the installation root of the lcg middleware (/opt by default).
The function 'config_sysconfig_globus' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_sysconfig_globus
The code is also reproduced in 25.3.
Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_sysconfig_lcg'.
Create and edit the file /etc/sysconfig/lcg as follows:
LCG_LOCATION=<INSTALL_ROOT>/lcg
LCG_LOCATION_VAR=<INSTALL_ROOT>/lcg/var
LCG_TMP=/tmp
where <INSTALL_ROOT> is the installation root of the lcg middleware (/opt by default).
The function 'config_sysconfig_lcg' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_sysconfig_lcg
The code is also reproduced in 25.4.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_crl'.
A cron script is installed to fetch new versions of the CRLs four times a day. The
time at which the script runs is randomized in order to distribute the load on the
CRL servers. If the configuration is run as root, the cron entry is installed
in /etc/cron.d/edg-fetch-crl; otherwise it is installed as a user cron
entry.
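For illustration, the installed cron entry has the following shape (the minute and the four hours are randomized per host; the values below are made up, assuming the default /opt install root):
26 1,7,13,19 * * * root /opt/edg/etc/cron/edg-fetch-crl-cron >> /var/log/edg-fetch-crl-cron.log 2>&1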
The CRLs are also updated immediately by running the update script (<INSTALL_ROOT>/edg/etc/cron/edg-fetch-crl-cron).
A logrotate script is installed as /etc/logrotate.d/edg-fetch to prevent the logs from growing indefinitely.
The function 'config_crl' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_crl
The code is also reproduced in 25.5.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_rfio'.
rfiod is configured on SE_classic nodes by adding the appropriate ports
(5001 TCP and UDP) to /etc/services and restarting the daemon.
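The lines added to /etc/services are (as in the reproduced function code):
rfio 5001/tcp
rfio 5001/udp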
For SE_dpm nodes, rfiod is configured by config_DPM_rfio so no
configuration is done here.
All other node types do not run rfiod. However, rfiod might still be installed from the CASTOR-client RPM; if so, it is stopped and disabled.
The function 'config_rfio' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_rfio
The code is also reproduced in 25.6.
Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_host_certs'.
The SE_dpm_mysql node requires the host certificate/key files to be put in
place before you start the installation.
Contact your national Certification Authority (CA) to understand how to
obtain a host certificate if you do not have one already.
Instructions on how to obtain a CA list can be found at
http://markusw.home.cern.ch/markusw/lcg2CAlist.html
From the CA list so obtained, choose a CA close to you.
Once you have obtained a valid certificate, i.e. a file
hostcert.pem
containing the machine public key and a file
hostkey.pem
containing the machine private key, make sure to place the two files
into the directory
/etc/grid-security
with the following permissions
> chmod 400 /etc/grid-security/hostkey.pem
> chmod 644 /etc/grid-security/hostcert.pem
It is IMPORTANT that permissions be set as shown, as otherwise certification errors will occur.
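You can verify the certificate's subject and validity dates with openssl (a standard check, not part of the yaim function):
> openssl x509 -in /etc/grid-security/hostcert.pem -noout -subject -dates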
If the certificates don't exist, the function exits with an error message and the calling process is interrupted.
The function 'config_host_certs' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_host_certs
The code is also reproduced in 25.7.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_users'.
config_users creates pool accounts for grid users defined in users.conf. Each line in this file describes one user:
UID:LOGIN:GID:GROUP:VO:SGM_FLAG:
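For example, a hypothetical entry for the first dteam pool account (all IDs and names below are made up) could look like:
601:dteam001:600:dteam:dteam::
and a corresponding software manager (SGM) account would carry the sgm flag in the sixth field:
640:dteamsgm:600:dteam:dteam:sgm: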
First, the format of the users.conf file is checked (the VO and SGM fields
were added recently).
Groups are then created for the supported VOs (listed in the <VOS>
variable) using groupadd.
For each of the lines in users.conf, a user account is created (with useradd) if that user's VO is supported.
Finally, grid users are denied access to cron and at by adding their usernames to /etc/at.deny and /etc/cron.deny.
The function 'config_users' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_users
The code is also reproduced in 25.8.
Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_edgusers'.
Many of the services running on LCG service nodes are owned by the user
edguser. The user edguser belongs to the group edguser and has a home
directory in /home.
The user edginfo is required on all nodes publishing information to the
Information System. The user belongs to the group edginfo and has a home
directory in /home.
There are no special requirements on the IDs of the above-mentioned users and
groups.
The function creates both the edguser and edginfo groups and users.
The function 'config_edgusers' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_edgusers
The code is also reproduced in 25.9.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_mkgridmap'.
The format of the users.conf file is checked first. This file should have six
colon-separated fields. Using this file, the /etc/grid-security/gridmapdir
pool directory is created and initialized with the pool accounts.
Next, the configuration for edg-mkgridmap is generated in <INSTALL_ROOT>/edg/etc/edg-mkgridmap.conf. edg-mkgridmap generates /etc/grid-security/grid-mapfile using VO membership information from VOMS and/or LDAP. The following lines are generated for each of the supported VOs:
group <VO_<vo>_SERVER>/Role=lcgadmin sgmuser
group <VO_<vo>_SERVER>/<VO_<vo>_VOMS_EXTRA_MAPS>
group <VO_<vo>_SERVER><VO_<vo>_VOMS_POOL_PATH> .user_prefix
group <VO_<vo>_SGM> sgmuser
group <VO_<vo>_USERS> .user_prefix
where sgmuser is the SGM account for <vo> and user_prefix is the prefix for the <vo> pool accounts (both values are inferred from users.conf). Multiple VOMS servers and extra maps can be defined.
Authentication URLs and site specific mappings are appended to the end of the file:
auth <GRIDMAP_AUTH>
gmf_local <INSTALL_ROOT>/edg/etc/grid-mapfile-local
If no authentication URLs are defined in <GRIDMAP_AUTH>, ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org is used.
Site specific grid user mappings can be defined in
<INSTALL_ROOT>/edg/etc/grid-mapfile-local. Contents of this file are
included verbatim in the output of edg-mkgridmap.
<INSTALL_ROOT>/edg/etc/lcmaps/gridmapfile is generated with the following contents for each supported VO:
/VO=<vo>/GROUP=/<vo>/ROLE=lcgadmin sgmuser
/VO=<vo>/GROUP=/<vo> .user_prefix
This file defines local account mappings for VOMS-enabled proxy certificates.
<INSTALL_ROOT>/edg/etc/lcmaps/groupmapfile is generated with the following contents for each supported VO:
/VO=<vo>/GROUP=/<vo>/ROLE=lcgadmin vo_group
/VO=<vo>/GROUP=/<vo> vo_group
This file defines local group mappings for VOMS-enabled proxy certificates.
After the configuration is finished, edg-mkgridmap is run with the new
configuration to generate /etc/grid-security/grid-mapfile. A cron job
for regenerating the grid-mapfile is installed to run four times a day.
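Each line of the resulting grid-mapfile maps a certificate subject to a local account or account pool; a hypothetical entry (the DN below is made up) would look like:
"/C=CH/O=CERN/OU=GRID/CN=Jane Doe 1234" .dteam
where the leading dot requests allocation from the dteam pool accounts.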
A cron job for expiring gridmapdir pool accounts is installed to run hourly on all nodes except those running DPM or LFC. This is a temporary fix to avoid users losing access to their files after a mapping expires and they are mapped to a different local user. By default, pool accounts expire when they have not been used for more than 2 days, except on the RB, where they expire after 10 days.
The function 'config_mkgridmap' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_mkgridmap
The code is also reproduced in 25.10.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_java'.
Since Java is not included in the LCG distribution, the Java location needs to be
configured with yaim.
If <JAVA_LOCATION> is not defined in site-info.def, it is determined
from the installed Java RPMs (if available).
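This guess works as in the reproduced function code below: the installed j2sdk/j2re RPM is queried for the path of its java binary, which is then stripped to give the Java location:
java=`rpm -qa | grep j2sdk-` || java=`rpm -qa | grep j2re`
JAVA_LOCATION=`rpm -ql $java | egrep '/bin/java$' | sort | head -1 | sed 's#/bin/java##'`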
In the relocatable distribution, the JAVA_HOME environment variable is defined in
<INSTALL_ROOT>/etc/profile.d/grid_env.sh and
<INSTALL_ROOT>/etc/profile.d/grid_env.csh.
Otherwise, JAVA_HOME is defined in /etc/java/java.conf and /etc/java.conf, and the Java binaries are added to PATH in <INSTALL_ROOT>/edg/etc/profile.d/j2.sh and <INSTALL_ROOT>/edg/etc/profile.d/j2.csh.
The function 'config_java' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_java
The code is also reproduced in 25.11.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_rgma_client'.
The R-GMA client configuration is generated in <INSTALL_ROOT>/glite/etc/rgma/rgma.conf by running:
<INSTALL_ROOT>/glite/share/rgma/scripts/rgma-setup.py --secure=yes --server=<MON_HOST> --registry=<REG_HOST> --schema=<REG_HOST>
<INSTALL_ROOT>/edg/etc/profile.d/edg-rgma-env.sh and <INSTALL_ROOT>/edg/etc/profile.d/edg-rgma-env.csh are created with the following functionality:
These files are sourced into the users environment from <INSTALL_ROOT>/etc/profile.d/z_edg_profile.sh and <INSTALL_ROOT>/etc/profile.d/z_edg_profile.csh.
The function 'config_rgma_client' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_rgma_client
The code is also reproduced in 25.12.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_gip'.
The Generic Information Provider (GIP) is configured through <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf. The start of this file is common to all node types:
ldif_file=<INSTALL_ROOT>/lcg/var/gip/lcg-info-static.ldif
generic_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-generic
wrapper_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-wrapper
temp_path=<INSTALL_ROOT>/lcg/var/gip/tmp
template=<INSTALL_ROOT>/lcg/etc/GlueSite.template
template=<INSTALL_ROOT>/lcg/etc/GlueCE.template
template=<INSTALL_ROOT>/lcg/etc/GlueCESEBind.template
template=<INSTALL_ROOT>/lcg/etc/GlueSE.template
template=<INSTALL_ROOT>/lcg/etc/GlueService.template

# Common for all
GlueInformationServiceURL: ldap://<hostname>:2135/mds-vo-name=local,o=grid
<hostname> is determined by running hostname -f.
For CE the following is added:
dn: GlueSiteUniqueID=<SITE_NAME>,mds-vo-name=local,o=grid
GlueSiteName: <SITE_NAME>
GlueSiteDescription: LCG Site
GlueSiteUserSupportContact: mailto: <SITE_EMAIL>
GlueSiteSysAdminContact: mailto: <SITE_EMAIL>
GlueSiteSecurityContact: mailto: <SITE_EMAIL>
GlueSiteLocation: <SITE_LOC>
GlueSiteLatitude: <SITE_LAT>
GlueSiteLongitude: <SITE_LONG>
GlueSiteWeb: <SITE_WEB>
GlueSiteOtherInfo: <SITE_TIER>
GlueSiteOtherInfo: <SITE_SUPPORT_SITE>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueForeignKey: GlueClusterUniqueID=<CE_HOST>
GlueForeignKey: GlueSEUniqueID=<SE_HOST>

dynamic_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-ce
dynamic_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-software <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf

# CE Information Provider
GlueCEHostingCluster: <CE_HOST>
GlueCEInfoGatekeeperPort: 2119
GlueCEInfoHostName: <CE_HOST>
GlueCEInfoLRMSType: <CE_BATCH_SYS>
GlueCEInfoLRMSVersion: not defined
GlueCEInfoTotalCPUs: 0
GlueCEPolicyMaxCPUTime: 0
GlueCEPolicyMaxRunningJobs: 0
GlueCEPolicyMaxTotalJobs: 0
GlueCEPolicyMaxWallClockTime: 0
GlueCEPolicyPriority: 1
GlueCEStateEstimatedResponseTime: 0
GlueCEStateFreeCPUs: 0
GlueCEStateRunningJobs: 0
GlueCEStateStatus: Production
GlueCEStateTotalJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateWorstResponseTime: 0
GlueHostApplicationSoftwareRunTimeEnvironment: <ce_runtimeenv>
GlueHostArchitectureSMPSize: <CE_SMPSIZE>
GlueHostBenchmarkSF00: <CE_SF00>
GlueHostBenchmarkSI00: <CE_SI00>
GlueHostMainMemoryRAMSize: <CE_MINPHYSMEM>
GlueHostMainMemoryVirtualSize: <CE_MINVIRTMEM>
GlueHostNetworkAdapterInboundIP: <CE_INBOUNDIP>
GlueHostNetworkAdapterOutboundIP: <CE_OUTBOUNDIP>
GlueHostOperatingSystemName: <CE_OS>
GlueHostOperatingSystemRelease: <CE_OS_RELEASE>
GlueHostOperatingSystemVersion: 3
GlueHostProcessorClockSpeed: <CE_CPU_SPEED>
GlueHostProcessorModel: <CE_CPU_MODEL>
GlueHostProcessorVendor: <CE_CPU_VENDOR>
GlueSubClusterPhysicalCPUs: 0
GlueSubClusterLogicalCPUs: 0
GlueSubClusterTmpDir: /tmp
GlueSubClusterWNTmpDir: /tmp
GlueCEInfoJobManager: <JOB_MANAGER>
GlueCEStateFreeJobSlots: 0
GlueCEPolicyAssignedJobSlots: 0
GlueCESEBindMountInfo: none
GlueCESEBindWeight: 0

dn: GlueClusterUniqueID=<CE_HOST>, mds-vo-name=local,o=grid
GlueClusterName: <CE_HOST>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueClusterService: <CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>
GlueForeignKey: GlueCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>

dn: GlueSubClusterUniqueID=<CE_HOST>, GlueClusterUniqueID=<CE_HOST>, mds-vo-name=local,o=grid
GlueChunkKey: GlueClusterUniqueID=<CE_HOST>
GlueSubClusterName: <CE_HOST>

dn: GlueCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>, mds-vo-name=local,o=grid
GlueCEName: <queue>
GlueForeignKey: GlueClusterUniqueID=<CE_HOST>
GlueCEInfoContactString: <CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>
GlueCEAccessControlBaseRule: VO:<vo>

dn: GlueVOViewLocalID=<vo>,GlueCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>,mds-vo-name=local,o=grid
GlueCEAccessControlBaseRule: VO:<vo>
GlueCEInfoDefaultSE: <VO_<vo>_DEFAULT_SE>
GlueCEInfoApplicationDir: <VO_<vo>_SW_DIR>
GlueCEInfoDataDir: <VO_<vo>_STORAGE_DIR>
GlueChunkKey: GlueCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>

dn: GlueCESEBindGroupCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>, mds-vo-name=local,o=grid
GlueCESEBindGroupSEUniqueID: <se_list>

dn: GlueCESEBindSEUniqueID=<se>, GlueCESEBindGroupCEUniqueID=<CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>, mds-vo-name=local,o=grid
GlueCESEBindCEAccesspoint: <accesspoint>
GlueCESEBindCEUniqueID: <CE_HOST>:2119/jobmanager-<JOB_MANAGER>-<queue>
where <accesspoint> is:
For each of the supported VOs, a directory is created in
<INSTALL_ROOT>/edg/var/info/<vo>. These are used by SGMs to
publish information on experiment software installed on the cluster.
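In the reproduced function code this amounts to a loop of the following shape (slightly abridged from the source; users_getsgmuser is a yaim helper defined elsewhere):
for VO in $VOS; do
    dir=${INSTALL_ROOT}/edg/var/info/$VO
    mkdir -p $dir
    [ -f $dir/$VO.list ] || touch $dir/$VO.list
    # hand the directory over to the VO's software manager account
    sgmuser=`users_getsgmuser $VO`
    chown -R ${sgmuser}:`id -g $sgmuser` $dir
    chmod -R go-w $dir
done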
For nodes running a GridICE server (usually the SE) the following is added:
dn: GlueServiceUniqueID=<GRIDICE_SERVER_HOST>:2136,Mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-gridice
GlueServiceType: gridice
GlueServiceVersion: 1.1.0
GlueServiceEndpoint: ldap://<GRIDICE_SERVER_HOST>:2136/mds-vo-name=local,o=grid
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceStartTime: 2002-10-09T19:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceAccessControlRule: <vo>
For PX nodes the following is added:
dn: GlueServiceUniqueID=<PX_HOST>:7512,Mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-myproxy
GlueServiceType: myproxy
GlueServiceVersion: 1.1.0
GlueServiceEndpoint: <PX_HOST>:7512
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceStartTime: 2002-10-09T19:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceAccessControlRule: <grid_trusted_broker>
For nodes running RB the following is added:
dn: GlueServiceUniqueID=<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-rb
GlueServiceType: ResourceBroker
GlueServiceVersion: 1.2.0
GlueServiceEndpoint: <RB_HOST>:7772
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceStartTime: 2002-10-09T19:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceAccessControlRule: <vo>

dn: GlueServiceDataKey=HeldJobs,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: HeldJobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=IdleJobs,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: IdleJobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=JobController,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: JobController
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=Jobs,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: Jobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=LogMonitor,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: LogMonitor
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=RunningJobs,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: RunningJobs
GlueServiceDataValue: 14
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772

dn: GlueServiceDataKey=WorkloadManager,GlueServiceUniqueID=gram://<RB_HOST>:7772,Mds-vo-name=local,o=grid
GlueServiceDataKey: WorkloadManager
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://<RB_HOST>:7772
For central LFC the following is added:
dn: GlueServiceUniqueID=http://<LFC_HOST>:8085/,mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-lfc-dli
GlueServiceType: data-location-interface
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: http://<LFC_HOST>:8085/
GlueServiceURI: http://<LFC_HOST>:8085/
GlueServiceAccessPointURL: http://<LFC_HOST>:8085/
GlueServiceStatus: running
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceOwner: <vo>
GlueServiceAccessControlRule: <vo>

dn: GlueServiceUniqueID=<LFC_HOST>,mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-lfc
GlueServiceType: lcg-file-catalog
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: <LFC_HOST>
GlueServiceURI: <LFC_HOST>
GlueServiceAccessPointURL: <LFC_HOST>
GlueServiceStatus: running
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceOwner: <vo>
GlueServiceAccessControlRule: <vo>
For local LFC the following is added:
dn: GlueServiceUniqueID=<LFC_HOST>,mds-vo-name=local,o=grid
GlueServiceName: <SITE_NAME>-lfc
GlueServiceType: lcg-local-file-catalog
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: <LFC_HOST>
GlueServiceURI: <LFC_HOST>
GlueServiceAccessPointURL: <LFC_HOST>
GlueServiceStatus: running
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceOwner: <vo>
GlueServiceAccessControlRule: <vo>
For dCache and DPM nodes the following is added:
dn: GlueServiceUniqueID=httpg://<SE_HOST>:8443/srm/managerv1,Mds-Vo-name=local,o=grid
GlueServiceAccessPointURL: httpg://<SE_HOST>:8443/srm/managerv1
GlueServiceEndpoint: httpg://<SE_HOST>:8443/srm/managerv1
GlueServiceType: srm_v1
GlueServiceURI: httpg://<SE_HOST>:8443/srm/managerv1
GlueServicePrimaryOwnerName: LCG
GlueServicePrimaryOwnerContact: mailto:<SITE_EMAIL>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceVersion: 1.0.0
GlueServiceAccessControlRule: <vo>
GlueServiceInformationServiceURL: MDS2GRIS:ldap://<BDII_HOST>:2170/mds-voname=local,mds-vo-name=<SITE_NAME>,mds-vo-name=local,o=grid
GlueServiceStatus: running
For all types of SE the following is added:
dynamic_script=<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-se

GlueSEType: <se_type>
GlueSEPort: 2811
GlueSESizeTotal: 0
GlueSESizeFree: 0
GlueSEArchitecture: <se_type>
GlueSAType: permanent
GlueSAPolicyFileLifeTime: permanent
GlueSAPolicyMaxFileSize: 10000
GlueSAPolicyMinFileSize: 1
GlueSAPolicyMaxData: 100
GlueSAPolicyMaxNumFiles: 10
GlueSAPolicyMaxPinDuration: 10
GlueSAPolicyQuota: 0
GlueSAStateAvailableSpace: 1
GlueSAStateUsedSpace: 1

dn: GlueSEUniqueID=<SE_HOST>,mds-vo-name=local,o=grid
GlueSEName: <SITE_NAME>:<se_type>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>

dn: GlueSEAccessProtocolLocalID=gsiftp, GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSEAccessProtocolType: gsiftp
GlueSEAccessProtocolPort: 2811
GlueSEAccessProtocolVersion: 1.0.0
GlueSEAccessProtocolSupportedSecurity: GSI
GlueChunkKey: GlueSEUniqueID=<SE_HOST>

dn: GlueSEAccessProtocolLocalID=rfio, GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSEAccessProtocolType: rfio
GlueSEAccessProtocolPort: 5001
GlueSEAccessProtocolVersion: 1.0.0
GlueSEAccessProtocolSupportedSecurity: RFIO
GlueChunkKey: GlueSEUniqueID=<SE_HOST>
where <se_type> is srm_v1 for DPM and dCache, and disk otherwise.
For SE_dpm the following is added:
dn: GlueSALocalID=<vo>,GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSARoot: <vo>:/dpm/<domain>/home/<vo>
GlueSAPath: <vo>:/dpm/<domain>/home/<vo>
GlueSAAccessControlBaseRule: <vo>
GlueChunkKey: GlueSEUniqueID=<SE_HOST>
For SE_dcache the following is added:
dn: GlueSALocalID=<vo>,GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSARoot: <vo>:/pnfs/<domain>/home/<vo>
GlueSAPath: <vo>:/pnfs/<domain>/home/<vo>
GlueSAAccessControlBaseRule: <vo>
GlueChunkKey: GlueSEUniqueID=<SE_HOST>
For other types of SE the following is used:
dn: GlueSALocalID=<vo>,GlueSEUniqueID=<SE_HOST>,Mds-Vo-name=local,o=grid
GlueSARoot: <vo>:<vo>
GlueSAPath: <VO_<vo>_STORAGE_DIR>
GlueSAAccessControlBaseRule: <vo>
GlueChunkKey: GlueSEUniqueID=<SE_HOST>
For VOBOX the following is added:
dn: GlueServiceUniqueID=gsissh://<VOBOX_HOST>:<VOBOX_PORT>,Mds-vo-name=local,o=grid
GlueServiceAccessPointURL: gsissh://<VOBOX_HOST>:<VOBOX_PORT>
GlueServiceName: <SITE_NAME>-vobox
GlueServiceType: VOBOX
GlueServiceEndpoint: gsissh://<VOBOX_HOST>:<VOBOX_PORT>
GlueServicePrimaryOwnerName: LCG
GlueServicePrimaryOwnerContact: <SITE_EMAIL>
GlueForeignKey: GlueSiteUniqueID=<SITE_NAME>
GlueServiceVersion: 1.0.0
GlueServiceInformationServiceURL: ldap://<VOBOX_HOST>:2135/mds-vo-name=local,o=grid
GlueServiceStatus: running
GlueServiceAccessControlRule: <vo>
The configuration script is then run:
<INSTALL_ROOT>/lcg/sbin/lcg-info-generic-config <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf
The configuration script generates an LDIF file (<INSTALL_ROOT>/lcg/var/gip/lcg-info-static.ldif) by merging the templates from <INSTALL_ROOT>/lcg/etc/ and the data from <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf. A wrapper script is also created in <INSTALL_ROOT>/lcg/libexec/lcg-info-wrapper.
<INSTALL_ROOT>/globus/libexec/edg.info is created:
#!/bin/bash
#
# info-globus-ldif.sh
#
# Configures information providers for MDS
#
cat << EOF
dn: Mds-Vo-name=local,o=grid
objectclass: GlobusTop
objectclass: GlobusActiveObject
objectclass: GlobusActiveSearch
type: exec
path: <INSTALL_ROOT>/lcg/libexec
base: lcg-info-wrapper
args:
cachetime: 60
timelimit: 20
sizelimit: 250
EOF
<INSTALL_ROOT>/globus/libexec/edg.schemalist, listing the LDAP schema files, is also created:
#!/bin/bash
cat <<EOF
<INSTALL_ROOT>/globus/etc/openldap/schema/core.schema
<INSTALL_ROOT>/glue/schema/ldap/Glue-CORE.schema
<INSTALL_ROOT>/glue/schema/ldap/Glue-CE.schema
<INSTALL_ROOT>/glue/schema/ldap/Glue-CESEBind.schema
<INSTALL_ROOT>/glue/schema/ldap/Glue-SE.schema
EOF
These two scripts are used to generate slapd configuration for Globus
MDS.
<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-ce is generated to call the information provider appropriate for the LRMS. For Torque the file has these contents:
#!/bin/sh
<INSTALL_ROOT>/lcg/libexec/lcg-info-dynamic-pbs <INSTALL_ROOT>/lcg/var/gip/lcg-info-generic.conf <TORQUE_SERVER>
R-GMA GIN periodically queries MDS and inserts the data into R-GMA. GIN is configured on all nodes except UI and WN by copying the host certificate to <INSTALL_ROOT>/glite/var/rgma/.certs and updating the configuration file (<INSTALL_ROOT>/glite/etc/rgma/ClientAuthentication.props) accordingly. Finally, the GIN configuration script (<INSTALL_ROOT>/glite/bin/rgma-gin-config) is run to configure the mapping between the Glue schema in MDS and the Glue tables in R-GMA. The rgma-gin service is then restarted and configured to start on boot.
The function 'config_gip' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_gip
The code is also reproduced in 25.13.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_globus'.
The Globus configuration file /etc/globus.conf is parsed by the Globus daemon
startup scripts to locate the Globus root directory and other global or
daemon-specific properties. The contents of the configuration file depend on
the type of the node. The following table shows the daemon-to-node mapping:
| node/daemon | MDS | GridFTP | Gatekeeper |
|---|---|---|---|
| CE | yes | yes | yes |
| VOBOX | yes | yes | yes |
| SE_* | yes | yes | no |
| SE_dpm | yes | no | no |
| PX | yes | no | no |
| RB | yes | no | no |
| LFC | yes | no | no |
| GridICE | yes | no | no |
The configuration file is divided into sections:
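As a purely hypothetical illustration of the layout (the section names and values below are examples and are not guaranteed to match the generated file), a fragment of /etc/globus.conf could look roughly like:
[common]
GLOBUS_LOCATION=/opt/globus
x509_user_cert=/etc/grid-security/hostcert.pem
x509_user_key=/etc/grid-security/hostkey.pem
[mds]
user=edginfo
[gridftp]
log=/var/log/globus-gridftp.log
[gatekeeper]
default_jobmanager=fork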
The logrotate scripts globus-gatekeeper and gridftp are installed in /etc/logrotate.d/.
The Globus initialization script
(<INSTALL_ROOT>/globus/sbin/globus-initialization.sh) is run next.
Finally, the appropriate daemons (globus-mds, globus-gatekeeper, globus-gridftp, lcg-mon-gridftp) are started (and configured to start on boot).
The function 'config_globus' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_globus
The code is also reproduced in 25.14.
Author(s): Retico,Antonio
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_lcgenv'.
The LCG middleware needs some environment variables to be set up at boot time.
These variables should be available both in 'bash-like' shells and in
'csh-like' shells.
This can be achieved in different ways:
The simplest way, if you have 'root' permissions, is to put a shell script
for each of the supported shell 'families' in the directory
/etc/profile.d. The scripts will be automatically sourced at start-up.
If instead you are not a superuser and you are doing the installation in a
private directory (e.g. you are installing a relocatable distribution of a
Worker Node or a User Interface in the <INSTALL_ROOT> directory),
you can create the scripts in the directory
<INSTALL_ROOT>/etc/profile.d, so that the variables are
automatically set up by the LCG tools.
The list of the environment variables to be set up follows:
The examples given hereafter refer to the simple configuration method
described above. In the following description we will refer to the two
possible locations as <LCG_ENV_LOC>.
So, according to the cases described above, either
<LCG_ENV_LOC>=/etc/profile.d
or
<LCG_ENV_LOC>=<INSTALL_ROOT>/etc/profile.d
Examples:
#!/bin/sh
export LCG_GFAL_INFOSYS=lxb1769.cern.ch:2170
export MYPROXY_SERVER=lxb1774.cern.ch
export PATH="${PATH}:/opt/d-cache-client/bin"
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/d-cache-client/dcap
export SRM_PATH=/opt/d-cache-client/srm
export VO_ATLAS_SW_DIR=lxb1780.cern.ch
export VO_ATLAS_DEFAULT_SE=lxb1780.cern.ch
export VO_ALICE_SW_DIR=lxb1780.cern.ch
export VO_ALICE_DEFAULT_SE=lxb1780.cern.ch
export VO_LHCB_SW_DIR=lxb1780.cern.ch
export VO_LHCB_DEFAULT_SE=lxb1780.cern.ch
export VO_CMS_SW_DIR=lxb1780.cern.ch
export VO_CMS_DEFAULT_SE=lxb1780.cern.ch
export VO_DTEAM_SW_DIR=lxb1780.cern.ch
export VO_DTEAM_DEFAULT_SE=lxb1780.cern.ch
#!/bin/csh
setenv LCG_GFAL_INFOSYS lxb1769.cern.ch:2170
setenv MYPROXY_SERVER lxb1774.cern.ch
setenv PATH "${PATH}:/opt/d-cache-client/bin"
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/opt/d-cache-client/dcap
setenv SRM_PATH /opt/d-cache-client/srm
setenv VO_ATLAS_SW_DIR lxb1780.cern.ch
setenv VO_ATLAS_DEFAULT_SE lxb1780.cern.ch
setenv VO_ALICE_SW_DIR lxb1780.cern.ch
setenv VO_ALICE_DEFAULT_SE lxb1780.cern.ch
setenv VO_LHCB_SW_DIR lxb1780.cern.ch
setenv VO_LHCB_DEFAULT_SE lxb1780.cern.ch
setenv VO_CMS_SW_DIR lxb1780.cern.ch
setenv VO_CMS_DEFAULT_SE lxb1780.cern.ch
setenv VO_DTEAM_SW_DIR lxb1780.cern.ch
setenv VO_DTEAM_DEFAULT_SE lxb1780.cern.ch
WARNING: The two scripts must be executable by all users.
> chmod a+x ${LCG_ENV_LOC}/lcgenv.csh
> chmod a+x ${LCG_ENV_LOC}/lcgenv.sh
The function 'config_lcgenv' needs the following variables to be set in the configuration file:
The function exits with return code 1 if they are not set.
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_lcgenv
The code is also reproduced in 25.15.
Author(s): LCG Deployment - GIS team
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_DPM_upgrade'.
A description of the configuration steps done by this function has not yet
been provided.
If you need specific information, please get in touch with the author.
The function 'config_DPM_upgrade' needs the following variables to be set in the configuration file:
No further specification of the function 'config_DPM_upgrade' has been
provided so far.
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_DPM_upgrade
The code is also reproduced in 25.16.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_DPM_rfio'.
rfiod ports (5001 TCP and UDP) are added to /etc/services.
The following is added to /etc/shift.conf:
RFIOD TRUST <DPM_HOST>
RFIOD WTRUST <DPM_HOST>
RFIOD RTRUST <DPM_HOST>
RFIOD XTRUST <DPM_HOST>
RFIOD FTRUST <DPM_HOST>
Both the short hostname and the FQDN of <DPM_HOST> are listed.
On the SE_dpm_mysql node the following is also added to /etc/shift.conf:
DPM TRUST <dpmpool_node>
DPNS TRUST <dpmpool_node>
For each of the pool nodes, both the short hostname and the FQDN are listed.
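A hypothetical resulting /etc/shift.conf fragment (the hostnames below are made up) might read:
RFIOD TRUST dpm.example.org dpm
RFIOD WTRUST dpm.example.org dpm
DPM TRUST pool1.example.org pool1
DPNS TRUST pool1.example.org pool1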
/etc/sysconfig/rfiod is created from /etc/sysconfig/rfiod.templ by
setting the DPNS_HOST variable to <DPM_HOST>.
Finally, the RFIO daemon is restarted and configured to start on boot.
The function 'config_DPM_rfio' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_DPM_rfio
The code is also reproduced in 25.17.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_DPM_user'.
The DPM manager user (as defined by the dpm flag in <USERS_CONF>) is created as a system account with its home directory in /home/dpmmgr and /bin/tcsh as the default shell.
The function 'config_DPM_user' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_DPM_user
The code is also reproduced in 25.18.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_DPM_mgr'.
The directory /etc/grid-security/dpmmgr/ is created and the host certificate
and key are copied into it as dpmcert.pem and dpmkey.pem.
The dpmmgr group is given write access to
/etc/grid-security/gridmapdir/.
The host certificate is also installed as a user certificate for the edginfo user, so that edginfo can run the authenticated dpm-qryconf command.
The function 'config_DPM_mgr' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_DPM_mgr
The code is also reproduced in 25.19.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_DPM_mysql'.
The MySQL root password is first set to <MYSQL_PASSWORD>. The
password for the DPM database user (<DPMMGR>) is set to
<DPMUSER_PWD>.
This password is also saved to <INSTALL_ROOT>/lcg/etc/DPMCONFIG:
<DPMMGR>/<DPMUSER_PWD>@localhost
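For illustration, the password handling corresponds roughly to the following sketch (the exact statements in the function may differ):
# set the MySQL root password (first run only)
mysqladmin -u root password '<MYSQL_PASSWORD>'
# create the DPM database user and grant it access to the DPM databases
mysql -u root -p'<MYSQL_PASSWORD>' -e "GRANT ALL PRIVILEGES ON dpm_db.* TO '<DPMMGR>'@'localhost' IDENTIFIED BY '<DPMUSER_PWD>';"
mysql -u root -p'<MYSQL_PASSWORD>' -e "GRANT ALL PRIVILEGES ON cns_db.* TO '<DPMMGR>'@'localhost' IDENTIFIED BY '<DPMUSER_PWD>';"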
DPM databases (cns_db and dpm_db) are created using
The function 'config_DPM_mysql' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_DPM_mysql
The code is also reproduced in 25.20.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_DPM_gsiftp'.
/etc/sysconfig/dpm-gsiftp is created from the template
/etc/sysconfig/dpm-gsiftp.templ by setting the DPM and DPNS hosts to
<DPM_HOST>.
The DPM GSIFTP daemon is then started.
The function 'config_DPM_gsiftp' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_DPM_gsiftp
The code is also reproduced in 25.21.
Author(s): Vidic,Valentin
Email : support-lcg-manual-install@cern.ch
This chapter describes the configuration steps done by the yaim
function 'config_sedpm'.
For each of the DPM daemons (dpnsdaemon, dpm, srmv1, srmv2), an environment file is set up in /etc/sysconfig/<daemon>. These files are generated from templates (/etc/sysconfig/<daemon>.templ) by setting the following variables:
export DPNS_HOST=<DPM_HOST>
export DPM_HOST=<DPM_HOST>
NSCONFIGFILE=<INSTALL_ROOT>/lcg/etc/DPMCONFIG
DPMCONFIGFILE=<INSTALL_ROOT>/lcg/etc/DPMCONFIG
The daemons are then restarted and configured to start on boot.
DPM pool is created:
mkdir -p <DPMDATA>
chown dpmmgr:dpmmgr <DPMDATA>
<INSTALL_ROOT>/lcg/bin/dpm-addpool --poolname <DPMPOOL> --def_filesize <DPMFSIZE>
<INSTALL_ROOT>/lcg/bin/dpm-addfs --poolname <DPMPOOL> --server <DPM_HOST> --fs <DPMDATA>
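The resulting pool configuration can then be inspected with dpm-qryconf (the same command mentioned for the edginfo user in config_DPM_mgr):
> <INSTALL_ROOT>/lcg/bin/dpm-qryconf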
DPM filesystem is initialized:
<INSTALL_ROOT>/lcg/bin/dpns-mkdir /dpm
<INSTALL_ROOT>/lcg/bin/dpns-mkdir -m 755 -p /dpm/<MY_DOMAIN>
<INSTALL_ROOT>/lcg/bin/dpns-mkdir -m 755 -p /dpm/<MY_DOMAIN>/home
<INSTALL_ROOT>/lcg/bin/dpns-chmod 775 /dpm
<INSTALL_ROOT>/lcg/bin/dpns-chmod 775 /dpm/<MY_DOMAIN>
<INSTALL_ROOT>/lcg/bin/dpns-chmod 775 /dpm/<MY_DOMAIN>/home
<INSTALL_ROOT>/lcg/bin/dpns-setacl -m d:u::7,d:g::7,d:o:5 /dpm
<INSTALL_ROOT>/lcg/bin/dpns-setacl -m d:u::7,d:g::7,d:o:5 /dpm/<MY_DOMAIN>
<INSTALL_ROOT>/lcg/bin/dpns-setacl -m d:u::7,d:g::7,d:o:5 /dpm/<MY_DOMAIN>/home
For each of the supported VOs, a directory is set-up in DPM:
<INSTALL_ROOT>/lcg/bin/dpns-mkdir -m 775 /dpm/<MY_DOMAIN>/home/<vo>
<INSTALL_ROOT>/lcg/bin/dpns-chmod 775 /dpm/<MY_DOMAIN>/home/<vo>
<INSTALL_ROOT>/lcg/bin/dpns-chown root:<vo> /dpm/<MY_DOMAIN>/home/<vo>
<INSTALL_ROOT>/lcg/bin/dpns-setacl -m d:u::7,d:g::7,d:o:5 /dpm/<MY_DOMAIN>/home/<vo>
The last command sets the default ACL for the directory. The default ACL is inherited as the access ACL by files or sub-directories created in that directory; sub-directories also inherit it as their default ACL.
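The effect of the default ACL can be checked with dpns-getacl (assuming it is installed alongside the other dpns tools):
> <INSTALL_ROOT>/lcg/bin/dpns-getacl /dpm/<MY_DOMAIN>/home/<vo>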
The function 'config_sedpm' needs the following variables to be set in the configuration file:
The original code of the function can be found in:
/opt/lcg/yaim/functions/config_sedpm
The code is also reproduced in 25.22.
config_ldconf () {
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
cp -p /etc/ld.so.conf /etc/ld.so.conf.orig
LIBDIRS="${INSTALL_ROOT}/globus/lib \
${INSTALL_ROOT}/edg/lib \
${INSTALL_ROOT}/edg/externals/lib/ \
/usr/local/lib \
${INSTALL_ROOT}/lcg/lib \
/usr/kerberos/lib \
/usr/X11R6/lib \
/usr/lib/qt-3.1/lib \
${INSTALL_ROOT}/gcc-3.2.2/lib \
${INSTALL_ROOT}/glite/lib \
${INSTALL_ROOT}/glite/externals/lib"
if [ -f /etc/ld.so.conf.add ]; then
rm -f /etc/ld.so.conf.add
fi
for libdir in ${LIBDIRS}; do
if ( ! grep -q $libdir /etc/ld.so.conf && [ -d $libdir ] ); then
echo $libdir >> /etc/ld.so.conf.add
fi
done
if [ -f /etc/ld.so.conf.add ]; then
sort -u /etc/ld.so.conf.add >> /etc/ld.so.conf
rm -f /etc/ld.so.conf.add
fi
/sbin/ldconfig
return 0
}
config_sysconfig_edg(){
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
cat <<EOF > /etc/sysconfig/edg
EDG_LOCATION=$INSTALL_ROOT/edg
EDG_LOCATION_VAR=$INSTALL_ROOT/edg/var
EDG_TMP=/tmp
X509_USER_CERT=/etc/grid-security/hostcert.pem
X509_USER_KEY=/etc/grid-security/hostkey.pem
GRIDMAP=/etc/grid-security/grid-mapfile
GRIDMAPDIR=/etc/grid-security/gridmapdir/
EDG_WL_BKSERVERD_ADDOPTS=--rgmaexport
EDG_WL_RGMA_FILE=/var/edgwl/logging/status.log
EOF
return 0
}
config_sysconfig_globus() {
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
# If GLOBUS_TCP_PORT_RANGE is unset, give it a good default
# Leave it alone if it is set but empty
GLOBUS_TCP_PORT_RANGE=${GLOBUS_TCP_PORT_RANGE-"20000 25000"}
cat <<EOF > /etc/sysconfig/globus
GLOBUS_LOCATION=$INSTALL_ROOT/globus
GLOBUS_CONFIG=/etc/globus.conf
export LANG=C
EOF
# Set GLOBUS_TCP_PORT_RANGE, but not for nodes which are only WNs
if [ "$GLOBUS_TCP_PORT_RANGE" ] && ( ! echo $NODE_TYPE_LIST | egrep -q '^ *WN_?[[:alpha:]]* *$' ); then
echo "GLOBUS_TCP_PORT_RANGE=\"$GLOBUS_TCP_PORT_RANGE\"" >> /etc/sysconfig/globus
fi
(
# HACK to avoid complaints from services that do not need it,
# but get started via a login shell before the file is created...
f=$INSTALL_ROOT/globus/libexec/globus-script-initializer
echo '' > $f
chmod 755 $f
)
return 0
}
config_sysconfig_lcg(){
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
cat <<EOF > /etc/sysconfig/lcg
LCG_LOCATION=$INSTALL_ROOT/lcg
LCG_LOCATION_VAR=$INSTALL_ROOT/lcg/var
LCG_TMP=/tmp
export SITE_NAME=$SITE_NAME
EOF
return 0
}
config_crl(){
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
let minute="$RANDOM%60"
let h1="$RANDOM%24"
let h2="($h1+6)%24"
let h3="($h1+12)%24"
let h4="($h1+18)%24"
if !( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then
if [ ! -f /etc/cron.d/edg-fetch-crl ]; then
echo "Now updating the CRLs - this may take a few minutes..."
$INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> /var/log/edg-fetch-crl-cron.log 2>&1
fi
cron_job edg-fetch-crl root "$minute $h1,$h2,$h3,$h4 * * * $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> /var/log/edg-fetch-crl-cron.log 2>&1"
cat <<EOF > /etc/logrotate.d/edg-fetch
/var/log/edg-fetch-crl-cron.log {
compress
monthly
rotate 12
missingok
ifempty
create
}
EOF
else
cron_job edg-fetch-crl `whoami` "$minute $h1,$h2,$h3,$h4 * * * $INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> $INSTALL_ROOT/edg/var/log/edg-fetch-crl-cron.log 2>&1"
if [ ! -d $INSTALL_ROOT/edg/var/log ]; then
mkdir -p $INSTALL_ROOT/edg/var/log
fi
echo "Now updating the CRLs - this may take a few minutes..."
$INSTALL_ROOT/edg/etc/cron/edg-fetch-crl-cron >> $INSTALL_ROOT/edg/var/log/edg-fetch-crl-cron.log 2>&1
fi
return 0
}
config_rfio() {
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
# This function turns rfio on where necessary and
# just as important, turns it off where it isn't necessary
if ( echo "${NODE_TYPE_LIST}" | grep -q SE_classic ); then
if [ "x`grep rfio /etc/services | grep tcp`" = "x" ]; then
echo "rfio 5001/tcp" >> /etc/services
fi
if [ "x`grep rfio /etc/services | grep udp`" = "x" ]; then
echo "rfio 5001/udp" >> /etc/services
fi
/sbin/service rfiod restart
elif ( echo "${NODE_TYPE_LIST}" | grep -q SE_dpm ); then
return 0
elif ( rpm -qa | grep -q CASTOR-client ); then
/sbin/service rfiod stop
/sbin/chkconfig --level 2345 rfiod off
fi
return 0
}
config_host_certs(){
if [ -f /etc/grid-security/hostkey.pem ]; then
chmod 400 /etc/grid-security/hostkey.pem
elif [ -f /etc/grid-security/hostcert.pem ]; then
chmod 644 /etc/grid-security/hostcert.pem
else
echo "Please copy the hostkey.pem and hostcert.pem to /etc/grid-security"
return 1
fi
return 0
}
config_users(){
#
# Creates the Pool Users.
#
# Takes the users, groups and ids from a configuration file (USERS_CONF).
# File format:
#
# UserId:User:GroupId:Group
#
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
requires USERS_CONF VOS
if [ ! -e $USERS_CONF ]; then
echo "$USERS_CONF not found."
return 1
fi
check_users_conf_format
# Add each group required by $VOS
awk -F: '{print $3, $4, $5}' ${USERS_CONF} | sort -u | while read gid groupname virtorg; do
if ( [ "$virtorg" ] && echo $VOS | grep -w "$virtorg" > /dev/null ); then
groupadd -g $gid $groupname 2> /dev/null
fi
done
grid_accounts=
newline='
'
# Add all the users for each VO in ${VOS}
for x in `cat $USERS_CONF`; do
# ensure that this VO is in the $VOS list
virtorg=`echo $x | cut -d":" -f5`
if ( [ "$virtorg" ] && echo $VOS | grep -w "$virtorg" > /dev/null ); then
user=`echo $x | cut -d":" -f2`
id=`echo $x | cut -d":" -f1`
group=`echo $x | cut -d":" -f3`
if ( ! id $user > /dev/null 2>&1 ); then
useradd -c "mapped user for group ID $group" -u $id -g $group $user
fi
# grid users shall not be able to submit at or cron jobs
for deny in /etc/at.deny /etc/cron.deny; do
tmp=$deny.$$
touch $deny
(grep -v "^$user\$" $deny; echo "$user") > $tmp && mv $tmp $deny
done
grid_accounts="$grid_accounts$newline$user"
fi
done
(
cga=$INSTALL_ROOT/lcg/etc/cleanup-grid-accounts.conf
cga_tmp=$cga.$$
[ -r $cga ] || exit
(
sed '/YAIM/,$d' $cga
echo "# next lines added by YAIM on `date`"
echo "ACCOUNTS='$grid_accounts$newline'"
) > $cga_tmp
mv $cga_tmp $cga
)
let minute="$RANDOM%60"
let h="$RANDOM%6"
f=/var/log/cleanup-grid-accounts.log
if ( echo "${NODE_TYPE_LIST}" | grep '\<CE' > /dev/null ); then
cron_job cleanup-grid-accounts root "$minute $h * * * \
$INSTALL_ROOT/lcg/sbin/cleanup-grid-accounts.sh -v -F >> $f 2>&1"
cat <<EOF > /etc/logrotate.d/cleanup-grid-accounts
$f {
compress
daily
rotate 30
missingok
}
EOF
elif ( echo "${NODE_TYPE_LIST}" | grep '\<WN' > /dev/null ); then
cron_job cleanup-grid-accounts root "$minute $h * * * \
$INSTALL_ROOT/lcg/sbin/cleanup-grid-accounts.sh -v >> $f 2>&1"
cat <<EOF > /etc/logrotate.d/cleanup-grid-accounts
$f {
compress
daily
rotate 30
missingok
}
EOF
fi
return 0
}
config_edgusers(){
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
check_users_conf_format
if ( ! id edguser > /dev/null 2>&1 ); then
useradd -r -c "EDG User" edguser
mkdir -p /home/edguser
chown edguser:edguser /home/edguser
fi
if ( ! id edginfo > /dev/null 2>&1 ); then
useradd -r -c "EDG Info user" edginfo
mkdir -p /home/edginfo
chown edginfo:edginfo /home/edginfo
fi
if ( ! id rgma > /dev/null 2>&1 ); then
useradd -r -c "RGMA user" -m -d ${INSTALL_ROOT}/glite/etc/rgma rgma
fi
# Make sure edguser is a member of each group
awk -F: '{print $3, $4, $5}' ${USERS_CONF} | sort -u | while read gid groupname virtorg; do
if ( [ "$virtorg" ] && echo $VOS | grep -w "$virtorg" > /dev/null ); then
# On some nodes the users are not created, so the group will not exist
# Isn't there a better way to check for group existance??
if ( grep "^${groupname}:" /etc/group > /dev/null ); then
gpasswd -a edguser $groupname > /dev/null
fi
fi
done
return 0
}
config_mkgridmap(){
requires USERS_CONF GROUPS_CONF VOS
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
if [ ! -e $USERS_CONF ]; then
echo "$USERS_CONF not found."
return 1
fi
if [ ! -e $GROUPS_CONF ]; then
echo "$GROUPS_CONF not found."
return 1
fi
check_users_conf_format
gmc=$INSTALL_ROOT/edg/etc/edg-mkgridmap.conf
gmd=/etc/grid-security/gridmapdir
mkdir -p $gmd
chown root:edguser $gmd
chmod 775 $gmd
for user in `awk -F: '$6==""{print $2}' $USERS_CONF`; do
f=$gmd/$user
[ -f $f ] || touch $f
done
if ( echo "${NODE_TYPE_LIST}" | egrep -q 'dpm|LFC' ); then
gmc_dm=$INSTALL_ROOT/lcg/etc/lcgdm-mkgridmap.conf
else
gmc_dm=/dev/null
fi
cat << EOF > $gmc
##############################################################################
#
# edg-mkgridmap.conf generated by YAIM on `date`
#
##############################################################################
EOF
cat << EOF > $gmc_dm
##############################################################################
#
# lcgdm-mkgridmap.conf generated by YAIM on `date`
#
##############################################################################
EOF
lcmaps=${INSTALL_ROOT}/edg/etc/lcmaps
lcmaps_gridmapfile=$lcmaps/gridmapfile
lcmaps_groupmapfile=$lcmaps/groupmapfile
mkdir -p $lcmaps
rm -f $lcmaps_gridmapfile $lcmaps_groupmapfile
for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
# Set some variables
VO_lower=`echo $VO | tr '[:upper:]' '[:lower:]'`
vo_user_prefix=`users_getvoprefix $VO`
[ -z "$vo_user_prefix" ] && vo_user_prefix=$VO_lower
vo_group=`users_getvogroup $VO`
sgmuser=`users_getsgmuser $VO`
prduser=`users_getprduser $VO`
eval voms_pool='$'VO_${VO}_VOMS_POOL_PATH
test -z "$voms_pool" || voms_pool=/${voms_pool#/}
eval voms_servers='$'VO_${VO}_VOMS_SERVERS
vo_match=/VO=$VO_lower/GROUP=/$VO_lower
role_match=$vo_match/ROLE=
echo "# $VO" >> $gmc
### VOMS sgm
if [ "$sgmuser" -a "$voms_servers" ]; then
#
# "/VO=dteam/GROUP=/dteam/ROLE=lcgadmin":::sgm:
#
role=`sed -n 's|^"'"$role_match"'\(.*\)":.*:sgm:* *$|\1|p' $GROUPS_CONF`
echo "# Map VO members (Role) $sgmuser" >> $gmc
split_quoted_variable $voms_servers | while read server; do
echo "group ${server%/}/Role=$role $sgmuser" >> $gmc
echo "group ${server%/}/Role=$role $VO_lower" >> $gmc_dm
done
echo >> $gmc
fi
### VOMS prd
if [ "$prduser" -a "$voms_servers" ]; then
#
# "/VO=dteam/GROUP=/dteam/ROLE=production":::prd:
#
role=`sed -n 's|^"'"$role_match"'\(.*\)":.*:prd:* *$|\1|p' $GROUPS_CONF`
echo "# Map VO members (Role) $prduser" >> $gmc
split_quoted_variable $voms_servers | while read server; do
echo "group ${server%/}/Role=$role $prduser" >> $gmc
echo "group ${server%/}/Role=$role $VO_lower" >> $gmc_dm
done
echo >> $gmc
fi
### VOMS pool
if [ "$voms_servers" ]; then
echo "# Map VO members (root Group) $VO_lower" >> $gmc
split_quoted_variable $voms_servers | while read server; do
echo "group ${server%/}${voms_pool} .$vo_user_prefix" >> $gmc
echo "group ${server%/}${voms_pool} $VO_lower" >> $gmc_dm
done
echo >> $gmc
fi
echo "# LDAP lines for ${VO}" >> $gmc
### LDAP sgm
if [ "$sgmuser" ]; then
eval ldap_sgm='$'VO_${VO}_SGM
test -z "$ldap_sgm" || {
echo "group $ldap_sgm $sgmuser" >> $gmc
echo "group $ldap_sgm $VO_lower" >> $gmc_dm
}
fi
### LDAP pool
eval ldap_users='$'VO_${VO}_USERS
test -z "$ldap_users" || {
echo "group $ldap_users .$vo_user_prefix" >> $gmc
echo "group $ldap_users $VO_lower" >> $gmc_dm
}
echo >> $gmc
echo >> $gmc
### VOMS gridmapfile and groupmapfile
#
# "/VO=cms/GROUP=/cms/ROLE=lcgadmin":::sgm:
# "/VO=cms/GROUP=/cms/ROLE=production":::prd:
# "/VO=cms/GROUP=/cms/GROUP=HeavyIons":cms01:1340::
# "/VO=cms/GROUP=/cms/GROUP=Higgs":cms02:1341::
# "/VO=cms/GROUP=/cms/GROUP=StandardModel":cms03:1342::
# "/VO=cms/GROUP=/cms/GROUP=Susy":cms04:1343::
# "/VO=cms/GROUP=/cms"::::
#
sed -n '/^"\/VO='"$VO_lower"'\//p' $GROUPS_CONF | while read line; do
fqan=` echo "$line" | sed 's/":.*/"/' `
line=` echo "$line" | sed 's/.*"://' `
group=`echo "$line" | sed 's/:.*//' `
line=` echo "$line" | sed 's/[^:]*://'`
gid=` echo "$line" | sed 's/:.*//' `
line=` echo "$line" | sed 's/[^:]*://'`
flag=` echo "$line" | sed 's/:.*//' `
if [ "$flag" = sgm ]; then
u=$sgmuser
g=$vo_group
elif [ "$flag" = prd ]; then
u=$prduser
g=$vo_group
elif [ "$group" ]; then
groupadd ${gid:+"-g"} ${gid:+"$gid"} "$group" 2>&1 | grep -v exists
u=.$vo_user_prefix
g=$group
else
u=.$vo_user_prefix
g=$vo_group
fi
echo "$fqan $u" >> $lcmaps_gridmapfile
echo "$fqan $g" >> $lcmaps_groupmapfile
done
done # End of VO loop
cat << EOF >> $gmc
#############################################################################
# List of auth URIs
# eg 'auth ldap://marianne.in2p3.fr/ou=People,o=testbed,dc=eu-datagrid,dc=org'
# If these are defined then users must be authorised in one of the following
# auth servers.
# A list of authorised users.
EOF
GRIDMAP_AUTH=${GRIDMAP_AUTH:-\
ldap://lcg-registrar.cern.ch/ou=users,o=registrar,dc=lcg,dc=org}
for i in $GRIDMAP_AUTH; do
echo "auth $i" >> $gmc
echo "auth $i" >> $gmc_dm
echo >> $gmc
done
f=$INSTALL_ROOT/edg/etc/grid-mapfile-local
[ -f $f ] || touch $f
cat << EOF >> $gmc
#############################################################################
# DEFAULT_LCLUSER: default_lcluser lcluser
# default_lcuser .
#############################################################################
# ALLOW and DENY: deny|allow pattern_to_match
# allow *INFN*
#############################################################################
# Local grid-mapfile to import and overide all the above information.
# eg, gmf_local $f
gmf_local $f
EOF
if [ ${gmc_dm:-/dev/null} != /dev/null ]; then
f=${INSTALL_ROOT}/lcg/etc/lcgdm-mapfile-local
[ -f $f ] || touch $f
fi
cat << EOF >> $gmc_dm
gmf_local $f
EOF
#
# bootstrap the grid-mapfile
#
cmd="$INSTALL_ROOT/edg/sbin/edg-mkgridmap \
--output=/etc/grid-security/grid-mapfile --safe"
echo "Now creating the grid-mapfile - this may take a few minutes..."
$cmd 2>> $YAIM_LOG
let minute="$RANDOM%60"
let h1="$RANDOM%6"
let h2="$h1+6"
let h3="$h2+6"
let h4="$h3+6"
cron_job edg-mkgridmap root "$minute $h1,$h2,$h3,$h4 * * * $cmd"
if ( echo "${NODE_TYPE_LIST}" | egrep -q 'dpm|LFC' ); then
cmd="$INSTALL_ROOT/edg/libexec/edg-mkgridmap/edg-mkgridmap.pl \
--conf=$gmc_dm --output=$INSTALL_ROOT/lcg/etc/lcgdm-mapfile --safe"
echo "Now creating the lcgdm-mapfile - this may take a few minutes..."
$cmd 2>> $YAIM_LOG
let minute="$RANDOM%60"
let h1="$RANDOM%6"
let h2="$h1+6"
let h3="$h2+6"
let h4="$h3+6"
cron_job lcgdm-mkgridmap root "$minute $h1,$h2,$h3,$h4 * * * $cmd"
fi
if ( echo "${NODE_TYPE_LIST}" | grep -q '\<'RB ); then
cron_job lcg-expiregridmapdir root "5 * * * * \
${INSTALL_ROOT}/edg/sbin/lcg-expiregridmapdir.pl -e 240 -v >> \
/var/log/lcg-expiregridmapdir.log 2>&1"
elif ( echo "${NODE_TYPE_LIST}" | egrep -q 'dpm|LFC' ); then
# No expiry
rm -f ${CRON_DIR}/lcg-expiregridmapdir
else
cron_job lcg-expiregridmapdir root "5 * * * * \
${INSTALL_ROOT}/edg/sbin/lcg-expiregridmapdir.pl -v >> \
/var/log/lcg-expiregridmapdir.log 2>&1"
fi
return 0
}
function config_java () {
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
# If JAVA_LOCATION is not set by the admin, take a guess
if [ -z "$JAVA_LOCATION" ]; then
java=`rpm -qa | grep j2sdk-` || java=`rpm -qa | grep j2re`
if [ "$java" ]; then
JAVA_LOCATION=`rpm -ql $java | egrep '/bin/java$' | sort | head -1 | sed 's#/bin/java##'`
fi
fi
if [ ! "$JAVA_LOCATION" -o ! -d "$JAVA_LOCATION" ]; then
echo "Please check your value for JAVA_LOCATION"
return 1
fi
if ( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then
# We're configuring a relocatable distro
if [ ! -d ${INSTALL_ROOT}/edg/etc/profile.d ]; then
mkdir -p ${INSTALL_ROOT}/edg/etc/profile.d/
fi
cat > $INSTALL_ROOT/edg/etc/profile.d/j2.sh <<EOF
JAVA_HOME=$JAVA_LOCATION
export JAVA_HOME
EOF
cat > $INSTALL_ROOT/edg/etc/profile.d/j2.csh <<EOF
setenv JAVA_HOME $JAVA_LOCATION
EOF
chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.sh
chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.csh
return 0
fi # end of relocatable stuff
# We're root and it's not a relocatable
if [ ! -d /etc/java ]; then
mkdir /etc/java
fi
echo "export JAVA_HOME=$JAVA_LOCATION" > /etc/java/java.conf
echo "export JAVA_HOME=$JAVA_LOCATION" > /etc/java.conf
chmod +x /etc/java/java.conf
#This hack is here due to SL and the java profile rpms, Laurence Field
if [ ! -d ${INSTALL_ROOT}/edg/etc/profile.d ]; then
mkdir -p ${INSTALL_ROOT}/edg/etc/profile.d/
fi
cat << EOF > $INSTALL_ROOT/edg/etc/profile.d/j2.sh
if [ -z "\$PATH" ]; then
export PATH=${JAVA_LOCATION}/bin
else
export PATH=${JAVA_LOCATION}/bin:\${PATH}
fi
EOF
chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.sh
cat << EOF > $INSTALL_ROOT/edg/etc/profile.d/j2.csh
if ( \$?PATH ) then
setenv PATH ${JAVA_LOCATION}/bin:\${PATH}
else
setenv PATH ${JAVA_LOCATION}/bin
endif
EOF
chmod a+rx $INSTALL_ROOT/edg/etc/profile.d/j2.csh
return 0
}
config_rgma_client(){
requires MON_HOST REG_HOST
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
# NB java stuff now in config_java, which must be run before
export RGMA_HOME=${INSTALL_ROOT}/glite
# in order to use python from userdeps.tgz we need to source the env
if ( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then
. $INSTALL_ROOT/etc/profile.d/grid_env.sh
fi
${RGMA_HOME}/share/rgma/scripts/rgma-setup.py --secure=yes --server=${MON_HOST} --registry=${REG_HOST} --schema=${REG_HOST}
cat << EOF > ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.sh
export RGMA_HOME=${INSTALL_ROOT}/glite
export APEL_HOME=${INSTALL_ROOT}/glite
echo \$PYTHONPATH | grep -q ${INSTALL_ROOT}/glite/lib/python && isthere=1 || isthere=0
if [ \$isthere = 0 ]; then
if [ -z \$PYTHONPATH ]; then
export PYTHONPATH=${INSTALL_ROOT}/glite/lib/python
else
export PYTHONPATH=\$PYTHONPATH:${INSTALL_ROOT}/glite/lib/python
fi
fi
echo \$LD_LIBRARY_PATH | grep -q ${INSTALL_ROOT}/glite/lib && isthere=1 || isthere=0
if [ \$isthere = 0 ]; then
if [ -z \$LD_LIBRARY_PATH ]; then
export LD_LIBRARY_PATH=${INSTALL_ROOT}/glite/lib
else
export LD_LIBRARY_PATH=\$LD_LIBRARY_PATH:${INSTALL_ROOT}/glite/lib
fi
fi
EOF
chmod a+rx ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.sh
cat << EOF > ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.csh
setenv RGMA_HOME ${INSTALL_ROOT}/glite
setenv APEL_HOME ${INSTALL_ROOT}/glite
echo \$PYTHONPATH | grep -q ${INSTALL_ROOT}/glite/lib/python && set isthere=1 || set isthere=0
if ( \$isthere == 0 ) then
if ( -z \$PYTHONPATH ) then
setenv PYTHONPATH ${INSTALL_ROOT}/glite/lib/python
else
setenv PYTHONPATH \$PYTHONPATH\:${INSTALL_ROOT}/glite/lib/python
endif
endif
echo \$LD_LIBRARY_PATH | grep -q ${INSTALL_ROOT}/glite/lib && set isthere=1 || set isthere=0
if ( \$isthere == 0 ) then
if ( -z \$LD_LIBRARY_PATH ) then
setenv LD_LIBRARY_PATH ${INSTALL_ROOT}/glite/lib
else
setenv LD_LIBRARY_PATH \$LD_LIBRARY_PATH\:${INSTALL_ROOT}/glite/lib
endif
endif
EOF
chmod a+rx ${INSTALL_ROOT}/edg/etc/profile.d/edg-rgma-env.csh
return 0
}
config_gip () {
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
requires CE_HOST RB_HOST PX_HOST
#check_users_conf_format
#set some vars for storage elements
if ( echo "${NODE_TYPE_LIST}" | grep '\<SE' > /dev/null ); then
requires VOS SITE_EMAIL SITE_NAME BDII_HOST VOS SITE_NAME
if ( echo "${NODE_TYPE_LIST}" | grep SE_dpm > /dev/null ); then
requires DPM_HOST
se_host=$DPM_HOST
se_type="srm_v1"
control_protocol=srm_v1
control_endpoint=httpg://${se_host}
elif ( echo "${NODE_TYPE_LIST}" | grep SE_dcache > /dev/null ); then
requires DCACHE_ADMIN
se_host=$DCACHE_ADMIN
se_type="srm_v1"
control_protocol=srm_v1
control_endpoint=httpg://${se_host}
else
requires CLASSIC_STORAGE_DIR CLASSIC_HOST VO_<VO>_STORAGE_DIR
se_host=$CLASSIC_HOST
se_type="disk"
control_protocol=classic
control_endpoint=classic
fi
fi
if ( echo "${NODE_TYPE_LIST}" | grep '\<CE' > /dev/null ); then
# GlueSite
requires SITE_EMAIL SITE_NAME SITE_LOC SITE_LAT SITE_LONG SITE_WEB \
SITE_TIER SITE_SUPPORT_SITE SE_LIST
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-site.conf
# set default SEs if they're currently undefined
default_se=`set x $SE_LIST; echo "$2"`
if [ "$default_se" ]; then
for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
if [ "x`eval echo '$'VO_${VO}_DEFAULT_SE`" = "x" ]; then
eval VO_${VO}_DEFAULT_SE=$default_se
fi
done
fi
cat << EOF > $outfile
dn: GlueSiteUniqueID=$SITE_NAME
GlueSiteUniqueID: $SITE_NAME
GlueSiteName: $SITE_NAME
GlueSiteDescription: LCG Site
GlueSiteUserSupportContact: mailto: $SITE_EMAIL
GlueSiteSysAdminContact: mailto: $SITE_EMAIL
GlueSiteSecurityContact: mailto: $SITE_EMAIL
GlueSiteLocation: $SITE_LOC
GlueSiteLatitude: $SITE_LAT
GlueSiteLongitude: $SITE_LONG
GlueSiteWeb: $SITE_WEB
GlueSiteSponsor: none
GlueSiteOtherInfo: $SITE_TIER
GlueSiteOtherInfo: $SITE_SUPPORT_SITE
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueSite.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-Site.ldif
# GlueCluster
requires JOB_MANAGER CE_BATCH_SYS VOS QUEUES CE_CPU_MODEL \
CE_CPU_VENDOR CE_CPU_SPEED CE_OS CE_OS_RELEASE CE_MINPHYSMEM \
CE_MINVIRTMEM CE_SMPSIZE CE_SI00 CE_SF00 CE_OUTBOUNDIP CE_INBOUNDIP \
CE_RUNTIMEENV
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-cluster.conf
for VO in $VOS; do
dir=${INSTALL_ROOT}/edg/var/info/$VO
mkdir -p $dir
f=$dir/$VO.list
[ -f $f ] || touch $f
# work out the sgm user for this VO
sgmuser=`users_getsgmuser $VO`
sgmgroup=`id -g $sgmuser`
chown -R ${sgmuser}:${sgmgroup} $dir
chmod -R go-w $dir
done
cat <<EOF > $outfile
dn: GlueClusterUniqueID=${CE_HOST}
GlueClusterName: ${CE_HOST}
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
GlueInformationServiceURL: ldap://`hostname -f`:2135/mds-vo-name=local,o=grid
EOF
for QUEUE in $QUEUES; do
echo "GlueClusterService: ${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE" >> $outfile
done
for QUEUE in $QUEUES; do
echo "GlueForeignKey:" \
"GlueCEUniqueID=${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE" >> $outfile
done
cat << EOF >> $outfile
dn: GlueSubClusterUniqueID=${CE_HOST}, GlueClusterUniqueID=${CE_HOST}
GlueChunkKey: GlueClusterUniqueID=${CE_HOST}
GlueHostArchitectureSMPSize: $CE_SMPSIZE
GlueHostBenchmarkSF00: $CE_SF00
GlueHostBenchmarkSI00: $CE_SI00
GlueHostMainMemoryRAMSize: $CE_MINPHYSMEM
GlueHostMainMemoryVirtualSize: $CE_MINVIRTMEM
GlueHostNetworkAdapterInboundIP: $CE_INBOUNDIP
GlueHostNetworkAdapterOutboundIP: $CE_OUTBOUNDIP
GlueHostOperatingSystemName: $CE_OS
GlueHostOperatingSystemRelease: $CE_OS_RELEASE
GlueHostOperatingSystemVersion: 3
GlueHostProcessorClockSpeed: $CE_CPU_SPEED
GlueHostProcessorModel: $CE_CPU_MODEL
GlueHostProcessorVendor: $CE_CPU_VENDOR
GlueSubClusterName: ${CE_HOST}
GlueSubClusterPhysicalCPUs: 0
GlueSubClusterLogicalCPUs: 0
GlueSubClusterTmpDir: /tmp
GlueSubClusterWNTmpDir: /tmp
GlueInformationServiceURL: ldap://`hostname -f`:2135/mds-vo-name=local,o=grid
EOF
for x in $CE_RUNTIMEENV; do
echo "GlueHostApplicationSoftwareRunTimeEnvironment: $x" >> $outfile
done
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueCluster.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-Cluster.ldif
# GlueCE
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-ce.conf
cat /dev/null > $outfile
for QUEUE in $QUEUES; do
cat <<EOF >> $outfile
dn: GlueCEUniqueID=${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE
GlueCEHostingCluster: ${CE_HOST}
GlueCEName: $QUEUE
GlueCEInfoGatekeeperPort: 2119
GlueCEInfoHostName: ${CE_HOST}
GlueCEInfoLRMSType: $CE_BATCH_SYS
GlueCEInfoLRMSVersion: not defined
GlueCEInfoTotalCPUs: 0
GlueCEInfoJobManager: ${JOB_MANAGER}
GlueCEInfoContactString: ${CE_HOST}:2119/jobmanager-${JOB_MANAGER}-${QUEUE}
GlueCEInfoApplicationDir: ${VO_SW_DIR}
GlueCEInfoDataDir: ${CE_DATADIR:-unset}
GlueCEInfoDefaultSE: $default_se
GlueCEStateEstimatedResponseTime: 0
GlueCEStateFreeCPUs: 0
GlueCEStateRunningJobs: 0
GlueCEStateStatus: Production
GlueCEStateTotalJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateWorstResponseTime: 0
GlueCEStateFreeJobSlots: 0
GlueCEPolicyMaxCPUTime: 0
GlueCEPolicyMaxRunningJobs: 0
GlueCEPolicyMaxTotalJobs: 0
GlueCEPolicyMaxWallClockTime: 0
GlueCEPolicyPriority: 1
GlueCEPolicyAssignedJobSlots: 0
GlueForeignKey: GlueClusterUniqueID=${CE_HOST}
GlueInformationServiceURL: ldap://`hostname -f`:2135/mds-vo-name=local,o=grid
EOF
for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
for VO_QUEUE in `eval echo '$'VO_${VO}_QUEUES`; do
if [ "${QUEUE}" = "${VO_QUEUE}" ]; then
echo "GlueCEAccessControlBaseRule:" \
"VO:`echo $VO | tr '[:upper:]' '[:lower:]'`" >> $outfile
fi
done
done
for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
for VO_QUEUE in `eval echo '$'VO_${VO}_QUEUES`; do
if [ "${QUEUE}" = "${VO_QUEUE}" ]; then
cat << EOF >> $outfile
dn: GlueVOViewLocalID=`echo $VO | tr '[:upper:]' '[:lower:]'`,\
GlueCEUniqueID=${CE_HOST}:2119/jobmanager-${JOB_MANAGER}-${QUEUE}
GlueCEAccessControlBaseRule: VO:`echo $VO | tr '[:upper:]' '[:lower:]'`
GlueCEStateRunningJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateTotalJobs: 0
GlueCEStateFreeJobSlots: 0
GlueCEStateEstimatedResponseTime: 0
GlueCEStateWorstResponseTime: 0
GlueCEInfoDefaultSE: `eval echo '$'VO_${VO}_DEFAULT_SE`
GlueCEInfoApplicationDir: `eval echo '$'VO_${VO}_SW_DIR`
GlueCEInfoDataDir: ${CE_DATADIR:-unset}
GlueChunkKey: GlueCEUniqueID=${CE_HOST}:2119/jobmanager-${JOB_MANAGER}-${QUEUE}
EOF
fi
done
done
done
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueCE.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-CE.ldif
# GlueCESEBind
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-cesebind.conf
echo "" > $outfile
for QUEUE in $QUEUES
do
echo "dn: GlueCESEBindGroupCEUniqueID=${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE" \
>> $outfile
for se in $SE_LIST
do
echo "GlueCESEBindGroupSEUniqueID: $se" >> $outfile
done
done
for se in $SE_LIST; do
case "$se" in
"$DPM_HOST") accesspoint=$DPMDATA;;
"$DCACHE_ADMIN") accesspoint="/pnfs/`hostname -d`/data";;
*) accesspoint=$CLASSIC_STORAGE_DIR ;;
esac
for QUEUE in $QUEUES; do
cat <<EOF >> $outfile
dn: GlueCESEBindSEUniqueID=$se,\
GlueCESEBindGroupCEUniqueID=${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE
GlueCESEBindCEAccesspoint: $accesspoint
GlueCESEBindCEUniqueID: ${CE_HOST}:2119/jobmanager-$JOB_MANAGER-$QUEUE
GlueCESEBindMountInfo: $accesspoint
GlueCESEBindWeight: 0
EOF
done
done
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueCESEBind.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-CESEBind.ldif
# Set some vars based on the LRMS
case "$CE_BATCH_SYS" in
condor|CONDOR) plugin="${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-condor /opt/condor/bin/ $INSTALL_ROOT/lcg/etc/lcg-info-generic.conf";;
lsf|LSF) plugin="${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-lsf /usr/local/lsf/bin/ $INSTALL_ROOT/lcg/etc/lcg-info-generic.conf";;
pbs|PBS) plugin="${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-pbs /opt/lcg/var/gip/ldif/static-file-CE.ldif ${TORQUE_SERVER}"
vo_max_jobs_cmd="";;
*) plugin="${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-pbs /opt/lcg/var/gip/ldif/static-file-CE.ldif ${TORQUE_SERVER}"
vo_max_jobs_cmd="$INSTALL_ROOT/lcg/libexec/vomaxjobs-maui";;
esac
# Configure the dynamic plugin appropriate for the batch sys
cat << EOF > ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-ce
#!/bin/sh
$plugin
EOF
chmod +x ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-ce
# Configure the ERT plugin
cat << EOF > ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-scheduler-wrapper
#!/bin/sh
${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-scheduler -c ${INSTALL_ROOT}/lcg/etc/lcg-info-dynamic-scheduler.conf
EOF
chmod +x ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-scheduler-wrapper
if ( echo $CE_BATCH_SYS | egrep -qi 'pbs|torque' ); then
cat <<EOF > $INSTALL_ROOT/lcg/etc/lcg-info-dynamic-scheduler.conf
[Main]
static_ldif_file: $INSTALL_ROOT/lcg/var/gip/ldif/static-file-CE.ldif
vomap :
EOF
for vo in $VOS; do
vo_group=`users_getvogroup $vo`
if [ $vo_group ]; then
echo " $vo_group:$vo" >> $INSTALL_ROOT/lcg/etc/lcg-info-dynamic-scheduler.conf
fi
done
cat <<EOF >> $INSTALL_ROOT/lcg/etc/lcg-info-dynamic-scheduler.conf
module_search_path : ../lrms:../ett
[LRMS]
lrms_backend_cmd: $INSTALL_ROOT/lcg/libexec/lrmsinfo-pbs
[Scheduler]
vo_max_jobs_cmd: $vo_max_jobs_cmd
cycle_time : 0
EOF
fi
# Configure the provider for installed software
if [ -f $INSTALL_ROOT/lcg/libexec/lcg-info-provider-software ]; then
cat <<EOF > $INSTALL_ROOT/lcg/var/gip/provider/lcg-info-provider-software-wrapper
#!/bin/sh
$INSTALL_ROOT/lcg/libexec/lcg-info-provider-software -p $INSTALL_ROOT/edg/var/info -c $CE_HOST
EOF
chmod +x $INSTALL_ROOT/lcg/var/gip/provider/lcg-info-provider-software-wrapper
fi
fi #endif for CE_HOST
if [ "$GRIDICE_SERVER_HOST" = "`hostname -f`" ]; then
requires VOS SITE_NAME SITE_EMAIL
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-gridice.conf
cat <<EOF > $outfile
dn: GlueServiceUniqueID=${GRIDICE_SERVER_HOST}:2136
GlueServiceName: ${SITE_NAME}-gridice
GlueServiceType: gridice
GlueServiceVersion: 1.1.0
GlueServiceEndpoint: ldap://${GRIDICE_SERVER_HOST}:2136/mds-vo-name=local,o=grid
GlueServiceURI: unset
GlueServiceAccessPointURL: not_used
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
for VO in $VOS; do
echo "GlueServiceAccessControlRule: $VO" >> $outfile
echo "GlueServiceOwner: $VO" >> $outfile
done
FMON='--fmon=yes'
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueService.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-GRIDICE.ldif
fi #endif for GRIDICE_SERVER_HOST
if ( echo "${NODE_TYPE_LIST}" | grep -w PX > /dev/null ); then
requires GRID_TRUSTED_BROKERS SITE_EMAIL SITE_NAME
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-px.conf
cat << EOF > $outfile
dn: GlueServiceUniqueID=${PX_HOST}:7512
GlueServiceName: ${SITE_NAME}-myproxy
GlueServiceType: myproxy
GlueServiceVersion: 1.1.0
GlueServiceEndpoint: ${PX_HOST}:7512
GlueServiceURI: unset
GlueServiceAccessPointURL: myproxy://${PX_HOST}
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
split_quoted_variable $GRID_TRUSTED_BROKERS | while read x; do
echo "GlueServiceAccessControlRule: $x" >> $outfile
done
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueService.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-PX.ldif
fi #endif for PX_HOST
if ( echo "${NODE_TYPE_LIST}" | grep -w RB > /dev/null ); then
requires VOS SITE_EMAIL SITE_NAME
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-rb.conf
cat <<EOF > $outfile
dn: GlueServiceUniqueID=${RB_HOST}:7772
GlueServiceName: ${SITE_NAME}-rb
GlueServiceType: ResourceBroker
GlueServiceVersion: 1.2.0
GlueServiceEndpoint: ${RB_HOST}:7772
GlueServiceURI: unset
GlueServiceAccessPointURL: not_used
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
for VO in $VOS; do
echo "GlueServiceAccessControlRule: $VO" >> $outfile
echo "GlueServiceOwner: $VO" >> $outfile
done
cat <<EOF >> $outfile
dn: GlueServiceDataKey=HeldJobs,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: HeldJobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772
dn: GlueServiceDataKey=IdleJobs,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: IdleJobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772
dn: GlueServiceDataKey=JobController,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: JobController
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772
dn: GlueServiceDataKey=Jobs,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: Jobs
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772
dn: GlueServiceDataKey=LogMonitor,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: LogMonitor
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772
dn: GlueServiceDataKey=RunningJobs,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: RunningJobs
GlueServiceDataValue: 14
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772
dn: GlueServiceDataKey=WorkloadManager,GlueServiceUniqueID=gram://${RB_HOST}:7772
GlueServiceDataKey: WorkloadManager
GlueServiceDataValue: 0
GlueChunkKey: GlueServiceUniqueID=gram://${RB_HOST}:7772
EOF
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueService.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-RB.ldif
fi #endif for RB_HOST
if ( echo "${NODE_TYPE_LIST}" | grep '\<LFC' > /dev/null ); then
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-lfc.conf
cat /dev/null > $outfile
requires VOS SITE_EMAIL SITE_NAME BDII_HOST LFC_HOST
if [ "$LFC_LOCAL" ]; then
lfc_local=$LFC_LOCAL
else
# populate lfc_local with the VOS which are not set to be central
unset lfc_local
for i in $VOS; do
if ( ! echo $LFC_CENTRAL | grep -qw $i ); then
lfc_local="$lfc_local $i"
fi
done
fi
if [ "$LFC_CENTRAL" ]; then
cat <<EOF >> $outfile
dn: GlueServiceUniqueID=http://${LFC_HOST}:8085/
GlueServiceName: ${SITE_NAME}-lfc-dli
GlueServiceType: data-location-interface
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: http://${LFC_HOST}:8085/
GlueServiceURI: http://${LFC_HOST}:8085/
GlueServiceAccessPointURL: http://${LFC_HOST}:8085/
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
for VO in $LFC_CENTRAL; do
echo "GlueServiceOwner: $VO" >> $outfile
echo "GlueServiceAccessControlRule: $VO" >> $outfile
done
echo >> $outfile
cat <<EOF >> $outfile
dn: GlueServiceUniqueID=${LFC_HOST}
GlueServiceName: ${SITE_NAME}-lfc
GlueServiceType: lcg-file-catalog
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: ${LFC_HOST}
GlueServiceURI: ${LFC_HOST}
GlueServiceAccessPointURL: ${LFC_HOST}
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
for VO in $LFC_CENTRAL; do
echo "GlueServiceOwner: $VO" >> $outfile
echo "GlueServiceAccessControlRule: $VO" >> $outfile
done
echo >> $outfile
fi
if [ "$lfc_local" ]; then
cat <<EOF >> $outfile
dn: GlueServiceUniqueID=http://${LFC_HOST}:8085/,o=local
GlueServiceName: ${SITE_NAME}-lfc-dli
GlueServiceType: local-data-location-interface
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: http://${LFC_HOST}:8085/
GlueServiceURI: http://${LFC_HOST}:8085/
GlueServiceAccessPointURL: http://${LFC_HOST}:8085/
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
for VO in $lfc_local; do
echo "GlueServiceOwner: $VO" >> $outfile
echo "GlueServiceAccessControlRule: $VO" >> $outfile
done
echo >> $outfile
cat <<EOF >> $outfile
dn: GlueServiceUniqueID=${LFC_HOST},o=local
GlueServiceName: ${SITE_NAME}-lfc
GlueServiceType: lcg-local-file-catalog
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: ${LFC_HOST}
GlueServiceURI: ${LFC_HOST}
GlueServiceAccessPointURL: ${LFC_HOST}
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
for VO in $lfc_local; do
echo "GlueServiceOwner: $VO" >> $outfile
echo "GlueServiceAccessControlRule: $VO" >> $outfile
done
fi
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueService.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-LFC.ldif
fi # end of LFC
if ( echo "${NODE_TYPE_LIST}" | egrep -q 'dcache|dpm_(mysql|oracle)' ); then
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-dse.conf
cat <<EOF > $outfile
dn: GlueServiceUniqueID=httpg://${se_host}:8443/srm/managerv1
GlueServiceName: ${SITE_NAME}-srm
GlueServiceType: srm_v1
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: httpg://${se_host}:8443/srm/managerv1
GlueServiceURI: httpg://${se_host}:8443/srm/managerv1
GlueServiceAccessPointURL: httpg://${se_host}:8443/srm/managerv1
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
for VO in $VOS; do
echo "GlueServiceAccessControlRule: $VO" >> $outfile
done
cat <<EOF >> $outfile
GlueServiceInformationServiceURL: \
MDS2GRIS:ldap://${BDII_HOST}:2170/mds-vo-name=${SITE_NAME},o=grid
GlueServiceStatus: OK
EOF
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueService.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-dSE.ldif
fi # end of dcache,dpm
if ( echo "${NODE_TYPE_LIST}" | egrep -q 'SE_dpm_(mysql|oracle)' ); then
# Install dynamic script pointing to gip plugin
cat << EOF > ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-se
#! /bin/sh
${INSTALL_ROOT}/lcg/libexec/lcg-info-dynamic-dpm ${INSTALL_ROOT}/lcg/var/gip/ldif/static-file-SE.ldif
EOF
chmod +x ${INSTALL_ROOT}/lcg/var/gip/plugin/lcg-info-dynamic-se
fi # end of dpm
if ( echo "${NODE_TYPE_LIST}" | grep '\<SE' > /dev/null ); then
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-se.conf
# dynamic_script points to the script generated by config_info_dynamic_se<se_type>
# echo "">> $outfile
# echo "dynamic_script=${INSTALL_ROOT}/lcg/libexec5A/lcg-info-dynamic-se" >> $outfile
# echo >> $outfile # Empty line to separate it from published info
cat <<EOF > $outfile
dn: GlueSEUniqueID=${se_host}
GlueSEName: $SITE_NAME:${se_type}
GlueSEPort: 2811
GlueSESizeTotal: 0
GlueSESizeFree: 0
GlueSEArchitecture: multidisk
GlueInformationServiceURL: ldap://`hostname -f`:2135/mds-vo-name=local,o=grid
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
dn: GlueSEAccessProtocolLocalID=gsiftp, GlueSEUniqueID=${se_host}
GlueSEAccessProtocolType: gsiftp
GlueSEAccessProtocolEndpoint: gsiftp://${se_host}
GlueSEAccessProtocolCapability: file transfer
GlueSEAccessProtocolVersion: 1.0.0
GlueSEAccessProtocolPort: 2811
GlueSEAccessProtocolSupportedSecurity: GSI
GlueChunkKey: GlueSEUniqueID=${se_host}
dn: GlueSEAccessProtocolLocalID=rfio, GlueSEUniqueID=${se_host}
GlueSEAccessProtocolType: rfio
GlueSEAccessProtocolEndpoint:
GlueSEAccessProtocolCapability:
GlueSEAccessProtocolVersion: 1.0.0
GlueSEAccessProtocolPort: 5001
GlueSEAccessProtocolSupportedSecurity: RFIO
GlueChunkKey: GlueSEUniqueID=${se_host}
dn: GlueSEControlProtocolLocalID=$control_protocol, GlueSEUniqueID=${se_host}
GlueSEControlProtocolType: $control_protocol
GlueSEControlProtocolEndpoint: $control_endpoint
GlueSEControlProtocolCapability:
GlueSEControlProtocolVersion: 1.0.0
GlueChunkKey: GlueSEUniqueID=${se_host}
EOF
for VO in $VOS; do
if ( echo "${NODE_TYPE_LIST}" | grep SE_dpm > /dev/null ); then
storage_path="/dpm/`hostname -d`/home/${VO}"
storage_root="${VO}:${storage_path}"
elif ( echo "${NODE_TYPE_LIST}" | grep SE_dcache > /dev/null ); then
storage_path="/pnfs/`hostname -d`/data/${VO}"
storage_root="${VO}:${storage_path}"
else
storage_path=$( eval echo '$'VO_`echo ${VO} | tr '[:lower:]' '[:upper:]'`_STORAGE_DIR )
storage_root="${VO}:${storage_path#${CLASSIC_STORAGE_DIR}}"
fi
cat <<EOF >> $outfile
dn: GlueSALocalID=$VO,GlueSEUniqueID=${se_host}
GlueSARoot: $storage_root
GlueSAPath: $storage_path
GlueSAType: permanent
GlueSAPolicyMaxFileSize: 10000
GlueSAPolicyMinFileSize: 1
GlueSAPolicyMaxData: 100
GlueSAPolicyMaxNumFiles: 10
GlueSAPolicyMaxPinDuration: 10
GlueSAPolicyQuota: 0
GlueSAPolicyFileLifeTime: permanent
GlueSAStateAvailableSpace: 1
GlueSAStateUsedSpace: 1
GlueSAAccessControlBaseRule: $VO
GlueChunkKey: GlueSEUniqueID=${se_host}
EOF
done
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueSE.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-SE.ldif
fi #endif for SE_HOST
if ( echo "${NODE_TYPE_LIST}" | grep -w VOBOX > /dev/null ); then
outfile=$INSTALL_ROOT/lcg/var/gip/lcg-info-static-vobox.conf
for x in VOS SITE_EMAIL SITE_NAME VOBOX_PORT; do
if [ "x`eval echo '$'$x`" = "x" ]; then
echo "\$$x not set"
return 1
fi
done
for VO in $VOS; do
dir=${INSTALL_ROOT}/edg/var/info/$VO
mkdir -p $dir
f=$dir/$VO.list
[ -f $f ] || touch $f
# work out the sgm user for this VO
sgmuser=`users_getsgmuser $VO`
sgmgroup=`id -g $sgmuser`
chown -R ${sgmuser}:${sgmgroup} $dir
chmod -R go-w $dir
done
cat <<EOF > $outfile
dn: GlueServiceUniqueID=gsissh://${VOBOX_HOST}:${VOBOX_PORT}
GlueServiceName: ${SITE_NAME}-vobox
GlueServiceType: VOBOX
GlueServiceVersion: 1.0.0
GlueServiceEndpoint: gsissh://${VOBOX_HOST}:${VOBOX_PORT}
GlueServiceURI: unset
GlueServiceAccessPointURL: gsissh://${VOBOX_HOST}:${VOBOX_PORT}
GlueServiceStatus: OK
GlueServiceStatusInfo: No Problems
GlueServiceWSDL: unset
GlueServiceSemantics: unset
GlueServiceStartTime: 1970-01-01T00:00:00Z
GlueServiceOwner: LCG
GlueForeignKey: GlueSiteUniqueID=${SITE_NAME}
EOF
for VO in $VOS; do
echo "GlueServiceAccessControlRule: $VO" >> $outfile
done
echo >> $outfile
$INSTALL_ROOT/lcg/sbin/lcg-info-static-create -c $outfile -t \
$INSTALL_ROOT/lcg/etc/GlueService.template > \
$INSTALL_ROOT/lcg/var/gip/ldif/static-file-VOBOX.ldif
fi #endif for VOBOX_HOST
cat << EOT > $INSTALL_ROOT/globus/libexec/edg.info
#!/bin/bash
#
# info-globus-ldif.sh
#
#Configures information providers for MDS
#
cat << EOF
dn: Mds-Vo-name=local,o=grid
objectclass: GlobusTop
objectclass: GlobusActiveObject
objectclass: GlobusActiveSearch
type: exec
path: $INSTALL_ROOT/lcg/libexec/
base: lcg-info-wrapper
args:
cachetime: 60
timelimit: 20
sizelimit: 250
EOF
EOT
chmod a+x $INSTALL_ROOT/globus/libexec/edg.info
if [ ! -d "$INSTALL_ROOT/lcg/libexec" ]; then
mkdir -p $INSTALL_ROOT/lcg/libexec
fi
cat << EOF > $INSTALL_ROOT/lcg/libexec/lcg-info-wrapper
#!/bin/sh
export LANG=C
$INSTALL_ROOT/lcg/bin/lcg-info-generic $INSTALL_ROOT/lcg/etc/lcg-info-generic.conf
EOF
chmod a+x $INSTALL_ROOT/lcg/libexec/lcg-info-wrapper
cat << EOT > $INSTALL_ROOT/globus/libexec/edg.schemalist
#!/bin/bash
cat <<EOF
${INSTALL_ROOT}/globus/etc/openldap/schema/core.schema
${INSTALL_ROOT}/glue/schema/ldap/Glue-CORE.schema
${INSTALL_ROOT}/glue/schema/ldap/Glue-CE.schema
${INSTALL_ROOT}/glue/schema/ldap/Glue-CESEBind.schema
${INSTALL_ROOT}/glue/schema/ldap/Glue-SE.schema
EOF
EOT
chmod a+x $INSTALL_ROOT/globus/libexec/edg.schemalist
# Configure gin
if ( ! echo "${NODE_TYPE_LIST}" | egrep -q '^UI$|^WN[A-Za-z_]*$' ); then
if [ ! -d ${INSTALL_ROOT}/glite/var/rgma/.certs ]; then
mkdir -p ${INSTALL_ROOT}/glite/var/rgma/.certs
fi
cp -pf /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem \
${INSTALL_ROOT}/glite/var/rgma/.certs
chown rgma:rgma ${INSTALL_ROOT}/glite/var/rgma/.certs/host*
(
egrep -v 'sslCertFile|sslKey' \
${INSTALL_ROOT}/glite/etc/rgma/ClientAuthentication.props
echo "sslCertFile=${INSTALL_ROOT}/glite/var/rgma/.certs/hostcert.pem"
echo "sslKey=${INSTALL_ROOT}/glite/var/rgma/.certs/hostkey.pem"
) > /tmp/props.$$
mv -f /tmp/props.$$ ${INSTALL_ROOT}/glite/etc/rgma/ClientAuthentication.props
#Turn on Gin for the GIP and maybe FMON
export RGMA_HOME=${INSTALL_ROOT}/glite
${RGMA_HOME}/bin/rgma-gin-config --gip=yes ${FMON}
/sbin/chkconfig rgma-gin on
/etc/rc.d/init.d/rgma-gin restart 2>${YAIM_LOG}
fi
return 0
}
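Once config_globus (below) has restarted the MDS, the static information assembled here can be queried through the local GRIS; a minimal sketch, assuming OpenLDAP's ldapsearch client and the standard GRIS port 2135:
> /opt/lcg/libexec/lcg-info-wrapper
> ldapsearch -x -H ldap://<CE_HOST>:2135 -b "mds-vo-name=local,o=grid"
The first command runs the information provider directly and should print the generated LDIF; the second checks that the same data is being served by the GRIS.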
config_globus(){
# $Id: config_globus,v 1.34 2006/01/06 13:45:51 maart Exp $
requires CE_HOST PX_HOST RB_HOST SITE_NAME
GLOBUS_MDS=no
GLOBUS_GRIDFTP=no
GLOBUS_GATEKEEPER=no
if ( echo "${NODE_TYPE_LIST}" | grep '\<'CE > /dev/null ); then
GLOBUS_MDS=yes
GLOBUS_GRIDFTP=yes
GLOBUS_GATEKEEPER=yes
fi
if ( echo "${NODE_TYPE_LIST}" | grep VOBOX > /dev/null ); then
GLOBUS_MDS=yes
if ! ( echo "${NODE_TYPE_LIST}" | grep '\<'RB > /dev/null ); then
GLOBUS_GRIDFTP=yes
fi
fi
if ( echo "${NODE_TYPE_LIST}" | grep '\<'SE > /dev/null ); then
GLOBUS_MDS=yes
GLOBUS_GRIDFTP=yes
fi
# DPM has its own ftp server
if ( echo "${NODE_TYPE_LIST}" | grep SE_dpm > /dev/null ); then
GLOBUS_GRIDFTP=no
fi
if ( echo "${NODE_TYPE_LIST}" | grep '\<'PX > /dev/null ); then
GLOBUS_MDS=yes
fi
if ( echo "${NODE_TYPE_LIST}" | grep '\<'RB > /dev/null ); then
GLOBUS_MDS=yes
fi
if ( echo "${NODE_TYPE_LIST}" | grep '\<'LFC > /dev/null ); then
GLOBUS_MDS=yes
fi
if ( echo "${NODE_TYPE_LIST}" | grep SE_dpm > /dev/null ); then
X509_DPM1="x509_user_cert=/home/edginfo/.globus/usercert.pem"
X509_DPM2="x509_user_key=/home/edginfo/.globus/userkey.pem"
else
X509_DPM1=""
X509_DPM2=""
fi
if [ "$GRIDICE_SERVER_HOST" = "`hostname -f`" ]; then
GLOBUS_MDS=yes
fi
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
cat <<EOF > /etc/globus.conf
########################################################################
#
# Globus configuraton.
#
########################################################################
[common]
GLOBUS_LOCATION=${INSTALL_ROOT}/globus
globus_flavor_name=gcc32dbg
x509_user_cert=/etc/grid-security/hostcert.pem
x509_user_key=/etc/grid-security/hostkey.pem
gridmap=/etc/grid-security/grid-mapfile
gridmapdir=/etc/grid-security/gridmapdir/
EOF
if [ "$GLOBUS_MDS" = "yes" ]; then
cat <<EOF >> /etc/globus.conf
[mds]
globus_flavor_name=gcc32dbgpthr
user=edginfo
$X509_DPM1
$X509_DPM2
[mds/gris/provider/edg]
EOF
cat <<EOF >> /etc/globus.conf
[mds/gris/registration/site]
regname=$SITE_NAME
reghn=$CE_HOST
EOF
else
echo "[mds]" >> /etc/globus.conf
fi
if [ "$GLOBUS_GRIDFTP" = "yes" ]; then
cat <<EOF >> /etc/globus.conf
[gridftp]
log=/var/log/globus-gridftp.log
EOF
cat <<EOF > /etc/logrotate.d/gridftp
/var/log/globus-gridftp.log /var/log/gridftp-lcas_lcmaps.log {
missingok
daily
compress
rotate 31
create 0644 root root
sharedscripts
}
EOF
else
echo "[gridftp]" >> /etc/globus.conf
fi
if [ "$GLOBUS_GATEKEEPER" = "yes" ]; then
if [ "x`grep globus-gatekeeper /etc/services`" = "x" ]; then
echo "globus-gatekeeper 2119/tcp" >> /etc/services
fi
cat <<EOF > /etc/logrotate.d/globus-gatekeeper
/var/log/globus-gatekeeper.log {
nocompress
copy
rotate 1
prerotate
killall -s USR1 -e /opt/edg/sbin/edg-gatekeeper
endscript
postrotate
find /var/log/globus-gatekeeper.log.20????????????.*[0-9] -mtime +7 -exec gzip {} \;
endscript
}
EOF
cat <<EOF >> /etc/globus.conf
[gatekeeper]
default_jobmanager=fork
job_manager_path=\$GLOBUS_LOCATION/libexec
globus_gatekeeper=${INSTALL_ROOT}/edg/sbin/edg-gatekeeper
extra_options=\"-lcas_db_file lcas.db -lcas_etc_dir ${INSTALL_ROOT}/edg/etc/lcas/ -lcasmod_dir \
${INSTALL_ROOT}/edg/lib/lcas/ -lcmaps_db_file lcmaps.db -lcmaps_etc_dir ${INSTALL_ROOT}/edg/etc/lcmaps -lcmapsmod_dir ${INSTALL_ROOT}/edg/lib/lcmaps\"
logfile=/var/log/globus-gatekeeper.log
jobmanagers="fork ${JOB_MANAGER}"
[gatekeeper/fork]
type=fork
job_manager=globus-job-manager
[gatekeeper/${JOB_MANAGER}]
type=${JOB_MANAGER}
EOF
else
cat <<EOF >> /etc/globus.conf
[gatekeeper]
default_jobmanager=fork
job_manager_path=${GLOBUS_LOCATION}/libexec
jobmanagers="fork "
[gatekeeper/fork]
type=fork
job_manager=globus-job-manager
EOF
fi
$INSTALL_ROOT/globus/sbin/globus-initialization.sh 2>> $YAIM_LOG
if [ "$GLOBUS_MDS" = "yes" ]; then
/sbin/chkconfig globus-mds on
/sbin/service globus-mds stop
/sbin/service globus-mds start
fi
if [ "$GLOBUS_GATEKEEPER" = "yes" ]; then
/sbin/chkconfig globus-gatekeeper on
/sbin/service globus-gatekeeper stop
/sbin/service globus-gatekeeper start
fi
if [ "$GLOBUS_GRIDFTP" = "yes" ]; then
/sbin/chkconfig globus-gridftp on
/sbin/service globus-gridftp stop
/sbin/service globus-gridftp start
/sbin/chkconfig lcg-mon-gridftp on
/etc/rc.d/init.d/lcg-mon-gridftp restart
fi
return 0
}
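A quick post-configuration check (ports taken from the settings above: 2119 for the gatekeeper, 2135 for the MDS, 2811 for GridFTP; only the daemons enabled for the node type will be listening):
> netstat -tln | egrep ':2119|:2135|:2811'
> tail /var/log/globus-gatekeeper.log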
config_lcgenv() {
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
if !( echo "${NODE_TYPE_LIST}" | grep TAR > /dev/null ); then
LCG_ENV_LOC=/etc/profile.d
else
LCG_ENV_LOC=${INSTALL_ROOT}/etc/env.d
fi
requires BDII_HOST SITE_NAME CE_HOST
if ( ! echo "${NODE_TYPE_LIST}" | grep -q UI ); then
requires VOS VO_<VO>_SW_DIR SE_LIST
fi
default_se="${SE_LIST%% *}"
if [ "$default_se" ]; then
for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
if [ "x`eval echo '$'VO_${VO}_DEFAULT_SE`" = "x" ]; then
eval VO_${VO}_DEFAULT_SE=$default_se
fi
done
fi
########## sh ##########
cat << EOF > ${LCG_ENV_LOC}/lcgenv.sh
#!/bin/sh
if test "x\${LCG_ENV_SET+x}" = x; then
export LCG_GFAL_INFOSYS=$BDII_HOST:2170
EOF
if [ "$PX_HOST" ]; then
echo "export MYPROXY_SERVER=$PX_HOST" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
if ( echo "${NODE_TYPE_LIST}" | egrep -q 'WN|VOBOX' ); then
if [ "$SITE_NAME" ]; then
echo "export SITE_NAME=$SITE_NAME" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
if [ "$CE_HOST" ]; then
echo "export SITE_GIIS_URL=$CE_HOST" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
fi
if [ -d ${INSTALL_ROOT}/d-cache/srm/bin ]; then
echo export PATH="\${PATH}:${INSTALL_ROOT}/d-cache/srm/bin:${INSTALL_ROOT}/d-cache/dcap/bin" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
if [ -d ${INSTALL_ROOT}/d-cache/dcap/lib ]; then
echo export LD_LIBRARY_PATH=\${LD_LIBRARY_PATH}:${INSTALL_ROOT}/d-cache/dcap/lib >> ${LCG_ENV_LOC}/lcgenv.sh
fi
if [ -d ${INSTALL_ROOT}/d-cache/srm ]; then
echo export SRM_PATH=${INSTALL_ROOT}/d-cache/srm >> ${LCG_ENV_LOC}/lcgenv.sh
fi
if [ "$EDG_WL_SCRATCH" ]; then
echo "export EDG_WL_SCRATCH=$EDG_WL_SCRATCH" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
default_se=`eval echo '$'VO_${VO}_DEFAULT_SE`
if [ "$default_se" ]; then
echo "export VO_${VO}_DEFAULT_SE=$default_se" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
if ( ! echo "${NODE_TYPE_LIST}" | grep -q UI ); then
sw_dir=`eval echo '$'VO_${VO}_SW_DIR`
if [ "$sw_dir" ]; then
echo "export VO_${VO}_SW_DIR=$sw_dir" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
fi
done
if [ "$VOBOX_HOST" ]; then
requires GSSKLOG
if [ "${GSSKLOG}x" == "yesx" ]; then
requires GSSKLOG_SERVER
echo "export GSSKLOG_SERVER=$GSSKLOG_SERVER" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
fi
if [ "${DPM_HOST}" ]; then
echo "export DPNS_HOST=${DPM_HOST}" >> ${LCG_ENV_LOC}/lcgenv.sh
echo "export DPM_HOST=${DPM_HOST}" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
if [ "$GLOBUS_TCP_PORT_RANGE" ]; then
echo "export MYPROXY_TCP_PORT_RANGE=\"${GLOBUS_TCP_PORT_RANGE/ /,}\"" >> ${LCG_ENV_LOC}/lcgenv.sh
fi
if ( echo $NODE_TYPE_LIST | egrep -q UI ); then
cat << EOF >> ${LCG_ENV_LOC}/lcgenv.sh
if [ "x\$X509_USER_PROXY" = "x" ]; then
export X509_USER_PROXY=/tmp/x509up_u\$(id -u)
fi
EOF
fi
echo fi >> ${LCG_ENV_LOC}/lcgenv.sh
########## sh ##########
########## csh ##########
cat << EOF > ${LCG_ENV_LOC}/lcgenv.csh
#!/bin/csh
if ( ! \$?LCG_ENV_SET ) then
setenv LCG_GFAL_INFOSYS $BDII_HOST:2170
EOF
if [ "$PX_HOST" ]; then
echo "setenv MYPROXY_SERVER $PX_HOST" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
if ( echo "${NODE_TYPE_LIST}" | egrep -q 'WN|VOBOX' ); then
if [ "$SITE_NAME" ]; then
echo "setenv SITE_NAME $SITE_NAME" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
if [ "$CE_HOST" ]; then
echo "setenv SITE_GIIS_URL $CE_HOST" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
fi
if [ -d ${INSTALL_ROOT}/d-cache/srm/bin ]; then
echo setenv PATH "\${PATH}:${INSTALL_ROOT}/d-cache/srm/bin:${INSTALL_ROOT}/d-cache/dcap/bin" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
if [ -d ${INSTALL_ROOT}/d-cache/dcap/lib ]; then
echo setenv LD_LIBRARY_PATH \${LD_LIBRARY_PATH}:${INSTALL_ROOT}/d-cache/dcap/lib >> ${LCG_ENV_LOC}/lcgenv.csh
fi
if [ -d ${INSTALL_ROOT}/d-cache/srm ]; then
echo setenv SRM_PATH ${INSTALL_ROOT}/d-cache/srm >> ${LCG_ENV_LOC}/lcgenv.csh
fi
if [ "$EDG_WL_SCRATCH" ]; then
echo "setenv EDG_WL_SCRATCH $EDG_WL_SCRATCH" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
for VO in `echo $VOS | tr '[:lower:]' '[:upper:]'`; do
default_se=`eval echo '$'VO_${VO}_DEFAULT_SE`
if [ "$default_se" ]; then
echo "setenv VO_${VO}_DEFAULT_SE $default_se" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
if ( ! echo "${NODE_TYPE_LIST}" | grep -q UI ); then
sw_dir=`eval echo '$'VO_${VO}_SW_DIR`
if [ "$sw_dir" ]; then
echo "setenv VO_${VO}_SW_DIR $sw_dir" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
fi
done
if [ "$VOBOX_HOST" ]; then
requires GSSKLOG
if [ "${GSSKLOG}x" == "yesx" ]; then
requires GSSKLOG_SERVER
echo "setenv GSSKLOG_SERVER $GSSKLOG_SERVER" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
fi
if [ "${DPM_HOST}" ]; then
echo "setenv DPNS_HOST ${DPM_HOST}" >> ${LCG_ENV_LOC}/lcgenv.csh
echo "setenv DPM_HOST ${DPM_HOST}" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
if [ "$GLOBUS_TCP_PORT_RANGE" ]; then
echo "setenv MYPROXY_TCP_PORT_RANGE \"${GLOBUS_TCP_PORT_RANGE/ /,}\"" >> ${LCG_ENV_LOC}/lcgenv.csh
fi
if ( echo $NODE_TYPE_LIST | egrep -q UI ); then
cat << EOF >> ${LCG_ENV_LOC}/lcgenv.csh
if ( ! \$?X509_USER_PROXY ) then
setenv X509_USER_PROXY /tmp/x509up_u\`id -u\`
endif
EOF
fi
echo endif >> ${LCG_ENV_LOC}/lcgenv.csh
########## csh ##########
chmod a+xr ${LCG_ENV_LOC}/lcgenv.csh
chmod a+xr ${LCG_ENV_LOC}/lcgenv.sh
return 0
}
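The result can be verified in a fresh shell (assuming the non-TAR case, where the scripts land in /etc/profile.d):
> . /etc/profile.d/lcgenv.sh
> echo $LCG_GFAL_INFOSYS
> env | grep DEFAULT_SE
The last command should show one VO_<VO>_DEFAULT_SE variable per configured VO.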
function config_DPM_upgrade () {
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
requires DPMMGR DPMUSER_PWD MY_DOMAIN DPM_HOST
schv=`mysql --skip-column-names -s -u $DPMMGR --pass=$DPMUSER_PWD -h localhost -e "use dpm_db ; select * from schema_version ;"`
schversion=`echo $schv | awk '{print $1"."$2"."$3}'`
if [ -z "$schversion" ]; then
echo "Cannot find DPM schema version, will not try to upgrade"
return 0
else
echo -n "Mysql database version used: $schversion !"
fi
if [ "x$schversion" = "x2.1.0" ]; then
echo "No need to upgrade the DPM database schema"
else
echo "Upgrading database schema to 2.1.0 !"
cd ${INSTALL_ROOT}/lcg/share/DPM/multiple-domains/
tmpfile=`mktemp`
echo "$DPMUSER_PWD" > $tmpfile
if [ -r ./updateDomainName ]; then
./updateDomainName --domain-name $MY_DOMAIN --db-vendor MySQL --db $DPM_HOST --user $DPMMGR --pwd-file $tmpfile --dpns-db cns_db --dpm-db dpm_db --verbose
else
echo "./updateDomainName does not exist !"
return 1
fi
rm -rf $tmpfile
cd -
mysql -u $DPMMGR --pass=$DPMUSER_PWD -h localhost --database dpm_db < ${INSTALL_ROOT}/lcg/share/DPM/migrate-mysql-schema-to-2-1-0.sql
echo "If may you see the 'Duplicate key name 'G_SURL_IDX'' error message, it is harmless. The indexes are already created."
fi
return 0
}
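The schema check that drives this function can be reproduced by hand, using the same query as above (substitute the DPMMGR account and password configured for the site):
> mysql --skip-column-names -s -u <DPMMGR> --pass=<DPMUSER_PWD> -h localhost -e "use dpm_db ; select * from schema_version ;"
A row reading 2 1 0 corresponds to schema version 2.1.0, in which case no upgrade is needed.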
config_DPM_rfio() {
# $Id: config_DPM_rfio,v 1.13 2006/01/10 12:37:03 okeeble Exp $
## Requires: DPM_HOST
requires DPM_HOST
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
if [ "x`grep rfio /etc/services | grep tcp`" = "x" ]; then
echo "rfio 5001/tcp" >> /etc/services
fi
if [ "x`grep rfio /etc/services | grep udp`" = "x" ]; then
echo "rfio 5001/udp" >> /etc/services
fi
touch /etc/shift.conf
function add_to_shift () {
trust="$1"
host="$2"
line=`grep "^$trust" /etc/shift.conf`
if [ -z "$line" ]; then
echo "$trust $host" >> /etc/shift.conf
elif ( ! echo $line | egrep -q "$host[^.]|$host$" ); then
ex /etc/shift.conf <<EOF
/^$trust/s/$/ $host/
.
wq
EOF
fi
return 0
}
for i in 'RFIOD TRUST' 'RFIOD WTRUST' 'RFIOD RTRUST' 'RFIOD XTRUST' 'RFIOD FTRUST'; do
add_to_shift "$i" ${DPM_HOST%%.*}
add_to_shift "$i" ${DPM_HOST}
done
if ( echo $NODE_TYPE_LIST | grep -qw SE_dpm_mysql ); then
for i in $DPMPOOL_NODES; do
add_to_shift 'DPM TRUST' ${i%%:*}
add_to_shift 'DPM TRUST' ${i%%.*}
add_to_shift 'DPNS TRUST' ${i%%:*}
add_to_shift 'DPNS TRUST' ${i%%.*}
done
fi
grep -v DPNS_HOST /etc/sysconfig/rfiod.templ > /etc/sysconfig/rfiod
echo "export DPNS_HOST=${DPM_HOST}" >> /etc/sysconfig/rfiod
/sbin/service rfiod stop
sleep 1s
/sbin/service rfiod start
/sbin/chkconfig rfiod on
return 0
}
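To confirm the configuration took effect (a minimal check):
> grep rfio /etc/services
> grep '^RFIOD' /etc/shift.conf
> netstat -tln | grep 5001
The first two commands should show the port assignments and trust entries added above; the last should show rfiod listening on port 5001.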
function config_DPM_user () {
requires USERS_CONF
if ( ! id dpmmgr > /dev/null 2>&1 ); then
dpmuid=`awk -F: '$6=="dpm"{print $1}' $USERS_CONF`
dpmgid=`awk -F: '$6=="dpm"{print $3}' $USERS_CONF`
if [ "$dpmuid" -a "$dpmgid" ]; then
groupadd -g $dpmgid dpmmgr 2>/dev/null
useradd -c "DPM Manager" -m -d /home/dpmmgr -s /bin/tcsh -u $dpmuid -g $dpmgid dpmmgr 2>/dev/null
else
echo "Unable to find an entry for the dpmmgr in $USERS_CONF"
echo "Please add an entry with the flag 'dpm'"
return 1
fi
fi
return 0
}
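The function expects USERS_CONF to contain an entry flagged 'dpm'. A hypothetical example line follows (the UID and GID of 201 are purely illustrative); as the awk expressions above show, the fields consumed are the first (UID), the third (GID) and the sixth (flag):
201:dpmmgr:201:dpmmgr::dpm: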
config_DPM_mgr(){
# $Id: config_DPM_mgr,v 1.7 2006/01/10 12:37:03 okeeble Exp $
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
if [ ! -d /etc/grid-security/dpmmgr ]; then
mkdir /etc/grid-security/dpmmgr
fi
cp -pf /etc/grid-security/hostcert.pem /etc/grid-security/dpmmgr
cp -pf /etc/grid-security/hostkey.pem /etc/grid-security/dpmmgr
mv -f /etc/grid-security/dpmmgr/hostcert.pem /etc/grid-security/dpmmgr/dpmcert.pem
mv -f /etc/grid-security/dpmmgr/hostkey.pem /etc/grid-security/dpmmgr/dpmkey.pem
chown -R dpmmgr:dpmmgr /etc/grid-security/dpmmgr
chown root:dpmmgr /etc/grid-security/gridmapdir
chmod 775 /etc/grid-security/gridmapdir
## This is required to allow the special user edginfo
## to use a copy of the host certificate, so that it can
## authenticate and run the dpm-qryconf command to fetch
## global storage information from the DPM server itself.
mkdir -p ~edginfo/.globus
chown edginfo:edginfo ~edginfo/.globus
if [ -f /etc/grid-security/hostkey.pem ]; then
cp -p /etc/grid-security/hos* ~edginfo/.globus/
cd ~edginfo/.globus
mv hostcert.pem usercert.pem
mv hostkey.pem userkey.pem
chown edginfo:edginfo user*
fi
return 0
}
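A quick check of the result (the dpmcert.pem and dpmkey.pem copies should be owned by dpmmgr:dpmmgr, and the usercert.pem and userkey.pem copies by edginfo:edginfo):
> ls -l /etc/grid-security/dpmmgr/
> ls -l ~edginfo/.globus/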
config_DPM_mysql(){
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
# $Id: config_DPM_mysql,v 1.11 2006/01/10 12:37:03 okeeble Exp $
## Requires: MYSQL_PASSWORD DPMUSER_PWD VOS DPMPOOL DPMDATA DPMFSIZE DPM_HOST DPMMGR
requires MYSQL_PASSWORD DPMUSER_PWD VOS DPMPOOL DPMDATA DPMFSIZE DPM_HOST DPMMGR
DPMCONFIG=${INSTALL_ROOT}/lcg/etc/DPMCONFIG
set_mysql_passwd || return 1 # the function uses $MYSQL_PASSWORD
mysql --pass=$MYSQL_PASSWORD --exec "grant all on *.* to '$DPMMGR'@'%' identified by '$DPMUSER_PWD' with grant option"
mysql --pass=$MYSQL_PASSWORD --exec "grant all on *.* to '$DPMMGR'@'localhost' identified by '$DPMUSER_PWD' with grant option"
mysql --pass=$MYSQL_PASSWORD --exec "grant all on *.* to '$DPMMGR'@'$DPM_HOST' identified by '$DPMUSER_PWD' with grant option"
mysql -u $DPMMGR --pass=$DPMUSER_PWD -h localhost < ${INSTALL_ROOT}/lcg/share/DPM/create_dpns_tables_mysql.sql
mysql -u $DPMMGR --pass=$DPMUSER_PWD -h localhost < ${INSTALL_ROOT}/lcg/share/DPM/create_dpm_tables_mysql.sql
cat << EOC > $DPMCONFIG
$DPMMGR/$DPMUSER_PWD@localhost
EOC
chown dpmmgr:dpmmgr $DPMCONFIG
chmod 600 $DPMCONFIG
return 0
}
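To verify that the account and the two schemas were created (cns_db and dpm_db are the database names used by the SQL scripts above):
> mysql -u <DPMMGR> --pass=<DPMUSER_PWD> -h localhost -e "show databases"
Both cns_db and dpm_db should appear in the list.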
function config_DPM_gsiftp () {
requires DPM_HOST
if [ ! -f /etc/sysconfig/dpm-gsiftp.templ ]; then
echo "Cannot find /etc/sysconfig/dpm-gsiftp.templ"
exit 1
fi
sed -e "s/<DPNS_hostname>/${DPM_HOST}/" -e "s/<DPM_hostname>/${DPM_HOST}/" /etc/sysconfig/dpm-gsiftp.templ > /etc/sysconfig/dpm-gsiftp
/sbin/service dpm-gsiftp start
return 0
}
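A minimal check that the substitution and the daemon start succeeded (2811 is the standard GridFTP port, as also published in the information system above):
> grep <DPM_HOST> /etc/sysconfig/dpm-gsiftp
> netstat -tln | grep 2811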
config_sedpm () {
# $Id: config_sedpm,v 1.24 2006/01/10 12:37:03 okeeble Exp $
## Requires: $MY_DOMAIN $VOS $USERS_CONF $DPM_HOST $SITE_EMAIL
requires MY_DOMAIN VOS USERS_CONF DPM_HOST SITE_EMAIL VOS
INSTALL_ROOT=${INSTALL_ROOT:-/opt}
# default INSTALL_ROOT before using it to build DPMCONFIG
DPMCONFIG=${INSTALL_ROOT}/lcg/etc/DPMCONFIG
check_users_conf_format
for x in dpns dpm srmv1 srmv2; do
echo
echo ' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>'
xd=$x;
if [ "$x" = "dpns" ]; then
xd="dpnsdaemon"
fi
grep -v -E 'DPNS_HOST|DPM_HOST|SRMV2CONFIGFILE|DPMCONFIGFILE' /etc/sysconfig/${xd}.templ > /etc/sysconfig/$xd
echo "export DPNS_HOST=${DPM_HOST}" >> /etc/sysconfig/$xd
echo "export DPM_HOST=${DPM_HOST}" >> /etc/sysconfig/$xd
if [ "$x" = "dpns" ]; then
echo NSCONFIGFILE=$DPMCONFIG >> /etc/sysconfig/$xd
else
echo DPMCONFIGFILE=$DPMCONFIG >> /etc/sysconfig/$xd
fi
/sbin/service $xd stop
pid="X"
while [ "x$pid" != "x" ]; do
sleep 1s
pid=`ps h -o pid -C $xd | awk '{print $1}'`
echo " stopping: $xd $pid "
done
echo
echo '=================================='
ps -wedf|grep -E 'dpmm|rfiod'
echo '=================================='
echo
/sbin/service $xd start
/sbin/chkconfig $xd on
sleep 1s
tail -20 /var/log/${x}/log
echo ' >>>>>>>>>>>>>>>>>>>>>>>>>>>>>'
echo
done
export DPNS_HOST=$DPM_HOST
export DPM_HOST=$DPM_HOST
if [ ! -d $DPMDATA ]; then
mkdir -p $DPMDATA
fi
chown dpmmgr:dpmmgr $DPMDATA
${INSTALL_ROOT}/lcg/bin/dpm-addpool --poolname $DPMPOOL --def_filesize $DPMFSIZE
${INSTALL_ROOT}/lcg/bin/dpm-addfs --poolname $DPMPOOL --server $DPM_HOST --fs $DPMDATA
${INSTALL_ROOT}/lcg/bin/dpm-qryconf
if ( ! ${INSTALL_ROOT}/lcg/bin/dpns-ls /dpm > /dev/null 2>&1 ); then
${INSTALL_ROOT}/lcg/bin/dpns-mkdir /dpm
fi
if ( ! ${INSTALL_ROOT}/lcg/bin/dpns-ls /dpm/${MY_DOMAIN} > /dev/null 2>&1 ); then
${INSTALL_ROOT}/lcg/bin/dpns-mkdir -m 755 -p /dpm/${MY_DOMAIN}
fi
if ( ! ${INSTALL_ROOT}/lcg/bin/dpns-ls /dpm/${MY_DOMAIN}/home > /dev/null 2>&1 ); then
${INSTALL_ROOT}/lcg/bin/dpns-mkdir -m 755 -p /dpm/${MY_DOMAIN}/home
fi
${INSTALL_ROOT}/lcg/bin/dpns-chmod 775 /dpm
${INSTALL_ROOT}/lcg/bin/dpns-chmod 775 /dpm/${MY_DOMAIN}
${INSTALL_ROOT}/lcg/bin/dpns-chmod 775 /dpm/${MY_DOMAIN}/home
${INSTALL_ROOT}/lcg/bin/dpns-setacl -m d:u::7,d:g::7,d:o:5 /dpm
${INSTALL_ROOT}/lcg/bin/dpns-setacl -m d:u::7,d:g::7,d:o:5 /dpm/${MY_DOMAIN}
${INSTALL_ROOT}/lcg/bin/dpns-setacl -m d:u::7,d:g::7,d:o:5 /dpm/${MY_DOMAIN}/home
for vo in `echo $VOS | tr '[:upper:]' '[:lower:]'`; do
vo_group=`users_getvogroup $vo`
if ( ! ${INSTALL_ROOT}/lcg/bin/dpns-ls /dpm/${MY_DOMAIN}/home/$vo > /dev/null 2>&1 ); then
${INSTALL_ROOT}/lcg/bin/dpns-mkdir -m 775 /dpm/${MY_DOMAIN}/home/$vo
fi
${INSTALL_ROOT}/lcg/bin/dpns-chmod 775 /dpm/${MY_DOMAIN}/home/$vo
${INSTALL_ROOT}/lcg/bin/dpns-chown root:$vo_group /dpm/${MY_DOMAIN}/home/$vo
${INSTALL_ROOT}/lcg/bin/dpns-setacl -m d:u::7,d:g::7,d:o:5 /dpm/${MY_DOMAIN}/home/$vo
done
# Setup hostproxy for RGMA to run GIP plugin
rgma_user=rgma
job="8 * * * * ${INSTALL_ROOT}/globus/bin/grid-proxy-init -q \
-cert /etc/grid-security/hostcert.pem \
-key /etc/grid-security/hostkey.pem -out ${INSTALL_ROOT}/lcg/hostproxy.\$$ && \
chown $rgma_user ${INSTALL_ROOT}/lcg/hostproxy.\$$ && \
mv -f ${INSTALL_ROOT}/lcg/hostproxy.\$$ ${INSTALL_ROOT}/lcg/hostproxy.$rgma_user"
cron_job rgma-proxy root "$job"
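# The line below strips the cron schedule ("8 * * * * ") from $job
# and pipes the remainder to sh, so the proxy is also generated once
# immediately instead of waiting for the first cron run.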
echo "$job" | sed 's|[^/]*||' | sh
return 0
}
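After the function completes, the pool and the namespace can be inspected with the same tools it uses (assuming <INSTALL_ROOT> is /opt and <MY_DOMAIN> as configured):
> /opt/lcg/bin/dpm-qryconf
> /opt/lcg/bin/dpns-ls -l /dpm/<MY_DOMAIN>/home
dpm-qryconf should list the pool $DPMPOOL with the filesystem $DPMDATA, and dpns-ls should show one directory per configured VO, each group-owned by the corresponding VO group.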