gLite 3.1

glite-SGE_utils - Update to version 3.1.12-0


Date: 06.07.2009
Priority: Normal

Description



Updated yaim lcg-ce

New version of the YAIM module for the lcg CE, containing configuration changes requested by:

  • The WN Working Group. See https://twiki.cern.ch/twiki/bin/view/EGEE/WNWorkingGroup (Key item B)
  • The Installed Capacity Document. See https://twiki.cern.ch/twiki/pub/LCG/WLCGCommonComputingReadinessChallenges/WLCG_GlueSchemaUsage-1.8.pdf (page 13)
It also fixes a series of bugs mainly dealing with:
  • Service provider configuration
  • LDIF file fixes
New YAIM variables
===============

The following variables need to be defined by the system administrators. Examples are distributed in site-info.def in YAIM core, but the variables are already required by the lcg CE configuration functions.
  • CE_OTHERDESCR: This YAIM variable is used to set the GlueHostProcessorOtherDescription attribute. Its value MUST be set to: Cores=<typical number of cores per CPU> [, Benchmark=<value>-HEP-SPEC06], where <typical number of cores per CPU> is the number of cores per CPU of a typical Worker Node in the SubCluster. The Benchmark part MUST be published only if the CPU power of the SubCluster is computed using the HEP-SPEC06 benchmark.
  • CE_CAPABILITY: This YAIM variable is a blank-separated list used to set the GlueCECapability attribute. In particular, site administrators MUST define CPUScalingReferenceSI00=<referenceCPU SI00>, where the reference CPU SI00 is the internal batch scaling factor used to normalize GlueCEMaxCPUTime, expressed in SI00. If no internal scaling is done, this capability MUST still be published, with its value set to the minimum of the corresponding SubClusters' GlueHostBenchmarkSI00. Share=<VO>:<share> is used to express VO-specific shares, if set. If there is no special share, this value MUST NOT be published. <VO> is the VO name and <share> can assume values between 1 and 100 (it represents a percentage). Please note that the sum of the shares over all WLCG VOs MUST be less than or equal to 100. The overall syntax is CPUScalingReferenceSI00=value [Share=vo-name1:value [Share=vo-name2:value [...]]].
  • SE_MOUNT_INFO_LIST: This YAIM variable is used to set the GlueCESEBindMountInfo attribute for each defined SE. The variable is a space-separated list of SE hosts from SE_LIST, each with the export directory of the Storage Element and the mount directory common to the worker nodes of the Computing Element, e.g. SE1:export_dir1,mount_dir1. If an SE from SE_LIST doesn't support the mount concept, don't define anything for that SE in this variable. If this is the case for all the SEs in SE_LIST, set the value to none. In both cases GlueCESEBindMountInfo will be set to "n.a".
The following variables have default values and only need to be redefined for advanced configurations:
  • CLUSTER_HOST: this variable must be set to CE_HOST for the time being. It's defined under INSTALL_ROOT/glite/yaim/defaults/lcg-ce.post.
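As an illustration, a site-info.def fragment defining the new variables might look as follows. All host names, core counts, benchmark figures, and VO shares below are hypothetical example values, not recommendations:

```shell
# Hypothetical site-info.def excerpt (example values only).

# Typical WN in this SubCluster: 4 cores per CPU, CPU power
# computed with the HEP-SPEC06 benchmark.
CE_OTHERDESCR="Cores=4,Benchmark=8.5-HEP-SPEC06"

# Internal batch scaling reference (in SI00) plus per-VO shares;
# the shares over all WLCG VOs must sum to at most 100.
CE_CAPABILITY="CPUScalingReferenceSI00=2500 Share=atlas:60 Share=cms:40"

# One entry per SE from SE_LIST that supports the mount concept:
# <SE host>:<export dir>,<mount dir>. Use "none" if no SE does.
SE_MOUNT_INFO_LIST="se01.example.org:/exports/data,/data"
```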



Please also have a look at the list of known issues.

This update fixes various bugs. For the full list of bugs, please see list below.

Fixed bugs

Number Description
 #33210 [yaim-lcg-ce] Several variables (if missing in site-info.def) are not reported in config_gip_ce
 #38985 [yaim-lcg-ce] clean lcg-ce.pre variables
 #39800 [yaim-lcg-ce] config_gip_service_release should be included
 #40560 [yaim-lcg-ce] Implement config_info_service_lcg-ce
 #43983 [yaim-lcg-ce] YAIM packages
 #44533 [yaim-lcg-ce] lcg-ce gip defaults should change
 #45886 [yaim-lcg-ce] RTEpublisher configuration should be added into the lcg CE
 #45980 [yaim-lcg-ce] New variables for the information system

Updated rpms

Name Version Full RPM name Description
glite-SGE_utils 3.1.12-0 glite-SGE_utils-3.1.12-0.i386.rpm gLite metapackage (glite-SGE_utils)
glite-yaim-lcg-ce 4.0.5-6 glite-yaim-lcg-ce-4.0.5-6.noarch.rpm org.glite.yaim.lcg-ce v. 4.0.5-6

The RPMs can be updated using yum.
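For example, on a node with the gLite 3.1 repositories already configured, the two packages listed above could be updated as follows (to be run as root; the exact repository setup is site-specific):

```shell
# Pull in the updated metapackage and the new YAIM module.
yum update glite-SGE_utils glite-yaim-lcg-ce
```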

Service reconfiguration after update

Not needed.

Service restart after update

Not needed.

How to apply the fix

  1. Update the RPMs (see above)
  2. Update configuration (see above)
  3. Restart the service if necessary (see above)