LCG-0 release note
February 28, 2003
The first version of the LCG Grid software was released on February 28, 2003.
Purpose:
The main goal of this release is to set up the Grid deployment process (distribution, installation and configuration) rather than the Grid itself. Consequently, stability and the feature list were secondary considerations.
Though the software is publicly available (which is part of the distribution goals), it is not meant for the general public. It is intended for the LCG early deployment sites, and access to the LCG-0 Grid will be controlled: only sites identified as part of this process will be allowed to connect. This ensures, as a first priority, a well-controlled environment suitable for correcting problems in the Grid deployment process, rather than an environment for finding problems in the functioning of the Grid.
LCG will not be able to provide any support for this release outside of this goal, or any support for sites that are not on the official GDB-agreed selection list.
LCG-0 release content:
The LCG-0 release is composed of VDT 1.1.6 (Globus and Condor), EDG 1.4.3 (workload management and data management services) and the EDT Glue schema and information providers.
Prerequisites:
The LCG-0 release has been tested on machines running the Linux RH 7.3 server distribution or CERN RH 7.3.1; it will not run on RH 6.x releases. It is assumed that the batch system (PBS for now) is already installed and configured.
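A basic check of these prerequisites can be scripted. The sketch below is illustrative only and not part of the release; the file path /etc/redhat-release and the daemon name pbs_server are assumptions about a standard RH 7.3 / OpenPBS installation.

    # Illustrative prerequisite check (not part of the release): confirm
    # the OS release and the presence of PBS before installing.
    import shutil

    def check_os() -> bool:
        # RH systems record their release string in /etc/redhat-release
        # (an assumption about the standard RH 7.3 layout).
        try:
            with open("/etc/redhat-release") as f:
                return "release 7.3" in f.read()
        except FileNotFoundError:
            return False

    def check_pbs() -> bool:
        # An OpenPBS server host provides the pbs_server daemon; a worker
        # node would look for pbs_mom instead. The daemon may live in
        # /usr/sbin, so that directory may need to be on PATH.
        return shutil.which("pbs_server") is not None

    if __name__ == "__main__":
        print("OS release OK :", check_os())
        print("PBS installed :", check_pbs())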
Supported architecture:
The architecture supported at remote sites (outside CERN) is a combination of the UI (User Interface), SE (Storage Element), CE (Computing Element) and WN (Worker Node). If only access to the Grid is required, a UI machine is enough. A CE and/or SE is required if local resources are also to be part of the Grid. If installed, the UI, CE and SE must each be a separate physical machine. As many worker nodes as desired can be installed.
Currently all worker nodes have to be on a public network.
The high-level services supported are the workload services: RB (Resource Broker), JSS (Job Submission Service), LB (Logging and Bookkeeping) and LL (Local Logger). The data management services supported are RC (Replica Catalog) and RM (Replica Manager). For the batch system, only OpenPBS has been tested so far.
Sites have to connect to the high-level services installed at CERN to be able to run jobs; a sketch of the resulting submission path follows.
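As an illustration of this path, the sketch below submits a trivial job from a UI machine through the Resource Broker. It is not part of the release; the command name dg-job-submit and the JDL attributes are assumptions based on the EDG 1.x user interface, and the documentation shipped with the release remains authoritative.

    # Illustrative job submission (not part of the release): hand a minimal
    # JDL description to the Resource Broker via the assumed EDG 1.x UI
    # command dg-job-submit. The RB contacted is the one configured for
    # the UI, i.e. the CERN RB for LCG-0 sites.
    import subprocess
    import tempfile

    JDL = """
    Executable    = "/bin/echo";
    Arguments     = "Hello from LCG-0";
    StdOutput     = "hello.out";
    StdError      = "hello.err";
    OutputSandbox = {"hello.out", "hello.err"};
    """

    def submit(jdl_text: str) -> str:
        # Write the JDL to a file and pass it to the submit command,
        # which prints a job identifier on success.
        with tempfile.NamedTemporaryFile("w", suffix=".jdl",
                                         delete=False) as f:
            f.write(jdl_text)
            jdl_path = f.name
        result = subprocess.run(["dg-job-submit", jdl_path],
                                capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        print(submit(JDL))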
Installation:
Installation instructions can be obtained from http://cern.ch/grid-deployment/
Verification:
Sample jobs are provided to verify basic functionality of the installed software. The scripts can be obtained from http://cern.ch/grid-deployment/
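Independently of the provided scripts, the follow-up steps for a submitted job can be illustrated as below. The command names dg-job-status and dg-job-get-output are assumptions based on the EDG 1.x user interface; the provided verification scripts remain the authoritative test.

    # Illustrative follow-up (not one of the provided scripts): query the
    # state of a submitted job and fetch its output sandbox, assuming the
    # EDG 1.x UI commands dg-job-status and dg-job-get-output are on PATH.
    import subprocess

    def job_status(job_id: str) -> str:
        # Asks the Logging and Bookkeeping (LB) service for the job state.
        out = subprocess.run(["dg-job-status", job_id],
                             capture_output=True, text=True, check=True)
        return out.stdout

    def fetch_output(job_id: str) -> None:
        # Retrieves the OutputSandbox files once the job has reached the
        # "Done" state.
        subprocess.run(["dg-job-get-output", job_id], check=True)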
Contact:
Problems should be reported through http://cern.ch/grid-deployment/