The Grid in IPM

The Institute for Research in Fundamental Sciences (IPM) is divided into several research schools, each of which provides its researchers with computing and storage facilities. The total computing power of the IPM-Grid infrastructure is therefore approximately the sum of the computing resources contributed by the individual schools, and the same holds for its storage capacity. At the global level, the IPM-Grid infrastructure consists of five main services:

User Interface (UI) [ui.ipm.ac.ir]:

The User Interface is the access point to the IPM-Grid. The UI machine hosts user accounts and users' certificates. From a UI, a user can be authenticated and authorized to use the IPM-Grid resources, and can access the functionalities offered by the Information, Workload and Data Management systems. It provides command line tools to perform basic Grid operations (a short sketch of a typical session follows the list):

list all the resources suitable to execute a given job;
submit jobs for execution;
cancel jobs;
retrieve the output of finished jobs;
show the status of submitted jobs;
retrieve the logging and bookkeeping information of jobs;
copy, replicate and delete files from the Grid;
retrieve the status of different resources from the Information System.
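As an illustration, the following is a minimal sketch of such a session using the standard gLite commands available on a UI; the JDL file name and the job identifier are hypothetical placeholders:

    $ glite-wms-job-list-match -a myjob.jdl     # list resources suitable for the job
    $ glite-wms-job-submit -a myjob.jdl         # submit; prints a job identifier (a URL)
    $ glite-wms-job-status https://wms.ipm.ac.ir:9000/AbCdEf   # show the job status
    $ glite-wms-job-output https://wms.ipm.ac.ir:9000/AbCdEf   # retrieve output of a finished job
    $ glite-wms-job-cancel https://wms.ipm.ac.ir:9000/AbCdEf   # cancel a submitted job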

Resource Broker or Workload Management Service (RB, WMS) [wms.ipm.ac.ir]:
The purpose of the Resource Broker is to accept user jobs, assign them to the most appropriate Computing Element, record their status, and retrieve their output, all without exposing the user to the complexity of the IPM-Grid. The user's only responsibilities are to describe the jobs and their requirements, and to retrieve the output when the jobs are finished.
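Jobs and their requirements are described in the Job Description Language (JDL). A minimal sketch of a JDL file (the output file names are illustrative):

    Executable    = "/bin/hostname";
    StdOutput     = "std.out";
    StdError      = "std.err";
    OutputSandbox = {"std.out", "std.err"};

Submitted from the UI with glite-wms-job-submit, this description is matched by the WMS against the available Computing Elements.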

Information System (top-bdii) [top-bdii.ipm.ac.ir]:
The top level BDII aggregates the information from all the site level BDIIs and hence contains information about all Grid services. Multiple instances of the top level BDII are run in order to provide a fault tolerant, load balanced service. Information system clients query a top level BDII to find the information they require.
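Because the BDII is essentially an LDAP database, it can be queried directly with standard LDAP tools; higher-level clients such as lcg-infosites do this internally. A minimal sketch, assuming the conventional BDII port 2170 and the GLUE base 'o=grid' (the VO name 'dteam' is just an example VO):

    $ ldapsearch -x -H ldap://top-bdii.ipm.ac.ir:2170 -b o=grid \
          '(objectClass=GlueCE)' GlueCEUniqueID GlueCEStateWaitingJobs
    $ lcg-infosites --vo dteam ce    # higher-level query of the same information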

Virtual Organization Membership Server (VOMS) [voms.ipm.ac.ir]:
VOMS manages information about the roles and privileges of users within a VO. This information is presented to services via an extension to the proxy. When the proxy is created, the VOMS server is contacted, and it returns a mini certificate known as an Attribute Certificate (AC), which is signed by the VO and contains information about group membership and any associated roles within the VO. Briefly, the VOMS system allows a proxy to carry extensions containing information about the VO, the groups the user belongs to within the VO, and any roles the user is entitled to have. The groups and roles are defined by each VO; they may be assigned to a user at initial registration, or added subsequently. For mapping groups and roles to specific privileges, what counts is the group/role combination, which is referred to as an FQAN (short for Fully Qualified Attribute Name). The format is: FQAN = <group name>[/Role=<role name>]. For example, /cms/HeavyIons/Role=production.
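A minimal sketch of creating a VOMS proxy on the UI and inspecting its extensions (the VO name 'ipm' is an illustrative assumption):

    $ voms-proxy-init --voms ipm     # contact the VOMS server, obtain a proxy with an AC extension
    $ voms-proxy-info --all          # show the proxy, including VO, groups and FQANs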

LCG File Catalog (LFC) [lfc.ipm.ac.ir]:
The LFC is a catalog containing logical to physical file mappings. Depending on the VO deployment model, the LFC is installed centrally or locally. It is a secure catalog, supporting GSI security and VOMS. In the LFC, a given file is identified by a Grid Unique IDentifier (GUID); thanks to the GUID, a file replicated at different sites is considered the same file and can appear as a single logical entry in the LFC catalog. The LFC presents logical file names in a hierarchical namespace. In the BDII, an LFC server can have either the attribute 'central' or 'local', but this has absolutely NO impact on the fact that each LFC server with R/W access has its own namespace, and enforces uniqueness of LFNs only within that namespace.
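A minimal sketch of working with the catalog from the UI, using the standard LFC and LCG data management commands (the LFN paths and the VO name 'ipm' are illustrative assumptions):

    $ export LFC_HOST=lfc.ipm.ac.ir
    $ lfc-ls /grid/ipm                                  # browse the hierarchical namespace
    $ lcg-cr --vo ipm -l lfn:/grid/ipm/user/data.txt \
          file:/home/user/data.txt                      # copy a file to an SE and register it
    $ lcg-lr --vo ipm lfn:/grid/ipm/user/data.txt       # list the physical replicas of the file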
Furthermore, at the site (research school) level, the IPM-Grid infrastructure consists of five services:

User Interface (UI):
Its purpose is the same as described above.

Computing Element (CE):
A Computing Element, in Grid terminology, is a set of computing resources localized at a site (i.e. a cluster, a computing farm). A CE comprises a Grid Gate (GG), which acts as an interface to the cluster; a Local Resource Management System (LRMS), sometimes called a batch system; and the cluster itself, a collection of Worker Nodes (WNs), the nodes where the jobs run. The GG is responsible for accepting jobs and dispatching them for execution on the WNs via the LRMS.
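A job can be directed to a particular CE by adding a requirement to its JDL; a minimal sketch, with a hypothetical CE contact string of the conventional form <host>:<port>/jobmanager-<lrms>-<queue>:

    Requirements = other.GlueCEUniqueID == "ce.ipm.ac.ir:2119/jobmanager-lcgpbs-grid";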

Worker Node (WN):
As mentioned in the CE description, Worker Nodes are the computing resources on which jobs actually run.

Storage Element (SE):
The Disk Pool Manager (DPM) has been deployed as a lightweight solution for disk storage management. A priori, there is no limitation on the amount of disk space that the DPM can handle. The DPM offers an implementation of the Storage Resource Manager (SRM) specification, versions 1.1, 2 and 2.2. The DPM handles the storage on disk servers; specifically, it handles pools: a pool is a group of file systems, located on one or more disk servers. How file systems are grouped to form a pool is up to the DPM administrator.
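A minimal sketch of how an administrator might group file systems into a pool, using the standard DPM administration tools (the pool, server and file system names are illustrative assumptions):

    $ dpm-addpool --poolname Permanent                  # create a new pool
    $ dpm-addfs --poolname Permanent \
          --server disk01.ipm.ac.ir --fs /storage       # add a disk server file system to the pool
    $ dpm-qryconf                                       # show the resulting pool configuration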

Information System (site-bdii):
Grid information systems are mission-critical components in today's production Grid infrastructures. They provide detailed information about Grid services, which is needed for many different tasks. The gLite information system has a hierarchical structure of three levels. The fundamental building block used in this hierarchy is the Berkeley Database Information Index (BDII). Although the BDII has additional complexity, it can be visualized as an LDAP database. The resource level BDII is usually co-located with the Grid service and provides information about that service. Each Grid site runs a site level BDII, which aggregates the information from all the resource level BDIIs running at that site.
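The site level BDII can be queried in the same way as the top level one; a minimal sketch, assuming the conventional port 2170 and a hypothetical site name 'IPM' in the GLUE base DN:

    $ ldapsearch -x -H ldap://site-bdii.ipm.ac.ir:2170 -b mds-vo-name=IPM,o=grid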

A Virtual Organization (VO):
is an entity which typically corresponds to a particular organization or group of people in the real world who may be connected only via the VO. More specifically, a VO is a group of individuals or institutions who share computing resources for a common goal. The VO supports the work of researchers in the same area, enabling them to share results, experience and resources as desired. Membership of a VO grants specific privileges to a user. For example, a user belonging to the ATLAS VO will be able to read ATLAS files or to exploit resources reserved for the ATLAS collaboration at CERN.
Becoming a member of a VO usually requires membership of the corresponding experiment at CERN, although other membership rules apply to other VOs; in any case, a user must comply with the rules of the VO to gain membership, and may be expelled from the VO for failing to comply with them.
A group is a subset of the VO containing members who share some responsibilities or privileges in the project. Groups are organized hierarchically like a directory tree, starting from a VO-wide root group. A user can be a member of any number of groups.
Also, a role is an attribute which typically allows a user to acquire special privileges to perform specific tasks.
In principle, groups are associated with privileges that the user always has, while roles are associated with privileges that a user needs only from time to time. In addition, roles are attached to groups, i.e. roles with the same name in different groups are distinct.
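A minimal sketch of requesting a specific group and role when creating the proxy, reusing the FQAN example above (the VO, group and role names are illustrative):

    $ voms-proxy-init --voms cms:/cms/HeavyIons/Role=production   # request a role-bearing FQAN
    $ voms-proxy-info --fqan                                      # list the FQANs carried by the proxy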



