hpc:storage_on_hpc [2026/02/13 15:36] (current) – [Cluster storage] Adrien Albert (previous revision: 2025/03/14 10:42 – [NASAC] Gaël Rossignol)
This is the storage space we offer on our clusters:
  
^ Cluster   ^ Path                     ^ Total storage size  ^ Disk Type   ^ Backup     ^ Quota size         ^ Quota number files ^
| Baobab    | ''/home/''               | 138 TB              | HDD         | Yes (tape) | 1 TB               | -                  |
| :::       | ''/srv/beegfs/scratch/'' | 1.0 PB              | HDD         | No         | -                  | 10 M               |
| :::       | ''/srv/fast''            | 5 TB                | SSD         | No         | 500G/User 1T/Group | -                  |
| Yggdrasil | ''/home/''               | 495 TB              | HDD         | Yes (tape) | 1 TB               | -                  |
| :::       | ''/srv/beegfs/scratch/'' | 1.2 PB              | HDD         | No         | -                  | 10 M               |
| Bamboo    | ''/home/''               | 378 TB              | SSD         | Yes (tape) | 1 TB               | -                  |
| :::       | ''/srv/beegfs/scratch/'' | 1.1 PB              | HDD         | No         | -                  | 10 M               |
  
We realize you all have different needs in terms of storage. To guarantee storage space for all users, we have **set a quota on the home and scratch directories**; see the table above for details. Beyond this limit, you will not be able to write to the filesystem. We count on all of you to store only research data on the clusters. We also count on you **to periodically delete old or unneeded files** and to **clean up everything when you leave UNIGE**. Please keep reading to understand when you should use each type of storage.
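To see how close you are to the quota, you can check from a login node. A sketch: ''df'' reports overall filesystem usage anywhere, and on BeeGFS filesystems the ''beegfs-ctl'' client tool reports your per-user quota (it may not be in your PATH on every node, hence the guard):

<code bash>
#!/bin/bash
# Overall usage of the filesystem that holds your home directory
df -h "$HOME"

# Per-user quota on a BeeGFS filesystem -- only if the beegfs-ctl
# client tool is installed on this node
if command -v beegfs-ctl >/dev/null 2>&1; then
    beegfs-ctl --getquota --uid "$(id -u)"
fi
</code>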
====== Sharing files with other users ======
  
Sometimes you may need to share files with colleagues or members of your research group.
  
We offer two types of shared folders:

  * **In the "home" directory** (''/home/share/''): ideal for sharing scripts and common libraries related to a project.
  * **In the "scratch" directory** (''/srv/beegfs/scratch/shares/''): suitable for sharing larger files, such as datasets.
  
To request a shared folder, please fill out the form at [[https://dw.unige.ch/openentry.html?tid=hpc|DW]]. As part of the request, you'll be asked if you already have a //group// you'd like to use. If this isn't the case, you'll need to create one ([[https://dw.unige.ch/openentry.html?tid=adaccess|link]] on the form).
  
A **group** is a collection of users that is used to manage shared access to resources. These groups are defined and stored in the **Active Directory** and allow us to control who can access specific folders.
If you need more details about groups, please contact your **CI** (//correspondant informatique//).
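Before requesting a new group, it can help to check which groups your cluster account already belongs to. A quick sketch using standard POSIX tools; the group name ''hpc_myproject'' is a hypothetical example, not a real group:

<code bash>
#!/bin/bash
# List all groups the current user belongs to
id -Gn

# Check membership in a specific group
# ("hpc_myproject" is a hypothetical example name)
if id -Gn | tr ' ' '\n' | grep -qx "hpc_myproject"; then
    echo "member"
else
    echo "not a member"
fi
</code>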

If you are an //Outsider// user and do not have access to DW, please ask your **PI** to submit the request on your behalf.
<note important>
You are not allowed to change the permissions of your ''$HOME''/''$SCRATCH'' folders on the clusters. Even if you do, our automation scripts will revert your changes.
<code console>
 [sagon@login1 ~] $ dbus-launch bash [sagon@login1 ~] $ dbus-launch bash
</code>

**If you are using ''sbatch'', add a short sleep after ''dbus-launch'' to make sure the session initialisation is done:**

<code>
dbus-launch bash
sleep 3
gio mount ....
</code>
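A minimal ''sbatch'' job sketch combining these steps. This is only an illustration: the job options, the SMB URL, and the heredoc structure are assumptions to adapt to your own storage, not a tested recipe.

<code bash>
#!/bin/bash
#SBATCH --job-name=gio-mount-demo
#SBATCH --time=00:10:00

# Start a D-Bus session and run the mount commands inside it;
# the sleep gives the session time to initialise
dbus-launch bash <<'EOF'
sleep 3
# Hypothetical share URL -- replace with your own
gio mount smb://server.example.org/share
EOF
</code>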
  
</code>
  
<note important>The data are only available where gio has been mounted.
If you need to access the data on other nodes, you need to mount it there as well in your sbatch script.</note>
  
If you need to script this, you can put your credentials in a file in your home directory.
  
reference: (([[https://hpc-community.unige.ch/t/howto-access-external-storage-from-baobab/551|How to access external storage from Baobab]]))

=== Sometimes mount is not available, but you can still browse, copy, and interact with gio commands ===

<code>
$ dbus-launch bash

$ gio mount smb://nasac-evs2.unige.ch/hpc_exchange/backup
Authentication Required
Enter user and password for share “hpc_exchange” on “nasac-evs2.unige.ch”:
User [rossigng]: s-hpc-share
Domain [SAMBA]: ISIS
Password:

$ gio mount -l
Drive(0): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Drive(1): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Mount(0): hpc_exchange on nasac-evs2.unige.ch -> smb://nasac-evs2.unige.ch/hpc_exchange/
  Type: GDaemonMount

$ gio list smb://nasac-evs2.unige.ch/hpc_exchange/
backup

$ gio list smb://nasac-evs2.unige.ch/hpc_exchange/backup
toto
titi
tata.txt

$ gio cp smb://nasac-evs2.unige.ch/hpc_exchange/backup/tata /tmp

...
</code>
 +
===== CVMFS =====
All the compute nodes of our clusters have the CernVM-FS client installed. CernVM-FS, the CernVM File System (also known as CVMFS), is a file distribution service that is particularly well suited to distributing software installations across a large number of systems world-wide in an efficient way.
  
The EESSI project wrote a nice tutorial about CVMFS, readable on the [[https://multixscale.github.io/cvmfs-tutorial-hpc-best-practices/|multixscale]] git repo.
===== EOS =====
You can mount remote ''root://'' filesystems using EOS.

<code>
(bamboo)-[sagon@login1 ~]$ export EOS_MGM_URL=root://eospublic.cern.ch
(bamboo)-[sagon@login1 ~]$ export EOS_HOME=/eos/opendata
(bamboo)-[sagon@login1 ~]$ eos fuse mount /tmp/sagon/opendata
</code>

<note important>Do not mount the filesystem in your home or scratch space: this does not work because they are not standard filesystems.</note>
====== Robinhood ======
Robinhood Policy Engine is a versatile tool to manage the contents of large file systems. It scans the scratch BeeGFS filesystems daily. It makes it possible to schedule mass actions on filesystem entries by defining attribute-based policies.