hpc:storage_on_hpc
</
<note important>
If you need to access the data on other nodes, you need to mount them there as well in your sbatch script.</note>
If you need to script this, you can put your credentials in a file in your home directory.
reference: (([[https://
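As a minimal sketch of such a credentials file (the path ''~/.smbcredentials'' and the placeholder password are assumptions, not a documented setup; the username and domain values match the gio session shown later in this section):

```shell
# Hypothetical sketch: create a credentials file for scripted mounts.
# The path ~/.smbcredentials and the placeholder password are assumptions;
# username and domain match the interactive session in this page.
CRED_FILE="$HOME/.smbcredentials"
cat > "$CRED_FILE" <<'EOF'
username=s-hpc-share
domain=ISIS
password=CHANGE_ME
EOF
# The file contains a password, so make it readable by you only.
chmod 600 "$CRED_FILE"
```

Tools such as ''mount.cifs'' can then read it with ''-o credentials=$HOME/.smbcredentials'' instead of prompting on the terminal.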

=== Sometimes mount is not available but you can browse/copy/interact with gio commands ===

<code>
$ dbus-launch bash

$ gio mount smb://
Authentication Required
Enter user and password for share “hpc_exchange” on “nasac-evs2.unige.ch”:
User [rossigng]: s-hpc-share
Domain [SAMBA]: ISIS
Password:

$ gio mount -l
Drive(0): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Drive(1): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Mount(0): hpc_exchange on nasac-evs2.unige.ch -> smb://
  Type: GDaemonMount

$ gio list smb://
backup

$ gio list smb://
toto
titi
tata.txt

$ gio cp smb://

...
</code>
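''gio mount'' prompts interactively. For use inside a script, one common approach is to pipe the three answers (user, domain, password) to it on stdin; this stdin behaviour is an assumption to verify on your system, and the server/share placeholders below stand in for your actual SMB URL:

```shell
# Sketch for scripting the gio mount step. ASSUMPTION: when stdin is not a
# terminal, `gio mount` reads the answers to its prompts (user, domain,
# password), one per line, from stdin -- verify this on your system.
SMB_USER="s-hpc-share"
SMB_DOMAIN="ISIS"
SMB_PASSWORD="CHANGE_ME"   # better: read this from a protected file
ANSWERS="$(printf '%s\n%s\n%s' "$SMB_USER" "$SMB_DOMAIN" "$SMB_PASSWORD")"
# On the cluster (needs the SMB server, so commented out here):
# printf '%s\n' "$ANSWERS" | gio mount smb://<server>/<share>
printf '%s\n' "$ANSWERS" | wc -l   # counts the answer lines
```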
===== CVMFS =====
All the compute nodes of our clusters have the CernVM-FS client installed. CernVM-FS, the CernVM File System (also known as CVMFS), is a file distribution service that is particularly well suited to distributing software installations across a large number of systems world-wide in an efficient way.
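On a node with the client installed, repositories simply appear as directories under ''/cvmfs'' and are typically mounted on demand (via autofs) the first time they are accessed. A minimal sketch, with a made-up repository name:

```shell
# Hypothetical sketch: CVMFS repositories show up under /cvmfs and are
# usually autofs-mounted on first access. The repository name below is a
# made-up placeholder, not a repository served on our clusters.
REPO="/cvmfs/software.example.org"
if [ -d "$REPO" ]; then
  STATUS="available"
else
  STATUS="not mounted here"
fi
echo "$REPO: $STATUS"
```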
hpc/storage_on_hpc.1741948870.txt.gz · Last modified: 2025/03/14 10:41 by Gaël Rossignol