hpc:storage_on_hpc
<code console>
(baobab)-[sagon@login1
home dir: /home/sagon
scratch dir: /
</code>
<code console>
[sagon@login1
</code>
<code console>
[sagon@login1
</code>
<code console>
[sagon@login1
</code>
<note important>
If you need to access the data on other nodes, you need to mount them there as well in your sbatch script.</note>
If you need to script this, you can put your credentials in a file in your home directory.
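As a minimal sketch of such a credentials file, assuming the standard `mount.cifs` format (`username=`, `password=`, `domain=` keys); every value below is a placeholder, not this cluster's actual account or share:

<code bash>
# Hedged sketch: create a credentials file in the mount.cifs format.
# All values are placeholders; replace them with your own.
cred="$HOME/.smbcredentials"
cat > "$cred" <<'EOF'
username=your_username
password=your_password
domain=ISIS
EOF
# The file holds a password, so restrict it to your user only.
chmod 600 "$cred"

# It can then be referenced from a mount command, for example:
#   mount -t cifs //server/share /mnt/point -o credentials=$HOME/.smbcredentials
</code>

Keeping the password in a mode-600 file avoids exposing it on the command line, where it would be visible to other users via `ps`.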
Mount example using credentials in a script:
<code console>
[sagon@login1
</code>
<code console>
[sagon@login1
196761 /
224317 /
</code>
reference: (([[https://
=== Sometimes mount is not available, but you can browse/copy/interact with gio commands ===

<code console>
$ dbus-launch bash

$ gio mount smb://
Authentication Required
Enter user and password for share “hpc_exchange” on “nasac-evs2.unige.ch”:
User [rossigng]: s-hpc-share
Domain [SAMBA]: ISIS
Password:

$ gio mount -l
Drive(0): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Drive(1): SAMSUNG MZ7L3480HBLT-00A07
  Type: GProxyDrive (GProxyVolumeMonitorUDisks2)
Mount(0): hpc_exchange on nasac-evs2.unige.ch -> smb://
  Type: GDaemonMount

$ gio list smb://
backup

$ gio list smb://
toto
titi
tata.txt

$ gio copy smb://

...
</code>
+ | |||
===== CVMFS =====
All the compute nodes of our clusters have the CernVM-FS client installed. CernVM-FS, the CernVM File System (also known as CVMFS), is a file distribution service that is particularly well suited to distributing software installations across a large number of systems worldwide in an efficient way.
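As a quick sanity check on a node, listing any path under ''/cvmfs/<repo>'' triggers an on-demand (autofs) mount of that repository if it is configured. The repository name below is an example and may not be one configured on this cluster:

<code bash>
# Hedged sketch: accessing a path under /cvmfs/<repo> mounts the repository
# on demand via autofs, if the client is configured for it.
# atlas.cern.ch is an example repository, not necessarily available here.
if ls /cvmfs/atlas.cern.ch >/dev/null 2>&1; then
    echo "repository mounted"
else
    echo "repository not available"
fi
</code>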
hpc/storage_on_hpc.1741948815.txt.gz · Last modified: 2025/03/14 10:40 by Gaël Rossignol