hpc:data_life_cycle — created 2024/07/08 14:48 by Adrien Albert; last modified 2025/06/11 12:27 (external edit)

===== Description =====
  
This page helps users understand how to manage their data on the cluster. We provide a quick example here; for more details, please consult [[https://www.unige.ch/researchdata/fr/accueil/|the Data Management Plan]] (DMP) provided by UNIGE.
  
  
This ensures enough space for everyone and guarantees optimal performance for computing.
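To stay within this policy, it helps to check how much space your data occupies before starting new work. A minimal sketch (the directory here is a temporary stand-in; substitute your own home or scratch path):

```shell
# Stand-in directory for demonstration (use your own scratch/home path in practice)
dir=$(mktemp -d)
echo "sample data" > "$dir/file.txt"

# Total size of the directory, human readable
du -sh "$dir"

# Free space on the underlying filesystem
df -h "$dir"
```

`du -sh` reports the space your own data occupies, while `df -h` shows how full the shared filesystem is overall.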
  

===== Data Management =====

Below is a schema representing an example data life cycle, which includes the following stages:

  * **Acquisition:** The process of collecting or generating data.
  * **Storage:** The data may be stored on HPC storage for production purposes only (e.g., Home, Scratch, Fast, etc.).
  * **Processing:** The manipulation or analysis of data to extract useful information.
  * **Usage:** The utilization of processed data for research, analysis, or other purposes.
  * **Disposal:** Backing up and migrating data to appropriate storage solutions (e.g., [[https://catalogue-si.unige.ch/stockage-recherche|NASAC]], [[https://www.unige.ch/eresearch/fr/services/yareta/|Yareta]], [[https://www.unige.ch/eresearch/fr/services/hedera/|Hedera]]), then deleting the data from the HPC cluster.

This example should be adapted to your needs; however, it must comply with the terms of use. Any data that is unused or unnecessary for computation must be removed from the cluster, and old data should be removed if it will not be used in the near future. Keeping a small amount of old data is tolerable, but several hundred gigabytes or terabytes can become problematic: if everyone stores too much data, there will be no space left for new projects, impacting the overall performance and availability of the HPC cluster (cf. [[https://hpc-community.unige.ch/t/baobab-urgent-scratch-partition-nearly-full/3513|hpc-community: baobab-urgent-scratch-partition-nearly-full]]).

{{ :hpc:data-lifecycle-management.png?nolink&1300 |}}