===== Description =====

This page helps users understand how to manage their data on the cluster. We provide a quick example here, but for more details, please consult [[https://www.unige.ch/researchdata/fr/accueil/|the Data Management Plan]] (DMP) provided by Unige.

===== Data Management =====

Below is a schema representing an example data life cycle, which includes the following stages:

  * **Acquisition:** The process of collecting or generating data.
  * **Processing:** The manipulation or analysis of data to extract useful information.
  * **Usage:** The utilization of processed data for research, analysis, or other purposes.
  * **Disposal:** Backing up and migrating data to appropriate storage solutions (e.g., [[https://catalogue-si.unige.ch/stockage-recherche|NASAC]], [[https://www.unige.ch/eresearch/fr/services/yareta/|Yareta]], [[https://www.unige.ch/eresearch/fr/services/hedera/|Hedera]]), then deleting the data from the HPC cluster.

This example should be adapted to your needs; however, it must comply with the terms of use. Any data that is unused or unnecessary for computation must be removed from the cluster, and old data should be removed if it will not be used in the near future. Keeping a small amount of old data is tolerable, but several hundred gigabytes or terabytes can become problematic: if everyone stores too much data, there will be no space left for new projects, impacting the overall performance and availability of the HPC cluster (cf. [[https://hpc-community.unige.ch/t/baobab-urgent-scratch-partition-nearly-full/3513|hpc-community: baobab-urgent-scratch-partition-nearly-full]]).
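As a rough sketch, the disposal stage could look like the commands below: bundle the project data into a compressed archive, copy the archive to long-term storage, and only then free the space on the cluster. All paths, the project name, and the `nasac:/archive/` destination are placeholders for illustration, not actual cluster paths.

```shell
set -eu

# Placeholder project directory on the cluster (adapt to your own layout).
SCRATCH_DIR="${SCRATCH_DIR:-$(mktemp -d)/myproject}"
ARCHIVE="myproject-$(date +%Y%m%d).tar.gz"

# Demo data so the sketch is self-contained; on a real system the
# directory already holds your results.
mkdir -p "$SCRATCH_DIR"
echo "demo result" > "$SCRATCH_DIR/result.txt"

# 1. Bundle and compress the project data into a single archive.
tar -czf "$ARCHIVE" -C "$(dirname "$SCRATCH_DIR")" "$(basename "$SCRATCH_DIR")"

# 2. Copy the archive to long-term storage (e.g., a NASAC share or a
#    Yareta deposit). The destination below is hypothetical.
# rsync -av "$ARCHIVE" nasac:/archive/

# 3. Only after verifying the copy, remove the data from the cluster.
# rm -rf "$SCRATCH_DIR" "$ARCHIVE"
```

The verification step before deletion matters: once the data is removed from scratch, the archived copy is the only one left.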
{{ :hpc:data-lifecycle-management.png?nolink&1300 |}}