Storage Overview

After logging in to Ginsburg, you will be in your home directory. This home directory storage space (50 GB) is appropriate for smaller files such as documents, source code, and scripts, but it will fill up quickly if used for data sets or other large files.


Ginsburg's shared storage server is named "burg", and consequently the path to all home and scratch directories begins with "/burg". Your home directory is located at /burg/home/<UNI>. This is also the value of the environment variable $HOME.
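For example, you can confirm your home directory path and check your current usage against the 50 GB limit with standard shell commands (the du command may take a moment if you have many files):

$ echo $HOME
/burg/home/<UNI>
$ du -sh $HOME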


Each group account on Ginsburg has an associated scratch storage space that is at least 1 terabyte (TB) in size.

Note the important "No backups" warning regarding this storage at the bottom of this page.

Your group account's scratch storage is located under /burg/<ACCOUNT>. The storage area for each account is as follows:


Location             Size     Default User Quota
$HOME                50 GB
/burg/abernathey     20 TB    None
/burg/anastassiou    5 TB     None
/burg/apam           7 TB     None
/burg/astro          35 TB    None
/burg/berkelbach     16 TB    None
/burg/camargo        5 TB     None
/burg/ccce           22 TB    None
/burg/cgl            20 TB    None
/burg/crew           11 TB    None
/burg/dslab          6 TB     None
/burg/e3lab          1 TB     None
/burg/edru           2 TB     None
/burg/fiore          20 TB    None
/burg/free           1 TB     64 GB
/burg/glab           30 TB    None
/burg/gsb            2 TB     None
/burg/hblab          20 TB    None
/burg/iicd           20 TB    None
/burg/jalab          7 TB     None
/burg/katt           31 TB    None
/burg/mckinley       20 TB    None
/burg/myers          1 TB     None
/burg/ntar_lab       2 TB     None
/burg/ocp            100 TB   OCP shared volume with per-user 10 TB quota
/burg/palab          120 TB   None
/burg/psych          5 TB     None
/burg/qmech          2 TB     None
/burg/rent           2 TB     128 GB
/burg/rqlab          10 TB    None
/burg/sail           3 TB     None
/burg/seager         10 TB    None
/burg/sobel          10 TB    None
/burg/stock          10 TB    None
/burg/thea           50 TB    None
/burg/theory         10 TB    None
/burg/ting           5 TB     None
/burg/tosches        4 TB     None
/burg/urban          5 TB     None
/burg/vedula         20 TB    None
/burg/wu             5 TB     None

The amount of data stored in any directory along with its subdirectories can be found with:


cd <directoryName>
du -sh .

If the directory contains many files, the 'du' command may take some time to finish.
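For example, to check how much space your own directory is using within your group's scratch allocation (assuming you have created a per-user directory as described in the section below):

$ du -sh /burg/<ACCOUNT>/users/<UNI>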

Inodes

Inodes store information about files and directories, and one inode is consumed for every file and directory created. The inode quota for home directories is 150,000. You can view inode usage and limits with:

$ df -hi /burg/<ACCOUNT>
Should your group run out of inodes and there are free inodes available, we may be able to increase your inode allocation. Please contact us for more details about this if your group is running out of inodes.
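Because each file and directory consumes one inode, a quick way to estimate how many inodes a particular directory tree is using is to count its entries (this can be slow on large trees):

$ find /burg/home/<UNI> | wc -l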

Anaconda keeps a cache of the package files, tarballs, and other data for the packages you've installed. This is convenient when you need to reinstall the same packages, but over time the space it occupies, and the number of inodes it consumes, can add up.

You can run the 'conda clean' command in dry-run mode to see what would be removed:

conda clean --all --dry-run

Once you're satisfied with what will be deleted, you can run the cleanup:

conda clean --all

This will clean the index cache, lock files, tarballs, unused cache packages, and the source cache.
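To see how much space the cache currently occupies, you can run 'du' on the package cache directory. The path below assumes a default user-level conda setup; your cache may instead live under your conda installation directory (for example, ~/miniconda3/pkgs):

$ du -sh ~/.conda/pkgs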

User and Project Scratch Directories


Ginsburg users can create directories in their account's scratch storage using their UNI or a project name.


$ cd /burg/<ACCOUNT>/users/
$ mkdir <UNI>


For example, an astro member may create the following directory:


$ cd /burg/astro/users/
$ mkdir <UNI>


Alternatively, for a project shared with other users:


$ cd /burg/astro/projects/
$ mkdir <PROJECT_NAME>


Naming conventions (such as using your UNI for your user directory) are not enforced, but following them is highly recommended, as they have worked well for keeping storage organized on previous clusters.
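If a project directory will be shared with other members of your group, you may also want to make it group-writable. A minimal sketch, using a hypothetical project name and assuming your collaborators belong to the same Unix group:

$ cd /burg/astro/projects/
$ mkdir shared_analysis
$ chmod g+rwxs shared_analysis   # group members can read and write; setgid keeps new files in the group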

No Backups

Storage is not backed up. User files may be lost due to hardware failure, user error, or other unanticipated events.


It is the responsibility of users to ensure that important files are copied from the system to other more robust storage locations.
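One common way to do this is with rsync over SSH. A sketch, with a hypothetical destination host and path that you would replace with your own backup location:

$ rsync -av /burg/<ACCOUNT>/users/<UNI>/important_results/ <UNI>@backup.example.edu:/path/to/backup/important_results/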



