
Storage Overview

After logging in to Ginsburg you will be in your home directory. This home directory storage space (50 GB) is appropriate for smaller files, such as documents, source code, and scripts but will fill up quickly if used for data sets or other large files.


Ginsburg's shared storage server is named "burg", and consequently the path to all home and scratch directories begins with "/burg". Your home directory is located at /burg/home/<UNI>. This is also the value of the environment variable $HOME.
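You can confirm this from a login shell:

```shell
# Print the path of your home directory; on Ginsburg this
# should look like /burg/home/<UNI>
echo "$HOME"
```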


Each group account on Ginsburg has an associated scratch storage space that is at least 1 terabyte (TB) in size.

Note the important "No backups" warning regarding this storage at the bottom of this page.

Your group account's scratch storage is located under /burg/<ACCOUNT>. The storage area for each account is as follows:


Location | Size | Default User Quota
$HOME | 50 GB |
/burg/abernathey | 20 TB | None
/burg/anastassiou | 5 TB | None
/burg/apam | 7 TB | None
/burg/astro | 35 TB | None
/burg/berkelbach | 16 TB | None
/burg/camargo | 5 TB | None
/burg/ccce | 22 TB | None
/burg/cgl | 20 TB | None
/burg/crew | 11 TB | None
/burg/dslab | 6 TB | None
/burg/e3lab | 1 TB | None
/burg/edru | 2 TB | None
/burg/fiore | 20 TB | None
/burg/free | 1 TB | 64 GB
/burg/glab | 30 TB | None
/burg/gsb | 2 TB | None
/burg/hblab | 20 TB | None
/burg/iicd | 20 TB | None
/burg/jalab | 7 TB | None
/burg/katt3 | 1 TB | None
/burg/mckinley | 20 TB | None
/burg/myers | 1 TB | None
/burg/ntar_lab | 2 TB | None
/burg/ocp | 100 TB | OCP shared volume with a per-user 10 TB quota
/burg/palab | 120 TB | None
/burg/psych | 5 TB | None
/burg/qmech | 2 TB | None
/burg/rent | 2 TB | 128 GB
/burg/rqlab | 10 TB | None
/burg/sail | 3 TB | None
/burg/seager | 10 TB | None
/burg/sobel | 10 TB | None
/burg/stock | 10 TB | None
/burg/thea | 50 TB | None
/burg/theory | 10 TB | None
/burg/ting | 5 TB | None
/burg/tosches | 4 TB | None
/burg/urban | 5 TB | None
/burg/vedula | 20 TB | None
/burg/wu | 5 TB | None

The amount of data stored in any directory along with its subdirectories can be found with:


cd <directoryName>
du -sh .

If the directory contains many files, 'du' may take a while to finish, so please allow some time for it to return its output.
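If a directory is over quota and you want to see where the space is going, sorting per-subdirectory totals helps. A short sketch, assuming GNU coreutils (standard on Linux clusters):

```shell
# Show the ten largest entries in the current directory, largest first.
# -sh prints a human-readable total per entry; sort -hr orders those
# human-readable sizes in descending order.
du -sh ./* | sort -hr | head -n 10
```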

Inodes

Inodes store information about files and directories, and one inode is consumed for every file or directory that is created. The inode quota for home directories is 150,000.

To check inode usage and availability for your account's storage, run:

$ df -hi /burg/<ACCOUNT>

Should your group run out of inodes while free inodes are still available, we may be able to increase your inode allocation. Please contact us for details if your group is running low on inodes.
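To find out where your inodes are going, you can count files and directories directly, since each entry in a directory tree consumes one inode. A sketch using standard tools; substitute the directory you want to inspect:

```shell
# Count the inodes (files + directories) used by a directory tree.
# Each line printed by `find` corresponds to one inode; -xdev keeps
# the count from crossing onto other filesystems.
find "$HOME" -xdev 2>/dev/null | wc -l
```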

Anaconda keeps a cache of the package files (tarballs and extracted packages) that you have installed. This is convenient when you need to reinstall the same packages, but over time the space, and the number of inodes, can add up.

You can run 'conda clean' in dry-run mode first to see what would get cleaned up:

conda clean --all --dry-run

Once you're satisfied with what might be deleted, you can run the clean up,

conda clean --all

This will clean the index cache, lock files, tarballs, unused cache packages, and the source cache.
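If you want to see how much space the cache occupies before cleaning, you can measure it directly. A sketch assuming a default user-level install, where the cache commonly lives in ~/.conda/pkgs; `conda info` prints the exact package cache paths on your system:

```shell
# Report the total size of the conda package cache, if present.
du -sh ~/.conda/pkgs 2>/dev/null || echo "no package cache at ~/.conda/pkgs"
```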

User and Project Scratch Directories


Ginsburg users can create directories in their account's scratch storage using their UNI or a project name.


$ cd /burg/<ACCOUNT>/users/
$ mkdir <UNI>


For example, an astro member may create the following directory:


$ cd /burg/astro/users/
$ mkdir <UNI>


Alternatively, for a project shared with other users:


$ cd /burg/astro/projects/
$ mkdir <PROJECT_NAME>


Naming conventions (such as using your UNI for your users directory) are not enforced, but following them is highly recommended as they have worked well as organization mechanisms on previous clusters.
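When a project directory is shared with other group members, you may also want it to be group-writable. A minimal sketch, assuming your site relies on standard POSIX group permissions (the project name is hypothetical, and you should check with your administrators before opening up a directory):

```shell
# Create a project directory that group members can write to.
# Mode 2770 = rwx for owner and group, no access for others,
# plus the setgid bit so new files inherit the directory's group.
mkdir my_project      # hypothetical project name
chmod 2770 my_project
```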

No Backups

Storage is not backed up. User files may be lost due to hardware failure, user error, or other unanticipated events.


It is the responsibility of users to ensure that important files are copied from the system to other more robust storage locations.



