Free Tier - Storage

Storage Overview


After logging in to the Free Tier, you will be in your home directory. This storage space (30 GB) is appropriate for smaller files such as documents, source code, and scripts, but it will fill up quickly if used for data sets or other large files.


Terremoto's shared storage server is named "moto", and consequently the path to all home and scratch directories begins with "/moto". Your home directory is located at /moto/home/<UNI>. This is also the value of the environment variable $HOME.
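
For example, to confirm where your home directory is and see how much of the 30 GB it currently holds (the 'du' command may take a moment on a large directory):


$ echo $HOME
/moto/home/<UNI>

$ du -sh $HOME
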


Each group account on Terremoto has an associated scratch storage space that is at least 1 terabyte (TB) in size.

Your group account's scratch storage is located under /moto/<ACCOUNT>.


For "Free Tier", the account name is "free" and each individual user has a 32GB quota in the "free" scratch space.


Note the important "No backups" warning regarding this storage at the bottom of this page.


The storage for each account is as follows:

Location            Size    Default User Quota
$HOME               n/a     30 GB (102,400 inodes)
/moto/apam          9 TB    None
/moto/asenjo        1 TB    None
/moto/astro         48 TB   None
/moto/atmchm        12 TB   None
/moto/axs           5 TB    None
/moto/berkelbach    8 TB    None
/moto/buck          1 TB    None
/moto/buddy         4 TB    None
/moto/cboyce        20 TB   None
/moto/cheme         2 TB    None
/moto/cs            2 TB    None
/moto/cury          3 TB    None
/moto/eaton         6 TB    None
/moto/edu           3 TB    None
/moto/edu/e4880     3 TB    None
/moto/edu/emlab     3 TB    None
/moto/fortin        4 TB    None
/moto/free          1 TB    32 GB
/moto/febio         2 TB    None
/moto/gsb           10 TB   None
/moto/hblab         20 TB   None
/moto/hill          8 TB    None
/moto/iicd          20 TB   None
/moto/katt2         1 TB    None
/moto/kohwi         2 TB    None
/moto/kumar         3 TB    None
/moto/mauel         10 TB   None
/moto/nklab         2 TB    None
/moto/palab         90 TB   None
/moto/pdlab         15 TB   None
/moto/qmech         3 TB    None
/moto/rent          1 TB    100 GB
/moto/roam          1 TB    None
/moto/slab          1 TB    None
/moto/sscc          50 TB   None
/moto/stats         6 TB    None
/moto/trl           40 TB   None
/moto/urban         10 TB   None
/moto/yoon          3 TB    None
/moto/zi            5 TB    None
/moto/ziab          8 TB    None
/moto/zims          2 TB    None
/moto/zidw          3 TB    None

The amount of data stored in any directory along with its subdirectories can be found with:


cd <directoryName>
du -sh .


If the directory contains many files, the 'du' command may take some time to finish.
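
To see which subdirectories are taking up the most space, a common variant (assuming GNU coreutils, which is standard on Linux systems) is:


cd <directoryName>
du -sh -- * | sort -h


This lists each item in the directory with its total size, sorted smallest to largest.
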


Inodes

Inodes are used to store information about files and directories, and one inode is consumed for every file and directory that is created. Each group has a limited number of inodes based on how many TB of storage it purchased. To check your group's inode usage and limit, run:


$ df -hi /moto/<ACCOUNT>

Should your group run out of inodes while free inodes are still available, we may be able to increase your inode allocation; please contact us for details.


The inode quota for home directories is 102,400.
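
To get a rough count of how many inodes your own files are using, you can count every file and directory under a given path (a quick sketch; this walks the whole tree, so it can be slow on large directories):


$ find $HOME | wc -l
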

Anaconda keeps a cache of the package files, tarballs, etc. for the packages you have installed. This is convenient when you need to reinstall the same packages, but over time the space, and the number of files, can add up.

You can run the 'conda clean' command in dry-run mode to see what would get cleaned up:

conda clean --all --dry-run

Once you're satisfied with what would be deleted, you can run the cleanup:

conda clean --all

This will clean the index cache, lock files, tarballs, unused cache packages, and the source cache.
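
If you want a rough idea of how much space the cache occupies before cleaning it, you can check the package cache directory; its location depends on how Anaconda was installed, and the path below (~/.conda/pkgs, a common default for user installs) is an assumption:

du -sh ~/.conda/pkgs
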

User and Project Scratch Directories


Free Tier users can create directories in their account's scratch storage using their UNI or a project name.


$ cd /moto/free/users/
$ mkdir <UNI>


Alternatively, for a project shared with other users:


$ cd /moto/free/projects/
$ mkdir <PROJECT_NAME>


Naming conventions (such as using your UNI for your user directory) are not enforced, but following them is highly recommended, as they have worked well as organizational mechanisms on previous clusters.
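
As a sketch of how a shared project directory might be set up: a directory that other users should write to generally needs group write permission, and the commands below assume your collaborators belong to the same Unix group as you (an assumption about your group's configuration, not a guarantee):


$ cd /moto/free/projects/
$ mkdir <PROJECT_NAME>
$ chmod 770 <PROJECT_NAME>


chmod 770 gives the owner and the group full access while keeping all other users out.
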


No Backups


Storage is not backed up. User files may be lost due to hardware failure, user error, or other unanticipated events.

It is the responsibility of users to ensure that important files are copied from the system to other more robust storage locations.
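
For example, you could periodically copy important results to a machine you control using rsync over SSH (the hostname and destination path below are placeholders, not real systems):


$ rsync -av /moto/free/users/<UNI>/important_results/ <UNI>@<your-backup-host>:/path/to/backup/
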