...

The Fitzpatrick Lab currently has 6 workstations that are managed by Zuckerman Institute Research Computing: 4 were built by Single Particle, 1 by Exxact, and 1 by Lambda Labs.  The following table summarizes the hardware specifications for these workstations:

...

Hostname | Vendor | CPU | Logical Cores | CPU Clock Speed (GHz) | GPU Model | Number of GPUs | Operating System
exxgpu1.fitzpatrick | Exxact | Intel(R) Core(TM) i7-8700 | 12 | 3.2 | NVIDIA GeForce 1060 | 1 | CentOS 7
spgpu1.fitzpatrick | Single Particle | Xeon(R) Silver 4116 | 48 | 2.1 | GeForce RTX 2080 Ti | 1 | CentOS 7
spgpu2.fitzpatrick | Single Particle | Xeon(R) Silver 4116 | 48 | 2.1 | GeForce RTX 2080 Ti | 1 | CentOS 7
spgpu3.fitzpatrick | Single Particle | Xeon(R) Silver 4116 | 48 | 2.1 | GeForce RTX 2080 Ti | 1 | CentOS 7
spgpu4.fitzpatrick | Single Particle | Xeon(R) Silver 4116 | 48 | 2.1 | GeForce RTX 2080 Ti | 1 | CentOS 7
warp.fitzpatrick | Lambda Labs | AMD Threadripper 3960X | 48 | 4.5 | GeForce RTX 2080 Ti | 1 | Windows 10 Enterprise Edition

Linux Workstations

Software Installed

Each of the Linux workstations listed above has the following software installed:

...

To see the module names associated with these software suites, we can use the module avail command:

Code Block
[zrcadmin@spgpu1 ~]$ module avail

---------------------------------------------------------------- /opt/lmod/modulefiles/Linux ----------------------------------------------------------------
   biogrids/rc    imod    janni    sbgrid/rc    sbgrid/cshrc    sbgrid/shrc (L,D)

----------------------------------------------------------- /opt/lmod/lmod/lmod/modulefiles/Core ------------------------------------------------------------
   lmod    settarg

  Where:
   L:  Module is loaded
   D:  Default Module

Use "module spider" to find all possible modules.
Use "module keyword key1 key2 ..." to search for all possible modules matching
any of the "keys".


This indicates that the modules are named biogrids, sbgrid, imod, and janni.  The slashes in the sbgrid module names indicate multiple variants of the module script for different Unix shells, selected based on predetermined conditions.  These variants are all nested under the sbgrid module name, so loading sbgrid is equivalent to loading the default variant, sbgrid/shrc.

By default, sbgrid is loaded.  This can be seen in the (L,D) next to sbgrid/shrc: the (L) indicates that the module is currently loaded, and the (D) indicates that sbgrid/shrc, the variant for the bash Unix shell, is the default variant of sbgrid.
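
If you want to confirm what is loaded in your current shell at any point, Lmod's module list command prints the currently loaded modules (on a fresh login this should include sbgrid/shrc):

Code Block
module list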

If you want to use IMOD v. 4.10.46, you can run the following commands, which will first unload SBGrid (since it conflicts with IMOD v. 4.10.46) and then load IMOD:

Code Block
module unload sbgrid
module load imod
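
Lmod also provides a swap subcommand that combines these two steps; assuming sbgrid is the only module that conflicts with imod (as in the error message shown further below), the following one-liner should be equivalent:

Code Block
module swap sbgrid imod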

...

Code Block
module unload imod
module load sbgrid

If you do not unload a conflicting module first, Lmod will tell you which module(s) you need to unload:

Code Block
[zrcadmin@spgpu3 ~]$ module load imod
Lmod has detected the following error:  Cannot load module "imod" because these module(s) are loaded:
   sbgrid

While processing the following module(s):
    Module fullname  Module Filename
    ---------------  ---------------
    imod             /opt/lmod/modulefiles/Linux/imod.lua


Application-Specific Notes

cryoCARE

cryoCARE is set up with a custom wrapper script for ease of use.  The wrapper script uses the following syntax:

...

Code Block
singularity run --nv -B /opt/cryocare/user:/run/user -B /opt/cryocare/example/:/notebooks -B <DATA DIRECTORY>:/data /opt/cryocare/cryoCARE_v0.1.1.simg
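
For example, if your tomograms lived in /home/zrcadmin/my_tomograms (a hypothetical path used purely for illustration), the underlying command would read:

Code Block
singularity run --nv -B /opt/cryocare/user:/run/user -B /opt/cryocare/example/:/notebooks -B /home/zrcadmin/my_tomograms:/data /opt/cryocare/cryoCARE_v0.1.1.simg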

CryoSPARC

CryoSPARC is available by first starting it up (if it isn't already running) and then navigating to http://localhost:39000 if you are working on the machine it runs on, or to http://spgpu#.fitzpatrick.zi.columbia.edu:39000, where # is the number associated with the workstation (see the table above).  Note that the full spgpu domain name will only work if you are on the Columbia campus or connected to the CUIT VPN.

...

Code Block
cryosparcm start
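
To check whether CryoSPARC is already running before starting it, you can query its status with the same command-line tool:

Code Block
cryosparcm status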


Note

CryoSPARC does not start automatically after a reboot; it must be started manually from the command line once the workstation is back up.

Remote Access and Network Restrictions

To access a Linux workstation remotely, you can type the following command for command line access:

Code Block
ssh awf2130@[spgpu or exxgpu][#].fitzpatrick.zi.columbia.edu

Where "#" is the number associated with the workstation.  You will be prompted for the credentials of the workstation user; these are different from UNI credentials used for other Columbia services (such as LionMail) and have been distributed to the Fitzpatrick lab. 

You can also access the graphical user interface (GNOME) of the Fitzpatrick workstations remotely by using VNC.  To use VNC, you will need a VNC client installed on your laptop/workstation.  A list of VNC clients available for various platforms can be found here.  Note that Mac OS X comes with a built-in VNC client, which is accessible from the Finder by navigating to Go → Connect to Server and then entering vnc://[spgpu or exxgpu][#].fitzpatrick.zi.columbia.edu:5901.  VNC is accessible on port 5901 and requires authentication using credentials distributed to the Fitzpatrick lab.
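
If you are using a standalone viewer on Linux such as TigerVNC (named here only as an example; any VNC client will work), display number 1 corresponds to port 5901, so a connection to spgpu1 would look like:

Code Block
vncviewer spgpu1.fitzpatrick.zi.columbia.edu:1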

If either the command line or VNC credentials are not working, please email rc@zi.columbia.edu for further assistance.

For security reasons, the Fitzpatrick workstations are only remotely accessible via SSH/VNC if you are on the Columbia campus or using the CUIT VPN.

Windows Workstation

Software Installed

The Windows workstation has the following software installed on it:

These software applications all have shortcuts on the desktop of the main account (Anthony Fitzpatrick).

Network Storage

The Fitzpatrick lab's Engram storage is configured to be mapped to the D drive automatically when the main account (Anthony Fitzpatrick) logs in.

Application-Specific Notes

M

M takes data produced by RELION as input, as described in the pipeline outline here.  Since RELION is not a native Windows application, this will require that you make heavy use of the Fitzpatrick lab Engram storage, which (as mentioned above) is mapped to the D drive.  Roughly, you will need to (1) perform the first steps of the pipeline (preprocessing, particle image/sub-tomogram export) on warp.fitzpatrick using Warp, (2) run classification and refinement on the cryoem cluster or a Linux-based workstation and save the results to Engram, and (3) perform the final steps of the pipeline on warp.fitzpatrick using M.

Remote Access

The Windows Warp/M workstation is remotely accessible using RDP.  You can find a list of Microsoft-sanctioned RDP clients here.  For Linux, we suggest using Remmina, which is built on FreeRDP.
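
If you prefer a command-line RDP client on Linux, FreeRDP's own xfreerdp client can also be used; a minimal sketch, assuming FreeRDP 2.x is installed and substituting the Windows account name you were given:

Code Block
xfreerdp /v:warp.fitzpatrick.zi.columbia.edu /u:<USERNAME>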