Transfers between Engram and the RCS clusters

ITS provides two methods to transfer data between the RCS Clusters and Engram:

  1. engram-xfer-01.rc.zi.columbia.edu

    We can set up a user account on this Linux server that can mount Engram shares and be used to transfer files (e.g., with command-line tools such as scp or rsync) to and from outside servers, such as the RCS clusters, without tying up resources on your workstation. To use this method, contact ITS with the share you want to be able to mount and any UNIs that need access.
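
    For example, once your lab's share is mounted on the server (per the steps below), a one-off copy from the mounted share to an RCS cluster could look like the following sketch; the paths are placeholders:

      # copy a directory from the mounted Engram share to Ginsburg (placeholder paths)
      scp -r /home/UNI/mnt/path/to/data UNI@ginsburg.rcs.columbia.edu:/path/to/destination/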

    1. For the ITS sysadmin to set up a new user account on engram-xfer-01.rc (a consolidated command sketch follows these steps):
      1. engram-xfer-01 binds to CU LDAP for UNI authentication. Verify that the new user exists in LDAP first by running "id UNI" on the host. If the UNI doesn't appear, ask CUIT to create the UNI account in CUNIX/LDAP first.
      2. Add the UNI to the local group engram-xfer-01 (the SSH access group) by running "sudo usermod -aG engram-xfer-01 UNI"
      3. Make a home directory and fix its ownership and permissions for the new user: "sudo mkdir /home/UNI"; "sudo chown UNI:user /home/UNI"; "sudo chmod 700 /home/UNI"
      4. Grant the new user permission to run the CIFS mount/unmount bash scripts (/usr/local/bin/zi_smbmount and /usr/local/bin/zi_smbunmount) as root:
        1. Check whether the lab's local group has been created and added to the sudoers file: grep LabName /etc/group; grep LabName /etc/sudoers. If the lab group doesn't exist, create the local lab group first (sudo groupadd LabName), then add the following line to /etc/sudoers (using visudo): %LabName ALL=/usr/local/bin/zi_smbmount LabName-EngramShareTier, /usr/local/bin/zi_smbunmount
        2. Add the new user to the local lab group: sudo usermod -a -G LabName UNI
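
      Putting the sysadmin steps above together, a typical account setup on engram-xfer-01 might look like the following; UNI, LabName, and LabName-EngramShareTier are placeholders and should match the lab's actual group and Engram share:

        # 1. confirm the UNI resolves via CU LDAP (if not, ask CUIT to create it first)
        id UNI
        # 2. grant SSH access
        sudo usermod -aG engram-xfer-01 UNI
        # 3. create and secure the home directory
        sudo mkdir /home/UNI
        sudo chown UNI:user /home/UNI
        sudo chmod 700 /home/UNI
        # 4. create the lab group if needed and grant mount/unmount rights via sudoers
        sudo groupadd LabName        # only if "grep LabName /etc/group" returns nothing
        sudo visudo                  # add: %LabName ALL=/usr/local/bin/zi_smbmount LabName-EngramShareTier, /usr/local/bin/zi_smbunmount
        sudo usermod -a -G LabName UNI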
    2. For users to transfer data between CUIT's HPC clusters and Engram (a consolidated example follows these steps):
      1. Log in to the server from your terminal shell or PuTTY session: ssh UNI@engram-xfer-01.rc.zi.columbia.edu. Make sure you are either on Columbia Secure WiFi, on an on-campus computer plugged into a wall jack, or on Columbia's VPN.
      2. Create an Engram share mount point on this server by running "mkdir /home/UNI/mnt"
      3. Mount your lab's Engram share on this server: "/usr/local/bin/zi_smbmount LabName-EngramShareTier" (e.g., /usr/local/bin/zi_smbmount shohamy-locker). Please note: you only need to do steps 2 and 3 once, unless you unmount the Engram share by running "/usr/local/bin/zi_smbunmount"; in that case, run "/usr/local/bin/zi_smbmount LabName-EngramShareTier" again the next time you need Engram mounted on this server.
      4. Transfer data between HPC and Engram (engram-xfer-01.rc):
        1. Open a new tmux session by running "tmux" or "tmux new -s SessionName". You can also rename the tmux session with the key binding Ctrl+b, $, or reattach an existing tmux session by running "tmux attach -t SessionName".
        2. Run the rsync command to transfer data between HPC and Engram: rsync -avPhW SourceDirectory Destination. Example for transferring data from Engram to Ginsburg: rsync -avPhW /home/UNI/mnt/path/to/source/data UNI@ginsburg.rcs.columbia.edu:/path/to/destination/directory. Example for transferring data from Ginsburg to Engram: rsync -avPhW UNI@ginsburg.rcs.columbia.edu:/path/to/source/data /home/UNI/mnt/path/to/destination/directory. You can also ssh to Ginsburg and run rsync from there; for example, to transfer data from Engram to Ginsburg while logged in on Ginsburg: rsync -avPhW UNI@engram-xfer-01.rc.zi.columbia.edu:/home/UNI/mnt/path/to/source/data /path/to/destination/directory, and vice versa.
        3. Leave the tmux session by closing the shell window (or detach with Ctrl+b, d). DO NOT press Ctrl+C or type "exit". You can then go do other work; the rsync job will keep running in the background.
        4. To verify whether the rsync job has finished, ssh back to the server where you started it and reattach the tmux session by running "tmux a -t SessionName".
        5. If the job failed in the middle of the transfer, simply rerun the same rsync command (rsync -avPhW SourceDirectory Destination) to resume; rsync will skip files that have already been transferred completely. For more detailed usage, see the tmux cheat sheet and rsync cheat sheet.
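
      Putting the user steps above together, a typical transfer session on engram-xfer-01 might look like the following sketch; UNI, SessionName, LabName-EngramShareTier, and the paths are placeholders:

        # log in and (one time only) create a mount point and mount your lab's Engram share
        ssh UNI@engram-xfer-01.rc.zi.columbia.edu
        mkdir -p /home/UNI/mnt
        /usr/local/bin/zi_smbmount LabName-EngramShareTier

        # start a named tmux session so the transfer survives disconnects
        tmux new -s SessionName

        # inside tmux: copy from the mounted Engram share to Ginsburg (placeholder paths)
        rsync -avPhW /home/UNI/mnt/path/to/source/data UNI@ginsburg.rcs.columbia.edu:/path/to/destination/

        # later: reattach the session to confirm the transfer finished
        tmux a -t SessionName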
  2. Globus

    We can help you set up a Globus local share on our existing access point; then you can use the Globus tool to transfer files back and forth without tying up resources on your local workstation. To use this method, contact ITS and we will help you get your Globus share created.
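
    If you prefer the command line over the Globus web app, a transfer involving such a share could look like the following sketch using the Globus CLI; the endpoint UUIDs, paths, and search terms are placeholders, and the actual collection names will come from ITS once your share is created:

      # log in to Globus and look up the collection (endpoint) UUIDs
      globus login
      globus endpoint search "Engram"
      # submit a recursive transfer between two collections (placeholder UUIDs and paths)
      globus transfer SOURCE_ENDPOINT_UUID:/path/to/source/data DEST_ENDPOINT_UUID:/path/to/destination/ --recursive --label "engram-to-hpc"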