
IBIC data storage

There are two primary types of data storage available for IBIC servers and desktops: local storage and network shared storage.

  • neuron class server local RAID storage. This is local storage and the fastest data storage for local processing.
     /mnt/localhostname  (e.g. [adrc]%ls /mnt/adrc) -- total size 20T (use df -h to see actual available size)

            All these local RAID volumes are cross-mounted on each server. They can be found from any server at the same path:

     /mnt/remotehostname (e.g. [adrc]%ls /mnt/panuc)

This is for data reference only. Please do not use this space for SGE or any remote processing. Neuron class servers are not storage servers; they are heavily used by interactive local users.

  • Desktop local storage 

           /var/local/scratch  — total size 1T

  • IBIC network shared storage (NAS) for cluster global queue SGE processing. This is hosted by a dedicated Sun ZFS server.
     /project_space    -- total size 5T
     /projects2        -- total size 6T
     /SCRATCH          -- total size 1.2T, automatically cleaned if older than 3 months.
  • IBIC network shared Data Warehouse. This is a mega storage repository hosted by a dedicated SuperMicro ZFS storage server.
     It is capable of, but not ideal for, fast network processing; it is the ideal place for data storage.
    /NAS_II                         --  total size 50T

  • new HPC network shared working space
     /projects3  accessible on Neuron cluster servers --- total size 14T
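Before staging large datasets onto any of the mounts above, it is worth checking how much space is actually free (the totals listed are nominal sizes). A minimal sketch; `/tmp` is used here only as a stand-in for the mount you actually plan to use, e.g. /mnt/adrc, /project_space, or /NAS_II:

```shell
# Check actual available space on a mount before staging large data.
# TARGET is a placeholder; point it at the mount you intend to use.
TARGET=${TARGET:-/tmp}
df -h "$TARGET"
```

The "Avail" column is the figure to watch; quota or reservation can make it smaller than the advertised total.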

IBIC-neuron Cluster Servers

The "IBIC-neuron" cluster consists of the following SuperMicro U628 24-core servers. These hosts are dedicated to individual research groups for interactive data processing.

It is also configured with the SGE job scheduler (Son of Grid Engine 8.2), so it can run parallel multi-core jobs on the local computer. It is also equipped with a queue that combines the cores of all hosts for use by non-local users. This is meant for resource sharing while cores are idle, so it should be used carefully so as not to impact local prime usage.

Here are the hostnames and the two types of queues:

  • hostname 

    SuperMicro U628 24-core 252 GB RAM Servers 

    Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz, v4@2.40GHz
    adrc.ibic.washington.edu 
    ---------------------------------------------------------------------------------
    panuc.ibic.washington.edu
    ---------------------------------------------------------------------------------
    tpp.ibic.washington.edu
    ---------------------------------------------------------------------------------
    neuroimaging2.washington.edu
    ---------------------------------------------------------------------------------
    parcellator.ibic.washington.edu
    ---------------------------------------------------------------------------------
    viscog.ibic.washington.edu
    ---------------------------------------------------------------------------------
    adiposite.ibic.washington.edu 
    ---------------------------------------------------------------------------------
    chdd.ibic.washington.edu 
    ---------------------------------------------------------------------------------
    praxic.ibic.washington.edu 
    ---------------------------------------------------------------------------------

 

  • queue names:

neuro.q contains adrc, tpp and panuc.

parcellator.q, CHDD.q, neuroimaging2.q, adiposite.q and viscog.q each contain the single host with the same name as the queue.

These queues are only to be used by project group users for multi-core processing on the local host. Check with your supervisor regarding which host you should be using. Jobs can only be submitted from the local host.

global.q: a shared queue combining 8 of the above servers (viscog is constantly busy, and was asked to be removed from global use). Jobs can be submitted from any IBIC-neuron host, but specify the queue name.
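The submission pattern above can be sketched as a small SGE job script. This is a hypothetical example (the script name `my_job.sh` and the parallel environment name `smp` are assumptions; check the actual PE name with `qconf -spl` on a cluster host):

```shell
# Write a minimal SGE job script targeting the shared queue.
# "-q global.q" selects the shared queue instead of the host's own queue.
cat > my_job.sh <<'EOF'
#!/bin/bash
#$ -q global.q          # target the shared queue
#$ -pe smp 4            # request 4 cores (PE name "smp" is an assumption)
#$ -cwd                 # run from the submission directory
echo "running on $(hostname)"
EOF
# Submit from any IBIC-neuron host:
# qsub my_job.sh
```

Keep requested core counts modest on global.q, since the hosts' own users have priority for interactive work.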

IBIC-SunFire Cluster Servers

This cluster contains five SunFire X4270 servers (four 16-core hosts and one 8-core host, 72 cores in total). It is designated for batch job submission and configured with Sun Grid Engine (6.2). It is strongly recommended to use this cluster if you are developing SGE scripts.

  • hostname:
     

    Intel(R) Xeon(R) CPU  X5570  @ 2.93GHz 48GB RAM, X5560 @2.80GHz 8GB RAM

amygdala.ibic.washington.edu BIP 0/0/16 0.21 lx24-amd64 
---------------------------------------------------------------------------------
broca.ibic.washington.edu BIP 0/0/16 0.27 lx24-amd64
---------------------------------------------------------------------------------
evolution.ibic.washington.edu BIP 0/0/8 0.17 lx24-amd64
---------------------------------------------------------------------------------
homunculus.ibic.washington.edu BIP 0/0/16 0.18 lx24-amd64
---------------------------------------------------------------------------------
pole.ibic.washington.edu BIP 0/0/16 0.21 lx24-amd64
---------------------------------------------------------------------------------
  • Data storage:
    There is no local storage for this cluster of computers.
    Primary storage is /NAS_II; for fast data processing use /project_space, /projects2 or /SCRATCH.

  • queue name: ibic-dev.q. Jobs can only be submitted from an IBIC-SunFire host (any of them).
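Since this cluster is designated for batch submission, an array job is the typical pattern. A hedged sketch; `subjects.txt` and the per-subject logic are placeholders for your own inputs:

```shell
# Write a hypothetical SGE array-job script for the development queue.
# Each task processes one line of subjects.txt, selected by SGE_TASK_ID.
cat > batch_job.sh <<'EOF'
#!/bin/bash
#$ -q ibic-dev.q        # development queue on IBIC-SunFire
#$ -t 1-10              # array job: tasks numbered 1..10
#$ -cwd                 # run from the submission directory
subject=$(sed -n "${SGE_TASK_ID}p" subjects.txt)
echo "task $SGE_TASK_ID processing $subject"
EOF
# Submit from an IBIC-SunFire host and watch progress:
# qsub batch_job.sh
# qstat -u "$USER"
```

Developing the script here first, then moving it to the neuron cluster, keeps trial-and-error submissions off the interactive hosts.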

 

Legacy HP hosts – Legacy OS: Debian 8

Two HP z800 8-core workstations with 16 cores in total. These are the only two hosts with the old IBIC environment (Debian 8 and old software packages). If you have an existing project that needs to continue processing on a previous pipeline, this is the right place to go.

  • spec: Intel(R) Xeon(R) CPU  E5630 @ 2.53GHz, E5530 @2.40GHz 24 GB RAM
  • hostname: 
nigra.ibic.washington.edu BIP 0/0/8 1.36 lx24-amd64 
---------------------------------------------------------------------------------
precuneus.ibic.washington.edu BIP 0/0/8 0.37 lx24-amd64
---------------------------------------------------------------------------------
  • Data storage: no local storage available. All IBIC network shared storage is accessible.
  • queue name: legacy.q
 