HPC systems: SDSU HPC resources include three clusters:

1) Bigjack, an IBM iDataPlex Linux cluster of 71 nodes, each with 12 processor cores (852 cores total) and 48 GB of RAM. Nine nodes have dual Nvidia Tesla M2090 graphics processing units (GPUs). The nodes are connected by a Mellanox InfiniBand low-latency interconnect, and two nodes have host bus adapters that attach SAN block storage, which is exported to all nodes as an NFS file system over Ethernet. Jobs are submitted to the cluster through a Moab/Torque scheduler and resource manager (see the example script below). One node of the cluster is dedicated to remote visualization.

2) Silvertip2 and Silvertip3 clusters, each with 24 cores and 512 GB of RAM.

3) An IBM BladeCenter with 22 AMD Opteron nodes.

The SDSU HPC is supported by one lead, two science domain specialists, and two staff systems specialists.
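Job submission on Bigjack goes through the Moab/Torque scheduler noted above. The following is a minimal sketch of how a batch job might be prepared and submitted with qsub; the queue name, resource requests, and workload command are illustrative assumptions, not actual site configuration.

    """Minimal sketch of submitting a batch job to a Torque/Moab-managed
    cluster such as Bigjack. Queue name, walltime, and workload are
    placeholders, not the actual SDSU HPC configuration."""

    import subprocess
    import tempfile

    # Torque/PBS directives: one node, 12 cores (one full Bigjack node), 4 h walltime.
    # The queue name "batch" is an assumption; real queue names may differ.
    JOB_SCRIPT = """#!/bin/bash
    #PBS -N example_job
    #PBS -l nodes=1:ppn=12
    #PBS -l walltime=04:00:00
    #PBS -q batch

    cd $PBS_O_WORKDIR
    echo "Running on $(hostname)"
    # ./my_analysis --input data.fastq   # placeholder workload
    """

    def submit(script_text: str) -> str:
        """Write the job script to a temporary file, submit it with qsub,
        and return the job ID printed by the scheduler."""
        with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
            f.write(script_text)
            path = f.name
        result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        print("Submitted job:", submit(JOB_SCRIPT))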

Inter-institute connectivity: All BioSNTR partner institutions are part of South Dakota's 10G research network (REED), so data from both genomics and imaging experiments can be transferred between institutions. The major limitation is that only specific buildings at each institution have 10G infrastructure hardware. We have identified suitable locations at each institution and will situate genomics and imaging equipment there to enable fast data transfer.
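As a rough illustration of moving a large dataset between partner institutions over the 10G network, the sketch below drives rsync over SSH from Python; the hostnames, usernames, and paths are placeholders, not actual BioSNTR endpoints.

    """Minimal sketch of transferring a genomics or imaging dataset between
    BioSNTR partner institutions. Source path and remote endpoint are
    hypothetical placeholders."""

    import subprocess

    SOURCE_DIR = "/data/genomics/run_01/"                 # placeholder local path
    DEST = "user@hpc.partner-institution.edu:/archive/"   # placeholder remote endpoint

    def transfer(src: str, dest: str) -> None:
        """Use rsync over SSH so an interrupted transfer of a large dataset
        can resume without re-sending files that already arrived intact."""
        subprocess.run(
            ["rsync", "-avz", "--partial", "--progress", src, dest],
            check=True,
        )

    if __name__ == "__main__":
        transfer(SOURCE_DIR, DEST)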

Location: SDSU
Contact: Anne Fennell, Padmapriya Swaminathan