A. HPC Service
The UBDA platform enables researchers to run their own programs and applications across multiple CPU cores, computing nodes and GPUs, with parallel file system support. A job queue scheduling system manages how user programs and applications run on the platform: a user is required to submit a job to one of several queues, each with different resource limits, such as the maximum number of computing nodes, the number of GPUs, and the maximum number of CPU cores per computing node.
To run code on the UBDA platform, select the job queue that matches your application's resource requirements:
Job Queues' Configuration
|Job queue|Max no. of nodes|No. of CPU cores per node|No. of GPU cards per node|Usable memory per node (GB)|
Job Queues' Resource Limit
|Job queue|Max no. of CPU cores for a job|Max no. of concurrent jobs per user|Max no. of jobs that can be submitted concurrently per user|Maximum run time limit (walltime) for jobs (hrs)|
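As an illustration, a batch job targeting one of the queues above might look like the following sketch. A PBS-style scheduler is assumed; the queue name (`gpu_queue`), resource counts, job name and script path are placeholders, and the actual directives depend on the scheduler deployed on the UBDA platform.

```shell
#!/bin/bash
# Hypothetical PBS-style job script. The queue name, resource
# counts and paths below are placeholders, not UBDA defaults.
#PBS -q gpu_queue             # job queue chosen from the tables above
#PBS -l nodes=1:ppn=8:gpus=1  # 1 node, 8 CPU cores, 1 GPU card
#PBS -l walltime=24:00:00     # must not exceed the queue's walltime limit
#PBS -N my_analysis           # job name shown in the queue listing

cd "$PBS_O_WORKDIR"           # run from the directory the job was submitted in
python my_analysis.py         # replace with your own program or application
```

On a PBS-style system, such a script would typically be submitted with `qsub my_job.pbs` and monitored with `qstat -u $USER`; consult the UBDA documentation for the exact commands in use.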
User Storage Quota
Because demands differ from project to project, users should tell us what computing resources they need, such as the number of CPU cores, the number of GPUs and the storage size, as well as the applications to be set up on the UBDA platform. We are pleased to work with you to set up and build the environment for running your desired programs and applications. To reserve resources, please let us know your plans in advance.
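As a sketch of how a user might check their storage usage against the assigned quota, the commands below assume a Lustre parallel file system mounted at `/scratch`; both the file system type and the mount point are assumptions, not confirmed UBDA paths.

```shell
# Hypothetical commands; the file system type (Lustre) and the
# mount point (/scratch) are assumptions, not confirmed UBDA paths.
lfs quota -u "$USER" /scratch   # per-user quota and usage on a Lustre mount
df -h /scratch                  # overall file system capacity and usage
```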
B. Virtual Machine Service
This service provides researchers with their own virtual machine(s) (VM) in which to run and test their applications, with full control of the operating system. Users can install and extensively customize their applications in this VM environment.
C. JupyterHub Service
This is an online JupyterLab service with GPU support. Users can use this GPU-enabled JupyterLab environment to develop and run GPU-accelerated research applications built with frameworks such as TensorFlow and Keras.
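For example, a first notebook cell can confirm that the kernel sees a GPU. This is a generic sketch; it assumes TensorFlow is installed in the JupyterHub image and degrades gracefully if it is not:

```python
# Sketch: count the GPUs visible to this JupyterLab kernel.
# Assumes TensorFlow is available; returns None if it is not installed.
def count_visible_gpus():
    try:
        import tensorflow as tf
    except ImportError:
        return None  # TensorFlow not present in this environment
    return len(tf.config.list_physical_devices("GPU"))

print("Visible GPUs:", count_visible_gpus())
```

A result of `0` inside the JupyterHub service would suggest the session was launched without a GPU allocation.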