From 85b1cfc5148dff0e84077d1359bb9295f2470b2c Mon Sep 17 00:00:00 2001
From: Emilia Szynwald
Date: Mon, 21 Apr 2025 18:20:52 -0400
Subject: [PATCH 1/2] renamed quickstart.md

---
 quickstart/{quickstart.md => access-policies.md} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename quickstart/{quickstart.md => access-policies.md} (100%)

diff --git a/quickstart/quickstart.md b/quickstart/access-policies.md
similarity index 100%
rename from quickstart/quickstart.md
rename to quickstart/access-policies.md

From bb55acabe4d19040f4f62fe710166a2da8f676a7 Mon Sep 17 00:00:00 2001
From: Emilia Szynwald
Date: Mon, 21 Apr 2025 18:25:50 -0400
Subject: [PATCH 2/2] adding new quickstart guides

---
 _config.yml                       |   2 +
 quickstart/quick_long_start.md    | 142 ++++++++++++++++++++++++++++++
 quickstart/quickstart-examples.md | 137 ++++++++++++++++++++++++++++
 quickstart/quickstart.md          |  70 +++++++++++++++
 4 files changed, 351 insertions(+)
 create mode 100644 quickstart/quick_long_start.md
 create mode 100644 quickstart/quickstart-examples.md
 create mode 100644 quickstart/quickstart.md

diff --git a/_config.yml b/_config.yml
index 3292e2a..b383beb 100644
--- a/_config.yml
+++ b/_config.yml
@@ -17,6 +17,8 @@ exclude:
   - Makefile
   - Gemfile
   - Gemfile.lock
+  - quickstart/quick_long_start.md
+  - quickstart/access-policies.md
 plugins:
   - jemoji

diff --git a/quickstart/quick_long_start.md b/quickstart/quick_long_start.md
new file mode 100644
index 0000000..7f57576
--- /dev/null
+++ b/quickstart/quick_long_start.md
@@ -0,0 +1,142 @@

# Star HPC LONG Start Guide

Welcome to Star HPC! This guide will help you log in, run your first job, and start using the cluster. For an overview of policies, see the [Account & Access Overview](https://starhpc.hofstra.io/account-policies/).
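Before connecting, you may optionally create an SSH key pair on your local machine so that later logins (and the port-forwarding steps later in this guide) can skip password prompts. This is a sketch using standard OpenSSH commands; the key file name `star_hpc_key` is just an example, not an official name:

```bash
# Generate an ed25519 key pair on your LOCAL machine.
# -N '' means no passphrase (optional); the file name is our own choice.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/star_hpc_key -N '' -q
cat ~/.ssh/star_hpc_key.pub   # the public key you would install on the cluster
```

You would then install the public key in `~/.ssh/authorized_keys` on the cluster, for example with `ssh-copy-id` run from your local machine with the same `-p 5010` port option used for `ssh`.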
## Connect to Star HPC

### Step 1: SSH into the Login Node

Use your provided credentials to connect:

```bash
ssh -p 5010 your_username@[login_node].hofstra.edu
```

> If you're using a Windows system, you can connect using [PuTTY](https://www.putty.org/) or WSL with OpenSSH.

## Submit Your First Job

### Step 2: Load the Sample Python Job Script

Copy the sample job script into your home directory:

```bash
cp -r /fs1/shared/docs/examples/python_hello ~
cd ~/python_hello
```

### Step 3: Submit the Job

#### Using a Batch Job (Non-interactive)

To submit a batch (non-interactive) job, run the `sbatch` command in the terminal. For example:

```bash
sbatch python_hello.sh
```

For more information, see [Batch jobs](https://docs.starhpc.hofstra.io/jobs/submitting-jobs.html#batch-jobs-non-interactive).

#### Using an Interactive Job

To submit an interactive job, run the `srun` command with your desired options directly in the terminal. For example:

```bash
srun --pty python ~/python_hello/hello_world.py
```

Once submitted, you'll be placed in an interactive shell on the allocated node when resources become available. Your prompt will change to indicate you're on the compute node.
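Once inside the interactive shell, you can confirm that you landed on a compute node with a few standard commands (a sketch; the `SLURM_*` environment variables are set by Slurm only inside an allocated job):

```bash
hostname                                   # prints the compute node's name, not the login node's
echo "Job ID: ${SLURM_JOB_ID:-not inside a Slurm job}"   # set only within an allocation
nproc                                      # number of CPUs visible to this shell
```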
For more information, see [Interactive jobs](https://docs.starhpc.hofstra.io/jobs/submitting-jobs.html#interactive-jobs).

### Step 4: Check the Output

```bash
cat python_hello.out
```

## Common Job Monitoring Commands

### Check your job status

```bash
sacct --user=<your_username>
```

### Cancel a Job

```bash
scancel <job_id>
```

### Transfer Files

Use `scp` or `rsync` to transfer files to/from the cluster:

```bash
scp -P 5010 myfile.txt your_username@[login_node].hofstra.edu:/home/your_username/
```

Learn more about monitoring and managing jobs at [Monitoring Jobs](https://docs.starhpc.hofstra.io/jobs/monitoring-jobs.html).

## How to Run a Jupyter Notebook

### Step 5: Load the Jupyter Notebook Job Script

Copy the job script into your home directory:

```bash
cp -r /fs1/shared/docs/examples/jupyter ~
cd ~/jupyter
```

### Step 6: Submit the Job

```bash
sbatch jupyter_notebook.sbatch
```

Upon submission, you will see output indicating your job's ID. Substitute that value for the `<job_id>` placeholder throughout the rest of this section.

Additionally, if you run `squeue` immediately after submitting your job, you might see a message such as "Node Unavailable" next to your job. Before proceeding, wait until `squeue` reports your job in the RUNNING state.

### Step 7: Check your output file for the SSH command

```bash
cat jupyter_notebook_<job_id>.out  # Run this in the directory containing the .out file.
```

### Step 8: Run the SSH port-forwarding command

Open a new terminal on your local machine and run the SSH command provided in the output file. If prompted for a password, use your Linux lab password if you haven't set up SSH keys; you might be asked for it multiple times. **Note** that the command will appear to hang after a successful connection; this is expected behavior.
Do not terminate the command (Ctrl + C), as doing so will disconnect your Jupyter notebook session.

### Step 9: Find and open the link in your browser

Wait about 30 seconds after executing the SSH port-forwarding command on your local machine; it takes the .err file a little while to be updated with your link.

Check the error file on the login node for your Jupyter notebook's URL:

```bash
cat jupyter_notebook_<job_id>.err | grep '127.0.0.1'  # Run this in the directory containing the .err file.
```

Copy the URL from the error file (either of the two lines printed works) and paste it into your **local machine's browser**.

### Step 10: Clean up

If you're done before the job is terminated by its walltime limit, clean up your session by running this command on the login node:

```bash
scancel <job_id>
```

Afterwards, press Ctrl + C in the local terminal session where you ran the port-forwarding command to terminate the SSH connection.

Learn more about running and creating a job script with [Jupyter Notebook](https://docs.starhpc.hofstra.io/software/jupyter-notebook.html).

## Next Steps

- Learn about job scheduling with [Slurm](https://docs.starhpc.hofstra.io/jobs/submitting-jobs.html)
- Explore available software with `module avail`
- Store large datasets in project directories (request access if needed)

For help or questions, visit [https://github.com/StarHPC/Issues](https://github.com/StarHPC/Issues) or email `starhpc-support@hofstra.edu`.
\ No newline at end of file

diff --git a/quickstart/quickstart-examples.md b/quickstart/quickstart-examples.md
new file mode 100644
index 0000000..9c8bd7d
--- /dev/null
+++ b/quickstart/quickstart-examples.md
@@ -0,0 +1,137 @@

# Star HPC Quick Start Guide

Welcome to Star HPC! This guide will help you log in, run your first job, and start using the cluster. For an overview of policies, see the [Account & Access Overview](https://starhpc.hofstra.io/account-policies/).
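To avoid retyping the port and username each time you connect (Step 1 below uses `ssh -p 5010`), you can optionally add an entry to `~/.ssh/config` on your local machine. This is a sketch; the alias name `star` is our own choice, not an official name:

```
Host star
    HostName binary.star.hofstra.edu
    Port 5010
    User your_username
```

With this entry in place, `ssh star` is equivalent to the full command shown in Step 1.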
## Connect to Star HPC

### Step 1: SSH into the Login Node

Use your provided credentials to connect:

```bash
ssh -p 5010 your_username@binary.star.hofstra.edu
```

> If you're using a Windows system, you can connect using [PuTTY](https://www.putty.org/) or WSL with OpenSSH.

## Set Up Your Environment

### Step 2: Load a Module

Star uses environment modules to manage software. Load a basic module like Python:

```bash
module avail        # View available modules
module load python  # Load Python module
```

You can use `module list` to see what you have loaded.

## Submit Your First Job

### Step 3: Create a Simple Job Script

Create a file called `test_job.slurm`:

```bash
nano test_job.slurm
```

Paste this content (the `#SBATCH` lines at the beginning of a job script are not just comments; they are directives processed by `sbatch`):

```bash
#!/bin/bash
#SBATCH --job-name=test_job
#SBATCH --output=test_job.out
#SBATCH --error=test_job.err
#SBATCH --nodes=1
#SBATCH --time=10:00
#SBATCH --mem=1G

# The commands you wish to run go here.
```

Save and exit.

### Example

Create a file called `hello.slurm`:

```bash
nano hello.slurm
```

Paste this content:

```bash
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --output=hello.out
#SBATCH --time=00:01:00
#SBATCH --ntasks=1

echo "Hello from Star HPC!"
```

Save and exit.

### Step 4: Submit the Job

#### Using a Batch Job (Non-interactive)

```bash
sbatch hello.slurm
```

For more information, see [Batch jobs](https://docs.starhpc.hofstra.io/jobs/submitting-jobs.html#batch-jobs-non-interactive).

#### Using an Interactive Job

To submit an interactive job, run the `srun` command with your desired options directly in the terminal.
For example:

```bash
# Run the sample shell script directly (make it executable first):
chmod +x ~/python_hello/python_hello.sh
srun --pty --ntasks=1 --cpus-per-task=1 --time=01:00:00 --mem=4G ~/python_hello/python_hello.sh

# Or run the sample Python script through the interpreter:
srun --pty --ntasks=1 --cpus-per-task=1 --time=01:00:00 --mem=4G python ~/python_hello/hello_world.py
```

Once submitted, you'll be placed in an interactive shell on the allocated node when resources become available. Your prompt will change to indicate you're on the compute node.

For more information, see [Interactive jobs](https://docs.starhpc.hofstra.io/jobs/submitting-jobs.html#interactive-jobs).

### Step 5: Monitor Your Job

```bash
squeue -u your_username
```

Once complete, check the output:

```bash
cat hello.out
```

### Cancel a Job

```bash
scancel <job_id>
```

## Transfer Files

Use `scp` or `rsync` to transfer files to/from the Star cluster:

```bash
scp -P 5010 myfile.txt your_username@binary.star.hofstra.edu:/home/your_username/
```

### How to Run a Jupyter Notebook

Copy the sample sbatch job template (`/fs1/shared/job-examples/jupyter-notebook.sbatch`) into your own directory, submit it with `sbatch`, and then run the SSH port-forwarding command from the job's output.

Learn more about running [Jupyter Notebook](https://docs.starhpc.hofstra.io/software/jupyter-notebook.html).

## Next Steps

- Learn about job scheduling with [Slurm](https://docs.starhpc.hofstra.io/jobs/submitting-jobs.html)
- Explore available software with `module avail`
- Store large datasets in project directories (request access if needed)

For help or questions, visit [https://github.com/StarHPC/Issues](https://github.com/StarHPC/Issues) or email `starhpc-support@hofstra.edu`.
diff --git a/quickstart/quickstart.md b/quickstart/quickstart.md
new file mode 100644
index 0000000..5db2d97
--- /dev/null
+++ b/quickstart/quickstart.md
@@ -0,0 +1,70 @@

# Star HPC Quick Start Guide

Welcome to Star HPC! This guide will help you log in, run your first job, and start using the cluster. For an overview of policies, see the [Account & Access Overview](https://starhpc.hofstra.io/account-policies/).

## Connect to Star HPC

### Step 1: SSH into the Login Node

Log in with the credentials provided in the Welcome email!

If you're using a Windows system, you can connect using [PuTTY](https://www.putty.org/) or WSL with OpenSSH.

## Submit Your First Job

### Step 2: Load the Sample Python Job Script

Copy the sample job script into your home directory:

```bash
cp -r /fs1/shared/docs/examples/python_hello ~
cd ~/python_hello
```

### Step 3: Submit the Job

To submit a batch (non-interactive) job, run the `sbatch` command in the terminal. For example:

```bash
sbatch python_hello.sh
```

For more information, see [Batch jobs](https://docs.starhpc.hofstra.io/jobs/submitting-jobs.html#batch-jobs-non-interactive).

### Step 4: Check the Output

```bash
cat python_hello.out
```

## Common Job Monitoring Commands

### Check your job status

```bash
sacct --user=<your_username>
```

### Cancel a Job

```bash
scancel <job_id>
```

### Transfer Files

Use `scp` or `rsync` to transfer files to/from the cluster:

```bash
scp -P 5010 myfile.txt your_username@[login_node].hofstra.edu:/home/your_username/
```

Learn more about monitoring and managing jobs at [Monitoring Jobs](https://docs.starhpc.hofstra.io/jobs/monitoring-jobs.html).

## Next Steps

- Learn about running and creating a job script with [Jupyter Notebook](https://docs.starhpc.hofstra.io/software/jupyter-notebook.html)
- Learn about job scheduling with [Slurm](https://docs.starhpc.hofstra.io/jobs/submitting-jobs.html)
- Explore available software
with `module avail`
- Store large datasets in project directories (request access if needed)

For help or questions, visit [https://github.com/StarHPC/Issues](https://github.com/StarHPC/Issues) or email `starhpc-support@hofstra.edu`.
\ No newline at end of file