
Launching a Desktop Session

Pick the right tile

Your Interactive Apps menu lists the apps you're authorised to launch. The desktop tiles are:

  • Lab Desktop — a full Xfce desktop on one of your lab's nodes. Shortest form (three fields). Best default for most users.
  • Lab Desktop Advanced — same desktop, with sliders for hours, cores, memory, and GPUs. Use when you need to override the defaults baked into the profile.
  • Neurodesk Desktop — Xfce desktop with the Neurodesk catalog pre-loaded as neuro/<tool>/<version> Lmod modules. Only appears on neuroimaging-enabled profiles.

There are also single-tool launchers that put one application in your browser without a full desktop: JupyterLab, MATLAB, VS Code (code-server), ANSYS, Abaqus, COMSOL. They share the same resource form as Lab Desktop Advanced, plus tool-specific options (e.g. JupyterLab's modules field).

The form fields you'll actually see

Lab Desktop (simple)

| Field | What it does |
| --- | --- |
| Desktop | Drop-down of approved profiles. Each profile pins to a specific node in your lab (see below). |
| Resolution | 1920x1080 or 2560x1440. |
| AD password | Optional. Only needed if this session should mount your lab's CIFS shares. Leave blank to work in $HOME and /scratch only. |

Hours / cores / memory / GPUs aren't on this form — the profile's defaults are used as-is (typically 8 h, 8 cores, 32 GB, 1 GPU).

Lab Desktop Advanced

Same as above, plus:

| Field | Default | Notes |
| --- | --- | --- |
| Hours | 8 | Walltime; Slurm kills the session at expiry. |
| CPU cores | 8 | |
| Memory (GB) | 32 | |
| GPUs | 1 (prefilled from the profile when you pick it) | The widget's own default is 0; most profiles auto-fill it to 1, up to the node's max_gpus. |

JupyterLab / VS Code / MATLAB / ANSYS / Abaqus / COMSOL

Same resource fields as Advanced, plus tool-specific:

  • JupyterLab: modules (default jupyter-gpu/2026a), extra_jupyter_args.
  • VS Code (code-server): modules (default code-server), workdir (default $HOME).
  • MATLAB: modules (default matlab/R2025a matlab-proxy).
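
The modules fields take standard Lmod module names (the same scheme as the neuro/<tool>/<version> modules mentioned above). If you're unsure which versions are installed, a quick check from any cluster shell works; a sketch, using the defaults listed above as examples:

    module avail matlab          # list every installed MATLAB module version
    module spider jupyter-gpu    # details for a specific module, e.g. the JupyterLab default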

The "Desktop" / "Lab profile" selector picks the node

The most consequential field on every form is the profile drop-down. Each entry maps to one specific node in your lab — there's no "any node in my lab" option. Today's profiles:

  • Lincheng Research Desktop → the lab's Ada-GPU node (3 GPUs, MPS sharding enabled).
  • Inspire Turing Desktop → the lab's 4-GPU Turing node.
  • Inspire Searle Desktop → the lab's single-GPU Ada node (smaller resource defaults: 4 cores, 16 GB).

If your lab adds another node, a matching profile gets added; pick the one that matches the hardware you need. The drop-down only shows profiles your lab is authorised for, so most users will just see their own lab's options.
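
To confirm what hardware sits behind a profile before you launch, the Clusters → Research Slurm Shell Access terminal can list it. A minimal sketch, with <your-lab> standing in for your lab's partition name:

    # CPU, memory (MB) and GPU (GRES) inventory for every node in your lab's partition
    sinfo -N -p <your-lab> -o "%N %c %m %G"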

AD password (optional)

  • Optional. Leave blank for sessions that only touch $HOME and /scratch.
  • Required if you need to read or write your lab's CIFS shares — i.e. anything under /mnt/<lab-share> or the matching symlinks in $HOME that your lab admin set up.
  • It's your AD.UMD.EDU password (for most people, the same as your UMD login password).
  • Used once at session start to register a per-user CIFS credential in the kernel keyring (cifscreds). Not stored beyond the session, not written to disk.
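
For reference, the credential step behind that last bullet amounts to something like the following. This is a sketch of what the launch script does, not the exact script; the file-server hostname is a placeholder, and findmnt shows the real one once a share is mounted:

    # Register a per-user CIFS credential in the kernel keyring (prompts for the AD password)
    cifscreds add -u $USER fileserver.example.umd.edu   # hostname is a placeholder
    findmnt -t cifs                                     # shows which CIFS shares are mounted, and from where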

If you enter a wrong AD password, the session still launches; your lab share will just look empty. End the session and relaunch with the correct password.

There is no AD username field — the form uses your OOD-portal identity (your UMD directory ID) for the username automatically.

Picking a walltime

Slurm kills your session as soon as the walltime is up, regardless of whether you're in the middle of something. Pick something that comfortably covers your day:

  • 4 hours — a focused working block.
  • 8 hours (default) — a workday, plus a bit of slack.
  • 24–72 hours — an overnight or weekend run. Use this only when you know what's running; your node being occupied for 3 days blocks your labmates.

Long-running computation is usually better submitted as a batch job via sbatch (see slurm-cli.md) — you don't need a desktop for that, and batch queues can run longer jobs.
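
If you do take the batch route, a minimal job script covers the same knobs the desktop form exposes. A sketch only; the partition, module, and script names are placeholders, not this cluster's actual values:

    #!/bin/bash
    #SBATCH --partition=<your-lab>    # placeholder: your lab's partition
    #SBATCH --time=24:00:00           # walltime, same meaning as the form's Hours field
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=32G
    #SBATCH --gres=gpu:1

    module load jupyter-gpu/2026a     # placeholder: whatever your workload actually needs
    python train.py                   # placeholder for the real work

Submit it with sbatch job.sh and check on it with squeue -u $USER; see slurm-cli.md for details.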

Picking CPU cores, memory, GPU

(Applies to Lab Desktop Advanced and the single-tool launchers — the simple Lab Desktop form uses the profile's resources as-is and doesn't ask you.)

  • Start with the prefilled defaults. They come from the profile and are tuned to leave room for your labmates on the same node.
  • Only ask for more than you'll use if you genuinely need it. CPU and memory are tracked per allocation; over-asking eats into what's available for labmates even though jobs share the node.
  • GPUs are shared, not queued. Several jobs can land on the same physical GPU at once. The widget itself defaults to 0, but most profiles auto-fill it to 1 when you pick them; drop to 0 only if you genuinely won't use the GPU (file browsing, prose writing).
  • 1 GPU is right for most CUDA / accelerated viewport workloads and for any neuroimaging container that needs --nv passthrough. Multi-GPU (e.g. distributed training) → 2+ if the profile's max_gpus allows it. The Inspire Searle profile, for instance, caps at 1 because the node only has one GPU.

| Workload | Sensible starting point |
| --- | --- |
| File browsing, light editing, viewing results | 4 cores, 16 GB, 0 GPU |
| MATLAB (no gpuArray) | 4 cores, 16 GB, 1 GPU |
| MATLAB with gpuArray / Parallel Computing Toolbox GPU | 4 cores, 16 GB, 1 GPU |
| ANSYS / COMSOL / Abaqus, light editing | 8 cores, 32 GB, 1 GPU |
| ANSYS / COMSOL / Abaqus, large-assembly interactive | 8 cores, 32 GB, 1 GPU |
| Neuroimaging pipelines (fmriprep, C-PAC, Neurodesk) | 8 cores, 32 GB, 1 GPU |
| PyTorch / JAX single-GPU training or inference | 8 cores, 32 GB, 1 GPU |
| Multi-GPU training | 16 cores, 64 GB, 2 GPUs |
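
Once the session is up, you can confirm what Slurm actually gave you from a terminal inside the desktop. A quick sketch using standard Slurm environment variables; the exact variables set can vary with how the launcher submits the job:

    echo $SLURM_JOB_ID $SLURM_CPUS_PER_TASK $SLURM_MEM_PER_NODE   # job ID, cores, memory (MB)
    echo $CUDA_VISIBLE_DEVICES                                    # which GPU index(es) you hold
    nvidia-smi                                                    # live utilisation of the (shared) GPU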

Neurodesk Desktop (preview)

Neuroimaging labs may also see a Neurodesk Desktop tile in Interactive Apps. It's the same Xfce desktop with the Neurodesk catalog pre-loaded — every tool surfaces as an entry under a Neurodesk menu in the desktop application launcher.

This is currently a preview. The plan is to fold the most-used Neurodesk tools into our standard Research Software menu over time, so don't structure long-term workflows around the dedicated tile. For now, use whichever tile gives you the tools you need.

Resolution

Match your actual monitor for the sharpest text. If you're unsure, 1920x1080 is safe on any laptop; 2560x1440 looks good on most desktop monitors.

You can change resolution mid-session from the VNC client's top-of-window menu, but picking it right the first time avoids a reconnect.
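
If you'd rather not use the VNC menu, xrandr from a terminal inside the session usually works too; a hedged sketch, since the available sizes depend on the VNC server build backing the desktop:

    xrandr                  # list the resolutions the VNC server offers
    xrandr -s 2560x1440     # switch the running desktop to one of them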

Your session, once running

You'll land on the "My Interactive Sessions" page. Each session card shows:

  • Time remaining — when Slurm will end it.
  • Host — which compute node it landed on.
  • Launch NoVNC in a new tab — opens the desktop in your browser.
  • Launch TurboVNC client — for the native VNC client if you've installed one (faster over poor networks).
  • Native Instructions — a tab with copy-paste instructions for connecting any VNC client over an SSH tunnel (see below).
  • Delete — ends the session early and returns the node to the queue.

Connecting with a native VNC client

For weak networks or larger displays, a native VNC client is much smoother than the in-browser NoVNC viewer. The session card has a Native Instructions tab that gives you everything you need:

  1. Click Native Instructions on the running session card.
  2. Copy the SSH tunnel command and run it in a local terminal. It forwards a port from your laptop to the compute node via the HPC login host. Leave the terminal open while you're connected.
  3. Either click the .vnc file download and open it with your VNC client, or point your client at localhost:<port> with the one-time password shown on the tab.
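
The tunnel command on that tab will look roughly like the following. This is a sketch with placeholder hostnames and port; always copy the exact command and one-time password from your own session card:

    # Forward a local port to the VNC display on the compute node, via the HPC login host
    ssh -L 5901:<compute-node>:<vnc-port> <your-umd-id>@<hpc-login-host>
    # ...then point your VNC client at localhost:5901 and enter the one-time password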

Recommended clients:

  • TigerVNC — free, works on macOS, Windows, Linux.
  • TurboVNC — best performance over slow networks; matches the server.
  • RealVNC Viewer — works, but configure it for Auto encoding.

Stop the SSH tunnel when you're done; the desktop session itself keeps running until walltime.

Extending a session

You can't extend a running session beyond the walltime you chose. Plan for a longer session up-front, or end it and start a new one when you need more time.

Ending a session

  • Best: click Delete on the session card. The node is returned to the queue immediately.
  • Otherwise the session ends automatically at walltime.
  • Just closing the browser tab leaves the session running until its walltime. Your CPU/memory allocation stays counted against the node, and any GPU you reserved stays reserved.
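
If the portal is unreachable, the same thing can be done from a shell (Clusters → Research Slurm Shell Access) with standard Slurm commands; a short sketch:

    squeue -u $USER        # find your interactive session's job ID
    scancel <jobid>        # ends it, same effect as clicking Delete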

Reconnecting

The session runs on the compute node, not in your browser tab. You can close the tab and come back later — the session survives until walltime. Go back to My Interactive Sessions and click Launch NoVNC again.

This also means OOD server maintenance (or your laptop dying) doesn't kill your work — your desktop keeps running on the compute node.

What you can't see from the form

  • Whether the node you picked is free. The profile pins your session to a specific node; if it's already at the OverSubscribe limit, Slurm queues you on that node rather than picking a different one in your lab. To check before you launch, open Clusters → Research Slurm Shell Access and run sinfo -p <your-lab> -N or squeue -p <your-lab> — every job you see is a labmate's. That's also how you find out which labmate to ping about a forgotten overnight session blocking the node.
  • How long until queued sessions run. Slurm will start your job as soon as the node has room (the partition's OverSubscribe policy lets several jobs share each GPU). Usually seconds; if it sits longer than a minute or two, the node is genuinely full.
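
A concrete form of that check, with <your-lab> standing in for your lab's partition (the format strings are standard squeue/sinfo field specifiers):

    # Who is on the partition's nodes right now, how long each job has left, and its GPU request
    squeue -p <your-lab> -o "%u %j %L %b"
    # Node-level view: state plus CPU / memory / GRES inventory
    sinfo -p <your-lab> -N -o "%N %T %c %m %G"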

Fine-tuning for power users

The defaults in the form correspond to Slurm flags:

  • Hours → --time=<hh>:00:00
  • Cores → --cpus-per-task=<n>
  • Memory → --mem=<n>G
  • GPU → --gres=gpu:<n>
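
Put together, an Advanced-form session left at its defaults asks Slurm for roughly the following (a sketch; the OOD launcher adds its own partition, node pinning, and session bookkeeping on top):

    sbatch --time=08:00:00 --cpus-per-task=8 --mem=32G --gres=gpu:1 <ood-session-script>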

If you find yourself overriding defaults every time, ask IT to change the profile's defaults for your lab — they're one hiera edit.