Commit 5d89dc92 authored by Konstantin Kuck

Coffee Lecture 03.12.2025

parent f20f9263
# Coffee Lecture - Using JupyterHub on bwUniCluster

This Coffee Lecture takes a look at the JupyterHub running on bwUniCluster, focusing on the different options for managing the packages needed to run your analyses in Jupyter notebooks.

Specifically, we discuss:
 
 - JupyterLab base modules
 - Virtual environments (venv)
 - Miniforge (conda)
 - Containers

--- 

## Overview
JupyterHub is a **web-based programming and data-analysis environment**, providing a more contemporary approach to interactive computing on bwUniCluster. In contrast to an interactive session on the terminal (SSH login), your session remains active on the cluster even if you close the browser or your network connection drops.

Please do not confuse the **JupyterHub on bwUniCluster** discussed here with the **bwJupyter** state service:

 - JupyterHub on bwUniCluster requires an entitlement for bwHPC and serves **research and teaching**.
 - bwJupyter focuses on teaching (only) and can be used by any member of a state university in Baden-Württemberg without any special entitlement.
 - Using scientific software packages and programming environments works differently across the two services.

## Getting started
### Launching a session
Connect to your campus-network and go to [https://uc3-jupyter.scc.kit.edu/](https://uc3-jupyter.scc.kit.edu/) to login.

Then, select the required compute resources. In particular, specify:

 - Number of CPU cores
 - Number of GPUs
 - Runtime
 - JupyterLab Base module
 
Most of the other parameters of the session (e.g. RAM, partition) are automatically adjusted depending on this choice.
 
### Jupyter Notebooks
Data analysis and programming on JupyterHub is centered around interactive notebooks which combine

 - code
 - formatted text (e.g. documentation), and,
 - (potentially interactive) graphics.
 
Within a notebook, elements are structured in cells; different cell types (Markdown, Code, Raw) are available for the different types of content.

Code is executed by an interpreter attached to a notebook, the so-called kernel. The kernel basically provides the interface to the underlying programming environment (e.g. Python, Julia, R).

Code in cells is executed (or interpreted) by pressing **CTRL**+**ENTER**.
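For example, a minimal code cell might look like this (an illustrative sketch, not from the lecture itself):

``` python
# A minimal code cell: assign a value and print it
greeting = "Hello from JupyterHub"
print(greeting)  # the cell's output appears directly below the cell
```

Running the cell with **CTRL**+**ENTER** prints `Hello from JupyterHub` beneath it.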

### Logging out and stopping the server (release compute resources)
After completing the analyses, the server (i.e. the JupyterLab session) should be **stopped**: go to *File* -> *Hub Control Panel* -> *Stop Server*.
This ensures that the compute resources allocated to your session are released. 


---

## Further information and support
 - **Detailed documentation** is provided in the [bwHPC Wiki](https://wiki.bwhpc.de/e/BwUniCluster3.0/Jupyter).
 - Do not hesitate to submit a ticket to the **bwSupport** if you have any questions: [https://bw-support.scc.kit.edu/](https://bw-support.scc.kit.edu/).
 
---




%% Cell type:markdown id:d2db826c-b2f0-4f64-b789-2949f2e48acd tags:

## Demonstration: Text-to-Image
The following example illustrates the use of a GPU on JupyterHub with an AI-based text-to-image generator (Stable Diffusion).

### Preparation

First, we confirm that a GPU is available in our session:

%% Cell type:code id:a6c7c384-fff9-431e-9dab-9c0f5e29adf5 tags:

``` python
# Confirm presence of GPU(s) visible to this session
import torch

for i in range(torch.cuda.device_count()):
    print(f'[{i}]: {torch.cuda.get_device_properties(i)}')
```

%% Output

%% Cell type:markdown id:9bc98682-5afd-4ea6-8d90-55f595f5f069 tags:

When running this notebook the first time, we may need to install some packages.

This may take some time.

%% Cell type:code id:b548ab47-f382-4439-91ef-d4b9552b2d11 tags:

``` python
%pip install "diffusers[torch]==0.35.1" transformers==4.56.2 accelerate
```

%% Output

    Requirement already satisfied: diffusers==0.35.1 in ./text2image/lib64/python3.11/site-packages (from diffusers[torch]==0.35.1) (0.35.1)
    Requirement already satisfied: transformers==4.56.2 in ./text2image/lib64/python3.11/site-packages (4.56.2)
    Requirement already satisfied: accelerate in ./text2image/lib64/python3.11/site-packages (1.12.0)
    Requirement already satisfied: importlib_metadata in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from diffusers==0.35.1->diffusers[torch]==0.35.1) (8.7.0)
    Requirement already satisfied: filelock in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from diffusers==0.35.1->diffusers[torch]==0.35.1) (3.19.1)
    Requirement already satisfied: huggingface-hub>=0.34.0 in ./text2image/lib64/python3.11/site-packages (from diffusers==0.35.1->diffusers[torch]==0.35.1) (0.36.0)
    Requirement already satisfied: numpy in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from diffusers==0.35.1->diffusers[torch]==0.35.1) (2.3.3)
    Collecting regex!=2019.12.17 (from diffusers==0.35.1->diffusers[torch]==0.35.1)
      Using cached regex-2025.11.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (40 kB)
    Requirement already satisfied: requests in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from diffusers==0.35.1->diffusers[torch]==0.35.1) (2.32.5)
    Collecting safetensors>=0.3.1 (from diffusers==0.35.1->diffusers[torch]==0.35.1)
      Using cached safetensors-0.7.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.1 kB)
    Requirement already satisfied: Pillow in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from diffusers==0.35.1->diffusers[torch]==0.35.1) (11.3.0)
    Requirement already satisfied: packaging>=20.0 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from transformers==4.56.2) (25.0)
    Requirement already satisfied: pyyaml>=5.1 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from transformers==4.56.2) (6.0.2)
    Requirement already satisfied: tokenizers<=0.23.0,>=0.22.0 in ./text2image/lib64/python3.11/site-packages (from transformers==4.56.2) (0.22.1)
    Requirement already satisfied: tqdm>=4.27 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from transformers==4.56.2) (4.67.1)
    Requirement already satisfied: torch>=1.4 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from diffusers[torch]==0.35.1) (2.8.0)
    Requirement already satisfied: fsspec>=2023.5.0 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from huggingface-hub>=0.34.0->diffusers==0.35.1->diffusers[torch]==0.35.1) (2025.9.0)
    Requirement already satisfied: typing-extensions>=3.7.4.3 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from huggingface-hub>=0.34.0->diffusers==0.35.1->diffusers[torch]==0.35.1) (4.15.0)
    Requirement already satisfied: hf-xet<2.0.0,>=1.1.3 in ./text2image/lib64/python3.11/site-packages (from huggingface-hub>=0.34.0->diffusers==0.35.1->diffusers[torch]==0.35.1) (1.2.0)
    Requirement already satisfied: psutil in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from accelerate) (7.1.0)
    Requirement already satisfied: sympy>=1.13.3 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (1.14.0)
    Requirement already satisfied: networkx in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (3.5)
    Requirement already satisfied: jinja2 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (3.1.6)
    Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.8.93 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (12.8.93)
    Requirement already satisfied: nvidia-cuda-runtime-cu12==12.8.90 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (12.8.90)
    Requirement already satisfied: nvidia-cuda-cupti-cu12==12.8.90 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (12.8.90)
    Requirement already satisfied: nvidia-cudnn-cu12==9.10.2.21 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (9.10.2.21)
    Requirement already satisfied: nvidia-cublas-cu12==12.8.4.1 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (12.8.4.1)
    Requirement already satisfied: nvidia-cufft-cu12==11.3.3.83 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (11.3.3.83)
    Requirement already satisfied: nvidia-curand-cu12==10.3.9.90 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (10.3.9.90)
    Requirement already satisfied: nvidia-cusolver-cu12==11.7.3.90 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (11.7.3.90)
    Requirement already satisfied: nvidia-cusparse-cu12==12.5.8.93 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (12.5.8.93)
    Requirement already satisfied: nvidia-cusparselt-cu12==0.7.1 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (0.7.1)
    Requirement already satisfied: nvidia-nccl-cu12==2.27.3 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (2.27.3)
    Requirement already satisfied: nvidia-nvtx-cu12==12.8.90 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (12.8.90)
    Requirement already satisfied: nvidia-nvjitlink-cu12==12.8.93 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (12.8.93)
    Requirement already satisfied: nvidia-cufile-cu12==1.13.1.3 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (1.13.1.3)
    Requirement already satisfied: triton==3.4.0 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from torch>=1.4->diffusers[torch]==0.35.1) (3.4.0)
    Requirement already satisfied: setuptools>=40.8.0 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from triton==3.4.0->torch>=1.4->diffusers[torch]==0.35.1) (65.5.1)
    Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from sympy>=1.13.3->torch>=1.4->diffusers[torch]==0.35.1) (1.3.0)
    Requirement already satisfied: zipp>=3.20 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from importlib_metadata->diffusers==0.35.1->diffusers[torch]==0.35.1) (3.23.0)
    Requirement already satisfied: MarkupSafe>=2.0 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from jinja2->torch>=1.4->diffusers[torch]==0.35.1) (3.0.2)
    Requirement already satisfied: charset_normalizer<4,>=2 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from requests->diffusers==0.35.1->diffusers[torch]==0.35.1) (3.4.3)
    Requirement already satisfied: idna<4,>=2.5 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from requests->diffusers==0.35.1->diffusers[torch]==0.35.1) (3.10)
    Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from requests->diffusers==0.35.1->diffusers[torch]==0.35.1) (2.5.0)
    Requirement already satisfied: certifi>=2017.4.17 in /opt/bwhpc/common/jupyter/ai/2025-08-05/lib/python3.11/site-packages (from requests->diffusers==0.35.1->diffusers[torch]==0.35.1) (2025.8.3)
    Using cached regex-2025.11.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (800 kB)
    Using cached safetensors-0.7.0-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (507 kB)
    Installing collected packages: safetensors, regex
    Successfully installed regex-2025.11.3 safetensors-0.7.0
    
    [notice] A new release of pip is available: 25.2 -> 25.3
    [notice] To update, run: /pfs/data6/home/ho/ho_kim/ho_kuck/text2image/bin/python3 -m pip install --upgrade pip
    Note: you may need to restart the kernel to use updated packages.

%% Cell type:markdown id:d2df1c6c-cc35-45b5-a260-eac0da7644f2 tags:

It is important to restart the kernel after the installation is complete.

**Note:** The kernel provides an interface to the underlying programming environment (e.g., Python, Julia, R).

%% Cell type:markdown id:4c20ba44-7bad-4f55-a7dc-3a26c1e5b3ef tags:

After the restart of the kernel, all packages and functions need to be reloaded:

%% Cell type:code id:c5f280c5-cc08-4d32-a358-32ebacbf6f3a tags:

``` python
import torch
from diffusers import DiffusionPipeline
```

%% Output

    2025-12-03 18:02:03.234241: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
    2025-12-03 18:02:10.251678: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
    To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
    2025-12-03 18:02:26.560142: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.

%% Cell type:markdown id:c9e53b43-2d96-4d7f-b4fc-86b503dbace0 tags:

Next, we load the pre-trained model from the repository:

%% Cell type:code id:a4bf7ea1-1515-4816-8698-eb4692516de3 tags:

``` python
# Create stable diffusion pipeline
pipeline = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
```

%% Output


%% Cell type:markdown id:41f9767a-e71e-4994-97ce-99b1fc1e7218 tags:

### Move model to GPU or CPU

%% Cell type:code id:230758f9-c0ce-4f1a-aafa-ce438553bc00 tags:

``` python
# Move the model to NVIDIA GPU ("cuda")
pipeline.to("cuda")
```
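If the session was started without a GPU, `pipeline.to("cuda")` fails. A device-agnostic pattern (a sketch, assuming PyTorch is installed; the `pipeline.to(...)` call is commented out because it depends on the loaded model) avoids hard-coding the device:

``` python
import torch

def pick_device() -> str:
    """Return "cuda" when an NVIDIA GPU is visible, otherwise "cpu"."""
    return "cuda" if torch.cuda.is_available() else "cpu"

device = pick_device()
print(device)
# pipeline.to(device)  # move the pipeline to whichever device is available
```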

%% Cell type:code id:55efd36b-866c-49bb-96e2-dd4bede00f24 tags:

``` python
# Move the model to the CPU
pipeline.to("cpu")

# Confirm that model will run on CPU
next(pipeline.unet.parameters()).device
```

%% Output

    device(type='cpu')

%% Cell type:markdown id:59168485-f104-4175-829d-de30822f367c tags:

### Image generation

%% Cell type:markdown id:a8f977e8-514c-46bf-a396-eebc4cad5110 tags:

Now, we can enter a prompt describing the image to be generated:

%% Cell type:code id:79c44cc5-823c-43ba-bd4a-f21f5f755204 tags:

``` python
# Model is ready for submitting queries.
prompt = "Students are walking on the campus on a rainy autumn day."
pipeline(prompt).images[0]
```

%% Output