Create Workbench
Prerequisites
- Ensure you have `kubectl` configured and connected to your cluster.
- Ensure you have created a PVC:
  1. Log in and go to the Alauda Container Platform page.
  2. Click Storage > PersistentVolumeClaims to enter the PVC list page.
  3. Click Create and fill in the required information.
Create Workbench by using the web console
Procedure
1. Log in and go to the Alauda AI page.
2. Click Workbench to enter the Workbench list page.
3. Click Create to open the creation form, fill in the required information, and submit the form to create the workbench.
Connect to Workbench
After creating a workbench instance, click Workbench in the left navigation bar; your workbench instance should show up in the list. When the status becomes Running, click the Connect button to enter the workbench.
Upload Files in JupyterLab
If you use a JupyterLab-based workbench, you can upload files from your local machine by using the Upload Files button in the file browser. This is useful when your workbench cannot access the public internet or a PyPI mirror and you need to install Python packages from local wheel files.
Install a Python Wheel File Offline
1. Connect to the workbench and open JupyterLab.
2. In the left-side file browser, click the Upload Files button and select one or more `.whl` files from your local machine.
3. Open a terminal in JupyterLab and change to the directory that contains the uploaded files.
4. Install the package. If the package depends on other wheel files, upload all required `.whl` files to the same directory and install them without accessing an external package index.
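As a sketch, the install commands for steps above might look like the following; the wheel filename and package name are placeholders for your own files:

```shell
# Install a single uploaded wheel file (filename is a placeholder).
pip install ./example_pkg-1.0.0-py3-none-any.whl

# Install a package and its dependencies using only the wheel files
# in the current directory, without contacting an external package index.
pip install --no-index --find-links=. example_pkg
```

The `--no-index` flag disables PyPI lookups entirely, and `--find-links=.` tells pip to resolve dependencies from wheel files in the current directory.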
Packages installed directly into the container are suitable for temporary or personal use. If you recreate the workbench, packages installed only inside the container may be lost. For repeatable environments, prefer a custom workbench image or a virtual environment stored on persistent storage.
Available Workbench Images
The platform provides a set of ready-to-use WorkspaceKind images that appear directly in the workbench creation form. Additional images are also published on Docker Hub, but they are not synchronized into the platform by default.
The following tables use the same general style as the Red Hat OpenShift AI documentation: each image is described by its intended use, and key preinstalled packages are listed for quick reference. The package lists are representative rather than exhaustive. Versions are taken from the matching image directories in the build repository and their corresponding lock files.
Built-in images
The following images are available out of the box:
Multi-architecture images (x86_64 and arm64)
Additional images
The following images are available on Docker Hub but are not built into the platform by default:
x86_64 images
These images are intended for x86_64 nodes with NVIDIA GPU support.
arm64 images
These images are intended for arm64 nodes with Ascend NPU support.
To use an additional image, first synchronize it to your own image registry. You can do this with a tool such as skopeo, or by using the script described in the next section.
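For a single image, a one-step copy with skopeo can look like the following sketch; the registry names, image paths, and credentials are placeholders:

```shell
# Copy one image from Docker Hub to a private registry (all values are placeholders).
skopeo copy \
  --dest-creds "<username>:<password>" \
  docker://docker.io/<namespace>/<image>:<tag> \
  docker://registry.example.com/<project>/<image>:<tag>
```

skopeo copies between registries directly, without requiring a local container daemon, which is convenient for smaller images; for very large images, the relay workflow in the next section is more reliable.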
Docker Hub Image Synchronization Script Guide
sync-from-dockerhub.sh is an automated tool for synchronizing selected Docker Hub images, especially very large images, to a private image registry such as Harbor. Large images are more likely to encounter Out-Of-Memory (OOM) or timeout failures during direct transfer because of network fluctuations. To improve reliability, the script uses a relay workflow: pull locally → export as a tar archive → push the tar archive to the target registry. It also cleans up temporary files automatically when the task completes or exits unexpectedly.
Script Prerequisites
Before running this script, ensure the following tools are installed and accessible on your execution machine:
- `bash` (execution environment)
- `nerdctl` (pulls images and exports layers as tar archives)
- `skopeo` (pushes the tar image archives to the target private registry)
Environment Variables Configuration
The script reads its configuration from environment variables, so you can adjust its behavior without modifying the code.
Required Parameters (Target Private Registry Configuration)
Optional Parameters (Source DockerHub Configuration)
To avoid triggering DockerHub's rate limit when pulling a large number of images, you can provide your DockerHub credentials so the script logs in before pulling. If this is unnecessary, leave these variables blank.
Example 1: Basic Usage (Most Common)
If you only need to synchronize the images defined within the script to your private Harbor:
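A sketch of this usage follows. The variable names below are hypothetical; check the top of `sync-from-dockerhub.sh` for the exact names your copy of the script expects:

```shell
# Hypothetical variable names -- verify against the script header.
export TARGET_REGISTRY="harbor.example.com"
export TARGET_PROJECT="workbench-images"
export TARGET_USERNAME="admin"
export TARGET_PASSWORD="<your-password>"

bash sync-from-dockerhub.sh
```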
Example 2: Single-Line Command Execution (Suitable for CI Environments)
You can declare environment variables and run the script on the same line. This approach avoids polluting the current Shell environment variables:
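For example, using the same hypothetical variable names as above (verify them against the script header), the entire invocation fits on one logical command line:

```shell
# Hypothetical variable names -- verify against the script header.
TARGET_REGISTRY="harbor.example.com" \
TARGET_PROJECT="workbench-images" \
TARGET_USERNAME="admin" \
TARGET_PASSWORD="<your-password>" \
bash sync-from-dockerhub.sh
```

Variables assigned as a prefix to a command apply only to that command's environment, so they do not persist in your shell session.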
Example 3: Full Execution with DockerHub Authentication (Rate-Limit Prevention)
When pulling images frequently from the same machine, DockerHub might reject your requests. In this case, include your DockerHub credentials:
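A sketch of a full invocation with source-registry authentication, again assuming hypothetical variable names (verify them against the script header):

```shell
# Hypothetical variable names -- verify against the script header.
export DOCKERHUB_USERNAME="<dockerhub-user>"
export DOCKERHUB_PASSWORD="<dockerhub-token-or-password>"
export TARGET_REGISTRY="harbor.example.com"
export TARGET_PROJECT="workbench-images"
export TARGET_USERNAME="admin"
export TARGET_PASSWORD="<your-password>"

bash sync-from-dockerhub.sh
```

Prefer a DockerHub access token over your account password where possible.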
Troubleshooting and Notes
- Disk Space: Since the script needs to temporarily store very large images (e.g., 13 GB) as `tar` archives, ensure that your system's `/tmp` directory (or its underlying root partition) has ample free space (at least 30 GB recommended). The script's default staging directory is `/tmp/workbench-images-export-from-hub`.
- Transfer Timeouts: The script sets a timeout of 120 minutes (`SKOPEO_TIMEOUT="120m"`) for pushing large files. If the process fails because of extremely slow network speeds, adjust this value at the top of the script with any text editor.
- Modifying the Image List: If there are images you no longer wish to synchronize, open `sync-from-dockerhub.sh` and use `#` to comment out those lines in the `WORKBENCH_IMAGES` array (similar to how the minimal images were filtered out in `sync.sh`).
After the image is available in your registry, you also need to add the corresponding configuration to the imageConfig field of the WorkspaceKind resource that you plan to use. Below is an example patch YAML that adds a new image configuration to an existing WorkspaceKind:
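The patch below is a sketch: the image name, display name, and `id` are placeholders, and the exact field path depends on the WorkspaceKind schema version in your cluster, so verify it with `kubectl explain workspacekind.spec` before applying:

```yaml
# Hypothetical JSON patch expressed as YAML; field paths and values
# are placeholders -- verify against your WorkspaceKind schema.
- op: add
  path: /spec/podTemplate/options/imageConfig/values/-
  value:
    id: jupyter-pytorch-cuda
    spawner:
      displayName: "Jupyter PyTorch (CUDA)"
      description: "JupyterLab workbench with PyTorch and CUDA support"
    spec:
      image: registry.example.com/workbench/jupyter-pytorch-cuda:latest
```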
You can apply the patch to the WorkspaceKind you are using with a command similar to the following:
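As a sketch, assuming the patch is saved as `add-image-config.yaml` (the WorkspaceKind name and filename are placeholders):

```shell
# <workspacekind-name> and the patch filename are placeholders.
kubectl patch workspacekind <workspacekind-name> \
  --type=json \
  --patch-file=add-image-config.yaml
```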
This command applies the JSON patch file to the specified WorkspaceKind and updates its imageConfig so the new workbench image becomes available in the workbench creation UI.
In practice, you can adapt the name, image, and description fields according to the image you synchronized and the naming conventions used in your cluster.
The platform also provides several built-in resource options, which you can select from the dropdown menu in the creation form.