Create Workbench
Prerequisites
- Ensure you have `kubectl` configured and connected to your cluster.
- Ensure you have created a PVC.
- Log in and go to the Alauda Container Platform page.
- Click Storage > PersistentVolumeClaims to enter the PVC list page.
- Click Create, fill in the required information, and create the PVC.
Create Workbench by using the web console
Procedure
Log in and go to the Alauda AI page.
Click Workbench to enter the Workbench list page.
Click Create to open the creation form, fill in the information, and create the workbench.
Connect to Workbench
After creating a workbench instance, click Workbench in the left navigation bar; your workbench instance should show up in the list. When the status becomes Running, click the Connect button to enter the workbench.
We have built-in WorkspaceKind resources that are ready to use out of the box; you can see the two options we provide in the dropdown menu.
The following additional workbench images are available but are not built into the platform by default:
- alaudadockerhub/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9
- alaudadockerhub/odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9
- alaudadockerhub/odh-workbench-jupyter-pytorch-cuda-py312-ubi9
- alaudadockerhub/odh-workbench-jupyter-minimal-cpu-py312-ubi9
- alaudadockerhub/odh-workbench-jupyter-minimal-cuda-py312-ubi9
If you want to use these images, you must first manually synchronize them to your own image registry (for example, by using a tool such as skopeo).
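As a hedged sketch of the manual synchronization step, the following `skopeo copy` command mirrors one of the images listed above into a private registry. The registry host `registry.example.com`, the project path `ai`, and the `:latest` tag are assumptions for illustration; substitute the address, project, and tag that apply to your environment.

```shell
# Copy one workbench image from DockerHub into a private registry.
# registry.example.com/ai and the :latest tag are placeholders.
skopeo copy \
  docker://docker.io/alaudadockerhub/odh-workbench-jupyter-minimal-cpu-py312-ubi9:latest \
  docker://registry.example.com/ai/odh-workbench-jupyter-minimal-cpu-py312-ubi9:latest
```

If the target registry requires authentication, `skopeo copy` also accepts `--dest-creds USER:PASSWORD`; for very large images, the relay script described below is the more robust option.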
DockerHub Image Synchronization Script Guide
sync-from-dockerhub.sh is an automated tool designed to synchronize specific DockerHub images (especially those with ultra-large capacities, such as a single layer exceeding 7GB) to a private image registry (like Harbor). Because large-capacity images are highly susceptible to Out-Of-Memory (OOM) errors or timeout failures during direct transfer (via pipelines or memory) due to network fluctuations, this script adopts a relay strategy of Pull to local -> Export as tar archive -> Push from tar archive to target registry. This ensures stable synchronization even for files in the tens of gigabytes range. Additionally, it features an automatic temporary file cleanup mechanism that triggers upon task completion or unexpected errors, protecting your disk space.
Script Prerequisites
Before running this script, ensure the following tools are installed and accessible on your execution machine:
- `bash` (execution environment)
- `nerdctl` (for pulling images and exporting layers as tar archives)
- `skopeo` (for pushing the tar image archives to the target private registry)
Environment Variables Configuration
The script is configured entirely through environment variables, so you can use it flexibly without modifying the code.
Required Parameters (Target Private Registry Configuration)
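The exact variable names below are assumptions for illustration; the authoritative list is in the header of `sync-from-dockerhub.sh`, so verify the names there before running. A typical target-registry configuration looks like:

```shell
# Hypothetical variable names -- confirm against the header of sync-from-dockerhub.sh.
export TARGET_REGISTRY="harbor.example.com"   # hostname of your private registry
export TARGET_PROJECT="workbench-images"      # project/namespace to push into
export TARGET_USERNAME="admin"                # account with push permission
export TARGET_PASSWORD="your-password"        # its password or robot token
```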
Optional Parameters (Source DockerHub Configuration)
To prevent triggering DockerHub's Rate Limit when pulling a large volume of images, you can provide your DockerHub credentials to log in prior to pulling. If unnecessary, leave these blank.
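A sketch of the optional credentials, again with hypothetical variable names that you should confirm in the script itself:

```shell
# Optional: DockerHub credentials to avoid anonymous pull rate limits.
# Variable names are hypothetical -- confirm them in sync-from-dockerhub.sh.
export DOCKERHUB_USERNAME="your-dockerhub-user"
export DOCKERHUB_PASSWORD="your-dockerhub-token"
```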
Example 1: Basic Usage (Most Common)
If you only need to synchronize the images defined within the script to your private Harbor:
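A minimal invocation might look like the following; the variable names and values are placeholders, so adapt them to your registry and to the names the script actually reads:

```shell
# Placeholder values -- replace with your own registry details.
export TARGET_REGISTRY="harbor.example.com"
export TARGET_PROJECT="workbench-images"
export TARGET_USERNAME="admin"
export TARGET_PASSWORD="your-password"

bash sync-from-dockerhub.sh
```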
Example 2: Single-Line Command Execution (Suitable for CI Environments)
You can declare environment variables and run the script on the same line. This approach avoids polluting the current Shell environment variables:
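With the same hypothetical variable names as above, a single-line invocation scopes the variables to that one command only:

```shell
# Variables set inline apply only to this command, not to your shell session.
TARGET_REGISTRY="harbor.example.com" TARGET_PROJECT="workbench-images" \
TARGET_USERNAME="admin" TARGET_PASSWORD="your-password" \
bash sync-from-dockerhub.sh
```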
Example 3: Full Execution with DockerHub Authentication (Rate-Limit Prevention)
When pulling images frequently from the same machine, DockerHub might reject your requests. In this case, include your DockerHub credentials:
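A full invocation with both target-registry and DockerHub credentials might look like this; all variable names and values are illustrative placeholders:

```shell
# Target private registry (placeholder values)
export TARGET_REGISTRY="harbor.example.com"
export TARGET_PROJECT="workbench-images"
export TARGET_USERNAME="admin"
export TARGET_PASSWORD="your-password"

# DockerHub credentials (hypothetical names) to avoid rate limiting
export DOCKERHUB_USERNAME="your-dockerhub-user"
export DOCKERHUB_PASSWORD="your-dockerhub-token"

bash sync-from-dockerhub.sh
```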
Troubleshooting and Notes
- Disk Space: Since the script needs to temporarily store ultra-large images (e.g., 13GB) as `tar` archives, ensure that your system's `/tmp` directory (or its underlying root partition) has ample free space (at least 30GB recommended). The script's default staging directory is `/tmp/workbench-images-export-from-hub`.
- Transfer Timeouts: The current script sets a timeout of 120 minutes (`SKOPEO_TIMEOUT="120m"`) for pushing large files. If the process fails due to extremely slow network speeds, you can adjust this value at the top of the script with any text editor.
- Modifying the Image List: If there are images you no longer wish to synchronize, open `sync-from-dockerhub.sh` and use a `#` to comment out those lines within the `WORKBENCH_IMAGES` array (similar to how the minimal images were filtered out in `sync.sh`).
After the image is available in your registry, you also need to add the corresponding configuration to the imageConfig field of the WorkspaceKind resource that you plan to use. Below is an example patch YAML that adds a new image configuration to an existing WorkspaceKind:
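A hedged example of such a JSON patch, written as YAML: the `id`, display name, description, and image reference are placeholders, and the field paths follow the Kubeflow WorkspaceKind schema (`spec.podTemplate.options.imageConfig.values`), which you should verify against the WorkspaceKind CRD installed in your cluster before applying.

```yaml
# workspacekind-imageconfig-patch.yaml
# Appends a new entry to the WorkspaceKind's imageConfig values list.
# All names and the image reference are placeholders.
- op: add
  path: /spec/podTemplate/options/imageConfig/values/-
  value:
    id: jupyter-pytorch-cuda-py312
    spawner:
      displayName: "Jupyter PyTorch CUDA (Python 3.12)"
      description: "Synchronized from alaudadockerhub/odh-workbench-jupyter-pytorch-cuda-py312-ubi9"
    spec:
      image: registry.example.com/ai/odh-workbench-jupyter-pytorch-cuda-py312-ubi9:latest
```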
You can apply the patch to the WorkspaceKind you are using with a command similar to the following:
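For instance, assuming the patch is saved as `workspacekind-imageconfig-patch.yaml` and the WorkspaceKind is named `jupyter-lab` (both placeholders):

```shell
# Apply a JSON patch to the WorkspaceKind; kubectl accepts the patch
# file in YAML form and converts it. Name and filename are placeholders.
kubectl patch workspacekind jupyter-lab \
  --type=json \
  --patch-file workspacekind-imageconfig-patch.yaml
```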
This command applies the JSON patch file to the specified WorkspaceKind and updates its imageConfig so the new workbench image becomes available in the workbench creation UI.
In practice, you can adapt the name, image, and description fields according to the image you synchronized and the naming conventions used in your cluster.
We have also built in some resource options, which you can see in the dropdown menu.