Running Independent Tune Trials (Ray Tune)
Provides an end-to-end workflow for running independent Tune trials in HPE Ezmeral Unified Analytics Software.
Prerequisites
- Sign in to HPE Ezmeral Unified Analytics Software.
- Verify that the installed Ray client and server versions match. To verify, complete the following steps in the terminal (you can also confirm the client version from a notebook cell; see the sketch after this list):
  - To switch to Ray's environment, run:
    source /opt/conda/etc/profile.d/conda.sh && conda activate ray
  - To verify that the Ray client and server versions match, run:
    ray --version
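You can also confirm the client version from a notebook cell that uses the Ray kernel. The following is a minimal, illustrative sketch (it is not part of the tutorial notebook); compare the printed value with the server version reported by your Ray cluster, for example on the Ray dashboard.

    # Minimal sketch: print the Ray client library version from the active kernel.
    # Compare this value with the server version reported by the Ray cluster.
    import ray

    print("Ray client version:", ray.__version__)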
About this task
In this tutorial, you will run N independent model training trials using Tune as a simple grid sweep. A minimal sketch of this pattern appears at the end of this tutorial.
You will complete the following steps:
Procedure
- Create a notebook server using the jupyter-data-science image with at least 3 CPUs and 4 Gi of memory in Kubeflow. See Creating and Managing Notebook Servers.
- In your notebook environment, activate the Ray-specific Python kernel.
- To ensure optimal performance, use a dedicated directory that contains only the essential files needed for the job submission as your working directory. For example, if you do not see the Ray-Tune folder in the <username> directory, copy the folder from the shared/ezua-tutorials/current-release/Data-Science/Ray/Ray-Tune directory into the <username> directory. The shared directory is accessible to all users; editing or running examples from the shared directory is not advised. The <username> directory is specific to you and cannot be accessed by other users. To copy the folder from a notebook cell rather than the file browser, see the sketch after this procedure.
- Open the independent-tune-trials-executor.ipynb file in the <username>/Ray-Tune directory.
- Select the first cell of the independent-tune-trials-executor.ipynb notebook and click Run the selected cells and advance (play icon). Continue until you run all cells.
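If you prefer to copy the Ray-Tune folder from a notebook cell rather than the file browser, the following is a minimal, illustrative sketch. The paths are assumptions based on the directory names shown in this procedure; replace <username> and adjust the paths to match how the directories appear in your environment.

    # Minimal sketch: copy the Ray-Tune tutorial folder into your own directory.
    # Both paths are assumptions; replace <username> and adjust the paths to
    # match your environment before running.
    import shutil
    from pathlib import Path

    src = Path("shared/ezua-tutorials/current-release/Data-Science/Ray/Ray-Tune")
    dst = Path("<username>/Ray-Tune")

    if dst.exists():
        print(dst, "already exists; nothing to copy")
    else:
        shutil.copytree(src, dst)
        print("Copied", src, "to", dst)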
Results
To learn about this tutorial in detail, see the Ray Tune Example in the open-source Ray documentation.
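For reference, the independent-trial pattern that this tutorial exercises looks roughly like the following minimal sketch, assuming a Ray 2.x client. The trainable function, search space, and metric name are illustrative assumptions and do not reproduce the notebook's actual code.

    # Minimal sketch: N independent trials expressed as a Tune grid sweep.
    # Each trial depends only on its own config, so the trials are independent.
    from ray import tune

    def train_fn(config):
        # Stand-in for a model training step; reports one final metric.
        score = (config["x"] - 2) ** 2
        return {"score": score}  # returning a dict reports the trial's final result

    tuner = tune.Tuner(
        train_fn,
        param_space={"x": tune.grid_search([0, 1, 2, 3])},  # N = 4 independent trials
    )
    results = tuner.fit()  # Tune initializes Ray automatically if it is not already running

    for result in results:
        print(result.config, result.metrics["score"])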