Use third-party optimizers
If the Comet Optimizer does not support your tuning requirements, you can use your own or any third-party optimizer for hyperparameter tuning and still track your tuning runs with Comet Experiment Management.
How to log custom tuning runs to Comet
You can use Experiment.log_optimization() to log any custom optimization data to Comet. The logged attributes appear in the Other tab of the Single Experiment page with the prefix "optimizer_".
Your custom tuning script should:
- Create a Comet Experiment.
- Obtain optimization parameters from your custom or third-party optimizer.
- Train (and optionally evaluate) your model with the selected parameters.
- Log the optimization data to Comet, along with any relevant metrics, other parameters, and assets.
This logic is showcased in the pseudocode example below.
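In this sketch, train() and evaluate() are placeholders for your own training and evaluation code, and random selection from a fixed search space stands in for your custom or third-party optimizer's suggestion step:

```python
import random

from comet_ml import Experiment

# Illustrative search space; substitute your optimizer's parameter definitions.
SEARCH_SPACE = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
}

# Any identifier shared by all experiments in this tuning run.
OPTIMIZATION_ID = "my-custom-optimization"

for trial in range(10):
    # 1. Create a Comet Experiment (assumes your Comet API key is configured).
    experiment = Experiment()

    # 2. Obtain parameters from your custom or third-party optimizer.
    parameters = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}

    # 3. Train (and optionally evaluate) your model with the selected parameters.
    model = train(parameters)   # placeholder for your training code
    accuracy = evaluate(model)  # placeholder for your evaluation code

    # 4. Log the optimization data, plus any relevant metrics and parameters.
    experiment.log_optimization(
        optimization_id=OPTIMIZATION_ID,
        metric_name="accuracy",
        metric_value=accuracy,
        parameters=parameters,
        objective="maximize",
    )
    experiment.log_metric("accuracy", accuracy)
    experiment.log_parameters(parameters)

    experiment.end()
```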
By logging optimization data, you gain access to the same Comet UI functionalities as the Comet Optimizer. Discover more in the Analyze hyperparameter tuning results page.
Warning
The Experiment.log_optimization() method is available in version 3.33.10 and later of the Comet Python SDK.
End-to-end example
Below is an end-to-end example of custom optimization with Comet.
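The sketch below follows the same pattern with a runnable stand-in for the training step: a naive random search that materializes every parameter combination up front. The scikit-learn dataset and model, the search space, and the metric name are illustrative choices, not requirements:

```python
import itertools
import random

from comet_ml import Experiment
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative search space: every value combination is a candidate trial.
SEARCH_SPACE = {
    "n_estimators": [50, 100, 200],
    "max_depth": [2, 4, 8],
    "min_samples_split": [2, 5],
}

OPTIMIZATION_ID = "custom-random-search"  # shared by all experiments in this run
N_TRIALS = 5


def get_combinations(search_space):
    """Materialize every parameter combination in memory (see the limitations below)."""
    keys = list(search_space)
    return [
        dict(zip(keys, values))
        for values in itertools.product(*search_space.values())
    ]


def train_and_evaluate(parameters):
    """Train a model with the given parameters and return a validation score."""
    data = load_iris()
    x_train, x_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.25, random_state=42
    )
    model = RandomForestClassifier(**parameters, random_state=42)
    model.fit(x_train, y_train)
    return accuracy_score(y_test, model.predict(x_test))


def main():
    combinations = get_combinations(SEARCH_SPACE)
    random.shuffle(combinations)  # naive random search: one trial per combination

    for parameters in combinations[:N_TRIALS]:
        experiment = Experiment()  # assumes your Comet API key is configured

        score = train_and_evaluate(parameters)

        # Log the optimization data so these trials can be analyzed in the
        # Comet UI just like Comet Optimizer runs.
        experiment.log_optimization(
            optimization_id=OPTIMIZATION_ID,
            metric_name="accuracy",
            metric_value=score,
            parameters=parameters,
            objective="maximize",
        )
        experiment.log_metric("accuracy", score)
        experiment.log_parameters(parameters)
        experiment.end()


if __name__ == "__main__":
    main()
```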
Note that this example custom optimizer has many limitations, including:
- All candidate combinations are materialized in memory before sampling, so a large search space can exhaust memory and crash the process.
- If a training run fails to complete (for example, due to a crash), there is no built-in mechanism to retry its combination, which can leave the results incomplete or inconsistent.
- Each combination is tried only once, which may not capture the run-to-run variability of a real-world model training run.
- There is no support for distributed tuning, which would require a centralized server to hand out combinations; this limits scalability and efficiency.
- The tuning approach itself is very basic: random selection from a fixed set of combinations.
You can replace this simple version with any custom hyperparameter search, or use the Comet Optimizer directly, which addresses all of the issues listed.