Hyperparameter tuning for humans

Overview

KerasTuner

KerasTuner is an easy-to-use, scalable hyperparameter optimization framework that solves the pain points of hyperparameter search. Easily configure your search space with a define-by-run syntax, then leverage one of the available search algorithms to find the best hyperparameter values for your models. KerasTuner comes with Bayesian Optimization, Hyperband, and Random Search algorithms built-in, and is also designed to be easy for researchers to extend in order to experiment with new search algorithms.

Official Website: https://keras.io/keras_tuner/


Installation

KerasTuner requires Python 3.6+ and TensorFlow 2.0+.

Install the latest release:

pip install keras-tuner --upgrade

You can also check out other versions in our GitHub repository.


Quick introduction

Import KerasTuner and TensorFlow:

import keras_tuner as kt
from tensorflow import keras

Write a function that creates and returns a Keras model. Use the hp argument to define the hyperparameters during model creation.

def build_model(hp):
  model = keras.Sequential()
  model.add(keras.layers.Dense(
      hp.Choice('units', [8, 16, 32]),
      activation='relu'))
  model.add(keras.layers.Dense(1, activation='relu'))
  model.compile(loss='mse')
  return model

Initialize a tuner (here, RandomSearch). We use objective to specify the objective to select the best models, and we use max_trials to specify the number of different models to try.

tuner = kt.RandomSearch(
    build_model,
    objective='val_loss',
    max_trials=5)

Start the search and get the best model:

tuner.search(x_train, y_train, epochs=5, validation_data=(x_val, y_val))
best_model = tuner.get_best_models()[0]
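
If you'd rather retrain from scratch than load the checkpointed best model, you can also query the winning hyperparameter values and rebuild. A minimal sketch, reusing build_model and the same arrays as above:

best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hps.get('units'))  # e.g. 8, 16, or 32

model = build_model(best_hps)  # a fresh model built with the best values
model.fit(x_train, y_train, epochs=5, validation_data=(x_val, y_val))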

To learn more about KerasTuner, check out this starter guide.


Contributing Guide

Please refer to the CONTRIBUTING.md for the contributing guide.


Community

Please use the #keras-tuner channel in the Keras Slack workspace for communication.

Use this link to request an invitation to the channel.


Citing KerasTuner

If KerasTuner helps your research, we appreciate your citations. Here is the BibTeX entry:

@misc{omalley2019kerastuner,
	title        = {KerasTuner},
	author       = {O'Malley, Tom and Bursztein, Elie and Long, James and Chollet, Fran\c{c}ois and Jin, Haifeng and Invernizzi, Luca and others},
	year         = 2019,
	howpublished = {\url{https://github.com/keras-team/keras-tuner}}
}
Comments
  • Add tutorial and doc for Custom Objective function


    I am implementing a classifier with three classes and I am using one-hot encoding for the labels.

    I want to use a custom objective function in the tuner (precision at class 1):

    I defined:

    def prec_class1(y_true, y_pred):
        from sklearn.metrics import precision_recall_curve
        threshold = 0.76
        y_pred = np.squeeze(y_pred, axis=1)
        y_true = np.squeeze(y_true, axis=1)
        precision, recall, _ = precision_recall_curve(y_true[:, 1], y_pred[:, 1])
        for m in range(len(recall)):
            if recall[m] > threshold and recall[m] < threshold + 0.001:
                prec = precision[m]
        return Threshold

    and then:

    tuner1 = Hyperband(
        hypermodel,
        objective=kt.Objective("prec_class1", direction="max"),
        max_epochs=30,
        executions_per_trial=1,  # number of times the same configuration is tested
        directory=root_dir)

    do you think this would work?

    Thank you
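
    For reference, a hedged sketch of one pattern that fits KerasTuner's objective mechanism: compile a custom Keras metric into the model and point kt.Objective at the name it logs (the built-in Precision metric below is a stand-in for illustration, not the prec_class1 function above):

    import keras_tuner as kt
    from tensorflow import keras

    def build_model(hp):
        model = keras.Sequential([
            keras.layers.Dense(hp.Choice('units', [32, 64]), activation='relu'),
            keras.layers.Dense(3, activation='softmax'),
        ])
        model.compile(
            loss='categorical_crossentropy',
            # Logged each epoch as 'prec_class1' / 'val_prec_class1'.
            metrics=[keras.metrics.Precision(class_id=1, name='prec_class1')],
        )
        return model

    tuner = kt.Hyperband(
        build_model,
        # The 'val_' prefix selects the value computed on validation_data.
        objective=kt.Objective('val_prec_class1', direction='max'),
        max_epochs=30,
    )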

    documentation 
    opened by AnconaAndrea 22
  • KeyError when using conditional hyperparameters


    Describe the bug: When I use parent_name and parent_values, I always get a KeyError when building a model.

    Invalid model 5/5
    Traceback (most recent call last):
      File "/Users/douira/Documents/dev/uni/bachelorarbeit/tf-time-series/venv/lib/python3.8/site-packages/keras_tuner/engine/hypermodel.py", line 127, in build
        model = self.hypermodel.build(hp)
      File "keras-tuner-test.py", line 22, in model_builder
        dense_units1 = hp.Int(
      File "/Users/douira/Documents/dev/uni/bachelorarbeit/tf-time-series/venv/lib/python3.8/site-packages/keras_tuner/engine/hyperparameters.py", line 850, in Int
        return self._retrieve(hp)
      File "/Users/douira/Documents/dev/uni/bachelorarbeit/tf-time-series/venv/lib/python3.8/site-packages/keras_tuner/engine/hyperparameters.py", line 707, in _retrieve
        return self.values[hp.name]
    KeyError: 'dense_units1'
    

    To Reproduce: Use keras-tuner 1.0.4: https://colab.research.google.com/drive/1dlys0Dmpt9hjLkKOP62SfhYnIQmOgVxy?usp=sharing

    import keras_tuner as kt
    from tensorflow import keras
    
    def model_builder(hp):
        dense_layers = hp.Int("dense_layers", min_value=0, max_value=2, step=1)
    
        dense_units1 = hp.Int(
            "dense_units1",
            min_value=16,
            max_value=512,
            step=16,
            parent_name="dense_layers",
            parent_values=[1, 2],
        )
    
        return keras.Sequential()
    
    tuner = kt.RandomSearch(
        model_builder,
        objective="val_accuracy",
        directory="./model_tuning",
        max_trials=1,
    )
    

    Expected behavior: The model should build normally. This worked on keras-tuner 1.0.3; it does not work (this bug happens) on keras-tuner 1.0.4.

    Additional context: If there is any way of using conditional hyperparameters in keras-tuner while avoiding this bug, I'd be interested to hear about it. Otherwise I'll downgrade my version of keras-tuner until this issue is resolved. Thank you for the great work on this project!

    Would you like to help us fix it? I don't know why it's happening. Maybe I'm doing something wrong. If this actually is a bug, then looking at what changed between keras-tuner 1.0.3 and 1.0.4 is probably a good idea. (The Google Colab confirms this difference.)
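
    Until the regression is fixed, a possible workaround sketch (an assumption, not an official fix) is to express the same condition with hp.conditional_scope:

    from tensorflow import keras

    def model_builder(hp):
        dense_layers = hp.Int("dense_layers", min_value=0, max_value=2, step=1)

        # Register dense_units1 only while dense_layers is 1 or 2.
        with hp.conditional_scope("dense_layers", [1, 2]):
            if dense_layers in (1, 2):
                dense_units1 = hp.Int(
                    "dense_units1", min_value=16, max_value=512, step=16)

        return keras.Sequential()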

    bug 
    opened by douira 14
  • Data augmentation


    Do you plan to support/document/colab Keras tuner use in augmentation pipelines?

    https://blog.insightdatascience.com/automl-for-data-augmentation-e87cf692c366

    https://arxiv.org/abs/1905.07373

    documentation 
    opened by bhack 14
  • Sequential models may cause issues with get_best_models()


    tuner.get_best_models() fails with Sequential models, raising the following exception:

    Traceback (most recent call last):
      File "test_issue_74.py", line 85, in <module>
        # For 
      File "test_issue_74.py", line 70, in test_issue_74_reproduction
        _ = tuner.get_best_models()
      File "/usr/local/google/home/jamlong/git/keras-tuner/kerastuner/engine/tuner.py", line 413, in get_best_models
        model.load_weights(best_checkpoint)
      File "/usr/local/google/home/jamlong/envs/py36_tfnightly/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 182, in load_weights
        return super(Model, self).load_weights(filepath, by_name)
      File "/usr/local/google/home/jamlong/envs/py36_tfnightly/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1364, in load_weights
        self._assert_weights_created()
      File "/usr/local/google/home/jamlong/envs/py36_tfnightly/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 1617, in _assert_weights_created
        self.name)
    ValueError: Weights for model sequential_1 have not yet been created. Weights are created when the Model is first called on inputs or `build()` is called with an `input_shape`.
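
    A hedged workaround sketch (the input shape below is hypothetical): declare the input shape in the first layer so the Sequential model's weights are created at build time, giving load_weights() something to restore into:

    from tensorflow import keras

    def build_model(hp):
        model = keras.Sequential()
        model.add(keras.layers.Dense(
            hp.Choice('units', [8, 16, 32]),
            activation='relu',
            input_shape=(10,)))  # hypothetical feature count; forces weight creation
        model.add(keras.layers.Dense(1))
        model.compile(loss='mse')
        return model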
    
    
    opened by jamlong 14
  • Add exhaustive tuner


    Hi,

    This PR adds an exhaustive tuner, i.e., a Tuner that explores the whole trial space. I needed it and saw that others did too (https://github.com/keras-team/keras-tuner/issues/192 and https://github.com/keras-team/keras-tuner/issues/408), but unfortunately there was no way of doing that, given that RandomSearch has an internal max_collisions that stops the search once too many already-explored parameter combinations have been drawn.

    The change is self-contained; everything goes into a single file. I am using it to search a 600+ combination space. The main limitation is that it only allows hp.Choice, but I am open to improving it: adding hp.Boolean and hp.Fixed, or even using the step on hp.Int, to generate all the combinations beforehand for all the types.

    I will surely add some tests if you guys are positive about this change.

    opened by edumucelli 12
  • Fix Bayesian optimization error


    Problem

    Contrary to what the SciPy docs state, the OptimizeResult.fun property can also be a scalar (see https://github.com/fmfn/BayesianOptimization/issues/60 for an example). This can lead to the following error:

    TypeError: 'float' object is not subscriptable
    

    Solution

    Check whether result.fun is a scalar. If it is, use the value as is; otherwise, take the first element as usual.
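
    A hedged sketch of that check (np.isscalar is one way to express it; the PR's actual diff may differ):

    import numpy as np

    def first_objective_value(result):
        # OptimizeResult.fun may be a scalar or an array depending on the
        # code path SciPy took, so normalize before indexing.
        fun = result.fun
        return float(fun) if np.isscalar(fun) else float(fun[0])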

    opened by Jet132 12
  • How do I reduce the verbosity of logs during the trials?


    Is there any argument that can be passed to tuner.search() to control the logs produced at the end of each trial (similar to verbose in Keras model.fit())? Right now it is printing all the hyperparameters in addition to other information.
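
    One partial workaround sketch, assuming keyword arguments to search() are forwarded to model.fit() as in the README example (newer releases also route verbose to the trial display):

    tuner.search(x_train, y_train,
                 epochs=5,
                 validation_data=(x_val, y_val),
                 verbose=0)  # silences the per-epoch Keras logs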

    enhancement 
    opened by sibyjackgrove 12
  • Conditional hyperparameter tuning bug


    I'm using Keras-Tuner to run trials on a multi-layer NN with a variable number of layers and units within each layer, similar to the example in the README:

    for i in range(hp.Int('num_layers', 2, 20)):
            model.add(layers.Dense(units=hp.Int('units_' + str(i),
                                                min_value=32,
                                                max_value=512,
                                                step=32),
                                   activation='relu'))
    

    The "units_#" hyperpameter should be conditional upon "num_layer" hyperparameter. E.g.if "num_layers=2" then I should see "units_0" and "units_1". However in my testing I'm not seeing proper correlation (num_layers doesn't match the number of units_# hyperparameter values set). Instead I see something like the following:

    [Trial summary]
    Hp values:
    |-num_fc_layers: 2
    |-num_units_0: ...
    |-num_units_1: ...
    |-num_units_2: ...
    |-num_units_3: ...
    |-num_units_4: ...

    or

    [Trial summary]
    Hp values:
    |-num_fc_layers: 5
    |-num_units_0: ...
    |-num_units_1: ...
    |-num_units_2: ...

    This effectively makes the summary of hyperparameters used in a trial useless. I did some debugging of the code but haven't found the culprit yet. I'm using the RandomSearch tuner and wrapped my model build in a HyperModel class (rather than using a plain function).

    Could someone please take a look? Thank you.
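
    For context, a hedged reading of what is happening (an assumption, not a confirmed diagnosis): once a units_i hyperparameter has been discovered in any trial, it stays in the search space and receives a value in every later trial, but only the first num_layers of them are read during a given build:

    import keras_tuner as kt
    from tensorflow import keras
    from tensorflow.keras import layers

    def build_model(hp):
        model = keras.Sequential([layers.Input(shape=(10,))])  # hypothetical input
        for i in range(hp.Int('num_layers', 2, 20)):
            # Only these hps influence this model. Any units_j with
            # j >= num_layers in the trial summary came from an earlier
            # trial's discovery, got sampled, and is never read here.
            model.add(layers.Dense(
                units=hp.Int('units_' + str(i), min_value=32, max_value=512, step=32),
                activation='relu'))
        model.add(layers.Dense(1))
        model.compile(loss='mse')
        return model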

    opened by rcmagic1 11
  • TypeError when sorting candidates during Hyperband search


    The following simple toy example fails with the error message shown below. When using the RandomSearch tuner instead, everything works as expected.

    def build_model(hp):
        inputs = layers.Input(shape=(5, ))
        x = layers.Dense(units=hp.Range('units', min_value=32, max_value=512, step=32),
                             activation='relu')(inputs)
        predictions = layers.Dense(1)(x)
        model = keras.models.Model(inputs=inputs, outputs=predictions)
        model.compile(optimizer='adam', loss='mean_squared_error')
        return model
    
    tuner = kerastuner.tuners.Hyperband(
        build_model,
        objective='val_loss',
        max_trials=2,
        executions_per_trial=1
    )
    
    tuner.search(np.eye(5), np.ones((5, 1)),
                 validation_data=(np.eye(5), np.ones((5, 1))),
                 epochs=2)
    
    1/2 trials left
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    <ipython-input-3-c7537b51ff32> in <module>
         19 tuner.search(np.eye(5), np.ones((5, 1)),
         20              validation_data=(np.eye(5), np.ones((5, 1))),
    ---> 21              epochs=2)
    
    /opt/conda/lib/python3.6/site-packages/kerastuner/engine/tuner.py in search(self, *fit_args, **fit_kwargs)
        207             # Obtain unique trial ID to communicate with the oracle.
        208             trial_id = tuner_utils.generate_trial_id()
    --> 209             hp = self._call_oracle(trial_id)
        210             if hp is None:
        211                 # Oracle triggered exit
    
    /opt/conda/lib/python3.6/site-packages/kerastuner/engine/tuner.py in _call_oracle(self, trial_id)
        525         # Obtain hp value suggestions from the oracle.
        526         while 1:
    --> 527             oracle_answer = self.oracle.populate_space(trial_id, hp.space)
        528             if oracle_answer['status'] == 'RUN':
        529                 hp.values = oracle_answer['values']
    
    /opt/conda/lib/python3.6/site-packages/kerastuner/tuners/hyperband.py in populate_space(self, trial_id, space)
         86         if self._bracket_index + 1 < self._num_brackets:
         87             self._bracket_index += 1
    ---> 88             self._select_candidates()
         89         # If the current band ends
         90         else:
    
    /opt/conda/lib/python3.6/site-packages/kerastuner/tuners/hyperband.py in _select_candidates(self)
        135     def _select_candidates(self):
        136         sorted_candidates = sorted(list(range(len(self._candidates))),
    --> 137                                    key=lambda i: self._candidate_score[i])
        138         num_selected_candidates = self._model_sequence[self._bracket_index]
        139         for index in sorted_candidates[:num_selected_candidates]:
    
    TypeError: '<' not supported between instances of 'NoneType' and 'float'
    

    The code was run with Python 3.6.6 and the following relevant libraries:

    tensorflow                         2.0.0b1      
    Keras-Tuner                        0.9.0.1562790722 
    numpy                              1.16.4 
    
    opened by floscha 11
  • Further optimize the disk usage


    I'm conducting a hyperparameter search on a large parameter space using Hyperband. I'm experiencing disk space issues (+700GB) because of all the saved trials. I would like to delete the trials that were discarded by successive halving.

    How can I look up discarded trials that will not be used in future trials? Which trials are safe for me to delete? Is it safe to delete all trials that are listed as "past_id" in the brackets, which can be found in self.oracle.get_state() (see example below)?

    Thanks a lot! Great repo!

    Example: This is the current state of a small hyperparameter space after several trials:

    {
        "brackets": [
            {
                "bracket_num": 2,
                "rounds": [
                    [
                        {
                            "id": "e1822fa866ee7b337a6ce32e154a81e7",
                            "past_id": null
                        },
                        {
                            "id": "a83404db6388d841da8adfffe6c574d6",
                            "past_id": null
                        },
                        {
                            "id": "7b35e7e6e19a6cb906ff0ec4dd00b0d7",
                            "past_id": null
                        },
                        {
                            "id": "4bcb8b7e9e868f2c29b5205c8ece21f6",
                            "past_id": null
                        },
                        {
                            "id": "e32d7a054796e970dca5484336501b48",
                            "past_id": null
                        },
                        {
                            "id": "1f955d99321824edb51f34065fd6bf6d",
                            "past_id": null
                        },
                        {
                            "id": "9fe445228a1608fee0454a0d783b7183",
                            "past_id": null
                        },
                        {
                            "id": "331329bda15420cbc26346cab29b664b",
                            "past_id": null
                        },
                        {
                            "id": "135e840913f8a796b15dc5e0a9f4c72c",
                            "past_id": null
                        },
                        {
                            "id": "2656c6528a0dff80e1fb619d5db78d72",
                            "past_id": null
                        },
                        {
                            "id": "a38341d5327dab6f2b2c113b4486e060",
                            "past_id": null
                        },
                        {
                            "id": "8f6ec3b93f4cdd17d91daed2ce270d61",
                            "past_id": null
                        }
                    ],
                    [
                        {
                            "id": "7a0c454d30b87ed7ba7577383c1e66db",
                            "past_id": "1f955d99321824edb51f34065fd6bf6d"
                        },
                        {
                            "id": "4595fc148424315a5e718464ad4c6806",
                            "past_id": "8f6ec3b93f4cdd17d91daed2ce270d61"
                        },
                        {
                            "id": "443291ecac85d951672d74098aba5d80",
                            "past_id": "7b35e7e6e19a6cb906ff0ec4dd00b0d7"
                        }
                    ],
                    []
                ]
            }
        ],
        "current_bracket": 2,
        "current_iteration": 0,
        "factor": 3,
        "hyperband_iterations": 4,
        "hyperparameters": {
            "space": [
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": 64,
                        "name": "n_hidden",
                        "ordered": true,
                        "values": [
                            64,
                            512,
                            128
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": 48,
                        "name": "n_dense",
                        "ordered": true,
                        "values": [
                            48,
                            56,
                            128
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": 0.2,
                        "name": "dropout",
                        "ordered": true,
                        "values": [
                            0.2,
                            0.5
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": 0.1,
                        "name": "dropout_dense",
                        "ordered": true,
                        "values": [
                            0.1
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": 0.6,
                        "name": "momentum",
                        "ordered": true,
                        "values": [
                            0.6
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": 0.001,
                        "name": "learning_rate",
                        "ordered": true,
                        "values": [
                            0.001
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": "LSTM",
                        "name": "mode",
                        "ordered": false,
                        "values": [
                            "LSTM"
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": "elu",
                        "name": "activation_rnn",
                        "ordered": false,
                        "values": [
                            "elu"
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": "sigmoid",
                        "name": "recurrent_activation_rnn",
                        "ordered": false,
                        "values": [
                            "sigmoid"
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": "elu",
                        "name": "activation_dense",
                        "ordered": false,
                        "values": [
                            "elu"
                        ]
                    }
                },
                {
                    "class_name": "Choice",
                    "config": {
                        "conditions": [],
                        "default": "categorical_crossentropy",
                        "name": "loss",
                        "ordered": false,
                        "values": [
                            "categorical_crossentropy"
                        ]
                    }
                }
            ],
            "values": {
                "activation_dense": "elu",
                "activation_rnn": "elu",
                "dropout": 0.2,
                "dropout_dense": 0.1,
                "learning_rate": 0.001,
                "loss": "categorical_crossentropy",
                "mode": "LSTM",
                "momentum": 0.6,
                "n_dense": 48,
                "n_hidden": 64,
                "recurrent_activation_rnn": "sigmoid"
            }
        },
        "max_epochs": 20,
        "min_epochs": 1,
        "ongoing_trials": {
            "tuner0": "443291ecac85d951672d74098aba5d80"
        },
        "seed": 9306,
        "seed_state": 9438,
        "tried_so_far": [
            "999ad17860a2835f685acaffad22a8e9",
            "83912e964572c3e5471b9c1956ac8fc4",
            "c53014e5e1523fa1499a3162f6f21221",
            "04b5f4035d0d8e3eeee393bd63563f5a",
            "3697a835f3f1144b3b6120d9c11521eb",
            "43647cb41652f1ec535a14a749d731e9",
            "88d5a1ad41730b8854f697f9efbd68ca",
            "b5357efc9ad057e78cd58b8aa2861dc9",
            "4973018c5f6b84338d88249fc851f8d5",
            "7c86520a8761f167ee528223d9450c58",
            "9a6e1847de8a317f7e107c96a74ebd88",
            "1a7559c42715ecc6df85a76a7cb10e64"
        ]
    }
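
    A hedged sketch of the lookup described above, assuming the get_state() structure shown (whether deleting the matching trial directories is safe is exactly the open question):

    state = tuner.oracle.get_state()
    superseded = set()
    for bracket in state["brackets"]:
        for round_trials in bracket["rounds"]:
            for trial in round_trials:
                if trial["past_id"] is not None:
                    superseded.add(trial["past_id"])
    # Candidate deletions: the trial directories whose ids are in `superseded`.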
    
    enhancement 
    opened by PatternAlpha 10
  • Keras Progress Bar broken when importing kerastuner


    Looks like importing kerastuner into a trivial Keras project causes the progress bar to not overwrite each update:

    import tensorflow as tf
    import tensorflow_addons as tfa

    from tensorflow.keras.datasets import mnist
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import SGD

    # import kerastuner as kt

    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model_keras_static = Sequential()
    model_keras_static.add(Dense(512, input_dim=784, activation='sigmoid'))
    model_keras_static.add(Dense(128, activation='sigmoid'))
    model_keras_static.add(Dense(10, activation='softmax'))

    model_keras_static.compile(loss='sparse_categorical_crossentropy',
                               metrics=['accuracy'],
                               optimizer=SGD(learning_rate=0.1))
    model_keras_static.fit(x_train.reshape(x_train.shape[0], 784), y_train,
                           batch_size=1000, epochs=2, verbose=1)

    Train on 60000 samples
    Epoch 1/2
    60000/60000 [==============================] - 2s 29us/sample - loss: 2.2526 - accuracy: 0.2699
    Epoch 2/2
    60000/60000 [==============================] - 1s 25us/sample - loss: 2.1220 - accuracy: 0.5087
    <tensorflow.python.keras.callbacks.History at 0x1eb9e9f9408>

    However, if I uncomment the kerastuner import, the output no longer overwrites:

    Train on 60000 samples Epoch 1/2 60000/60000 [==============================] - ETA: 12s - loss: 2.6233 - accuracy: 0.115 - ETA: 2s - loss: 2.3653 - accuracy: 0.102 - ETA: 1s - loss: 2.3288 - accuracy: 0.12 - ETA: 0s - loss: 2.3130 - accuracy: 0.12 - ETA: 0s - loss: 2.3037 - accuracy: 0.14 - ETA: 0s - loss: 2.2962 - accuracy: 0.17 - ETA: 0s - loss: 2.2882 - accuracy: 0.18 - ETA: 0s - loss: 2.2811 - accuracy: 0.20 - ETA: 0s - loss: 2.2743 - accuracy: 0.21 - ETA: 0s - loss: 2.2680 - accuracy: 0.23 - ETA: 0s - loss: 2.2618 - accuracy: 0.24 - 1s 13us/sample - loss: 2.2607 - accuracy: 0.2506 Epoch 2/2 60000/60000 [==============================] - ETA: 0s - loss: 2.1928 - accuracy: 0.40 - ETA: 0s - loss: 2.1914 - accuracy: 0.44 - ETA: 0s - loss: 2.1852 - accuracy: 0.42 - ETA: 0s - loss: 2.1791 - accuracy: 0.45 - ETA: 0s - loss: 2.1721 - accuracy: 0.45 - ETA: 0s - loss: 2.1650 - accuracy: 0.46 - ETA: 0s - loss: 2.1577 - accuracy: 0.47 - ETA: 0s - loss: 2.1507 - accuracy: 0.48 - ETA: 0s - loss: 2.1420 - accuracy: 0.48 - ETA: 0s - loss: 2.1332 - accuracy: 0.49 - 1s 9us/sample - loss: 2.1293 - accuracy: 0.4950 <tensorflow.python.keras.callbacks.History at 0x196d7342408>

    Any suggestions ?

    absl-py==0.9.0 argon2-cffi @ file:///C:/ci/argon2-cffi_1596828549974/work astor==0.8.0 attrs==19.3.0 backcall==0.2.0 bleach==3.1.5 blinker==1.4 brotlipy==0.7.0 cachetools @ file:///tmp/build/80754af9/cachetools_1596822027882/work certifi==2020.6.20 cffi==1.14.0 chardet==3.0.4 click==7.1.2 colorama==0.4.3 cryptography==2.9.2 cycler==0.10.0 decorator==4.4.2 defusedxml==0.6.0 entrypoints==0.3 future==0.18.2 gast==0.2.2 google-auth @ file:///tmp/build/80754af9/google-auth_1596863485713/work google-auth-oauthlib==0.4.1 google-pasta==0.2.0 grpcio==1.27.2 h5py==2.10.0 idna @ file:///tmp/build/80754af9/idna_1593446292537/work importlib-metadata @ file:///C:/ci/importlib-metadata_1593446525189/work ipykernel @ file:///C:/ci/ipykernel_1596208728219/work/dist/ipykernel-5.3.4-py3-none-any.whl ipython @ file:///C:/ci/ipython_1596868620883/work ipython-genutils==0.2.0 ipywidgets==7.5.1 jedi==0.15.2 Jinja2==2.11.2 joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1593624380152/work json5==0.9.5 jsonschema @ file:///C:/ci/jsonschema_1594363671836/work jupyter-client @ file:///tmp/build/80754af9/jupyter_client_1594826976318/work jupyter-core==4.6.3 jupyterlab==2.1.5 jupyterlab-server @ file:///tmp/build/80754af9/jupyterlab_server_1594164409481/work Keras-Applications @ file:///tmp/build/80754af9/keras-applications_1594366238411/work Keras-Preprocessing==1.1.0 keras-tuner==1.0.1 kiwisolver==1.2.0 Markdown==3.1.1 MarkupSafe @ file:///C:/ci/markupsafe_1594405949945/work matplotlib @ file:///C:/ci/matplotlib-base_1592846084747/work mistune @ file:///C:/ci/mistune_1594373272338/work mkl-fft==1.1.0 mkl-random==1.1.1 mkl-service==2.3.0 nbconvert @ file:///C:/ci/nbconvert_1594372737468/work nbformat==5.0.7 notebook @ file:///C:/ci/notebook_1596837179121/work numpy @ file:///C:/ci/numpy_and_numpy_base_1596233945180/work oauthlib==3.1.0 opt-einsum==3.1.0 packaging==20.4 pandas @ file:///D:/bld/pandas_1595958729109/work pandocfilters==1.4.2 parso @ file:///tmp/build/80754af9/parso_1596826841367/work pickleshare @ file:///C:/ci/pickleshare_1594374056827/work prometheus-client==0.8.0 prompt-toolkit==3.0.5 protobuf==3.12.3 pyasn1==0.4.8 pyasn1-modules==0.2.7 pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work Pygments==2.6.1 PyJWT==1.7.1 pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1594392929924/work pyparsing==2.4.7 pyreadline==2.1 pyrsistent==0.16.0 PySocks @ file:///C:/ci/pysocks_1594394709107/work python-dateutil==2.8.1 pytz==2020.1 pywin32==227 pywinpty==0.5.7 pyzmq==19.0.1 requests @ file:///tmp/build/80754af9/requests_1592841827918/work requests-oauthlib==1.3.0 rsa @ file:///tmp/build/80754af9/rsa_1596998415516/work scikit-learn @ file:///D:/bld/scikit-learn_1596546337481/work scipy @ file:///C:/ci/scipy_1592916958183/work seaborn==0.10.1 Send2Trash==1.5.0 six==1.15.0 tabulate==0.8.7 tensorboard==2.2.1 tensorboard-plugin-wit==1.6.0 tensorflow==2.1.0 tensorflow-addons==0.9.1 tensorflow-estimator==2.1.0 termcolor==1.1.0 terminado==0.8.3 terminaltables==3.1.0 testpath==0.4.4 threadpoolctl @ file:///tmp/tmp79xdzxkt/threadpoolctl-2.1.0-py3-none-any.whl tornado==6.0.4 tqdm @ file:///home/conda/feedstock_root/build_artifacts/tqdm_1596476591553/work traitlets==4.3.3 typeguard==2.9.1 urllib3==1.25.9 wcwidth @ file:///tmp/build/80754af9/wcwidth_1593447189090/work webencodings==0.5.1 Werkzeug==0.14.1 widgetsnbextension @ file:///D:/bld/widgetsnbextension_1594164533747/work win-inet-pton==1.1.0 wincertstore==0.2 wrapt==1.12.1 zipp==3.1.0

    bug 
    opened by nazq 10
  • [WIP] Change grid search for parallel tuning


    GridSearch Redesign

    Things to consider:

    • Conditional space.
    • Lazy discovery of hps.
    • Concurrent calls.

    With these things in mind, even a simple grid search algorithm can be hard.

    Overall process

    We populate all the value sets at the beginning (only for the discovered hps, not the undiscovered ones) and put them in a queue. New populate_space() requests fetch from this queue. When a trial finishes, we check whether there are more combinations between the finished trial and its original next trial; if so, we put all of them into the queue. To check whether anything lies between two trials, we also maintain a linked list of trials sorted in ascending combination order, which gives us the trial next to any given one. We use a linked list because we keep inserting new combinations between trials to maintain the ascending order.

    Pseudo code:

    # Try to exhaust all combinations between a1 & a2:
    while next_combination(a1) < a2:
        new_a1 = next_combination(a1)
        queue.append(new_a1)
        linked_list.insert_after(item=new_a1, pos=a1)
        a1 = new_a1
    

    Compare two sets of values

    To achieve the above, we need a function that compares two sets of values and decides which one is larger in the combination order, so that we can sort them in ascending order.

    When comparing, only the active values should be considered. We compare from the leftmost to the rightmost; the first differing value decides which combination is larger.

    If they have different sets of values due to different conditional scope activation, the comparison still works, since the parent hp must differ and it sits to the left of the first differently activated hp.

    A corner case example

    We should also make it work when comparing a finished trial and an ongoing trial (whose new hps have not been reported back yet). The above logic resolves most cases, but the case where one combination is a prefix of another needs special casing: in this "prefix" case, we judge the longer one as larger.

    This decision is for the following use case:

    class MyHyperModel(keras_tuner.HyperModel):
        def build(self, hp):
            hp.Int("hp1", 0, 5)
            ...
        
        def fit(self, hp, model, **kwargs):
            hp.Int("hp2", 0, 5, default=0)
            ...
    

    In the first round of parallel Oracle.create_trial() calls, the oracle never knows about hp2. It populates hp1 from 0 to 5. Suppose the trial with hp1=0 (hp2=0 was discovered during the trial) finishes first. It starts to populate hp1=0, hp2=1 to 5 by calling next_combination(), and then it would get hp1=1,hp2=0, which is actually equal to hp1=1 (the second trial), since hp2=0 will be discovered during that trial. So the fact that two trials differ in their recorded value sets does not mean the trials are unequal. In this case, we need to check whether hp1=1,hp2=0 is greater than or equal to hp1=1, and we judge that it is.

    Here is the general description. Trial a1 < a2, and they are next to each other before a1 gained some new hps when it finished. a2 keeps running for a long time. a1 starts to produce the combinations whose order is between a1 and a2 by changing the values of the newly appeared hps. The new value sets are produced using the next-combination mechanism. Whenever a new set of values is produced, we need to judge whether all the combinations between a1 and a2 are exhausted, which is decided by whether the newly produced values are larger than a2. When a2 is a prefix of a newly produced set of values, we have exhausted the values.

    With the comparison function above, we achieve the following: given two trials, we can tell whether there are untried value sets between them via next_combination(a1) < a2 (if true, there are untried sets between a1 & a2), even when a2 is not finished.

    So even when a trial finishes with new hps, we can start to produce more trials between it and its original next trial. This is good for parallelism.
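
    A hedged sketch of the comparison just described (names hypothetical; each value set is an ordered list of (hp_name, value) pairs over the active hps, and index_of maps a value to its position in that hp's ordered choices):

    def compare(a, b, index_of):
        # Negative / zero / positive for a < b / a == b / a > b in
        # combination order.
        for (name_a, val_a), (name_b, val_b) in zip(a, b):
            ia, ib = index_of(name_a, val_a), index_of(name_b, val_b)
            if ia != ib:
                return -1 if ia < ib else 1
        # One set is a prefix of the other: the longer one is judged larger.
        return len(a) - len(b)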

    Caveat

    Do not use Oracle._tried_so_far, which does not count the new hps of a1 toward a2. Even when the space is exhausted, the new set will not equal a2 because of the new hps.

    opened by haifeng-jin 1
  • Add nightly builds of keras-tuner


    Is your feature request related to a problem? Please describe.

    Currently for ml-compiler-opt, we're using the nightly versions of tensorflow, tf-agents, and all the associated dependencies as we have recently needed to pull in upstream patches. We want to use keras-tuner to add open source hyperparameter tuning capabilities to the project (see this PR), but we're running into issues with the versioning as installing keras-tuner wants to pull in tensorboard, which is incompatible with the tb-nightly that gets pulled in by tf-nightly.

    Describe the solution you'd like

    I'd like to see keras-tuner provide nightly packages on PyPI.

    Describe alternatives you've considered

    1. Manually modifying lockfiles - we didn't have this capability until very recently, but this solution is still kind of clunky. Lockfiles are supposed to be machine-generated and not touched, and this doesn't guarantee compatibility with the tf-nightly packages, since there might be differences between the latest release of keras-tuner and tip of tree.

    Additional context

    There are a couple other reasons why I think pushing a nightly version is a good idea:

    1. Compatibility with tf-nightly: the primary reason I'd be interested in this change.
    2. If someone upstreams a patch and wants to use it without having to wait for a release to get tagged, simply pulling from the nightly version should do the trick without having to build the package from source.

    Given that all of the PyPI infrastructure and build scripts are open source in this repository (i.e., in setup.py and the GitHub workflows), I'm very much interested in implementing this myself, assuming such a change is desired and would be accepted. There's even a nightly testing job already that (with some modifications) could serve as the starting point.

    Just looking to see if this is a good idea before I start working on the implementation.

    enhancement 
    opened by boomanaiden154 0
  • HyperResNet with binary classification


    Hi maintainers,

    I am using HyperResNet for a binary classification problem, and ended up subclassing it to modify the last layer and loss to be suitable for binary rather than multi-class classification. I wondered whether this would make a welcome addition; if so, I'd be happy to come up with a PR.

    Thank you for all your marvelous work!
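
    For readers wanting the same today, a hedged sketch of the kind of subclass described (assumptions: HyperResNet's penultimate layer feeds the classification head; the eventual PR may look different):

    import keras_tuner as kt
    from tensorflow import keras

    class BinaryHyperResNet(kt.applications.HyperResNet):
        def build(self, hp):
            model = super().build(hp)
            # Swap the multi-class softmax head for a single sigmoid unit.
            x = model.layers[-2].output
            outputs = keras.layers.Dense(1, activation='sigmoid')(x)
            binary = keras.Model(model.inputs, outputs)
            binary.compile(optimizer=model.optimizer,
                           loss='binary_crossentropy',
                           metrics=['accuracy'])
            return binary

    # e.g. BinaryHyperResNet(input_shape=(224, 224, 3), classes=2)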

    opened by ZviBaratz 0
  • Allow "fixed" hyperparameters that have a different value per build

    Is your feature request related to a problem? Please describe. I would like to add hyperparameters that do not get optimized but have a different value on every build. They are more like derived hyperparameters or metadata than real hps. A simple call to hp.Fixed with changing values does not produce the expected result of a changing value in the hps.

    My goal is to include some meta information about the built model in the hps so that I can evaluate the models later without rebuilding them all. Some meta information could even be added after training and be expensive to obtain (like the time taken, memory used, ...); as such, it is not possible to get it after the hp search by just rebuilding the models.

    As a workaround I register them with a fixed value like this:

    hp.Fixed("meta/parameters", value=0)
    

    And later I override them like this (at the end of the build call):

    parameter_count = model.count_params()
    parameter_count_hp = kt.HyperParameter("meta/parameters", default=parameter_count)
    hp._register(parameter_count_hp, overwrite=True)
    

    This achieves the desired result but is a bit hacky, and the table shown while searching only displays the values in the second column, not in the first one. The first one always displays the fixed initial value.

    Describe the solution you'd like It would be nice to have a new method named something like hp.AddMeta or hp.AddUnoptimized to add such information. Another solution is to change the behavior of the hp.Fixed call, or to add an overwrite parameter to it.

    Describe alternatives you've considered An alternative is to automatically add some meta information to the hps. This could be the number of parameters, number of layers, memory usage, and a lot of other information. As not everyone needs these, it is probably better to be able to add them manually, or to have only the most important ones added automatically.

    Additional context I use the metadata in hiplot to evaluate the effect of the hps. This is a small example.

    opened by FrTerstappen 0
  • feat: :boom: Add _pseudo_ genetic search


    opened by Anselmoo 3
  • Implement Bayesian optimization with TF or Jax instead of using sklearn


    Is your feature request related to a problem? Please describe.

    Bayesian optimization might cause some performance issues due to the evaluation of the Einstein summation.

    https://github.com/keras-team/keras-tuner/blob/a7a361f9521cb1033a05aba865c86eb30784d907/keras_tuner/tuners/bayesian.py#L124-L127

    Describe the solution you'd like

    Using jax.numpy.einsum could help.

    Partial support could look like:

    try:
        from jax.numpy import einsum
    except ImportError:
        from numpy import einsum
    

    or system dependent:

    if sys.platform != "win32":
        import jax.numpy as np
    else:
        import numpy as np
    
    enhancement 
    opened by Anselmoo 4
Releases (latest: 1.1.3)
  • 1.1.3(Jul 16, 2022)

    Summary

    Bug fixes to better support AutoKeras.

    What's Changed

    • Fixed issue #677 by @Anselmoo in https://github.com/keras-team/keras-tuner/pull/678
    • Adopt safe model and trial saving practices in the multi-worker setting by @jamesmullenbach in https://github.com/keras-team/keras-tuner/pull/684
    • tuner_utils: use datetime to calculate elapsed time by @mebeim in https://github.com/keras-team/keras-tuner/pull/690
    • Add pre_create_trial callback by @jamesmullenbach in https://github.com/keras-team/keras-tuner/pull/695
    • Multi-worker file writing checks by @jamesmullenbach in https://github.com/keras-team/keras-tuner/pull/694
    • Update actions.yml by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/698
    • Add "declare_hyperparameters" to HyperModel by @jamesmullenbach in https://github.com/keras-team/keras-tuner/pull/696
    • Record best epoch info with update_trial by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/706

    New Contributors

    • @Anselmoo made their first contribution in https://github.com/keras-team/keras-tuner/pull/678
    • @jamesmullenbach made their first contribution in https://github.com/keras-team/keras-tuner/pull/684
    • @mebeim made their first contribution in https://github.com/keras-team/keras-tuner/pull/690

    Full Changelog: https://github.com/keras-team/keras-tuner/compare/1.1.2...1.1.3

  • 1.1.3rc0(Jul 15, 2022)

    What's Changed

    • Fixed issue #677 by @Anselmoo in https://github.com/keras-team/keras-tuner/pull/678
    • Adopt safe model and trial saving practices in the multi-worker setting by @jamesmullenbach in https://github.com/keras-team/keras-tuner/pull/684
    • tuner_utils: use datetime to calculate elapsed time by @mebeim in https://github.com/keras-team/keras-tuner/pull/690
    • Add pre_create_trial callback by @jamesmullenbach in https://github.com/keras-team/keras-tuner/pull/695
    • Multi-worker file writing checks by @jamesmullenbach in https://github.com/keras-team/keras-tuner/pull/694
    • Update actions.yml by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/698
    • Add "declare_hyperparameters" to HyperModel by @jamesmullenbach in https://github.com/keras-team/keras-tuner/pull/696
    • Record best epoch info with update_trial by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/706

    New Contributors

    • @Anselmoo made their first contribution in https://github.com/keras-team/keras-tuner/pull/678
    • @jamesmullenbach made their first contribution in https://github.com/keras-team/keras-tuner/pull/684
    • @mebeim made their first contribution in https://github.com/keras-team/keras-tuner/pull/690

    Full Changelog: https://github.com/keras-team/keras-tuner/compare/1.1.2...1.1.3rc0

  • 1.1.2(Mar 25, 2022)

    What's Changed

    • add --profile=black to isort by @LukeWood in https://github.com/keras-team/keras-tuner/pull/672
    • In model checkpointing callback, check logs before get objective value by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/674

    New Contributors

    • @LukeWood made their first contribution in https://github.com/keras-team/keras-tuner/pull/672

    Full Changelog: https://github.com/keras-team/keras-tuner/compare/1.1.1...1.1.2

  • 1.1.2rc0(Mar 25, 2022)

    What's Changed

    • add --profile=black to isort by @LukeWood in https://github.com/keras-team/keras-tuner/pull/672
    • In model checkpointing callback, check logs before get objective value by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/674

    New Contributors

    • @LukeWood made their first contribution in https://github.com/keras-team/keras-tuner/pull/672

    Full Changelog: https://github.com/keras-team/keras-tuner/compare/1.1.1...1.1.2rc0

  • 1.1.1(Mar 20, 2022)

    Highlights

    • Support passing a list of objectives as the objective argument.
    • Raise a better error message when the return value of run_trial() or HyperModel.fit() is of the wrong type.
    • Various bug fixes for BayesianOptimization tuner.
    • The trial IDs are changed from hex strings to integers counting from 0.

    What's Changed

    • Make hyperparameters names visible in Display output by @C-Pro in https://github.com/keras-team/keras-tuner/pull/634
    • Replace import kerastuner with import keras_tuner by @ageron in https://github.com/keras-team/keras-tuner/pull/640
    • Support multi-objective by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/641
    • reorganize the tests to follow keras best practices by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/643
    • keep Objective in oracle for backward compatibility by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/644
    • better error check for returned eval results by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/646
    • Mitigate the issue of hanging workers after chief already quits when running keras-tuner in distributed tuning mode. by @mtian29 in https://github.com/keras-team/keras-tuner/pull/645
    • Ensure hallucination checks if the Gaussian regressor has been fit be… by @brydon in https://github.com/keras-team/keras-tuner/pull/650
    • Resolves #609: Support for sklearn functions without sample_weight by @brydon in https://github.com/keras-team/keras-tuner/pull/651
    • Resolves #652 and #605: Make human readable trial_id and sync trial numbers between worker Displays by @brydon in https://github.com/keras-team/keras-tuner/pull/653
    • Update tuner.py by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/657
    • fix(bayesian): scalar optimization result (#655) by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/662
    • Generalize hallucination checks to avoid racing conditions by @alisterl in https://github.com/keras-team/keras-tuner/pull/664
    • remove scipy from required dependency by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/665
    • Import scipy.optimize by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/667

    New Contributors

    • @C-Pro made their first contribution in https://github.com/keras-team/keras-tuner/pull/634
    • @ageron made their first contribution in https://github.com/keras-team/keras-tuner/pull/640
    • @mtian29 made their first contribution in https://github.com/keras-team/keras-tuner/pull/645
    • @brydon made their first contribution in https://github.com/keras-team/keras-tuner/pull/650
    • @alisterl made their first contribution in https://github.com/keras-team/keras-tuner/pull/664

    Full Changelog: https://github.com/keras-team/keras-tuner/compare/1.1.1rc0...1.1.1

  • 1.1.1rc0(Mar 1, 2022)

    Highlights

    • Support passing a list of objectives as the objective argument.
    • Raise a better error message when the return value of run_trial() or HyperModel.fit() is of the wrong type.

    What's Changed

    • Make hyperparameters names visible in Display output by @C-Pro in https://github.com/keras-team/keras-tuner/pull/634
    • Replace import kerastuner with import keras_tuner by @ageron in https://github.com/keras-team/keras-tuner/pull/640
    • Support multi-objective by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/641
    • reorganize the tests to follow keras best practices by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/643
    • keep Objective in oracle for backward compatibility by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/644
    • better error check for returned eval results by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/646
    • Mitigate the issue of hanging workers after chief already quits when running keras-tuner in distributed tuning mode. by @mtian29 in https://github.com/keras-team/keras-tuner/pull/645
    • Ensure hallucination checks if the Gaussian regressor has been fit be… by @brydon in https://github.com/keras-team/keras-tuner/pull/650
    • Resolves #609: Support for sklearn functions without sample_weight by @brydon in https://github.com/keras-team/keras-tuner/pull/651
    • Resolves #652 and #605: Make human readable trial_id and sync trial numbers between worker Displays by @brydon in https://github.com/keras-team/keras-tuner/pull/653
    • Update tuner.py by @haifeng-jin in https://github.com/keras-team/keras-tuner/pull/657

    New Contributors

    • @C-Pro made their first contribution in https://github.com/keras-team/keras-tuner/pull/634
    • @ageron made their first contribution in https://github.com/keras-team/keras-tuner/pull/640
    • @mtian29 made their first contribution in https://github.com/keras-team/keras-tuner/pull/645
    • @brydon made their first contribution in https://github.com/keras-team/keras-tuner/pull/650

    Full Changelog: https://github.com/keras-team/keras-tuner/compare/1.1.0...1.1.1rc0

  • 1.1.0(Nov 5, 2021)

    What's Changed

    • Support HyperModel.fit() to tune the fit process.
    • Support Tuner.run_trial() to return a single float as the objective value to minimize.
    • Support Tuner.run_trial() to return a dictionary of {metric_name: value} or Keras history.
    • Allow not providing hypermodel to Tuner if override Tuner.run_trial().
    • Allow not providing objective to Tuner if HyperModel.fit() or Tuner.run_trial() return a single float.
    • Bug fixes

    Breaking Changes

    • Merged the internal class MultiExecutionTuner into Tuner, replacing all its overridden methods.
    • Removed KerasHyperModel, an internal class that wrapped the user-provided HyperModel.

    New Contributors

    • @liqiongyu made their first contribution in https://github.com/keras-team/keras-tuner/pull/594
    • @vardhanaleti made their first contribution in https://github.com/keras-team/keras-tuner/pull/595
    • @howl-anderson made their first contribution in https://github.com/keras-team/keras-tuner/pull/607

    Full Changelog: https://github.com/keras-team/keras-tuner/compare/1.0.4...1.1.0rc0

  • 1.1.0rc0(Oct 19, 2021)

    What's Changed

    • Support HyperModel.fit() to tune the fit process.
    • Support Tuner.run_trial() to return a single float as the objective value to minimize.
    • Support Tuner.run_trial() to return a dictionary of {metric_name: value} or Keras history.
    • Allow not providing hypermodel to Tuner if override Tuner.run_trial().
    • Allow not providing objective to Tuner if HyperModel.fit() or Tuner.run_trial() return a single float.
    • Bug fixes

    Breaking Changes

    • Merged the internal class MultiExecutionTuner into Tuner, replacing all its overridden methods.
    • Removed KerasHyperModel, an internal class that wrapped the user-provided HyperModel.

    New Contributors

    • @liqiongyu made their first contribution in https://github.com/keras-team/keras-tuner/pull/594
    • @vardhanaleti made their first contribution in https://github.com/keras-team/keras-tuner/pull/595
    • @howl-anderson made their first contribution in https://github.com/keras-team/keras-tuner/pull/607

    Full Changelog: https://github.com/keras-team/keras-tuner/compare/1.0.4...1.1.0rc0

  • 1.0.4(Aug 25, 2021)

    • Support DataFrame in SklearnTuner.
    • Support Tuner.search_space_summary() to print all the hyperparameters based on conditional_scopes.
    • Support TensorFlow 2.0 for backward compatibility.
    • Bug fixes and documentation improvements.
    • Raise a warning when using with TF 1.
    • Save TPUStrategy models with the TF format.
  • 1.0.4rc1(Aug 24, 2021)

    • Support DataFrame in SklearnTuner.
    • Support Tuner.search_space_summary() to print all the hyperparameters based on conditional_scopes.
    • Support TensorFlow 2.0 for backward compatibility.
    • Bug fixes and documentation improvements.
    • Raise a warning when using with TF 1.
  • 1.0.4rc0(Aug 15, 2021)

    • Support DataFrame in SklearnTuner.
    • Support Tuner.search_space_summary() to print all the hyperparameters based on conditional_scopes.
    • Support TensorFlow 2.0 for backward compatibility.
    • Bug fixes and documentation improvements.
  • 1.0.3(Jun 17, 2021)

    • Renamed import name of kerastuner to keras_tuner.
    • Renamed the Oracles to add the Oracle as suffix, e.g., RandomSearch oracle is renamed to RandomSearchOracle. (The RandomSearch tuner is still named RandomSearch.)
    • Renamed Tuner._populate_space to Tuner.populate_space.
    • Renamed Tuner._score_trial to Tuner.score_trial.
    • Renamed kt.tuners.Sklearn tuner to kt.SklearnTuner and put it at the root level of import.
    • Removed the CloudLogger feature, but the Logger class still works.
    • Supported tuning of sklearn.pipeline.Pipeline.
    • Improved the docstrings.
  • 1.0.2(Nov 20, 2020)

    • Added Multi-worker DistributionStrategy support.
    • Added EfficientNet application.
    • Added application hypermodel with augmentation.
    • Format console output information.
    • Various bug fixes.
  • 1.0.0(Oct 30, 2019)
