hyperopt: Distributed Asynchronous Hyperparameter Optimization.
Efficiently optimize hyperparameters for machine learning models.
Minimize an objective function over a discrete choice space using the TPE algorithm:
$ python -c "from hyperopt import fmin, tpe, hp; fmin(fn=objective, space=hp.choice('x', [1, 2, 3]), algo=tpe.suggest, max_evals=100)"

Tune model hyperparameters over a dictionary of integer-valued ranges:
$ python -c "from hyperopt import fmin, tpe, hp; fmin(fn=evaluate_model, space={'n_estimators': hp.randint('n_est', 500), 'max_depth': hp.randint('depth', 30)}, algo=tpe.suggest, max_evals=50)"

Parallelize trials across a Spark cluster with SparkTrials:
$ python -c "from hyperopt import fmin, tpe, SparkTrials; fmin(fn=train_model, space=search_space, algo=tpe.suggest, trials=SparkTrials(), max_evals=200)"

Search a log-uniform range with the adaptive TPE algorithm:
$ python -c "from hyperopt import fmin, atpe, hp; best = fmin(fn=validate_clf, space={'C': hp.loguniform('C', -5, 5)}, algo=atpe.suggest, max_evals=100)"

Record all trials in a Trials object and persist it to disk with pickle:
$ python -c "import pickle; from hyperopt import fmin, tpe, Trials; trials = Trials(); fmin(fn=objective, space=space, algo=tpe.suggest, trials=trials, max_evals=100); pickle.dump(trials, open('my_trials.pkl', 'wb'))"

Note: `objective`, `evaluate_model`, `train_model`, `validate_clf`, and `search_space` are user-defined placeholders; the objective function receives a sampled point from the search space and returns the loss to minimize.