The huge number of possible variations (hyperparameters) within a neural network model makes it very hard to build a completely automated testing tool. On the other hand, tuning hyperparameters manually is very time-consuming.
Unlike random automated tuning, Bayesian optimisation methods choose the next hyperparameter values according to the models that performed well in the past.
One Python implementation of this approach is called Hyperopt. If you haven't read this general post about Hyperopt, I strongly recommend doing so.
It’s assumed that the reader knows the basics of Feed-Forward and LSTM neural networks.
Example for a regression problem with a Feed-Forward neural network
Example for a binary classification problem with an LSTM neural network
The same approach applies to LSTM networks. In this case we have to build the model a bit differently, since we are dealing with both a classification problem and LSTMs; otherwise, the principle is the same. You can find the HTML version here and the GitHub repository with the Jupyter Notebook and PDF here.
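A model-building function for the LSTM classifier might look like the sketch below, assuming Keras; the function name, parameter names and layer sizes are illustrative, and the actual notebook may differ:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout

def build_lstm(params, timesteps, n_features):
    """Build an LSTM binary classifier from a dict of sampled
    hyperparameters (illustrative sketch, not the notebook's code)."""
    model = Sequential([
        LSTM(int(params['units']), input_shape=(timesteps, n_features)),
        Dropout(params['dropout']),
        Dense(1, activation='sigmoid'),   # binary classification head
    ])
    model.compile(optimizer=params['optimizer'],
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model

# example: one hyperparameter combination sampled by Hyperopt
model = build_lstm({'units': 8, 'dropout': 0.2, 'optimizer': 'adam'},
                   timesteps=5, n_features=3)
```

Hyperopt's objective function would call such a builder with each sampled combination, train the model, and return the validation loss.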
Dealing with errors caused by some hyperparameter combinations
I faced many errors with the LSTM network for certain hyperparameter values. Re-executing the tuning is enough to make such errors disappear, since they depend on the specific set of values chosen. To avoid annoying manual re-executions, the general approach I suggest is to put the parameter tuning inside a try ... except block within an always-true while loop: if an error occurs, continue with the next iteration; otherwise, break and show the results. This is the section where I used this approach (see the LSTM implementation above):
while True:                          # 'objective', 'search_space' and 'keep_trials' as defined earlier
    try:
        opt_params = fmin(
            fn=objective, space=search_space, algo=tpe.suggest,
            max_evals=18,            # stop searching after 18 iterations
            trials=keep_trials)
        break                        # tuning succeeded: stop retrying
    except Exception:
        continue                     # failing combination: re-execute the tuning
with open('store_trials_LSTM.pckl', 'wb') as f:   # store trials in a file
    pickle.dump(keep_trials, f)
print('number of trials:', len(keep_trials.trials))