...
In our example we run a machine learning algorithm that produces flare predictions to store in our database. The algorithm consists of a training phase and a prediction phase. During the training phase the algorithm learns and tunes its parameters, which are stored in the database as a configuration for later use. During the prediction phase we then use this configuration to compute flare predictions, which are also stored in the database.
```python
import requests

def train_model(model, train_data, validation_data, max_epochs, batch_size):
    # train model (e.g. until max_epochs is reached or the validation loss increases)
    ...
    # store the model parameters in the database as a configuration
    # (r_id and cfg_name are assumed to be defined in the surrounding context)
    post_data = {
        "algorithm_run_id": r_id,
        "config_data": model.get_parameters(),
        "description": {},
    }
    requests.post('http://localhost:8004/algoconfig/%s' % cfg_name, json=post_data)

def test_model(model, test_data):
    # test model (e.g. predict the test_data)
    ...
    # store the predictions in the database
    (time_start, position_hg, prediction_data) = model.get_prediction()
    post_data = [
        {
            "algorithm_config": cfg_name,
            "algorithm_run_id": r_id,
            "lat_hg": position_hg[0],
            "long_hg": position_hg[1],
            "prediction_data": prediction_data,
            "source_data": [get_fc_id(row) for row in test_data],
            "time_start": time_start,
        }
    ]
    # the bulk endpoint takes a list of records, so we send a JSON body
    requests.post('http://localhost:8004/prediction/bulk', json=post_data)
```
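The helper functions above assume a model object that exposes `get_parameters()` and `get_prediction()`. A minimal stand-in illustrating that assumed interface might look as follows; the class name, parameter names, and all values here are hypothetical and only serve to show the expected shapes:

```python
class DummyModel:
    """Hypothetical stand-in for the model interface assumed above."""

    def get_parameters(self):
        # tuned parameters, returned as a JSON-serialisable dict
        return {"learning_rate": 0.01, "hidden_units": 64}

    def get_prediction(self):
        # returns (time_start, (lat_hg, long_hg), prediction_data)
        return ("2024-01-01T00:00:00", (12.5, -30.0), {"flare_probability": 0.42})

model = DummyModel()
time_start, position_hg, prediction_data = model.get_prediction()
print(position_hg[0])  # heliographic latitude of the prediction
```

Any real model can be used as long as it provides these two methods with the same return shapes.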
Implementation
Prepare
First of all, you have to define the data you would like to store in the prediction service. In our example we are going to store predictions from a machine learning algorithm in the dataset ml-algorithms.
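Before posting anything, it can help to sketch the shape of a single prediction record for the ml-algorithms dataset. The field names below follow the bulk-prediction payload from the example above; all values, the configuration name, and the run id are made up for illustration:

```python
# Hypothetical example record for the ml-algorithms dataset; the field
# names mirror the bulk-prediction payload shown earlier, values are made up.
record = {
    "algorithm_config": "my-flare-config",  # assumed configuration name
    "algorithm_run_id": 1,                  # assumed run id
    "lat_hg": 12.5,                         # heliographic latitude
    "long_hg": -30.0,                       # heliographic longitude
    "prediction_data": {"flare_probability": 0.42},
    "source_data": [101, 102],              # ids of the input data rows
    "time_start": "2024-01-01T00:00:00",
}

# a simple sanity check before sending the record in a bulk request
required = {"algorithm_config", "algorithm_run_id", "lat_hg",
            "long_hg", "prediction_data", "source_data", "time_start"}
assert required.issubset(record)
```

Validating the record locally like this catches missing fields before they turn into rejected requests.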
...