Commit 8b4eab38 authored by Florian RICHOUX

Merge branch 'develop' of gitlab.univ-nantes.fr:richoux-f/DeepPPI into develop

parents 89174c08 00c52a5a
keras/models/figs/fc2_2dense.png (image changed: 70.4 KB → 94.7 KB)
@@ -5,35 +5,36 @@ protein1 (InputLayer) (None, 1166, 20) 0
 __________________________________________________________________________________________________
 protein2 (InputLayer) (None, 1166, 20) 0
 __________________________________________________________________________________________________
-lambda_1 (Lambda) (None, 1166, 20) 0 protein1[0][0]
+flatten_1 (Flatten) (None, 23320) 0 protein1[0][0]
 __________________________________________________________________________________________________
-lambda_2 (Lambda) (None, 1166, 20) 0 protein2[0][0]
+flatten_2 (Flatten) (None, 23320) 0 protein2[0][0]
 __________________________________________________________________________________________________
-lambda_3 (Lambda) (None, 1166, 20) 0 protein1[0][0]
+dense_1 (Dense) (None, 1000) 23321000 flatten_1[0][0]
 __________________________________________________________________________________________________
-lambda_4 (Lambda) (None, 1166, 20) 0 protein2[0][0]
+dense_3 (Dense) (None, 1000) 23321000 flatten_2[0][0]
 __________________________________________________________________________________________________
-lambda_5 (Lambda) (None, 1166, 20) 0 protein1[0][0]
+batch_normalization_1 (BatchNor (None, 1000) 4000 dense_1[0][0]
 __________________________________________________________________________________________________
-lambda_6 (Lambda) (None, 1166, 20) 0 protein2[0][0]
+batch_normalization_3 (BatchNor (None, 1000) 4000 dense_3[0][0]
 __________________________________________________________________________________________________
-lambda_7 (Lambda) (None, 1166, 20) 0 protein1[0][0]
+dense_2 (Dense) (None, 1000) 1001000 batch_normalization_1[0][0]
 __________________________________________________________________________________________________
-lambda_8 (Lambda) (None, 1166, 20) 0 protein2[0][0]
+dense_4 (Dense) (None, 1000) 1001000 batch_normalization_3[0][0]
 __________________________________________________________________________________________________
-model_1 (Model) (None, 1) 48860601 lambda_1[0][0]
-                                   lambda_2[0][0]
-                                   lambda_3[0][0]
-                                   lambda_4[0][0]
-                                   lambda_5[0][0]
-                                   lambda_6[0][0]
-                                   lambda_7[0][0]
-                                   lambda_8[0][0]
+batch_normalization_2 (BatchNor (None, 1000) 4000 dense_2[0][0]
 __________________________________________________________________________________________________
-activation_1 (Concatenate) (None, 1) 0 model_1[1][0]
-                                       model_1[2][0]
-                                       model_1[3][0]
-                                       model_1[4][0]
+batch_normalization_4 (BatchNor (None, 1000) 4000 dense_4[0][0]
+__________________________________________________________________________________________________
+concatenate_1 (Concatenate) (None, 2000) 0 batch_normalization_2[0][0]
+                                           batch_normalization_4[0][0]
+__________________________________________________________________________________________________
+dense_5 (Dense) (None, 100) 200100 concatenate_1[0][0]
+__________________________________________________________________________________________________
+batch_normalization_5 (BatchNor (None, 100) 400 dense_5[0][0]
+__________________________________________________________________________________________________
+dense_6 (Dense) (None, 1) 101 batch_normalization_5[0][0]
+__________________________________________________________________________________________________
+activation_1 (Activation) (None, 1) 0 dense_6[0][0]
 ==================================================================================================
 Total params: 48,860,601
 Trainable params: 48,852,401
......
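The fc2_2dense totals in the diff above can be cross-checked by hand with the standard Keras parameter formulas (Dense: inputs × units + units; BatchNormalization: 4 × features, of which the two moving statistics are non-trainable). A minimal sketch, assuming the 1166 × 20 inputs shown in the summary:

```python
# Cross-check of the fc2_2dense parameter counts from the summary above.
# Dense layer: params = inputs * units + units (weights + biases).
# BatchNormalization: params = 4 * features (gamma, beta, moving mean/var);
# the two moving statistics (2 * features) are non-trainable.

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

def bn_params(n_features):
    return 4 * n_features

flat = 1166 * 20                          # flatten_1 / flatten_2 -> 23320
per_branch = (dense_params(flat, 1000)    # dense_1 / dense_3: 23,321,000
              + bn_params(1000)           # batch_normalization_1 / _3: 4,000
              + dense_params(1000, 1000)  # dense_2 / dense_4: 1,001,000
              + bn_params(1000))          # batch_normalization_2 / _4: 4,000
head = (dense_params(2000, 100)           # dense_5: 200,100
        + bn_params(100)                  # batch_normalization_5: 400
        + dense_params(100, 1))           # dense_6: 101
total = 2 * per_branch + head
non_trainable = 4 * 2 * 1000 + 2 * 100    # moving stats of the five BN layers
print(total, total - non_trainable)       # 48860601 48852401
```

Both values match the summary: 48,860,601 total and 48,852,401 trainable.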
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
protein1 (InputLayer) (None, None, 20) 0
__________________________________________________________________________________________________
protein2 (InputLayer) (None, None, 20) 0
__________________________________________________________________________________________________
conv1d_1 (Conv1D) (None, None, 5) 2005 protein1[0][0]
protein2[0][0]
__________________________________________________________________________________________________
max_pooling1d_1 (MaxPooling1D) (None, None, 5) 0 conv1d_1[0][0]
conv1d_1[1][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, 5) 20 max_pooling1d_1[0][0]
max_pooling1d_1[1][0]
__________________________________________________________________________________________________
conv1d_2 (Conv1D) (None, None, 5) 505 batch_normalization_1[0][0]
batch_normalization_1[1][0]
__________________________________________________________________________________________________
max_pooling1d_2 (MaxPooling1D) (None, None, 5) 0 conv1d_2[0][0]
conv1d_2[1][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, None, 5) 20 max_pooling1d_2[0][0]
max_pooling1d_2[1][0]
__________________________________________________________________________________________________
conv1d_3 (Conv1D) (None, None, 5) 505 batch_normalization_2[0][0]
batch_normalization_2[1][0]
__________________________________________________________________________________________________
max_pooling1d_3 (MaxPooling1D) (None, None, 5) 0 conv1d_3[0][0]
conv1d_3[1][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, None, 5) 20 max_pooling1d_3[0][0]
max_pooling1d_3[1][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) (None, 64) 17920 batch_normalization_3[0][0]
batch_normalization_3[1][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 128) 0 lstm_1[0][0]
lstm_1[1][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 25) 3225 concatenate_1[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 25) 100 dense_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 1) 26 batch_normalization_4[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 1) 0 dense_2[0][0]
==================================================================================================
Total params: 24,346
Trainable params: 24,266
Non-trainable params: 80
__________________________________________________________________________________________________
File fc2_2dense_2019-01-03_02:52_gpu-0-1_nadam_0.002_1024_15_mirror-double.txt
fc2_2dense, epochs=15, batch=1024, optimizer=nadam, learning rate=0.002, patience=15
Number of training samples: 91036
Loss
0: train_loss=0.36501177620178543, val_loss=0.2392944501659002
1: train_loss=0.19815394991355786, val_loss=0.2057685309169999
2: train_loss=0.12208851836214188, val_loss=0.19690281326210674
3: train_loss=0.0715234048513653, val_loss=0.2264957801496508
4: train_loss=0.03894896965696992, val_loss=0.2476133049754511
5: train_loss=0.02054658807950634, val_loss=0.2529032679103993
6: train_loss=0.01005931429854183, val_loss=0.2619100981915491
7: train_loss=0.005895763559153901, val_loss=0.2766181545487866
8: train_loss=0.00243523188266281, val_loss=0.26215376339015173
9: train_loss=0.0009202075296643571, val_loss=0.2621832119880934
10: train_loss=0.0006837855870210823, val_loss=0.2650636923507597
11: train_loss=0.0005785938947553577, val_loss=0.26800069097097673
12: train_loss=0.00049187975275842, val_loss=0.27173215197064143
13: train_loss=0.00043310047310694225, val_loss=0.27383288328368893
14: train_loss=0.0004137822801393307, val_loss=0.27538527591922807
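The loss trace above shows the classic overfitting pattern: val_loss bottoms out early while train_loss keeps falling toward zero, which is exactly what a patience-based stopping rule is meant to catch. A small sketch that picks the best checkpoint from the logged values (rounded here for brevity):

```python
# Select the epoch with minimal validation loss from the log above.
val_loss = [0.2393, 0.2058, 0.1969, 0.2265, 0.2476, 0.2529, 0.2619,
            0.2766, 0.2622, 0.2622, 0.2651, 0.2680, 0.2717, 0.2738, 0.2754]
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)
print(best_epoch)  # 2 -- training past epoch 2 only increases val_loss
```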
///////////////////////////////////////////
Accuracy
0: train_acc=0.8541236431633548, val_acc=0.9044458662416961
1: train_acc=0.9218221365491475, val_acc=0.9220374223519836
2: train_acc=0.9525242762362592, val_acc=0.9378698227997686
3: train_acc=0.9735489256787716, val_acc=0.9354709739420446
4: train_acc=0.9869941559170602, val_acc=0.9392291700079329
5: train_acc=0.9938266180176439, val_acc=0.9425875579817056
6: train_acc=0.9973087567578898, val_acc=0.9436270593262359
7: train_acc=0.998670855582643, val_acc=0.9434671357843603
8: train_acc=0.9995935673799377, val_acc=0.9473052930397242
9: train_acc=0.999989015334593, val_acc=0.9478650243686864
10: train_acc=1.0, val_acc=0.948184870842379
11: train_acc=1.0, val_acc=0.9482648324608021
12: train_acc=1.0, val_acc=0.9481049092239558
13: train_acc=1.0, val_acc=0.9480249476055327
14: train_acc=1.0, val_acc=0.9480249476055327
///////////////////////////////////////////
Validation metrics
Number of 0 predicted: 5998
Number of 1 predicted: 6508
Validation precision: 0.9806366918280276
Validation recall: 0.9182544560540873
Validation F1-score: 0.9484208855737185
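The reported F1-score is the harmonic mean of the precision and recall logged just above; recomputing it confirms the value:

```python
# F1 = 2PR / (P + R), using the validation precision/recall logged above.
precision = 0.9806366918280276
recall = 0.9182544560540873
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # matches the logged 0.9484208855737185 (to floating-point error)
```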
File fc2_2dense_2019-01-03_03:14_gpu-0-1_nadam_0.002_1024_3_test-mirror-double_train-val.txt
fc2_2dense, epochs=3, batch=1024, optimizer=nadam, learning rate=0.002, patience=3
Number of training samples: 103542
Test loss: 0.8568853457768758
Test accuracy: 0.7958333333333333
Number of 0 predicted: 565
Number of 1 predicted: 155
Test precision: 0.5153846153846153
Test recall: 0.864516129032258
Test F1-score: 0.6457831325301204
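This first test set is heavily imbalanced (565 negatives vs 155 positives), and the full confusion matrix can be recovered from the logged precision, recall, and class counts; the reconstructed counts reproduce the logged accuracy, showing how a 0.80 accuracy coexists with a 0.52 precision. A sketch:

```python
# Reconstruct the confusion matrix from the logged precision/recall
# and class counts (565 negatives, 155 positives).
precision, recall = 0.5153846153846153, 0.864516129032258
n_neg, n_pos = 565, 155
tp = round(recall * n_pos)        # true positives: recall * positives
fp = round(tp / precision) - tp   # false positives from precision = tp/(tp+fp)
fn = n_pos - tp                   # missed positives
tn = n_neg - fp                   # correctly rejected negatives
accuracy = (tp + tn) / (n_neg + n_pos)
print(tp, fp, fn, tn, accuracy)   # 134 126 21 439 0.795833...
```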
File fc2_2dense_2019-01-03_03:18_gpu-0-1_nadam_0.002_1024_3_test-mirror-double_train-val_test-medium.txt
fc2_2dense, epochs=3, batch=1024, optimizer=nadam, learning rate=0.002, patience=3
Test loss: 0.10233030778283317
Test accuracy: 0.9640793378104013
Number of 0 predicted: 6610
Number of 1 predicted: 6196
Test precision: 0.9602053915275995
Test recall: 0.9657843770174306
Test F1-score: 0.9629868039909881
File lstm32_3conv3_2dense_shared_2019-01-03_02:22_gpu-0-1_nadam_0.002_1024_200_test-mirror_double_train-val.txt
lstm32_3conv3_2dense_shared, epochs=200, batch=1024, optimizer=nadam, learning rate=0.002, patience=200
Test loss: 1.1455803977118597
Test accuracy: 0.7638888888888888
Number of 0 predicted: 568
Number of 1 predicted: 152
Test precision: 0.4653846153846154
Test recall: 0.7960526315789473
Test F1-score: 0.587378640776699
File lstm32_3conv3_2dense_shared_2019-01-03_02:23_gpu-0-1_nadam_0.002_1024_200_test-mirror_double_train-val_on-double-medium.txt
lstm32_3conv3_2dense_shared, epochs=200, batch=1024, optimizer=nadam, learning rate=0.002, patience=200
Test loss: 0.19429034672359852
Test accuracy: 0.9256598469374349
Number of 0 predicted: 7076
Number of 1 predicted: 5730
Test precision: 0.8833440308087291
Test recall: 0.9607329842931938
Test F1-score: 0.9204146463802041