Commit bca39ff2 authored by Florian RICHOUX's avatar Florian RICHOUX

Last results to analyse


Former-commit-id: b205a171
parent 4808f380
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
protein1 (InputLayer) (None, None, 20) 0
__________________________________________________________________________________________________
protein2 (InputLayer) (None, None, 20) 0
__________________________________________________________________________________________________
conv1d_1 (Conv1D) (None, None, 5) 2005 protein1[0][0]
protein2[0][0]
__________________________________________________________________________________________________
max_pooling1d_1 (MaxPooling1D) (None, None, 5) 0 conv1d_1[0][0]
conv1d_1[1][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, None, 5) 20 max_pooling1d_1[0][0]
max_pooling1d_1[1][0]
__________________________________________________________________________________________________
conv1d_2 (Conv1D) (None, None, 5) 505 batch_normalization_1[0][0]
batch_normalization_1[1][0]
__________________________________________________________________________________________________
max_pooling1d_2 (MaxPooling1D) (None, None, 5) 0 conv1d_2[0][0]
conv1d_2[1][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, None, 5) 20 max_pooling1d_2[0][0]
max_pooling1d_2[1][0]
__________________________________________________________________________________________________
lstm_1 (LSTM) (None, 32) 4864 batch_normalization_2[0][0]
batch_normalization_2[1][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 64) 0 lstm_1[0][0]
lstm_1[1][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 25) 1625 concatenate_1[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 25) 100 dense_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 25) 650 batch_normalization_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 25) 100 dense_2[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 25) 650 batch_normalization_4[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 25) 100 dense_3[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 1) 26 batch_normalization_5[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 1) 0 dense_4[0][0]
==================================================================================================
Total params: 10,665
Trainable params: 10,495
Non-trainable params: 170
__________________________________________________________________________________________________
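The parameter counts in the summary above can be reproduced from the standard Keras formulas. A minimal sketch (assuming kernel size 20 for both Conv1D layers, which is what the reported counts imply; the LSTM and conv stack are shared between the two branches, so they are counted once):

```python
def conv1d_params(kernel, in_ch, filters):
    # weight tensor (kernel * in_ch * filters) + one bias per filter
    return kernel * in_ch * filters + filters

def dense_params(in_dim, units):
    return in_dim * units + units

def lstm_params(in_dim, units):
    # 4 gates, each with input weights, recurrent weights, and a bias
    return 4 * (in_dim * units + units * units + units)

def batchnorm_params(channels):
    # gamma + beta (trainable) plus moving mean + variance (non-trainable)
    return 4 * channels

total = (
    conv1d_params(20, 20, 5)     # conv1d_1: 2005
    + batchnorm_params(5)        # batch_normalization_1: 20
    + conv1d_params(20, 5, 5)    # conv1d_2: 505
    + batchnorm_params(5)        # batch_normalization_2: 20
    + lstm_params(5, 32)         # lstm_1: 4864 (shared, counted once)
    + dense_params(64, 25)       # dense_1: 1625
    + batchnorm_params(25) * 3   # batch_normalization_3..5: 300
    + dense_params(25, 25) * 2   # dense_2, dense_3: 1300
    + dense_params(25, 1)        # dense_4: 26
)
# total == 10665, matching "Total params: 10,665"

# the moving statistics of all five BatchNorm layers are non-trainable
non_trainable = 2 * (5 + 5 + 25 + 25 + 25)
# non_trainable == 170, matching "Non-trainable params: 170"
```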
@@ -29,9 +29,9 @@ class LSTM32_2Conv3_2Dense_S(AbstractModel):
         protein1 = conv2(protein1)
         protein1 = pool2(protein1)
         protein1 = batchnorm2(protein1)
-        protein1 = conv3(protein1)
-        protein1 = pool3(protein1)
-        protein1 = batchnorm3(protein1)
+        #protein1 = conv3(protein1)
+        #protein1 = pool3(protein1)
+        #protein1 = batchnorm3(protein1)
         protein1 = lstm(protein1)
         input2 = Input(shape=(None,20,), dtype=np.float32, name='protein2')
@@ -41,9 +41,9 @@ class LSTM32_2Conv3_2Dense_S(AbstractModel):
         protein2 = conv2(protein2)
         protein2 = pool2(protein2)
         protein2 = batchnorm2(protein2)
-        protein2 = conv3(protein2)
-        protein2 = pool3(protein2)
-        protein2 = batchnorm3(protein2)
+        #protein2 = conv3(protein2)
+        #protein2 = pool3(protein2)
+        #protein2 = batchnorm3(protein2)
         protein2 = lstm(protein2)
         head = layers.concatenate([protein1, protein2], axis=-1)
@@ -29,9 +29,9 @@ class LSTM32_2Conv3_4Dense_S(AbstractModel):
         protein1 = conv2(protein1)
         protein1 = pool2(protein1)
         protein1 = batchnorm2(protein1)
-        protein1 = conv3(protein1)
-        protein1 = pool3(protein1)
-        protein1 = batchnorm3(protein1)
+        #protein1 = conv3(protein1)
+        #protein1 = pool3(protein1)
+        #protein1 = batchnorm3(protein1)
         protein1 = lstm(protein1)
         input2 = Input(shape=(None,20,), dtype=np.float32, name='protein2')
@@ -41,9 +41,9 @@ class LSTM32_2Conv3_4Dense_S(AbstractModel):
         protein2 = conv2(protein2)
         protein2 = pool2(protein2)
         protein2 = batchnorm2(protein2)
-        protein2 = conv3(protein2)
-        protein2 = pool3(protein2)
-        protein2 = batchnorm3(protein2)
+        #protein2 = conv3(protein2)
+        #protein2 = pool3(protein2)
+        #protein2 = batchnorm3(protein2)
         protein2 = lstm(protein2)
         head = layers.concatenate([protein1, protein2], axis=-1)
File lstm32_3conv3_2dense_shared_2019-01-06_03:45_gpu-3-1_adam_0.001_2048_55_mirror-medium.txt
lstm32_3conv3_2dense_shared, epochs=55, batch=2048, optimizer=adam, learning rate=0.001, patience=55
Number of training samples: 85104
Loss
0: train_loss=0.6799900982697715, val_loss=0.6997605065737141
1: train_loss=0.6280333764415461, val_loss=0.6289812705108109
2: train_loss=0.4837541959859034, val_loss=1.834091910592964
3: train_loss=0.41386795336981874, val_loss=2.3090519681584536
4: train_loss=0.37924608289579137, val_loss=1.4066053382358543
5: train_loss=0.35979775642627027, val_loss=2.0909988249432296
6: train_loss=0.3480355662720435, val_loss=1.9914535727335052
7: train_loss=0.335607010695378, val_loss=1.4803673835203106
8: train_loss=0.32806927303574007, val_loss=0.45117945285524746
9: train_loss=0.3220277748950152, val_loss=0.5947588221124248
10: train_loss=0.31877007339766744, val_loss=2.1863369878118264
11: train_loss=0.3125224515535169, val_loss=0.9199483262694881
12: train_loss=0.31047472207589716, val_loss=1.305245375116313
13: train_loss=0.3069303399258302, val_loss=0.6270787233198848
14: train_loss=0.3014528531904036, val_loss=0.6049941075961985
15: train_loss=0.29758701638902885, val_loss=0.4425861762217437
16: train_loss=0.29387483910121154, val_loss=0.7240578894439641
17: train_loss=0.2928977593328523, val_loss=1.0219202274336046
18: train_loss=0.2905614507626402, val_loss=1.3090468079265427
19: train_loss=0.28768124118584576, val_loss=0.7512391149821064
20: train_loss=0.2869332683536876, val_loss=0.3947440334497683
21: train_loss=0.28439446455062073, val_loss=0.6892616856936786
22: train_loss=0.28543079556054385, val_loss=0.5889539578053569
23: train_loss=0.28126003662543453, val_loss=0.5191930367057436
24: train_loss=0.2783720697489733, val_loss=0.38270354658165356
25: train_loss=0.2784205080927662, val_loss=0.4396661711248625
26: train_loss=0.27664039945844465, val_loss=0.38816280928217406
27: train_loss=0.27638360475279106, val_loss=0.6077685693351252
28: train_loss=0.27516298582957727, val_loss=0.4000612534704641
29: train_loss=0.2777540695624684, val_loss=0.8647484898548605
30: train_loss=0.2709167645342436, val_loss=0.5361901867912184
31: train_loss=0.26895173705034314, val_loss=0.5117508500158982
32: train_loss=0.26853321996661234, val_loss=0.38446994621153835
33: train_loss=0.26788876109451343, val_loss=0.6343548430739727
34: train_loss=0.26751430926297837, val_loss=0.3910971844722544
35: train_loss=0.26628884894997135, val_loss=0.5104015339482614
36: train_loss=0.2654115781566213, val_loss=0.37558072473028115
37: train_loss=0.26501847214998214, val_loss=0.3841704403651224
38: train_loss=0.2625839278749067, val_loss=0.3869497032856127
39: train_loss=0.26252552468562895, val_loss=0.3848755787322979
40: train_loss=0.2625487379829201, val_loss=0.44221002853636326
41: train_loss=0.25997104140954214, val_loss=0.529710524368725
42: train_loss=0.2594636706766676, val_loss=0.9966231871554937
43: train_loss=0.25630343598538624, val_loss=0.5845257073775634
44: train_loss=0.25726745291230463, val_loss=0.39202547393189524
45: train_loss=0.2559448567340418, val_loss=0.4027062015356056
46: train_loss=0.25465056742239456, val_loss=0.4341520394527341
47: train_loss=0.2582665998744301, val_loss=0.3742032842282069
48: train_loss=0.2569219624890849, val_loss=0.4901083228768843
49: train_loss=0.2529401668291563, val_loss=0.4931294202265325
50: train_loss=0.25420817284571495, val_loss=0.3672841096971247
51: train_loss=0.2541189195462514, val_loss=0.37641034593375383
52: train_loss=0.25212358359354603, val_loss=0.370302761494509
53: train_loss=0.2520684733227649, val_loss=0.364379785893358
54: train_loss=0.2513853672819717, val_loss=0.38268557368472483
///////////////////////////////////////////
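A quick way to read a loss log like the one above is to parse the `val_loss` values and take the minimum. A sketch over a few of the lines (in the full log, epoch 53 has the lowest validation loss, ~0.3644):

```python
import re

# three lines copied verbatim from the loss log above
log = """50: train_loss=0.25420817284571495, val_loss=0.3672841096971247
53: train_loss=0.2520684733227649, val_loss=0.364379785893358
54: train_loss=0.2513853672819717, val_loss=0.38268557368472483"""

pattern = re.compile(r"^(\d+): train_loss=([\d.]+), val_loss=([\d.]+)$", re.M)
history = [(int(epoch), float(val)) for epoch, _, val in pattern.findall(log)]

# pick the epoch whose validation loss is smallest
best_epoch, best_val = min(history, key=lambda t: t[1])
```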
Accuracy
0: train_acc=0.5763888887207991, val_acc=0.5238652315401581
1: train_acc=0.6529305318419838, val_acc=0.6987989397042256
2: train_acc=0.7754277118862875, val_acc=0.5329121820195158
3: train_acc=0.820760481461566, val_acc=0.5289346434325887
4: train_acc=0.8416290658603689, val_acc=0.5710497576888053
5: train_acc=0.8540256625280543, val_acc=0.5290906253758765
6: train_acc=0.8595013159584072, val_acc=0.5237092496154647
7: train_acc=0.8667277682094248, val_acc=0.5425830600488913
8: train_acc=0.8698416053296301, val_acc=0.8019029786128965
9: train_acc=0.8735782100778254, val_acc=0.7423958821799487
10: train_acc=0.8739777217011722, val_acc=0.5158321634504431
11: train_acc=0.877855329568235, val_acc=0.6262673541492093
12: train_acc=0.8793241208624764, val_acc=0.5492122908907437
13: train_acc=0.8800643920158843, val_acc=0.7234440805517443
14: train_acc=0.883307482631925, val_acc=0.7348307597713414
15: train_acc=0.8845882687404609, val_acc=0.8135236305056971
16: train_acc=0.8872673437881793, val_acc=0.6973171115962862
17: train_acc=0.8869148332455437, val_acc=0.597254717448557
18: train_acc=0.8877373565227202, val_acc=0.5527998747868018
19: train_acc=0.8893706526360043, val_acc=0.6781313373506618
20: train_acc=0.889582158591788, val_acc=0.8433161741615067
21: train_acc=0.8908276928056114, val_acc=0.7019185777146365
22: train_acc=0.8897349126111701, val_acc=0.7429418185165939
23: train_acc=0.8913094568328764, val_acc=0.7735922628047035
24: train_acc=0.8941295356830274, val_acc=0.8469817489548307
25: train_acc=0.893424515751973, val_acc=0.8179691146063816
26: train_acc=0.8945760479276399, val_acc=0.8493214774905306
27: train_acc=0.8944585451754351, val_acc=0.7276555921093868
28: train_acc=0.8936595226123069, val_acc=0.8422243008560034
29: train_acc=0.8933892651011249, val_acc=0.6293090013181377
30: train_acc=0.8966558563751842, val_acc=0.768990796760731
31: train_acc=0.897983643710109, val_acc=0.7700826699732618
32: train_acc=0.8976781350438096, val_acc=0.8564186542366247
33: train_acc=0.8979131419041435, val_acc=0.7261737637969079
34: train_acc=0.8973726264895701, val_acc=0.8447980023438242
35: train_acc=0.899299680413463, val_acc=0.7725783802477102
36: train_acc=0.8996169394872134, val_acc=0.8569645909451598
37: train_acc=0.8990529237732131, val_acc=0.8540009347292808
38: train_acc=0.9002749580349951, val_acc=0.850101387021025
39: train_acc=0.9001222031975761, val_acc=0.8555607537530807
40: train_acc=0.8994994362979752, val_acc=0.8127437209566081
41: train_acc=0.9008272232182785, val_acc=0.7724223984531783
42: train_acc=0.9012267342813258, val_acc=0.6077834959614762
43: train_acc=0.9029657828402174, val_acc=0.733348931477457
44: train_acc=0.9023430152682573, val_acc=0.8412884094380045
45: train_acc=0.9029540325112082, val_acc=0.8380127893727388
46: train_acc=0.9029070317106468, val_acc=0.8444860384386539
47: train_acc=0.9008272234311923, val_acc=0.8611761047248442
48: train_acc=0.9012854860496378, val_acc=0.7858368424148721
49: train_acc=0.9044463247660296, val_acc=0.7958196841820673
50: train_acc=0.9038000560093697, val_acc=0.8600842314193409
51: train_acc=0.9033770446356895, val_acc=0.8568086113540752
52: train_acc=0.9037295547637035, val_acc=0.860630168127876
53: train_acc=0.9047753339782876, val_acc=0.8604741861473991
54: train_acc=0.9041525660253239, val_acc=0.8537669619073215
///////////////////////////////////////////
Validation metrics
Number of 0 predicted: 6875
Number of 1 predicted: 5947
Validation precision: 0.8258642765685019
Validation recall: 0.8676643685892046
Validation F1-score: 0.8462484624846249
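The reported F1-score is the harmonic mean of the precision and recall above; a quick consistency check:

```python
precision = 0.8258642765685019  # reported validation precision
recall = 0.8676643685892046     # reported validation recall

# F1 = harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
# agrees with the reported validation F1-score (~0.84625)
```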
File lstm32_3conv3_2dense_shared_2019-01-06_04:03_gpu-3-1_adam_0.001_2048_55_test-mirror-medium.txt
lstm32_3conv3_2dense_shared, epochs=55, batch=2048, optimizer=adam, learning rate=0.001, patience=55
Number of training samples: 97926
Test loss: 0.3981160481060257
Test accuracy: 0.8462439481399961
Number of 0 predicted: 6137
Number of 1 predicted: 6669
Test precision: 0.8770860077021823
Test recall: 0.819613135402609
Test F1-score: 0.8473761723897373
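The test metrics above are mutually consistent: assuming the "Number of 0/1 predicted" lines report the true class sizes (as in the validation block), the confusion matrix can be recovered from precision and recall, and it reproduces the reported accuracy.

```python
pos, neg = 6669, 6137                  # true class counts reported above
precision = 0.8770860077021823         # reported test precision
recall = 0.819613135402609             # reported test recall

tp = round(recall * pos)               # recall = tp / pos
predicted_pos = round(tp / precision)  # precision = tp / predicted_pos
fp = predicted_pos - tp
fn = pos - tp
tn = neg - fp

accuracy = (tp + tn) / (pos + neg)
# agrees with the reported test accuracy (~0.84624)
```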