Commit 80e6eeea authored by Florian RICHOUX

Add FC results with 24 AA

parent 8d8ae11c
keras/models/figs/fc2_20_2dense.png (image updated: 95.4 KB → 95.8 KB)
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
-protein1 (InputLayer) (None, 1166, 20) 0
+protein1 (InputLayer) (None, 1166, 24) 0
__________________________________________________________________________________________________
-protein2 (InputLayer) (None, 1166, 20) 0
+protein2 (InputLayer) (None, 1166, 24) 0
__________________________________________________________________________________________________
-flatten_1 (Flatten) (None, 23320) 0 protein1[0][0]
+flatten_1 (Flatten) (None, 27984) 0 protein1[0][0]
__________________________________________________________________________________________________
-flatten_2 (Flatten) (None, 23320) 0 protein2[0][0]
+flatten_2 (Flatten) (None, 27984) 0 protein2[0][0]
__________________________________________________________________________________________________
-dense_1 (Dense) (None, 20) 466420 flatten_1[0][0]
+dense_1 (Dense) (None, 20) 559700 flatten_1[0][0]
__________________________________________________________________________________________________
-dense_3 (Dense) (None, 20) 466420 flatten_2[0][0]
+dense_3 (Dense) (None, 20) 559700 flatten_2[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 20) 80 dense_1[0][0]
__________________________________________________________________________________________________
@@ -36,7 +36,7 @@ dense_6 (Dense) (None, 1) 21 batch_normaliza
__________________________________________________________________________________________________
activation_1 (Activation) (None, 1) 0 dense_6[0][0]
==================================================================================================
-Total params: 934,921
-Trainable params: 934,721
+Total params: 1,121,481
+Trainable params: 1,121,281
Non-trainable params: 200
__________________________________________________________________________________________________
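The change follows directly from the commit message: the per-residue encoding grows from 20 to 24 features, so each flattened protein becomes 1166 × 24 = 27,984 values and the first Dense(20) of each branch grows to 27,984 × 20 + 20 = 559,700 parameters. The new totals (1,121,481 parameters, 200 of them non-trainable) are consistent with two Dense(20) + BatchNormalization blocks per protein branch, a concatenation, one more Dense(20) + BatchNormalization, and a final Dense(1) followed by a sigmoid activation. The sketch below is only a reconstruction that reproduces those counts; the layers hidden behind the collapsed hunk and the activations inside the Dense layers are assumptions, not the committed definition.

```python
# Hedged sketch of fc2_20_2dense with the 24-feature encoding.
# Parameter counts are annotated so they can be checked against the summary above.
from keras.layers import Input, Flatten, Dense, BatchNormalization, Activation, concatenate
from keras.models import Model

def protein_branch(seq):
    # One flattened protein, then two Dense(20) + BatchNorm blocks.
    x = Flatten()(seq)                    # (None, 1166, 24) -> (None, 27984)
    x = Dense(20, activation='relu')(x)   # 27984*20 + 20 = 559,700 params ('relu' is an assumption)
    x = BatchNormalization()(x)           # 80 params, 40 non-trainable
    x = Dense(20, activation='relu')(x)   # 20*20 + 20 = 420 params
    x = BatchNormalization()(x)           # 80 params, 40 non-trainable
    return x

protein1 = Input(shape=(1166, 24), name='protein1')
protein2 = Input(shape=(1166, 24), name='protein2')

merged = concatenate([protein_branch(protein1), protein_branch(protein2)])
x = Dense(20, activation='relu')(merged)  # 40*20 + 20 = 820 params
x = BatchNormalization()(x)               # 80 params, 40 non-trainable
x = Dense(1)(x)                           # 20 + 1 = 21 params (dense_6 in the summary)
output = Activation('sigmoid')(x)         # activation_1

model = Model(inputs=[protein1, protein2], outputs=output)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()  # 1,121,481 total / 1,121,281 trainable / 200 non-trainable
```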
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
-protein1 (InputLayer) (None, None, 20) 0
+protein1 (InputLayer) (None, None, 24) 0
__________________________________________________________________________________________________
-protein2 (InputLayer) (None, None, 20) 0
+protein2 (InputLayer) (None, None, 24) 0
__________________________________________________________________________________________________
-conv1d_1 (Conv1D) (None, None, 5) 2005 protein1[0][0]
+conv1d_1 (Conv1D) (None, None, 5) 2405 protein1[0][0]
protein2[0][0]
__________________________________________________________________________________________________
max_pooling1d_1 (MaxPooling1D) (None, None, 5) 0 conv1d_1[0][0]
@@ -46,7 +46,7 @@ dense_2 (Dense) (None, 1) 26 batch_normaliza
__________________________________________________________________________________________________
activation_1 (Activation) (None, 1) 0 dense_2[0][0]
==================================================================================================
-Total params: 9,690
-Trainable params: 9,610
+Total params: 10,090
+Trainable params: 10,010
Non-trainable params: 80
__________________________________________________________________________________________________
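The same encoding change explains the Conv1D line: Keras counts (input_channels × kernel_size + 1) × filters parameters per Conv1D layer, so with 5 filters the jump from 2,005 to 2,405 parameters implies a kernel size of 20. A minimal check in plain Python (the helper name is just for illustration):

```python
# Hedged check of the Conv1D parameter counts in the summary above.
def conv1d_params(in_channels, kernel_size, filters):
    # kernel weights plus one bias per filter
    return (in_channels * kernel_size + 1) * filters

assert conv1d_params(20, 20, 5) == 2005  # old encoding: 20 features per residue
assert conv1d_params(24, 20, 5) == 2405  # new encoding: 24 features per residue
```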
File fc2_20_2dense_2019-01-08_06:19_gpu-0-1_adam_0.001_2048_25_AA24_mirror-double.txt
fc2_20_2dense, epochs=25, batch=2048, optimizer=adam, learning rate=0.001, patience=25
Number of training samples: 91036
Loss
0: train_loss=0.4417371770083116, val_loss=0.32252587977360103
1: train_loss=0.26160661137854624, val_loss=0.2544424402061775
2: train_loss=0.20477362871471222, val_loss=0.22944802333931913
3: train_loss=0.16909993784075342, val_loss=0.22021506133602653
4: train_loss=0.14366864884531902, val_loss=0.21090741871872387
5: train_loss=0.12197253316361936, val_loss=0.20641996513342603
6: train_loss=0.10435193384602208, val_loss=0.20725221363102247
7: train_loss=0.09023278191626095, val_loss=0.20953509090005085
8: train_loss=0.07765869756893658, val_loss=0.21243230660362433
9: train_loss=0.06582636758308638, val_loss=0.21787047172648877
10: train_loss=0.056477498791820846, val_loss=0.21742388012217423
11: train_loss=0.04703014490899975, val_loss=0.22122446108146532
12: train_loss=0.038443962533637965, val_loss=0.23559537308799083
13: train_loss=0.03428265224078129, val_loss=0.24056694742280846
14: train_loss=0.029342984914972156, val_loss=0.24233892475645505
15: train_loss=0.025534223374911513, val_loss=0.25545422405161783
16: train_loss=0.021602180771412804, val_loss=0.25036901982425785
17: train_loss=0.01760456295042092, val_loss=0.2621175021614701
18: train_loss=0.015382590854051133, val_loss=0.27272200201218477
19: train_loss=0.014537894726928981, val_loss=0.2758074817649082
20: train_loss=0.01330122905946835, val_loss=0.27649184096322793
21: train_loss=0.012704190020439843, val_loss=0.29414154056566305
22: train_loss=0.010565862220331153, val_loss=0.30025702340305854
23: train_loss=0.010217883922246424, val_loss=0.29689574462307866
24: train_loss=0.009469830345355387, val_loss=0.29579308147184874
///////////////////////////////////////////
Accuracy
0: train_acc=0.8119425281965438, val_acc=0.8624660159594799
1: train_acc=0.9013247507057122, val_acc=0.9008475929837064
2: train_acc=0.9243815635156692, val_acc=0.9137214140359753
3: train_acc=0.9386067050711918, val_acc=0.9153206462233272
4: train_acc=0.9484160110361626, val_acc=0.925395810144645
5: train_acc=0.9572586666888714, val_acc=0.9289141212313456
6: train_acc=0.9648051319178255, val_acc=0.930833200502449
7: train_acc=0.9698470934130111, val_acc=0.9310730853577185
8: train_acc=0.9747242846101977, val_acc=0.9326723168682866
9: train_acc=0.9797882155566692, val_acc=0.9317927399235268
10: train_acc=0.9835889098110934, val_acc=0.9353910118374809
11: train_acc=0.987312711407668, val_acc=0.9357908205396553
12: train_acc=0.9902895560236932, val_acc=0.9349112427370007
13: train_acc=0.9914429456976356, val_acc=0.9363505521736468
14: train_acc=0.9928819370833373, val_acc=0.9361106673183773
15: train_acc=0.9941671427421649, val_acc=0.9341915880472741
16: train_acc=0.9953754558400438, val_acc=0.9380297457315856
17: train_acc=0.996243244310303, val_acc=0.9359507439576129
18: train_acc=0.9972318641969413, val_acc=0.9338717412685521
19: train_acc=0.9970451247881196, val_acc=0.9338717418786108
20: train_acc=0.9974405732508509, val_acc=0.9349112426130824
21: train_acc=0.9975284503803055, val_acc=0.9317127783051037
22: train_acc=0.9980666986709792, val_acc=0.9325923556788109
23: train_acc=0.9980776838444623, val_acc=0.9341116264288509
24: train_acc=0.998187530304731, val_acc=0.9340316645053983
///////////////////////////////////////////
Validation metrics
Number of 0 predicted: 5909
Number of 1 predicted: 6597
Validation precision: 0.9735805710534953
Validation recall: 0.8993481885705624
Validation F1-score: 0.9349933023402411
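As a quick consistency check, the F1-scores reported throughout these logs are the harmonic mean of the reported precision and recall; for the validation block above:

```python
# Hedged sanity check: F1 = harmonic mean of precision and recall.
precision = 0.9735805710534953
recall = 0.8993481885705624
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ~0.93499, matching the reported validation F1-score
```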
File fc2_20_2dense_2019-01-08_06:27_gpu-0-1_adam_0.001_2048_25_AA24_mirror-double_test-val.txt
fc2_20_2dense, epochs=25, batch=2048, optimizer=adam, learning rate=0.001, patience=25
Test loss: 0.20641995968102608
Test accuracy: 0.9289141212313456
Number of 0 predicted: 6283
Number of 1 predicted: 6223
Test precision: 0.9376435838529701
Test recall: 0.918206652739836
Test F1-score: 0.927823333603962
File fc2_20_2dense_2019-01-08_06:29_gpu-0-1_adam_0.001_2048_6_AA24_test-mirror-double_train-val.txt
fc2_20_2dense, epochs=6, batch=2048, optimizer=adam, learning rate=0.001, patience=6
Number of training samples: 103542
Test loss: 0.7262086047066583
Test accuracy: 0.7625
Number of 0 predicted: 587
Number of 1 predicted: 133
Test precision: 0.4269230769230769
Test recall: 0.8345864661654135
Test F1-score: 0.5648854961832062
File fc2_20_2dense_2019-01-08_06:32_gpu-0-1_adam_0.001_2048_25_AA24_mirror-medium.txt
fc2_20_2dense, epochs=25, batch=2048, optimizer=adam, learning rate=0.001, patience=25
Number of training samples: 85104
Loss
0: train_loss=0.4486376456937343, val_loss=0.357229798641727
1: train_loss=0.24951713952293833, val_loss=0.29054698276456553
2: train_loss=0.1899645471089898, val_loss=0.27555710439878367
3: train_loss=0.15455953714923318, val_loss=0.26717710217937185
4: train_loss=0.12807079513990963, val_loss=0.2670306978499419
5: train_loss=0.10791322144743284, val_loss=0.2816649661934742
6: train_loss=0.0911609009174414, val_loss=0.28466035492370734
7: train_loss=0.07748283675137622, val_loss=0.2899525196564177
8: train_loss=0.0642308882317007, val_loss=0.29467389881936273
9: train_loss=0.054052814076461835, val_loss=0.2946554654949151
10: train_loss=0.04521708057580148, val_loss=0.30657995809837096
11: train_loss=0.03638651692922188, val_loss=0.32845682857319597
12: train_loss=0.030281034642299958, val_loss=0.3380908965971481
13: train_loss=0.025921582786251132, val_loss=0.34177399636932665
14: train_loss=0.022116255542166386, val_loss=0.3566752125377905
15: train_loss=0.018704107104765447, val_loss=0.3559649671143032
16: train_loss=0.015300201659370441, val_loss=0.3647005604779064
17: train_loss=0.01256866567677283, val_loss=0.38319184330691974
18: train_loss=0.011149929333348643, val_loss=0.4044518668888475
19: train_loss=0.010006057149460103, val_loss=0.4029646525701333
20: train_loss=0.009052609110471337, val_loss=0.39312557329199577
21: train_loss=0.00832996301126757, val_loss=0.40643559706562526
22: train_loss=0.007603313191246692, val_loss=0.4081302050810823
23: train_loss=0.007597681182214451, val_loss=0.4268340890687595
24: train_loss=0.00731778229835588, val_loss=0.423528772851008
///////////////////////////////////////////
Accuracy
0: train_acc=0.7995511374318481, val_acc=0.8469037580017812
1: train_acc=0.9058563639445735, val_acc=0.8922165038897678
2: train_acc=0.9301560444588902, val_acc=0.9002495720724553
3: train_acc=0.9450202105210719, val_acc=0.9048510382279946
4: train_acc=0.9559362661482181, val_acc=0.907346748651199
5: train_acc=0.9634329762465973, val_acc=0.9086725948344451
6: train_acc=0.9710941907157797, val_acc=0.908672594946012
7: train_acc=0.9758530738300388, val_acc=0.9101544230539514
8: train_acc=0.9807764618754006, val_acc=0.9096084862710384
9: train_acc=0.9848185748192438, val_acc=0.9142099524451722
10: train_acc=0.9878971612325712, val_acc=0.9121041966942427
11: train_acc=0.9908699942365762, val_acc=0.9104663867917713
12: train_acc=0.9929968040225694, val_acc=0.9099204500646417
13: train_acc=0.9945125963975219, val_acc=0.9114802691256306
14: train_acc=0.9952176161380746, val_acc=0.9089065676749989
15: train_acc=0.9961928934234272, val_acc=0.9111683053320273
16: train_acc=0.9971329200019019, val_acc=0.9114022781911757
17: train_acc=0.9980846961361727, val_acc=0.9099204501018306
18: train_acc=0.9981199471232003, val_acc=0.9054749658337957
19: train_acc=0.9985194582871015, val_acc=0.9092185315615746
20: train_acc=0.9987427145382766, val_acc=0.9117922329750174
21: train_acc=0.9986604627730995, val_acc=0.9096864772798714
22: train_acc=0.9988014664410604, val_acc=0.9107003596509197
23: train_acc=0.9985899602835688, val_acc=0.9084386220124857
24: train_acc=0.9987897158543134, val_acc=0.9092185315243857
///////////////////////////////////////////
Validation metrics
Number of 0 predicted: 5958
Number of 1 predicted: 6864
Validation precision: 0.956145966709347
Validation recall: 0.8703379953379954
Validation F1-score: 0.9112263575350825
File fc2_20_2dense_2019-01-08_06:39_gpu-0-1_adam_0.001_2048_25_AA24_mirror-medium_test-val.txt
fc2_20_2dense, epochs=25, batch=2048, optimizer=adam, learning rate=0.001, patience=25
Test loss: 0.2670307161097332
Test accuracy: 0.9073467477958523
Number of 0 predicted: 6280
Number of 1 predicted: 6542
Test precision: 0.9284571062740077
Test recall: 0.8867318862733109
Test F1-score: 0.9071149335418295
File fc2_20_2dense_2019-01-08_06:40_gpu-0-1_adam_0.001_2048_5_AA24_test-mirror-medium_train-val.txt
fc2_20_2dense, epochs=5, batch=2048, optimizer=adam, learning rate=0.001, patience=5
Number of training samples: 97926
Test loss: 0.2751177446953062
Test accuracy: 0.8997344994533812
Number of 0 predicted: 5922
Number of 1 predicted: 6884
Test precision: 0.9492939666238768
Test recall: 0.8593840790238234
Test F1-score: 0.9021043000914913