Commit 536784b2 authored by Florian RICHOUX

Last results

parent b8a85cb2
File fc2_20_2dense_2019-01-07_06:19_gpu-0-1_adam_0.001_2048_25_mirror-double.txt
fc2_20_2dense, epochs=25, batch=2048, optimizer=adam, learning rate=0.001, patience=25
Number of training samples: 91036
Loss
0: train_loss=0.4686645127983807, val_loss=0.33107895182358366
1: train_loss=0.2672874126593977, val_loss=0.25368393600873523
2: train_loss=0.2100410969128555, val_loss=0.2368007622898702
3: train_loss=0.17697199346994683, val_loss=0.22613283993720093
4: train_loss=0.15144595505640332, val_loss=0.216331849370063
5: train_loss=0.13041471672163388, val_loss=0.20878649979958472
6: train_loss=0.11397858204982095, val_loss=0.21106952807721416
7: train_loss=0.09730530277420324, val_loss=0.20544756788569185
8: train_loss=0.08452578322008711, val_loss=0.20459613890795064
9: train_loss=0.07227277302495562, val_loss=0.2072023311872664
10: train_loss=0.06253207858983009, val_loss=0.21559281314771003
11: train_loss=0.05383973563898321, val_loss=0.2277511427464646
12: train_loss=0.04594664296466849, val_loss=0.22306958556785292
13: train_loss=0.039196613588081027, val_loss=0.2318347294616943
14: train_loss=0.03193121823911518, val_loss=0.240138019740286
15: train_loss=0.025865791085358636, val_loss=0.23835565642358855
16: train_loss=0.02159772701425593, val_loss=0.2465614808162422
17: train_loss=0.018414280917118196, val_loss=0.25648539712690993
18: train_loss=0.016349993451606114, val_loss=0.25713226435071235
19: train_loss=0.014058293686252658, val_loss=0.2659560546014436
20: train_loss=0.011772868759226946, val_loss=0.27385362082913306
21: train_loss=0.009778327461312604, val_loss=0.2900715769910057
22: train_loss=0.009266051621124292, val_loss=0.2907758083283263
23: train_loss=0.00791932636847545, val_loss=0.29220194283645856
24: train_loss=0.007051128253471341, val_loss=0.2945209332281125
///////////////////////////////////////////
Accuracy
0: train_acc=0.7790873941786862, val_acc=0.8557492405648005
1: train_acc=0.8994463730180006, val_acc=0.8979689745393616
2: train_acc=0.9236016520282035, val_acc=0.9080441383367612
3: train_acc=0.9352014585042902, val_acc=0.9131616826498201
4: train_acc=0.9462520320242961, val_acc=0.9201183433287167
5: train_acc=0.9541060680810699, val_acc=0.9233168077606135
6: train_acc=0.9610044377367318, val_acc=0.9240364626314513
7: train_acc=0.9674194821171006, val_acc=0.9284343517686431
8: train_acc=0.9724724284924472, val_acc=0.9297137378445248
9: train_acc=0.9772068192829036, val_acc=0.930993123434266
10: train_acc=0.9811832681367003, val_acc=0.9299536225186831
11: train_acc=0.9841930666756161, val_acc=0.930273469173487
12: train_acc=0.9879168680783886, val_acc=0.9331520871316912
13: train_acc=0.9894547213558531, val_acc=0.9324324323847716
14: train_acc=0.9922008878280976, val_acc=0.9317927396184975
15: train_acc=0.9944637287081564, val_acc=0.933871742002529
16: train_acc=0.9956390878831446, val_acc=0.9326723175450704
17: train_acc=0.9962981677578101, val_acc=0.9317127778189632
18: train_acc=0.9966386921680572, val_acc=0.9317927390084388
19: train_acc=0.9973966344687509, val_acc=0.9330721256943793
20: train_acc=0.9980557143198459, val_acc=0.9322725087189777
21: train_acc=0.9985829779451124, val_acc=0.9291540056004746
22: train_acc=0.9986269168241135, val_acc=0.9307532383978853
23: train_acc=0.9989454721209192, val_acc=0.9306732767794621
24: train_acc=0.9990992574366184, val_acc=0.9321125862161084
///////////////////////////////////////////
Validation metrics
Number of 0 predicted: 5831
Number of 1 predicted: 6675
Validation precision: 0.9780111585165737
Validation recall: 0.8928838951310861
Validation F1-score: 0.9335108465815647
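The precision, recall, and F1-score triples reported in these files are mutually consistent: F1 is the harmonic mean of precision and recall. A minimal sanity check in Python, using the validation numbers from the block above (the helper name `f1_from_pr` is illustrative, not from the repo):

```python
def f1_from_pr(precision, recall):
    """F1-score as the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Validation precision/recall reported for fc2_20_2dense (mirror-double)
precision = 0.9780111585165737
recall = 0.8928838951310861
print(f1_from_pr(precision, recall))  # ~0.9335, matching the reported F1-score
```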
File fc2_20_2dense_2019-01-07_06:24_gpu-0-1_adam_0.001_2048_25_mirror-double_test-val.txt
fc2_20_2dense, epochs=25, batch=2048, optimizer=adam, learning rate=0.001, patience=25
Test loss: 0.20459612915654307
Test accuracy: 0.9297137373583843
Number of 0 predicted: 6123
Number of 1 predicted: 6383
Test precision: 0.9515917295700689
Test recall: 0.9085069716434279
Test F1-score: 0.9295503726857417
File fc2_20_2dense_2019-01-07_06:37_gpu-0-1_adam_0.001_2048_8_test-mirror-double_train-val.txt
fc2_20_2dense, epochs=8, batch=2048, optimizer=adam, learning rate=0.001, patience=8
Number of training samples: 103542
Test loss: 0.7310285535123613
Test accuracy: 0.7791666666666667
Number of 0 predicted: 559
Number of 1 predicted: 161
Test precision: 0.5038461538461538
Test recall: 0.8136645962732919
Test F1-score: 0.6223277909738718
File fc2_20_2dense_2019-01-07_06:41_gpu-0-1_adam_0.001_2048_25_mirror-medium.txt
fc2_20_2dense, epochs=25, batch=2048, optimizer=adam, learning rate=0.001, patience=25
Number of training samples: 85104
Loss
0: train_loss=0.4721732157853161, val_loss=0.4003029189779576
1: train_loss=0.257617010279557, val_loss=0.29266388302677937
2: train_loss=0.1917861581554671, val_loss=0.2726654611694108
3: train_loss=0.15724602413930533, val_loss=0.2631619219623031
4: train_loss=0.13422470448059254, val_loss=0.26552764965635883
5: train_loss=0.11394071878739644, val_loss=0.2648573012912428
6: train_loss=0.09773493696002052, val_loss=0.27011166670786907
7: train_loss=0.08381891765805603, val_loss=0.27916787823450356
8: train_loss=0.07057671783578465, val_loss=0.2829560277112801
9: train_loss=0.058179511119770724, val_loss=0.28585776092109505
10: train_loss=0.047920697691729026, val_loss=0.3027151906672552
11: train_loss=0.04110528665683529, val_loss=0.3126125807063737
12: train_loss=0.035954157522599214, val_loss=0.3188181616571611
13: train_loss=0.030697614460564418, val_loss=0.33734474757535465
14: train_loss=0.025602017607225424, val_loss=0.333568493404323
15: train_loss=0.021672668059807026, val_loss=0.36784740373657265
16: train_loss=0.018539806566663222, val_loss=0.3628304196889753
17: train_loss=0.01605393113766281, val_loss=0.3765625893056272
18: train_loss=0.015171173715189695, val_loss=0.38211667196015414
19: train_loss=0.013864371253933578, val_loss=0.40238162122563403
20: train_loss=0.012238399993574334, val_loss=0.41286472649177103
21: train_loss=0.011237801364821741, val_loss=0.41340827864262525
22: train_loss=0.01023761840972473, val_loss=0.4160058144389923
23: train_loss=0.009817478678767552, val_loss=0.4326348903630643
24: train_loss=0.0094897869132525, val_loss=0.417005492221768
///////////////////////////////////////////
Accuracy
0: train_acc=0.7820783982399734, val_acc=0.8160973319005453
1: train_acc=0.9050103406257077, val_acc=0.8785680872734651
2: train_acc=0.9320478472164596, val_acc=0.8948681962748545
3: train_acc=0.9449262081691479, val_acc=0.9035251920447486
4: train_acc=0.9535274487909775, val_acc=0.9050850110313595
5: train_acc=0.9615764242855538, val_acc=0.9096864772612768
6: train_acc=0.968556119627378, val_acc=0.9119482149183052
7: train_acc=0.9736205116432621, val_acc=0.9124941515524624
8: train_acc=0.9792841701808785, val_acc=0.9142099525381447
9: train_acc=0.9837257942437991, val_acc=0.9145219163875316
10: train_acc=0.9879794137933734, val_acc=0.9120262058527603
11: train_acc=0.989918217609242, val_acc=0.9114022782283646
12: train_acc=0.9913635080549631, val_acc=0.9120262058527603
13: train_acc=0.9933258134477411, val_acc=0.9075027305944868
14: train_acc=0.9947476030225301, val_acc=0.9115582600600856
15: train_acc=0.9958051328686849, val_acc=0.9032912192599782
16: train_acc=0.996463151013051, val_acc=0.9060209024865473
17: train_acc=0.9971329194864265, val_acc=0.9051630020401924
18: train_acc=0.9971564204021826, val_acc=0.9051630020401924
19: train_acc=0.9973444259464797, val_acc=0.9021213548898586
20: train_acc=0.9978731907743063, val_acc=0.9024333186834618
21: train_acc=0.9979906940419864, val_acc=0.9025113096551057
22: train_acc=0.9980494451491451, val_acc=0.9036811739694419
23: train_acc=0.9983079526450854, val_acc=0.9022773368331464
24: train_acc=0.9981786992837218, val_acc=0.905864920599043
///////////////////////////////////////////
Validation metrics
Number of 0 predicted: 5859
Number of 1 predicted: 6963
Validation precision: 0.9606274007682458
Validation recall: 0.8619847766767198
Validation F1-score: 0.9086367421088486
File fc2_20_2dense_2019-01-07_06:47_gpu-0-1_adam_0.001_2048_25_mirror-medium_test-val.txt
fc2_20_2dense, epochs=25, batch=2048, optimizer=adam, learning rate=0.001, patience=25
Test loss: 0.26316193254256914
Test accuracy: 0.9035251910964295
Number of 0 predicted: 6417
Number of 1 predicted: 6405
Test precision: 0.913572343149808
Test recall: 0.8911787665886026
Test F1-score: 0.9022366237255987
File fc2_20_2dense_2019-01-07_06:49_gpu-0-1_adam_0.001_2048_3_test-mirror-medium_train-val.txt
fc2_20_2dense, epochs=3, batch=2048, optimizer=adam, learning rate=0.001, patience=3
Number of training samples: 97926
Test loss: 0.28117575176326665
Test accuracy: 0.8906762455099172
Number of 0 predicted: 6148
Number of 1 predicted: 6658
Test precision: 0.9218549422336328
Test recall: 0.8628717332532292
Test F1-score: 0.891388673390225
File lstm32_3conv3_2dense_shared_2019-01-02_10:58_gpu-0-1_nadam_0.002_1024_200_mirror_double_train-val.txt
lstm32_3conv3_2dense_shared, epochs=200, batch=1024, optimizer=nadam, learning rate=0.002, patience=20
Number of training samples: 103542
Test loss: 1.0490216652552287
Test accuracy: 0.7777777777777778
Number of 0 predicted: 538
Number of 1 predicted: 182
Test precision: 0.5423076923076923
Test recall: 0.7747252747252747
Test F1-score: 0.6380090497737556
File lstm32_3conv3_2dense_shared_2019-01-04_05:58_gpu-2-1_adam_0.0012_1024_300_test-mirror-double_train-val.txt
lstm32_3conv3_2dense_shared, epochs=300, batch=1024, optimizer=adam, learning rate=0.0012, patience=300
Number of training samples: 103542
Test loss: 1.2903765810860528
Test accuracy: 0.7333333333333333
Number of 0 predicted: 572
Number of 1 predicted: 148
Test precision: 0.4153846153846154
Test recall: 0.7297297297297297
Test F1-score: 0.5294117647058824
File lstm32_3conv3_2dense_shared_2019-01-04_05:59_gpu-1-1_adam_0.0012_512_300_test-mirror-double_train-val.txt
lstm32_3conv3_2dense_shared, epochs=300, batch=512, optimizer=adam, learning rate=0.0012, patience=300
Number of training samples: 103542
Test loss: 1.2712968932257758
Test accuracy: 0.7361111111111112
Number of 0 predicted: 526
Number of 1 predicted: 194
Test precision: 0.5076923076923077
Test recall: 0.6804123711340206
Test F1-score: 0.5814977973568282
File lstm32_3conv3_2dense_shared_2019-01-06_03:45_gpu-3-1_adam_0.001_2048_55_mirror-medium.txt
lstm32_3conv3_2dense_shared, epochs=55, batch=2048, optimizer=adam, learning rate=0.001, patience=55
Number of training samples: 85104
Loss
0: train_loss=0.6799900982697715, val_loss=0.6997605065737141
1: train_loss=0.6280333764415461, val_loss=0.6289812705108109
2: train_loss=0.4837541959859034, val_loss=1.834091910592964
3: train_loss=0.41386795336981874, val_loss=2.3090519681584536
4: train_loss=0.37924608289579137, val_loss=1.4066053382358543
5: train_loss=0.35979775642627027, val_loss=2.0909988249432296
6: train_loss=0.3480355662720435, val_loss=1.9914535727335052
7: train_loss=0.335607010695378, val_loss=1.4803673835203106
8: train_loss=0.32806927303574007, val_loss=0.45117945285524746
9: train_loss=0.3220277748950152, val_loss=0.5947588221124248
10: train_loss=0.31877007339766744, val_loss=2.1863369878118264
11: train_loss=0.3125224515535169, val_loss=0.9199483262694881
12: train_loss=0.31047472207589716, val_loss=1.305245375116313
13: train_loss=0.3069303399258302, val_loss=0.6270787233198848
14: train_loss=0.3014528531904036, val_loss=0.6049941075961985
15: train_loss=0.29758701638902885, val_loss=0.4425861762217437
16: train_loss=0.29387483910121154, val_loss=0.7240578894439641
17: train_loss=0.2928977593328523, val_loss=1.0219202274336046
18: train_loss=0.2905614507626402, val_loss=1.3090468079265427
19: train_loss=0.28768124118584576, val_loss=0.7512391149821064
20: train_loss=0.2869332683536876, val_loss=0.3947440334497683
21: train_loss=0.28439446455062073, val_loss=0.6892616856936786
22: train_loss=0.28543079556054385, val_loss=0.5889539578053569
23: train_loss=0.28126003662543453, val_loss=0.5191930367057436
24: train_loss=0.2783720697489733, val_loss=0.38270354658165356
25: train_loss=0.2784205080927662, val_loss=0.4396661711248625
26: train_loss=0.27664039945844465, val_loss=0.38816280928217406
27: train_loss=0.27638360475279106, val_loss=0.6077685693351252
28: train_loss=0.27516298582957727, val_loss=0.4000612534704641
29: train_loss=0.2777540695624684, val_loss=0.8647484898548605
30: train_loss=0.2709167645342436, val_loss=0.5361901867912184
31: train_loss=0.26895173705034314, val_loss=0.5117508500158982
32: train_loss=0.26853321996661234, val_loss=0.38446994621153835
33: train_loss=0.26788876109451343, val_loss=0.6343548430739727
34: train_loss=0.26751430926297837, val_loss=0.3910971844722544
35: train_loss=0.26628884894997135, val_loss=0.5104015339482614
36: train_loss=0.2654115781566213, val_loss=0.37558072473028115
37: train_loss=0.26501847214998214, val_loss=0.3841704403651224
38: train_loss=0.2625839278749067, val_loss=0.3869497032856127
39: train_loss=0.26252552468562895, val_loss=0.3848755787322979
40: train_loss=0.2625487379829201, val_loss=0.44221002853636326
41: train_loss=0.25997104140954214, val_loss=0.529710524368725
42: train_loss=0.2594636706766676, val_loss=0.9966231871554937
43: train_loss=0.25630343598538624, val_loss=0.5845257073775634
44: train_loss=0.25726745291230463, val_loss=0.39202547393189524
45: train_loss=0.2559448567340418, val_loss=0.4027062015356056
46: train_loss=0.25465056742239456, val_loss=0.4341520394527341
47: train_loss=0.2582665998744301, val_loss=0.3742032842282069
48: train_loss=0.2569219624890849, val_loss=0.4901083228768843
49: train_loss=0.2529401668291563, val_loss=0.4931294202265325
50: train_loss=0.25420817284571495, val_loss=0.3672841096971247
51: train_loss=0.2541189195462514, val_loss=0.37641034593375383
52: train_loss=0.25212358359354603, val_loss=0.370302761494509
53: train_loss=0.2520684733227649, val_loss=0.364379785893358
54: train_loss=0.2513853672819717, val_loss=0.38268557368472483
///////////////////////////////////////////
Accuracy
0: train_acc=0.5763888887207991, val_acc=0.5238652315401581
1: train_acc=0.6529305318419838, val_acc=0.6987989397042256
2: train_acc=0.7754277118862875, val_acc=0.5329121820195158
3: train_acc=0.820760481461566, val_acc=0.5289346434325887
4: train_acc=0.8416290658603689, val_acc=0.5710497576888053
5: train_acc=0.8540256625280543, val_acc=0.5290906253758765
6: train_acc=0.8595013159584072, val_acc=0.5237092496154647
7: train_acc=0.8667277682094248, val_acc=0.5425830600488913
8: train_acc=0.8698416053296301, val_acc=0.8019029786128965
9: train_acc=0.8735782100778254, val_acc=0.7423958821799487
10: train_acc=0.8739777217011722, val_acc=0.5158321634504431
11: train_acc=0.877855329568235, val_acc=0.6262673541492093
12: train_acc=0.8793241208624764, val_acc=0.5492122908907437
13: train_acc=0.8800643920158843, val_acc=0.7234440805517443
14: train_acc=0.883307482631925, val_acc=0.7348307597713414
15: train_acc=0.8845882687404609, val_acc=0.8135236305056971
16: train_acc=0.8872673437881793, val_acc=0.6973171115962862
17: train_acc=0.8869148332455437, val_acc=0.597254717448557
18: train_acc=0.8877373565227202, val_acc=0.5527998747868018
19: train_acc=0.8893706526360043, val_acc=0.6781313373506618
20: train_acc=0.889582158591788, val_acc=0.8433161741615067
21: train_acc=0.8908276928056114, val_acc=0.7019185777146365
22: train_acc=0.8897349126111701, val_acc=0.7429418185165939
23: train_acc=0.8913094568328764, val_acc=0.7735922628047035
24: train_acc=0.8941295356830274, val_acc=0.8469817489548307
25: train_acc=0.893424515751973, val_acc=0.8179691146063816
26: train_acc=0.8945760479276399, val_acc=0.8493214774905306
27: train_acc=0.8944585451754351, val_acc=0.7276555921093868
28: train_acc=0.8936595226123069, val_acc=0.8422243008560034
29: train_acc=0.8933892651011249, val_acc=0.6293090013181377
30: train_acc=0.8966558563751842, val_acc=0.768990796760731
31: train_acc=0.897983643710109, val_acc=0.7700826699732618
32: train_acc=0.8976781350438096, val_acc=0.8564186542366247
33: train_acc=0.8979131419041435, val_acc=0.7261737637969079
34: train_acc=0.8973726264895701, val_acc=0.8447980023438242
35: train_acc=0.899299680413463, val_acc=0.7725783802477102
36: train_acc=0.8996169394872134, val_acc=0.8569645909451598
37: train_acc=0.8990529237732131, val_acc=0.8540009347292808
38: train_acc=0.9002749580349951, val_acc=0.850101387021025
39: train_acc=0.9001222031975761, val_acc=0.8555607537530807
40: train_acc=0.8994994362979752, val_acc=0.8127437209566081
41: train_acc=0.9008272232182785, val_acc=0.7724223984531783
42: train_acc=0.9012267342813258, val_acc=0.6077834959614762
43: train_acc=0.9029657828402174, val_acc=0.733348931477457
44: train_acc=0.9023430152682573, val_acc=0.8412884094380045
45: train_acc=0.9029540325112082, val_acc=0.8380127893727388
46: train_acc=0.9029070317106468, val_acc=0.8444860384386539
47: train_acc=0.9008272234311923, val_acc=0.8611761047248442
48: train_acc=0.9012854860496378, val_acc=0.7858368424148721
49: train_acc=0.9044463247660296, val_acc=0.7958196841820673
50: train_acc=0.9038000560093697, val_acc=0.8600842314193409
51: train_acc=0.9033770446356895, val_acc=0.8568086113540752
52: train_acc=0.9037295547637035, val_acc=0.860630168127876
53: train_acc=0.9047753339782876, val_acc=0.8604741861473991
54: train_acc=0.9041525660253239, val_acc=0.8537669619073215
///////////////////////////////////////////
Validation metrics
Number of 0 predicted: 6875
Number of 1 predicted: 5947
Validation precision: 0.8258642765685019
Validation recall: 0.8676643685892046
Validation F1-score: 0.8462484624846249
File lstm32_3conv3_2dense_shared_2019-01-07_06:03_gpu-2-1_adam_0.001_1024_300_test-val.txt
lstm32_3conv3_2dense_shared, epochs=300, batch=1024, optimizer=adam, learning rate=0.001, patience=300
Test loss: 0.27187935314273026
Test accuracy: 0.9101231408447108
Number of 0 predicted: 6054
Number of 1 predicted: 6452
Test precision: 0.9371512963570725
Test recall: 0.8851518908865468
Test F1-score: 0.9104096923322175
File lstm32_3conv3_2dense_shared_2019-01-07_06:08_gpu-2-1_adam_0.001_2048_300_mirror-double_train-val.txt
lstm32_3conv3_2dense_shared, epochs=300, batch=2048, optimizer=adam, learning rate=0.001, patience=300
Number of training samples: 103542
Test loss: 1.1449660314453973
Test accuracy: 0.7472222222222222
Number of 0 predicted: 524
Number of 1 predicted: 196
Test precision: 0.5269230769230769
Test recall: 0.6989795918367347
Test F1-score: 0.6008771929824562
File lstm32_3conv3_2dense_shared_2019-01-07_07:51_gpu-0-1_adam_0.001_2048_300.txt
lstm32_3conv3_2dense_shared, epochs=300, batch=2048, optimizer=adam, learning rate=0.001, patience=300
Test loss: 0.3714198509303554
Test accuracy: 0.8615660583279356
Number of 0 predicted: 6697
Number of 1 predicted: 6125
Test precision: 0.8481113956466069
Test recall: 0.8651428571428571
Test F1-score: 0.8565424715105471
File lstm32_3conv3_2dense_shared_2019-01-07_08:28_gpu-2-1_adam_0.001_2048_265_test-mirror-medium_train-val.txt
lstm32_3conv3_2dense_shared, epochs=265, batch=2048, optimizer=adam, learning rate=0.001, patience=265
Number of training samples: 97926
Test loss: 0.37797366749881006
Test accuracy: 0.8677963454537553
Number of 0 predicted: 6531
Number of 1 predicted: 6275
Test precision: 0.8676187419768935
Test recall: 0.8616733067729083
Test F1-score: 0.8646358039497881
@@ -17,7 +17,7 @@ def extract_min_loss(data):
         min_loss = loss
         epoch = line.split(":")[0]
         continue
-    if epoch in line and "val_acc" in line:
+    elif line.startswith(epoch+":") and "val_acc" in line:
         val_acc = line.split(", val_acc=")[1]
     lines.close()
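The change in `extract_min_loss` fixes a substring-matching bug: with `epoch = "1"`, the old test `epoch in line` also matches the log lines for epochs 10-19 and 21, whereas `line.startswith(epoch+":")` matches only the intended epoch. A small sketch of the difference (the sample lines below are illustrative, in the format of the logs above):

```python
epoch = "1"
lines = [
    "1: train_acc=0.90, val_acc=0.89",
    "11: train_acc=0.98, val_acc=0.93",
]

# Old test: substring match, wrongly picks up epoch 11 as well.
print([l for l in lines if epoch in l])
# New test: anchored match, picks up only epoch 1.
print([l for l in lines if l.startswith(epoch + ":")])
```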