Execution Time: 0.17s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-centos7-gcc62-opt-master (olsnba08.cern.ch) on 2019-11-13 23:14:33
Repository revision: 32b17abcda23e44b64218a42d0ca69cb30cda7e0

Test Timing: Passed
Processors: 1

Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
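
For orientation, the printouts below walk a 10x4 input batch through these three layers. A shape sketch of the forward pass (my notation, not TMVA code; in TMVA's loss-function enum the character 'R' denotes the regression / mean-squared-error loss):

    X : 10x4 input batch,  W0 : 2x4 dense weights,  W2 : 1x2 dense weights

    Y0 = X * W0^T                         Layer 0, dense, identity activation -> 10x2
    Y1 = gamma * (Y0 - mu) / sd + beta    Layer 1, batch norm per feature     -> 10x2
    Y2 = Y1 * W2^T                        Layer 2, dense, identity activation -> 10x1

where mu and sd are the per-feature batch mean and standard deviation, and gamma, beta are the batch-norm scale and shift parameters.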
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 1.3787
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      2.582      0.3561 
   1 |     0.3575     -0.2254 
   2 |      1.735      0.2621 
   3 |     0.5395        -1.5 
   4 |      2.408    -0.01466 
   5 |      5.283      0.2896 
   6 |     0.2602     -0.1836 
   7 |     -2.591       1.053 
   8 |       3.45       1.293 
   9 |    -0.2375     -0.2982 

output BN 
output DL feature 0 mean 1.3787	output DL std 2.19941
output DL feature 1 mean 0.103293	output DL std 0.772974
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.5768      0.3447 
   1 |    -0.4894     -0.4482 
   2 |     0.1707      0.2166 
   3 |    -0.4022      -2.186 
   4 |     0.4932     -0.1608 
   5 |      1.871       0.254 
   6 |    -0.5361     -0.3912 
   7 |     -1.902       1.295 
   8 |     0.9927       1.623 
   9 |    -0.7746     -0.5474 

output BN feature 0 mean -1.55431e-16	output BN std 1.05408
output BN feature 1 mean 0	output BN std 1.05399
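
These two statistics are the point of the layer: after normalization each feature has mean 0 (up to ~1e-16 rounding noise) and sample std 1.05408 ~ sqrt(10/9). The factor arises because batch norm divides by the biased (1/N) batch std, while the printed statistic is the unbiased (1/(N-1)) sample std, so a perfectly normalized feature reports sqrt(N/(N-1)) = sqrt(10/9) ~ 1.05409 for N = 10. A minimal, self-contained sketch of the computation (not the TMVA implementation; the epsilon value is an assumption, since the log does not show it):

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
   // Feature 0 of the 10x2 dense-layer output printed above.
   std::vector<double> x = {2.582, 0.3575, 1.735, 0.5395, 2.408,
                            5.283, 0.2602, -2.591, 3.45, -0.2375};
   const double n = x.size();

   double mu = 0;
   for (double v : x) mu += v;
   mu /= n;                         // ~1.3787, matches "output DL feature 0 mean"

   double var = 0;                  // biased variance (divide by N), as batch norm uses
   for (double v : x) var += (v - mu) * (v - mu);
   var /= n;

   const double eps = 1e-4;         // assumed value; not shown in the log
   std::vector<double> y(x.size());
   for (std::size_t i = 0; i < x.size(); ++i)
      y[i] = (x[i] - mu) / std::sqrt(var + eps);   // gamma = 1, beta = 0 at init

   double ym = 0, ys = 0;           // unbiased sample std of the output
   for (double v : y) ym += v;
   ym /= n;
   for (double v : y) ys += (v - ym) * (v - ym);
   ys = std::sqrt(ys / (n - 1));
   std::printf("mean %.3g  std %.5f\n", ym, ys);   // ~0 and ~1.054
}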
Testing weight gradients for layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |   -0.01697     -0.0627    -0.05022     -0.0329 
   1 |    -0.1574      0.1929     -0.1134     0.01203 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |      1.371      -1.127      0.9259     0.02748 
   1 |     0.2654      0.5845      0.4461      0.1223 

 training batch 2 mu var0 1.37871
compute loss for weight  1.37068  1.37067 result 0.854992
 training batch 3 mu var0 1.3787
compute loss for weight  1.37066  1.37067 result 0.854992
 training batch 4 mu var0 1.3787
compute loss for weight  1.37068  1.37067 result 0.854992
 training batch 5 mu var0 1.3787
compute loss for weight  1.37067  1.37067 result 0.854992
   --dy = -0.0169745 dy_ref = -0.0169745
 training batch 6 mu var0 1.3787
compute loss for weight  -1.12714  -1.12715 result 0.854991
 training batch 7 mu var0 1.3787
compute loss for weight  -1.12716  -1.12715 result 0.854993
 training batch 8 mu var0 1.3787
compute loss for weight  -1.12715  -1.12715 result 0.854992
 training batch 9 mu var0 1.3787
compute loss for weight  -1.12716  -1.12715 result 0.854992
   --dy = -0.0626963 dy_ref = -0.0626963
 training batch 10 mu var0 1.3787
compute loss for weight  0.925931  0.925921 result 0.854992
 training batch 11 mu var0 1.3787
compute loss for weight  0.925911  0.925921 result 0.854993
 training batch 12 mu var0 1.3787
compute loss for weight  0.925926  0.925921 result 0.854992
 training batch 13 mu var0 1.3787
compute loss for weight  0.925916  0.925921 result 0.854992
   --dy = -0.0502181 dy_ref = -0.0502181
 training batch 14 mu var0 1.3787
compute loss for weight  0.0274875  0.0274775 result 0.854992
 training batch 15 mu var0 1.3787
compute loss for weight  0.0274675  0.0274775 result 0.854992
 training batch 16 mu var0 1.3787
compute loss for weight  0.0274825  0.0274775 result 0.854992
 training batch 17 mu var0 1.3787
compute loss for weight  0.0274725  0.0274775 result 0.854992
   --dy = -0.0328992 dy_ref = -0.0328992
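
Each "--dy" line compares the analytic back-propagation gradient (dy_ref) against a numerical estimate (dy) built from the four loss evaluations printed just before it: the weight is displaced by +-eps and +-eps/2 with eps = 1e-5 (e.g. 1.37068 / 1.37066 around the stored value 1.37067). The log does not show the exact stencil, but those four evaluations are what a Richardson-extrapolated central difference needs; a sketch under that assumption:

#include <functional>

// Fourth-order estimate of dL/dw from four loss evaluations at w +- eps
// and w +- eps/2 (one standard choice, assumed here, not confirmed by the log).
double NumericalDerivative(const std::function<double(double)> &loss,
                           double w, double eps = 1e-5) {
   double d1 = (loss(w + eps) - loss(w - eps)) / (2 * eps);    // error O(eps^2)
   double d2 = (loss(w + eps / 2) - loss(w - eps / 2)) / eps;  // error O(eps^2 / 4)
   return (4 * d2 - d1) / 3;  // Richardson extrapolation cancels the O(eps^2) term
}

With the loss printed to only six digits (0.854992) the individual differences sit below the display resolution, which is why the paired "result" values look identical; the check itself runs on the full double-precision values.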
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |  -0.007565       1.718 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 1.3787
compute loss for weight  1.00001  1 result 0.854992
 training batch 19 mu var0 1.3787
compute loss for weight  0.99999  1 result 0.854992
 training batch 20 mu var0 1.3787
compute loss for weight  1.00001  1 result 0.854992
 training batch 21 mu var0 1.3787
compute loss for weight  0.999995  1 result 0.854992
   --dy = -0.00756478 dy_ref = -0.00756478
 training batch 22 mu var0 1.3787
compute loss for weight  1.00001  1 result 0.855009
 training batch 23 mu var0 1.3787
compute loss for weight  0.99999  1 result 0.854975
 training batch 24 mu var0 1.3787
compute loss for weight  1.00001  1 result 0.855001
 training batch 25 mu var0 1.3787
compute loss for weight  0.999995  1 result 0.854983
   --dy = 1.71755 dy_ref = 1.71755
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 | -6.939e-18           0 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 1.3787
compute loss for weight  1e-05  0 result 0.854992
 training batch 27 mu var0 1.3787
compute loss for weight  -1e-05  0 result 0.854992
 training batch 28 mu var0 1.3787
compute loss for weight  5e-06  0 result 0.854992
 training batch 29 mu var0 1.3787
compute loss for weight  -5e-06  0 result 0.854992
   --dy = 0 dy_ref = -6.93889e-18
 training batch 30 mu var0 1.3787
compute loss for weight  1e-05  0 result 0.854992
 training batch 31 mu var0 1.3787
compute loss for weight  -1e-05  0 result 0.854992
 training batch 32 mu var0 1.3787
compute loss for weight  5e-06  0 result 0.854992
 training batch 33 mu var0 1.3787
compute loss for weight  -5e-06  0 result 0.854992
   --dy = -3.70074e-12 dy_ref = 0
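
Layer 1 appears twice because batch norm carries two trainable 1x2 tensors: presumably the scale gamma (initialized to 1, batches 18-25) and the shift beta (initialized to 0, batches 26-33). For beta both gradients are floating-point zero (-6.9e-18 analytic, -3.7e-12 numeric) and the +-1e-5 displacement leaves the printed loss at 0.854992, so a pure relative comparison would divide by ~0. A robust check mixes absolute and relative tolerance; a sketch (the thresholds are illustrative assumptions, not TMVA's):

#include <algorithm>
#include <cmath>

// Accept a gradient pair if it agrees relatively, or if both values are
// absolutely tiny (like the beta row above).
bool GradientsAgree(double dy, double dyRef,
                    double relTol = 1e-5, double absTol = 1e-10) {
   double scale = std::max(std::abs(dy), std::abs(dyRef));
   return std::abs(dy - dyRef) <= std::max(absTol, relTol * scale);
}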
Testing weight gradients for layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |   -0.07186      -1.837 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.1053     -0.9348 

 training batch 34 mu var0 1.3787
compute loss for weight  0.105279  0.105269 result 0.854991
 training batch 35 mu var0 1.3787
compute loss for weight  0.105259  0.105269 result 0.854993
 training batch 36 mu var0 1.3787
compute loss for weight  0.105274  0.105269 result 0.854992
 training batch 37 mu var0 1.3787
compute loss for weight  0.105264  0.105269 result 0.854992
   --dy = -0.0718613 dy_ref = -0.0718613
 training batch 38 mu var0 1.3787
compute loss for weight  -0.934763  -0.934773 result 0.854974
 training batch 39 mu var0 1.3787
compute loss for weight  -0.934783  -0.934773 result 0.85501
 training batch 40 mu var0 1.3787
compute loss for weight  -0.934768  -0.934773 result 0.854983
 training batch 41 mu var0 1.3787
compute loss for weight  -0.934778  -0.934773 result 0.855001
   --dy = -1.8374 dy_ref = -1.8374
Testing weight gradients: maximum relative error: 2.03698e-09
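
The summary is the worst mismatch between dy and dy_ref over every weight checked above; with eps = 1e-5, agreement at the 1e-9 level is about what double precision allows, hence the pass. One way to aggregate such a figure, skipping the near-zero beta gradients that would otherwise dominate a relative measure (the exact definition TMVA uses is not shown in the log):

#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// grads holds (dy, dy_ref) pairs collected from the per-weight checks.
double MaxRelativeError(const std::vector<std::pair<double, double>> &grads) {
   double maxErr = 0;
   for (const auto &g : grads) {
      double scale = std::max(std::abs(g.first), std::abs(g.second));
      if (scale < 1e-8) continue;  // judge near-zero gradients absolutely instead
      maxErr = std::max(maxErr, std::abs(g.first - g.second) / scale);
   }
   return maxErr;
}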