Execution Time: 0.05s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: PR-4624-x86_64-ubuntu16-gcc54-opt (sft-ubuntu-1604-4) on 2019-11-14 18:11:36
Repository revision: 4926cfd669e03f1f1420093eabbb039a54014a54

Test Timing: Passed
Processors: 1


Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
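
The printout above describes a dense(4→2, identity) → batch-norm(2) → dense(2→1, identity) stack applied to batches of 10 rows; the loss code `R` is, as far as I can tell, TMVA's character tag for mean squared error (`ELossFunction::kMeanSquaredError`). A minimal standalone C++ sketch of that forward pass, using plain arrays instead of the TMVA classes, the weight values printed further down in the log, and an assumed batch-norm epsilon of 1e-4:

```cpp
#include <cmath>
#include <cstdio>

// Shapes from the log: batch B = 10, inputs = 4, hidden width = 2, output = 1.
constexpr int B = 10, IN = 4, H = 2;

int main() {
    // First row taken from the input matrix below; remaining rows left at zero for brevity.
    double X[B][IN] = {{0.9989, -0.4348, 0.7818, -0.03005}};
    double W0[H][IN] = {{1.13, -0.5304, -0.4349, 0.1242},
                        {1.051, 0.5953, -0.5322, 0.2989}};   // "weights for layer 0"
    double gamma[H] = {1, 1}, beta[H] = {0, 0};              // BN scale/shift as printed
    double W2[H] = {0.1676, -0.009038};                      // "weights for layer 2"

    double Z[B][H] = {};                      // layer 0: dense, identity activation
    for (int i = 0; i < B; ++i)
        for (int j = 0; j < H; ++j)
            for (int k = 0; k < IN; ++k) Z[i][j] += X[i][k] * W0[j][k];

    double Y[B][H];                           // layer 1: batch norm per feature column
    for (int j = 0; j < H; ++j) {
        double mu = 0, var = 0;
        for (int i = 0; i < B; ++i) mu += Z[i][j] / B;
        for (int i = 0; i < B; ++i) var += (Z[i][j] - mu) * (Z[i][j] - mu) / B;
        for (int i = 0; i < B; ++i)           // 1e-4 is an assumed epsilon
            Y[i][j] = gamma[j] * (Z[i][j] - mu) / std::sqrt(var + 1e-4) + beta[j];
    }

    for (int i = 0; i < B; ++i) {             // layer 2: dense to one output per row
        double out = 0;
        for (int j = 0; j < H; ++j) out += Y[i][j] * W2[j];
        std::printf("row %d: %g\n", i, out);
    }
    return 0;
}
```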
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 = 0.651581
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      1.016      0.3658 
   1 |      1.344       1.289 
   2 |    -0.5005      -1.271 
   3 |     0.9991     -0.7612 
   4 |     0.9584    -0.05455 
   5 |      2.422      0.6745 
   6 |     0.4211      0.2093 
   7 |     -1.388      0.8048 
   8 |      1.179       1.227 
   9 |    0.06463     -0.1409 

output BN 
output DL feature 0 mean 0.651581	output DL std 1.06067
output DL feature 1 mean 0.234375	output DL std 0.824512
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.3623       0.168 
   1 |     0.6885       1.349 
   2 |     -1.145      -1.924 
   3 |     0.3454      -1.273 
   4 |     0.3049     -0.3693 
   5 |      1.759      0.5626 
   6 |    -0.2291    -0.03199 
   7 |     -2.027      0.7292 
   8 |     0.5244        1.27 
   9 |    -0.5833     -0.4797 

output BN feature 0 mean 6.66134e-17	output BN std 1.05404
output BN feature 1 mean 6.10623e-17	output BN std 1.05401
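
The normalized features have means that are zero to machine precision (~1e-17) but stds of ~1.054 rather than exactly 1. That is what one expects if the normalization divides by the biased (1/N) standard deviation while the printed statistic is the unbiased (1/(N-1)) estimate: for N = 10 the ratio is sqrt(10/9) ≈ 1.05409, and the small remaining gap is consistent with an epsilon term inside the square root. A one-line check of the arithmetic:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // If x_hat = (x - mu) / sqrt(biased var), the biased std of x_hat is exactly 1,
    // so the unbiased estimate should read sqrt(N/(N-1)) = 1.05409 for N = 10,
    // close to the 1.05404 / 1.05401 printed above.
    const int N = 10;
    std::printf("sqrt(N/(N-1)) = %.5f\n", std::sqrt(N / double(N - 1)));
}
```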
Testing weight gradients for layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 | -0.0006391   -0.001727   5.275e-05   -0.001332 
   1 |   -0.03868    -0.03106   -0.001403    -0.04221 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |       1.13     -0.5304     -0.4349      0.1242 
   1 |      1.051      0.5953     -0.5322      0.2989 

 training batch 2 mu var0 = 0.651584
compute loss for weight  1.13047  1.13046 result 0.0272169
 training batch 3 mu var0 = 0.651581
compute loss for weight  1.13045  1.13046 result 0.0272169
 training batch 4 mu var0 = 0.651581
compute loss for weight  1.13047  1.13046 result 0.0272169
 training batch 5 mu var0 = 0.651581
compute loss for weight  1.13046  1.13046 result 0.0272169
   --dy = -0.00063909 dy_ref = -0.00063909
 training batch 6 mu var0 = 0.65158
compute loss for weight  -0.530431  -0.530441 result 0.0272169
 training batch 7 mu var0 = 0.651581
compute loss for weight  -0.530451  -0.530441 result 0.0272169
 training batch 8 mu var0 = 0.651581
compute loss for weight  -0.530436  -0.530441 result 0.0272169
 training batch 9 mu var0 = 0.651581
compute loss for weight  -0.530446  -0.530441 result 0.0272169
   --dy = -0.00172734 dy_ref = -0.00172734
 training batch 10 mu var0 = 0.651581
compute loss for weight  -0.434876  -0.434886 result 0.0272169
 training batch 11 mu var0 = 0.651581
compute loss for weight  -0.434896  -0.434886 result 0.0272169
 training batch 12 mu var0 = 0.651581
compute loss for weight  -0.434881  -0.434886 result 0.0272169
 training batch 13 mu var0 = 0.651581
compute loss for weight  -0.434891  -0.434886 result 0.0272169
   --dy = 5.27538e-05 dy_ref = 5.27538e-05
 training batch 14 mu var0 = 0.651581
compute loss for weight  0.124214  0.124204 result 0.0272169
 training batch 15 mu var0 = 0.651581
compute loss for weight  0.124194  0.124204 result 0.0272169
 training batch 16 mu var0 = 0.651581
compute loss for weight  0.124209  0.124204 result 0.0272169
 training batch 17 mu var0 = 0.651581
compute loss for weight  0.124199  0.124204 result 0.0272169
   --dy = -0.00133159 dy_ref = -0.00133159
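
Each "--dy = ... dy_ref = ..." line compares a finite-difference estimate of the loss gradient (dy) with the backpropagated value (dy_ref). The four loss evaluations per weight, at w ± h and w ± h/2 (visible as the ±1e-05/±5e-06 perturbations in the layer-1 shift block further down), are consistent with a fourth-order central difference. A sketch of that scheme, inferred from the log rather than taken from the test source:

```cpp
#include <cstdio>
#include <functional>

// Fourth-order (five-point) central difference: four loss evaluations per
// weight, at w +/- h and w +/- h/2, matching the pattern seen in the log.
double numericalDerivative(const std::function<double(double)> &loss,
                           double w, double h = 1e-5) {
    const double d = h / 2;  // matches the +/-1e-05 and +/-5e-06 perturbations
    return (loss(w - 2 * d) - 8 * loss(w - d) + 8 * loss(w + d) - loss(w + 2 * d))
           / (12 * d);
}

int main() {
    auto f = [](double w) { return w * w * w; };  // toy loss f(w) = w^3
    std::printf("dy = %.9f (exact 12)\n", numericalDerivative(f, 2.0));
}
```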
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    0.05521  -0.0007772 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 = 0.651581
compute loss for weight  1.00001  1 result 0.0272175
 training batch 19 mu var0 = 0.651581
compute loss for weight  0.99999  1 result 0.0272163
 training batch 20 mu var0 = 0.651581
compute loss for weight  1.00001  1 result 0.0272172
 training batch 21 mu var0 = 0.651581
compute loss for weight  0.999995  1 result 0.0272166
   --dy = 0.055211 dy_ref = 0.055211
 training batch 22 mu var0 = 0.651581
compute loss for weight  1.00001  1 result 0.0272169
 training batch 23 mu var0 = 0.651581
compute loss for weight  0.99999  1 result 0.0272169
 training batch 24 mu var0 = 0.651581
compute loss for weight  1.00001  1 result 0.0272169
 training batch 25 mu var0 = 0.651581
compute loss for weight  0.999995  1 result 0.0272169
   --dy = -0.000777174 dy_ref = -0.000777174
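
This layer-1 block appears to exercise the batch-norm scale parameter gamma (its "weights" print as 1), while the next block, whose weights print as 0, presumably covers the shift beta. For reference, the standard batch-norm parameter gradients are dL/dgamma_j = Σ_i dY[i][j]·xhat[i][j] and dL/dbeta_j = Σ_i dY[i][j]; a sketch with small hypothetical arrays standing in for the 10x2 tensors above:

```cpp
#include <cstdio>

// Standard batch-norm parameter gradients:
//   dL/dgamma_j = sum_i dY[i][j] * xhat[i][j]   (scale)
//   dL/dbeta_j  = sum_i dY[i][j]                (shift)
// Tiny hypothetical arrays stand in for the 10x2 tensors in the log.
int main() {
    const int B = 3, H = 2;
    double xhat[B][H] = {{0.36, 0.17}, {0.69, 1.35}, {-1.15, -1.92}};  // normalized activations
    double dY[B][H]   = {{0.10, -0.20}, {0.05, 0.00}, {-0.30, 0.15}};  // upstream gradients
    for (int j = 0; j < H; ++j) {
        double dgamma = 0, dbeta = 0;
        for (int i = 0; i < B; ++i) {
            dgamma += dY[i][j] * xhat[i][j];
            dbeta  += dY[i][j];
        }
        std::printf("feature %d: dgamma = %g, dbeta = %g\n", j, dgamma, dbeta);
    }
}
```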
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |  5.204e-18  -2.982e-19 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 = 0.651581
compute loss for weight  1e-05  0 result 0.0272169
 training batch 27 mu var0 = 0.651581
compute loss for weight  -1e-05  0 result 0.0272169
 training batch 28 mu var0 = 0.651581
compute loss for weight  5e-06  0 result 0.0272169
 training batch 29 mu var0 = 0.651581
compute loss for weight  -5e-06  0 result 0.0272169
   --dy = 0 dy_ref = 5.20417e-18
 training batch 30 mu var0 = 0.651581
compute loss for weight  1e-05  0 result 0.0272169
 training batch 31 mu var0 = 0.651581
compute loss for weight  -1e-05  0 result 0.0272169
 training batch 32 mu var0 = 0.651581
compute loss for weight  5e-06  0 result 0.0272169
 training batch 33 mu var0 = 0.651581
compute loss for weight  -5e-06  0 result 0.0272169
   --dy = 9.25186e-13 dy_ref = -2.98156e-19
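
For the shift parameter both sides are numerically zero: the finite difference yields 0 or ~1e-12 (the loss is flat to print precision), while the analytic gradient is ~1e-18. A pure relative-error comparison would blow up on such pairs, so a robust check needs an absolute floor in the denominator, for example (the 1e-8 floor is illustrative, not the tolerance the test actually uses):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Relative error with an absolute floor, so near-zero pairs such as
// dy = 9.25186e-13 vs dy_ref = -2.98156e-19 compare as equal instead of
// producing a huge relative error. The 1e-8 floor is a hypothetical choice.
double relativeError(double dy, double dyRef, double floor = 1e-8) {
    double denom = std::max({std::fabs(dy), std::fabs(dyRef), floor});
    return std::fabs(dy - dyRef) / denom;
}

int main() {
    std::printf("%g\n", relativeError(9.25186e-13, -2.98156e-19));  // ~9.25e-05
}
```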
Testing weight gradients for layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.3295     0.08598 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.1676   -0.009038 

 training batch 34 mu var0 = 0.651581
compute loss for weight  0.167576  0.167566 result 0.0272202
 training batch 35 mu var0 = 0.651581
compute loss for weight  0.167556  0.167566 result 0.0272136
 training batch 36 mu var0 = 0.651581
compute loss for weight  0.167571  0.167566 result 0.0272185
 training batch 37 mu var0 = 0.651581
compute loss for weight  0.167561  0.167566 result 0.0272153
   --dy = 0.329487 dy_ref = 0.329487
 training batch 38 mu var0 = 0.651581
compute loss for weight  -0.00902849  -0.00903849 result 0.0272178
 training batch 39 mu var0 = 0.651581
compute loss for weight  -0.00904849  -0.00903849 result 0.027216
 training batch 40 mu var0 = 0.651581
compute loss for weight  -0.00903349  -0.00903849 result 0.0272173
 training batch 41 mu var0 = 0.651581
compute loss for weight  -0.00904349  -0.00903849 result 0.0272165
   --dy = 0.0859849 dy_ref = 0.0859849
Testing weight gradients: maximum relative error: 4.95356e-09