Execution Time: 0.05s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: PR-4624-x86_64-ubuntu16-gcc54-opt (sft-ubuntu-1604-4) on 2019-11-14 19:02:07
Repository revision: ee743e1638624a8a6fed6adde874e58bb5acc139

Test Timing: Passed
Processors: 1


Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 0.126836
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.4033      0.3611 
   1 |     -1.124      0.5956 
   2 |      1.791     -0.2489 
   3 |    -0.2528     -0.4433 
   4 |      0.481      0.1315 
   5 |     0.6111      0.6744 
   6 |    -0.9379     -0.4145 
   7 |     0.1626      0.7407 
   8 |     0.2934      0.7634 
   9 |    -0.1589     -0.1326 

output BN 
output DL feature 0 mean 0.126836	output DL std 0.827335
output DL feature 1 mean 0.202749	output DL std 0.485971
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.3522      0.3435 
   1 |     -1.594      0.8519 
   2 |       2.12     -0.9793 
   3 |    -0.4837      -1.401 
   4 |     0.4512     -0.1545 
   5 |      0.617       1.023 
   6 |     -1.356      -1.339 
   7 |     0.0456       1.167 
   8 |     0.2122       1.216 
   9 |    -0.3641     -0.7271 

output BN feature 0 mean -5.55112e-18	output BN std 1.05401
output BN feature 1 mean 4.44089e-17	output BN std 1.05384
Testing weight gradients for layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     -2.082      -1.839      -1.194     -0.4912 
   1 |     -14.35      -11.42      -13.17     0.05991 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |    -0.3775      -0.064       0.945       -0.46 
   1 |     0.6303      0.4326     -0.1057    -0.07194 

 training batch 2 mu var0 0.126839
compute loss for weight  -0.377508  -0.377518 result 4.38274
 training batch 3 mu var0 0.126836
compute loss for weight  -0.377528  -0.377518 result 4.38278
 training batch 4 mu var0 0.126837
compute loss for weight  -0.377513  -0.377518 result 4.38275
 training batch 5 mu var0 0.126836
compute loss for weight  -0.377523  -0.377518 result 4.38277
   --dy = -2.08158 dy_ref = -2.08158
 training batch 6 mu var0 0.126836
compute loss for weight  -0.0639893  -0.0639993 result 4.38275
 training batch 7 mu var0 0.126836
compute loss for weight  -0.0640093  -0.0639993 result 4.38278
 training batch 8 mu var0 0.126836
compute loss for weight  -0.0639943  -0.0639993 result 4.38275
 training batch 9 mu var0 0.126836
compute loss for weight  -0.0640043  -0.0639993 result 4.38277
   --dy = -1.83907 dy_ref = -1.83907
 training batch 10 mu var0 0.126836
compute loss for weight  0.944967  0.944957 result 4.38275
 training batch 11 mu var0 0.126836
compute loss for weight  0.944947  0.944957 result 4.38278
 training batch 12 mu var0 0.126836
compute loss for weight  0.944962  0.944957 result 4.38276
 training batch 13 mu var0 0.126836
compute loss for weight  0.944952  0.944957 result 4.38277
   --dy = -1.19403 dy_ref = -1.19403
 training batch 14 mu var0 0.126836
compute loss for weight  -0.459993  -0.460003 result 4.38276
 training batch 15 mu var0 0.126836
compute loss for weight  -0.460013  -0.460003 result 4.38277
 training batch 16 mu var0 0.126836
compute loss for weight  -0.459998  -0.460003 result 4.38276
 training batch 17 mu var0 0.126836
compute loss for weight  -0.460008  -0.460003 result 4.38277
   --dy = -0.49124 dy_ref = -0.49124
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      7.406       1.359 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 0.126836
compute loss for weight  1.00001  1 result 4.38284
 training batch 19 mu var0 0.126836
compute loss for weight  0.99999  1 result 4.38269
 training batch 20 mu var0 0.126836
compute loss for weight  1.00001  1 result 4.3828
 training batch 21 mu var0 0.126836
compute loss for weight  0.999995  1 result 4.38273
   --dy = 7.4062 dy_ref = 7.4062
 training batch 22 mu var0 0.126836
compute loss for weight  1.00001  1 result 4.38278
 training batch 23 mu var0 0.126836
compute loss for weight  0.99999  1 result 4.38275
 training batch 24 mu var0 0.126836
compute loss for weight  1.00001  1 result 4.38277
 training batch 25 mu var0 0.126836
compute loss for weight  0.999995  1 result 4.38276
   --dy = 1.35932 dy_ref = 1.35932
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 | -5.621e-16   2.151e-16 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 0.126836
compute loss for weight  1e-05  0 result 4.38276
 training batch 27 mu var0 0.126836
compute loss for weight  -1e-05  0 result 4.38276
 training batch 28 mu var0 0.126836
compute loss for weight  5e-06  0 result 4.38276
 training batch 29 mu var0 0.126836
compute loss for weight  -5e-06  0 result 4.38276
   --dy = -1.33227e-10 dy_ref = -5.6205e-16
 training batch 30 mu var0 0.126836
compute loss for weight  1e-05  0 result 4.38276
 training batch 31 mu var0 0.126836
compute loss for weight  -1e-05  0 result 4.38276
 training batch 32 mu var0 0.126836
compute loss for weight  5e-06  0 result 4.38276
 training batch 33 mu var0 0.126836
compute loss for weight  -5e-06  0 result 4.38276
   --dy = 1.18424e-10 dy_ref = 2.15106e-16
Testing weight gradients for layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      3.821      -1.588 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      1.938     -0.8559 

 training batch 34 mu var0 0.126836
compute loss for weight  1.93813  1.93812 result 4.3828
 training batch 35 mu var0 0.126836
compute loss for weight  1.93811  1.93812 result 4.38273
 training batch 36 mu var0 0.126836
compute loss for weight  1.93813  1.93812 result 4.38278
 training batch 37 mu var0 0.126836
compute loss for weight  1.93812  1.93812 result 4.38274
   --dy = 3.82132 dy_ref = 3.82132
 training batch 38 mu var0 0.126836
compute loss for weight  -0.855924  -0.855934 result 4.38275
 training batch 39 mu var0 0.126836
compute loss for weight  -0.855944  -0.855934 result 4.38278
 training batch 40 mu var0 0.126836
compute loss for weight  -0.855929  -0.855934 result 4.38276
 training batch 41 mu var0 0.126836
compute loss for weight  -0.855939  -0.855934 result 4.38277
   --dy = -1.58812 dy_ref = -1.58812
Testing weight gradients: maximum relative error: 6.97574e-10