Execution Time: 0.05s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-centos7-gcc48 (lcgapp-centos7-x86-64-25.cern.ch) on 2019-11-15 01:35:29

Test Timing: Passed
Processors: 1

Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
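The architecture above (dense 4→2 with identity activation, batch normalization, dense 2→1) can be sketched as a NumPy forward pass. This is a hedged illustration with random weights and biases omitted, not TMVA's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))        # batch of 10 samples, 4 input features

# Layer 0: dense 4 -> 2, identity activation (hypothetical weights, no bias).
W0 = rng.standard_normal((4, 2))
h = X @ W0                              # shape (10, 2)

# Layer 1: batch normalization per feature over the batch dimension.
gamma, beta, eps = 1.0, 0.0, 1e-8
h = gamma * (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps) + beta

# Layer 2: dense 2 -> 1, identity activation.
W2 = rng.standard_normal((2, 1))
y = h @ W2                              # shape (10, 1), one output per sample
print(y.shape)
```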
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 = 0.536869
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      1.077      0.3366 
   1 |    -0.4042      0.2918 
   2 |      1.795      0.4243 
   3 |     0.1661      0.1991 
   4 |      1.088      0.3216 
   5 |       2.14      0.7654 
   6 |     -1.199      -1.023 
   7 |    -0.1618      0.5898 
   8 |      1.053      0.1509 
   9 |     -0.186    -0.06406 

output BN 
output DL feature 0 mean 0.536869	output DL std 1.05739
output DL feature 1 mean 0.199209	output DL std 0.486772
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.5383      0.2975 
   1 |    -0.9381      0.2004 
   2 |      1.254      0.4872 
   3 |    -0.3696   -0.000161 
   4 |     0.5493      0.2649 
   5 |      1.598       1.226 
   6 |      -1.73      -2.647 
   7 |    -0.6965      0.8457 
   8 |      0.515     -0.1046 
   9 |    -0.7206       -0.57 

output BN feature 0 mean -1.11022e-16	output BN std 1.05404
output BN feature 1 mean 0	output BN std 1.05385
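The statistics above follow from how batch normalization standardizes each feature: it subtracts the batch mean and divides by the biased (population) standard deviation, so the unbiased standard deviation reported for the normalized output comes out near sqrt(N/(N-1)) ≈ 1.054 for N = 10. A minimal check using the feature-0 column of the dense-layer output printed above (values rounded as in the log):

```python
import numpy as np

# Feature 0 of the 10x2 dense-layer output printed above.
x = np.array([1.077, -0.4042, 1.795, 0.1661, 1.088,
              2.14, -1.199, -0.1618, 1.053, -0.186])

mu = x.mean()                                  # ~0.5368, the "mu var0" value
x_hat = (x - mu) / np.sqrt(x.var() + 1e-12)    # biased variance, small epsilon

print(x_hat.mean())       # ~0, matching "output BN feature 0 mean"
print(x_hat.std(ddof=1))  # ~1.0541 = sqrt(10/9), matching the BN std (~1.054)
```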
Testing weight gradients   for    layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |    0.02382      -0.641       0.424      0.7874 
   1 |   -0.06923        1.98      -1.305      -2.439 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.3443     -0.1656      0.8229     -0.5884 
   1 |     0.3885      0.2464     0.05376     -0.4544 

 training batch 2 mu var0 = 0.536872
compute loss for weight  0.344311  0.344301 result 1.71741
 training batch 3 mu var0 = 0.536869
compute loss for weight  0.344291  0.344301 result 1.71741
 training batch 4 mu var0 = 0.53687
compute loss for weight  0.344306  0.344301 result 1.71741
 training batch 5 mu var0 = 0.536869
compute loss for weight  0.344296  0.344301 result 1.71741
   --dy = 0.0238229 dy_ref = 0.0238229
 training batch 6 mu var0 = 0.536869
compute loss for weight  -0.165541  -0.165551 result 1.71741
 training batch 7 mu var0 = 0.536869
compute loss for weight  -0.165561  -0.165551 result 1.71742
 training batch 8 mu var0 = 0.536869
compute loss for weight  -0.165546  -0.165551 result 1.71741
 training batch 9 mu var0 = 0.536869
compute loss for weight  -0.165556  -0.165551 result 1.71742
   --dy = -0.640967 dy_ref = -0.640967
 training batch 10 mu var0 = 0.536869
compute loss for weight  0.82289  0.82288 result 1.71742
 training batch 11 mu var0 = 0.536869
compute loss for weight  0.82287  0.82288 result 1.71741
 training batch 12 mu var0 = 0.536869
compute loss for weight  0.822885  0.82288 result 1.71742
 training batch 13 mu var0 = 0.536869
compute loss for weight  0.822875  0.82288 result 1.71741
   --dy = 0.424012 dy_ref = 0.424012
 training batch 14 mu var0 = 0.536869
compute loss for weight  -0.588353  -0.588363 result 1.71742
 training batch 15 mu var0 = 0.536869
compute loss for weight  -0.588373  -0.588363 result 1.71741
 training batch 16 mu var0 = 0.536869
compute loss for weight  -0.588358  -0.588363 result 1.71742
 training batch 17 mu var0 = 0.536869
compute loss for weight  -0.588368  -0.588363 result 1.71741
   --dy = 0.78743 dy_ref = 0.78743
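Each `--dy = … dy_ref = …` pair above is a numerical-vs-analytic gradient comparison: the loss is re-evaluated with the weight nudged up and down (the paired values in the `compute loss for weight` lines, at two step sizes), and the central difference is compared against the backpropagated gradient. A small sketch of the idea with a made-up scalar loss, not TMVA's API:

```python
def loss(w):
    # Hypothetical scalar loss of a single weight; the test uses the
    # network's full regression loss instead.
    return 0.5 * (w * 3.0 - 1.0) ** 2

def numerical_grad(f, w, eps=1e-5):
    # Central difference: (f(w+eps) - f(w-eps)) / (2*eps), O(eps^2) accurate.
    return (f(w + eps) - f(w - eps)) / (2.0 * eps)

w = 0.344301                    # weight value taken from the log above
dy = numerical_grad(loss, w)
dy_ref = (w * 3.0 - 1.0) * 3.0  # analytic gradient of the sketch loss
print(abs(dy - dy_ref))         # tiny, like the --dy vs dy_ref agreement above
```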
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    -0.6897       4.125 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 = 0.536869
compute loss for weight  1.00001  1 result 1.71741
 training batch 19 mu var0 = 0.536869
compute loss for weight  0.99999  1 result 1.71742
 training batch 20 mu var0 = 0.536869
compute loss for weight  1.00001  1 result 1.71741
 training batch 21 mu var0 = 0.536869
compute loss for weight  0.999995  1 result 1.71742
   --dy = -0.689729 dy_ref = -0.689729
 training batch 22 mu var0 = 0.536869
compute loss for weight  1.00001  1 result 1.71746
 training batch 23 mu var0 = 0.536869
compute loss for weight  0.99999  1 result 1.71737
 training batch 24 mu var0 = 0.536869
compute loss for weight  1.00001  1 result 1.71743
 training batch 25 mu var0 = 0.536869
compute loss for weight  0.999995  1 result 1.71739
   --dy = 4.12456 dy_ref = 4.12456
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 | -2.776e-17    2.22e-16 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 = 0.536869
compute loss for weight  1e-05  0 result 1.71741
 training batch 27 mu var0 = 0.536869
compute loss for weight  -1e-05  0 result 1.71741
 training batch 28 mu var0 = 0.536869
compute loss for weight  5e-06  0 result 1.71741
 training batch 29 mu var0 = 0.536869
compute loss for weight  -5e-06  0 result 1.71741
   --dy = -6.66134e-11 dy_ref = -2.77556e-17
 training batch 30 mu var0 = 0.536869
compute loss for weight  1e-05  0 result 1.71741
 training batch 31 mu var0 = 0.536869
compute loss for weight  -1e-05  0 result 1.71741
 training batch 32 mu var0 = 0.536869
compute loss for weight  5e-06  0 result 1.71741
 training batch 33 mu var0 = 0.536869
compute loss for weight  -5e-06  0 result 1.71741
   --dy = 8.14164e-11 dy_ref = 2.22045e-16
Testing weight gradients   for    layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      1.108       2.467 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    -0.6226       1.672 

 training batch 34 mu var0 = 0.536869
compute loss for weight  -0.62263  -0.62264 result 1.71743
 training batch 35 mu var0 = 0.536869
compute loss for weight  -0.62265  -0.62264 result 1.7174
 training batch 36 mu var0 = 0.536869
compute loss for weight  -0.622635  -0.62264 result 1.71742
 training batch 37 mu var0 = 0.536869
compute loss for weight  -0.622645  -0.62264 result 1.71741
   --dy = 1.10775 dy_ref = 1.10775
 training batch 38 mu var0 = 0.536869
compute loss for weight  1.67216  1.67215 result 1.71744
 training batch 39 mu var0 = 0.536869
compute loss for weight  1.67214  1.67215 result 1.71739
 training batch 40 mu var0 = 0.536869
compute loss for weight  1.67216  1.67215 result 1.71743
 training batch 41 mu var0 = 0.536869
compute loss for weight  1.67215  1.67215 result 1.7174
   --dy = 2.46661 dy_ref = 2.46661
Testing weight gradients:      maximum relative error: 9.08327e-10
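The summary line reports the maximum relative error over all checked gradient entries. For near-zero gradients like the beta entries above (≈1e-11 numerically vs ≈1e-17 analytically), a naive ratio would explode, so a robust check treats both values as zero below an absolute threshold. A hedged sketch of such a comparison; the threshold and fallback are assumptions, not necessarily TMVA's exact rule:

```python
def relative_error(dy, dy_ref, atol=1e-8):
    # For near-zero gradients, dividing by dy_ref would blow up the ratio,
    # so fall back to "both effectively zero" below an absolute threshold.
    denom = max(abs(dy), abs(dy_ref))
    if denom < atol:
        return 0.0
    return abs(dy - dy_ref) / denom

print(relative_error(0.0238229, 0.0238229))        # 0.0: exact agreement
print(relative_error(-6.66134e-11, -2.77556e-17))  # 0.0: both treated as zero
```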