Execution Time: 1.29s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-ubuntu18-clang91-dbg (sft-ubuntu-1804-3) on 2019-11-15 20:01:52
Repository revision: 998ea1938e6504f209c357da54f2e162dced0d00

Test Timing: Passed
Processors: 1


Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
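The architecture under test is Dense(4→2, identity) → BatchNorm(2) → Dense(2→1, identity), applied to batches of 10 rows. A minimal forward-pass sketch in pure Python (the weights and input here are stand-ins chosen only to exercise the shapes, and the `eps` value is a placeholder assumption, not necessarily TMVA's default):

```python
import math

def dense(X, W):
    # Identity activation: plain matrix product X @ W^T.
    return [[sum(x * w for x, w in zip(row, wrow)) for wrow in W] for row in X]

def batchnorm(X, eps=1e-5):
    # Normalize each feature (column) over the batch dimension.
    n, d = len(X), len(X[0])
    cols = []
    for j in range(d):
        col = [row[j] for row in X]
        mu = sum(col) / n
        var = sum((v - mu) ** 2 for v in col) / n
        cols.append([(v - mu) / math.sqrt(var + eps) for v in col])
    return [list(r) for r in zip(*cols)]

X  = [[0.1 * i + 0.01 * j for j in range(4)] for i in range(10)]  # 10x4 stand-in input
W0 = [[0.1, -0.2, 0.3, -0.4], [0.5, 0.6, -0.7, 0.8]]              # 2x4, layer 0
W2 = [[0.25, -0.75]]                                              # 1x2, layer 2
Y  = dense(batchnorm(dense(X, W0)), W2)                           # 10x1 output
```

The shapes match the layer listing above: a 10x4 batch becomes 10x2 after layer 0, keeps 10x2 through the batch-norm layer, and ends 10x1 after layer 2.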
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 0.16466
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.5572      0.6412 
   1 |    -0.7369     -0.4214 
   2 |     0.3251      0.5348 
   3 |     -1.786     0.07422 
   4 |     0.1558      0.6475 
   5 |     0.5331       1.222 
   6 |      1.223       1.081 
   7 |     -0.333      -1.723 
   8 |       2.01       1.061 
   9 |    -0.3015    -0.02067 

output BN 
output DL feature 0 mean 0.16466	output DL std 1.05009
output DL feature 1 mean 0.309599	output DL std 0.888045
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      0.394      0.3935 
   1 |    -0.9049     -0.8677 
   2 |      0.161      0.2672 
   3 |     -1.958     -0.2794 
   4 |  -0.008875      0.4011 
   5 |     0.3698       1.083 
   6 |      1.062      0.9153 
   7 |    -0.4995      -2.413 
   8 |      1.853      0.8913 
   9 |    -0.4679      -0.392 

output BN feature 0 mean 2.22045e-17	output BN std 1.05404
output BN feature 1 mean 3.88578e-17	output BN std 1.05402
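The statistics printed above can be reproduced from the transcribed matrix. A sketch of the batch-norm normalization for feature 0 (pure Python; epsilon is omitted to match the printed numbers). It also explains why the post-BN std reads ~1.054 rather than 1: the layer normalizes by the population (ddof=0) standard deviation, while the printed std is evidently the sample (ddof=1) estimate, which therefore lands at sqrt(n/(n-1)) = sqrt(10/9) ≈ 1.0541 — consistent with the 1.05404 above given the 4-significant-digit inputs.

```python
import math

# Feature 0 of the 10x2 dense-layer output, transcribed from the log.
x = [0.5572, -0.7369, 0.3251, -1.786, 0.1558,
     0.5331, 1.223, -0.333, 2.01, -0.3015]

n = len(x)
mu = sum(x) / n                               # batch mean, ~0.16466
var = sum((v - mu) ** 2 for v in x) / n       # population variance (ddof=0)
xhat = [(v - mu) / math.sqrt(var) for v in x] # normalized activations

mean_hat = sum(xhat) / n                      # ~0 up to round-off
std_hat = math.sqrt(sum((v - mean_hat) ** 2 for v in xhat) / (n - 1))
# Sample std of a population-normalized batch is exactly sqrt(n/(n-1)).
assert abs(std_hat - math.sqrt(n / (n - 1))) < 1e-9
```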
Testing weight gradients   for    layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.3114      -1.053     0.09178     -0.1098 
   1 |     -3.639        6.91      -3.512      -2.248 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |   -0.01182    -0.01528      0.7474      0.7318 
   1 |   -0.03663     -0.7805      0.4489      0.4163 

 training batch 2 mu var0 0.164663
compute loss for weight  -0.0118052  -0.0118152 result 3.91414
 training batch 3 mu var0 0.16466
compute loss for weight  -0.0118252  -0.0118152 result 3.91413
 training batch 4 mu var0 0.164661
compute loss for weight  -0.0118102  -0.0118152 result 3.91413
 training batch 5 mu var0 0.16466
compute loss for weight  -0.0118202  -0.0118152 result 3.91413
   --dy = 0.31138 dy_ref = 0.31138
 training batch 6 mu var0 0.164659
compute loss for weight  -0.0152743  -0.0152843 result 3.91412
 training batch 7 mu var0 0.16466
compute loss for weight  -0.0152943  -0.0152843 result 3.91414
 training batch 8 mu var0 0.16466
compute loss for weight  -0.0152793  -0.0152843 result 3.91413
 training batch 9 mu var0 0.16466
compute loss for weight  -0.0152893  -0.0152843 result 3.91414
   --dy = -1.0529 dy_ref = -1.0529
 training batch 10 mu var0 0.16466
compute loss for weight  0.747407  0.747397 result 3.91413
 training batch 11 mu var0 0.16466
compute loss for weight  0.747387  0.747397 result 3.91413
 training batch 12 mu var0 0.16466
compute loss for weight  0.747402  0.747397 result 3.91413
 training batch 13 mu var0 0.16466
compute loss for weight  0.747392  0.747397 result 3.91413
   --dy = 0.0917844 dy_ref = 0.0917844
 training batch 14 mu var0 0.16466
compute loss for weight  0.731811  0.731801 result 3.91413
 training batch 15 mu var0 0.16466
compute loss for weight  0.731791  0.731801 result 3.91413
 training batch 16 mu var0 0.16466
compute loss for weight  0.731806  0.731801 result 3.91413
 training batch 17 mu var0 0.16466
compute loss for weight  0.731796  0.731801 result 3.91413
   --dy = -0.109799 dy_ref = -0.109799
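For each weight, the analytic gradient `dy` is checked against a numerical `dy_ref` built from the four shifted loss evaluations above (w ± 5e-6 and w ± 1e-5). Those four points are consistent with a five-point central-difference stencil; a sketch of that check on a stand-in scalar loss (the step size and stencil are inferred from the printed offsets, not lifted from the TMVA source):

```python
def numeric_grad(f, w, s=5e-6):
    """Five-point central difference, O(s^4) truncation error.

    Uses f(w ± s) and f(w ± 2s), matching the four shifted
    'compute loss for weight' evaluations in the log.
    """
    return (-f(w + 2 * s) + 8 * f(w + s) - 8 * f(w - s) + f(w - 2 * s)) / (12 * s)

# Stand-in loss: any smooth scalar function works for the check.
f = lambda w: w ** 3 - 2.0 * w
w0 = -0.0118152                  # the first weight probed above
dy_ref = numeric_grad(f, w0)     # numerical estimate
dy = 3 * w0 ** 2 - 2.0           # analytic derivative
assert abs(dy - dy_ref) < 1e-8   # stencil is exact for cubics, up to round-off
```

With this stencil the truncation error vanishes for polynomials up to degree four, so the residual disagreement is dominated by floating-point round-off — which is why the matching `dy`/`dy_ref` pairs in the log agree to every printed digit.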
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      6.574       1.254 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 0.16466
compute loss for weight  1.00001  1 result 3.9142
 training batch 19 mu var0 0.16466
compute loss for weight  0.99999  1 result 3.91407
 training batch 20 mu var0 0.16466
compute loss for weight  1.00001  1 result 3.91417
 training batch 21 mu var0 0.16466
compute loss for weight  0.999995  1 result 3.9141
   --dy = 6.574 dy_ref = 6.574
 training batch 22 mu var0 0.16466
compute loss for weight  1.00001  1 result 3.91414
 training batch 23 mu var0 0.16466
compute loss for weight  0.99999  1 result 3.91412
 training batch 24 mu var0 0.16466
compute loss for weight  1.00001  1 result 3.91414
 training batch 25 mu var0 0.16466
compute loss for weight  0.999995  1 result 3.91413
   --dy = 1.25427 dy_ref = 1.25427
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 | -2.776e-16  -4.163e-17 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 0.16466
compute loss for weight  1e-05  0 result 3.91413
 training batch 27 mu var0 0.16466
compute loss for weight  -1e-05  0 result 3.91413
 training batch 28 mu var0 0.16466
compute loss for weight  5e-06  0 result 3.91413
 training batch 29 mu var0 0.16466
compute loss for weight  -5e-06  0 result 3.91413
   --dy = -4.44089e-11 dy_ref = -2.77556e-16
 training batch 30 mu var0 0.16466
compute loss for weight  1e-05  0 result 3.91413
 training batch 31 mu var0 0.16466
compute loss for weight  -1e-05  0 result 3.91413
 training batch 32 mu var0 0.16466
compute loss for weight  5e-06  0 result 3.91413
 training batch 33 mu var0 0.16466
compute loss for weight  -5e-06  0 result 3.91413
   --dy = 1.18424e-10 dy_ref = -4.16334e-17
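For these batch-norm shift parameters both sides are essentially zero: the analytic gradients sit at machine epsilon (~1e-16) and the finite-difference estimates (~1e-10) are pure round-off from subtracting nearly equal losses. A naive relative error would explode on such pairs, so the comparison needs an absolute floor in the denominator. One common formulation (a sketch — the tolerance logic actually used by the test is an assumption):

```python
def rel_error(dy, dy_ref, floor=1e-8):
    # Guard the denominator so near-zero gradient pairs (round-off
    # noise on both sides) do not register as huge relative errors.
    denom = max(abs(dy), abs(dy_ref), floor)
    return abs(dy - dy_ref) / denom

# The near-zero pair from the log: disagreement is noise-level only.
err = rel_error(-4.44089e-11, -2.77556e-16)
assert err < 1e-2
# A genuinely matching pair from layer 0 stays at exactly zero.
assert rel_error(0.31138, 0.31138) == 0.0
```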
Testing weight gradients   for    layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     -3.894      -2.877 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     -1.688     -0.4359 

 training batch 34 mu var0 0.16466
compute loss for weight  -1.68824  -1.68825 result 3.91409
 training batch 35 mu var0 0.16466
compute loss for weight  -1.68826  -1.68825 result 3.91417
 training batch 36 mu var0 0.16466
compute loss for weight  -1.68824  -1.68825 result 3.91411
 training batch 37 mu var0 0.16466
compute loss for weight  -1.68825  -1.68825 result 3.91415
   --dy = -3.89398 dy_ref = -3.89398
 training batch 38 mu var0 0.16466
compute loss for weight  -0.435935  -0.435945 result 3.9141
 training batch 39 mu var0 0.16466
compute loss for weight  -0.435955  -0.435945 result 3.91416
 training batch 40 mu var0 0.16466
compute loss for weight  -0.43594  -0.435945 result 3.91412
 training batch 41 mu var0 0.16466
compute loss for weight  -0.43595  -0.435945 result 3.91415
   --dy = -2.87712 dy_ref = -2.87712
Testing weight gradients:  maximum relative error: 1.62213e-09