Execution Time: 0.31s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-ubuntu18-gcc7 (sft-ubuntu-1804-3) on 2019-11-16 02:54:36
Repository revision: cdf8874e8c83beaf9bc39a224629667fbb7903bc

Test Timing: Passed
Processors: 1

Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
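The architecture printed above (Dense 4→2 with identity activation, a batch-norm layer over 2 features, then Dense 2→1) can be sketched in plain NumPy. This is an illustrative stand-in, not the TMVA implementation; all names here are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the tested network:
# Dense(4 -> 2, identity) -> BatchNorm(2) -> Dense(2 -> 1, identity)
W0 = rng.standard_normal((4, 2))   # layer 0 weights
gamma = np.ones(2)                 # BN scale, initialised to 1 as in the log
beta = np.zeros(2)                 # BN shift, initialised to 0 as in the log
W2 = rng.standard_normal((2, 1))   # layer 2 weights

def forward(x, eps=1e-5):
    h = x @ W0                            # dense layer 0, identity activation
    mu, var = h.mean(0), h.var(0)         # per-feature batch statistics
    hn = (h - mu) / np.sqrt(var + eps)    # normalise each feature
    return (gamma * hn + beta) @ W2       # scale/shift, then dense layer 2

x = rng.standard_normal((10, 4))          # batch size 10, 4 input features
print(forward(x).shape)                   # (10, 1)
```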
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 = 0.0348102
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.1759     -0.2838 
   1 |    -0.6267     -0.2418 
   2 |     0.9403      0.5039 
   3 |    -0.3788      0.9685 
   4 |     0.1659     0.02857 
   5 |     0.1832     -0.3282 
   6 |    -0.6075     -0.6895 
   7 |     0.4081     -0.1226 
   8 |     0.2122      -1.178 
   9 |    -0.1244      0.1511 

output BN 
output DL feature 0 mean 0.0348102	output DL std 0.482625
output DL feature 1 mean -0.119192	output DL std 0.596158
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      0.308      -0.291 
   1 |     -1.444     -0.2168 
   2 |      1.977       1.102 
   3 |    -0.9032       1.923 
   4 |     0.2863      0.2612 
   5 |      0.324     -0.3696 
   6 |     -1.402      -1.008 
   7 |     0.8151     -0.0061 
   8 |     0.3873      -1.872 
   9 |    -0.3477      0.4778 

output BN feature 0 mean -1.11022e-17	output BN std 1.05384
output BN feature 1 mean -1.11022e-17	output BN std 1.05393
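The post-BN means of ~1e-17 are floating-point zero, as expected. The printed std of ~1.054 rather than 1 is consistent with BN dividing by the biased (1/N) std while the check prints the unbiased (1/(N-1)) sample std: sqrt(10/9) ≈ 1.0541 for batch size 10. A minimal sketch of that relationship (an assumption about the printout's convention, not confirmed by the log):

```python
import numpy as np

# Normalise with the biased (ddof=0) std, the usual BN convention, then
# measure the unbiased (ddof=1) sample std of the result: it comes out
# as sqrt(n / (n - 1)), which for n = 10 is ~1.0541.
n = 10
x = np.random.default_rng(1).standard_normal(n)
xn = (x - x.mean()) / x.std()    # biased std (ddof=0)
print(xn.mean())                 # ~0, up to round-off
print(xn.std(ddof=1))            # ~sqrt(10/9) ~= 1.0541
```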
Testing weight gradients   for    layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     -1.288     -0.9201      -1.412      -2.407 
   1 |      2.233        1.95       3.651       3.248 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |    -0.1935      0.1252       0.532      -0.257 
   1 |    -0.3707     -0.2033    -0.01996      -0.458 

 training batch 2 mu var0 = 0.034813
compute loss for weight  -0.193529  -0.193539 result 2.1352
 training batch 3 mu var0 = 0.0348102
compute loss for weight  -0.193549  -0.193539 result 2.13522
 training batch 4 mu var0 = 0.0348109
compute loss for weight  -0.193534  -0.193539 result 2.1352
 training batch 5 mu var0 = 0.0348102
compute loss for weight  -0.193544  -0.193539 result 2.13522
   --dy = -1.28832 dy_ref = -1.28832
 training batch 6 mu var0 = 0.0348097
compute loss for weight  0.125243  0.125233 result 2.1352
 training batch 7 mu var0 = 0.0348102
compute loss for weight  0.125223  0.125233 result 2.13522
 training batch 8 mu var0 = 0.03481
compute loss for weight  0.125238  0.125233 result 2.13521
 training batch 9 mu var0 = 0.0348102
compute loss for weight  0.125228  0.125233 result 2.13521
   --dy = -0.92006 dy_ref = -0.92006
 training batch 10 mu var0 = 0.0348105
compute loss for weight  0.532012  0.532002 result 2.1352
 training batch 11 mu var0 = 0.0348102
compute loss for weight  0.531992  0.532002 result 2.13522
 training batch 12 mu var0 = 0.0348103
compute loss for weight  0.532007  0.532002 result 2.1352
 training batch 13 mu var0 = 0.0348102
compute loss for weight  0.531997  0.532002 result 2.13522
   --dy = -1.41152 dy_ref = -1.41152
 training batch 14 mu var0 = 0.0348101
compute loss for weight  -0.256957  -0.256967 result 2.13519
 training batch 15 mu var0 = 0.0348102
compute loss for weight  -0.256977  -0.256967 result 2.13523
 training batch 16 mu var0 = 0.0348102
compute loss for weight  -0.256962  -0.256967 result 2.1352
 training batch 17 mu var0 = 0.0348102
compute loss for weight  -0.256972  -0.256967 result 2.13522
   --dy = -2.40675 dy_ref = -2.40675
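Each "compute loss for weight" pair above perturbs one weight around its stored value and recomputes the loss, so the numeric derivative can be compared against the backprop value (dy vs dy_ref). A minimal central-difference sketch of that procedure, using an invented toy loss rather than the network's:

```python
import numpy as np

# Central-difference gradient check: perturb weight i by +/-h, recompute
# the loss, and form (L(w+h) - L(w-h)) / (2h) to compare with backprop.
def numeric_grad(loss, w, i, h=1e-5):
    wp, wm = w.copy(), w.copy()
    wp[i] += h
    wm[i] -= h
    return (loss(wp) - loss(wm)) / (2 * h)

# Toy quadratic loss with known gradient 2*w (illustrative only; the
# weight values just echo layer 0 of the log).
loss = lambda w: np.sum(w ** 2)
w = np.array([-0.1935, 0.1252, 0.5320, -0.2570])
for i in range(len(w)):
    dy, dy_ref = numeric_grad(loss, w, i), 2 * w[i]
    print(f"--dy = {dy:.6g} dy_ref = {dy_ref:.6g}")
```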
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      3.441      0.8296 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 = 0.0348102
compute loss for weight  1.00001  1 result 2.13524
 training batch 19 mu var0 = 0.0348102
compute loss for weight  0.99999  1 result 2.13518
 training batch 20 mu var0 = 0.0348102
compute loss for weight  1.00001  1 result 2.13523
 training batch 21 mu var0 = 0.0348102
compute loss for weight  0.999995  1 result 2.13519
   --dy = 3.4408 dy_ref = 3.4408
 training batch 22 mu var0 = 0.0348102
compute loss for weight  1.00001  1 result 2.13522
 training batch 23 mu var0 = 0.0348102
compute loss for weight  0.99999  1 result 2.1352
 training batch 24 mu var0 = 0.0348102
compute loss for weight  1.00001  1 result 2.13521
 training batch 25 mu var0 = 0.0348102
compute loss for weight  0.999995  1 result 2.13521
   --dy = 0.829618 dy_ref = 0.829618
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 | -2.637e-16  -1.145e-16 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 = 0.0348102
compute loss for weight  1e-05  0 result 2.13521
 training batch 27 mu var0 = 0.0348102
compute loss for weight  -1e-05  0 result 2.13521
 training batch 28 mu var0 = 0.0348102
compute loss for weight  5e-06  0 result 2.13521
 training batch 29 mu var0 = 0.0348102
compute loss for weight  -5e-06  0 result 2.13521
   --dy = 7.40149e-12 dy_ref = -2.63678e-16
 training batch 30 mu var0 = 0.0348102
compute loss for weight  1e-05  0 result 2.13521
 training batch 31 mu var0 = 0.0348102
compute loss for weight  -1e-05  0 result 2.13521
 training batch 32 mu var0 = 0.0348102
compute loss for weight  5e-06  0 result 2.13521
 training batch 33 mu var0 = 0.0348102
compute loss for weight  -5e-06  0 result 2.13521
   --dy = -6.66134e-11 dy_ref = -1.14492e-16
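For the zero-initialised layer-1 parameters above (plausibly the batch-norm shift terms), the true gradient is ~1e-16, so the central-difference estimate (~1e-11) is dominated by round-off in the loss and a pure relative error would blow up. A common remedy, sketched here as an assumption rather than TMVA's actual check, is to fall back to an absolute tolerance when both values are tiny:

```python
# Relative gradient error with an absolute-tolerance fallback: when both
# the numeric and backprop gradients are effectively zero, the pair is
# counted as matching instead of dividing by a near-zero denominator.
def grad_error(dy, dy_ref, abs_tol=1e-8):
    denom = max(abs(dy), abs(dy_ref))
    if denom < abs_tol:
        return 0.0
    return abs(dy - dy_ref) / denom

print(grad_error(7.40149e-12, -2.63678e-16))   # 0.0 -> counted as a pass
print(grad_error(-2.68895, -2.68895))          # 0.0
```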
Testing weight gradients   for    layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     -2.689      -1.442 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      -1.28     -0.5754 

 training batch 34 mu var0 = 0.0348102
compute loss for weight  -1.2796  -1.27961 result 2.13518
 training batch 35 mu var0 = 0.0348102
compute loss for weight  -1.27962  -1.27961 result 2.13524
 training batch 36 mu var0 = 0.0348102
compute loss for weight  -1.27961  -1.27961 result 2.1352
 training batch 37 mu var0 = 0.0348102
compute loss for weight  -1.27962  -1.27961 result 2.13522
   --dy = -2.68895 dy_ref = -2.68895
 training batch 38 mu var0 = 0.0348102
compute loss for weight  -0.575432  -0.575442 result 2.1352
 training batch 39 mu var0 = 0.0348102
compute loss for weight  -0.575452  -0.575442 result 2.13522
 training batch 40 mu var0 = 0.0348102
compute loss for weight  -0.575437  -0.575442 result 2.1352
 training batch 41 mu var0 = 0.0348102
compute loss for weight  -0.575447  -0.575442 result 2.13522
   --dy = -1.44171 dy_ref = -1.44171
Testing weight gradients:      maximum relative error: 1.4785e-10