Execution Time: 1.08s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-ubuntu18-clang91-opt (sft-ubuntu-1804-3) on 2019-11-12 23:53:16
Repository revision: 30660dce2d9e89e4852dbf83dbd8b2cfcc137eff

Test Timing: Passed
Processors: 1


Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
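
For reference, the pipeline under test is Dense(4 -> 2, identity) -> BatchNorm(2) -> Dense(2 -> 1, identity) on a batch of 10. Below is a minimal plain-C++ sketch of that forward pass, not the TMVA API; the helper names (dense, batchNorm) and the epsilon value are illustrative only, and bias terms are omitted since the test checks weight gradients only.

#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>; // row-major: batch x features

// Dense layer with identity activation: out[i][j] = sum_k in[i][k] * W[j][k],
// with W laid out as (width x fan-in), matching the 2x4 and 1x2 weight
// matrices printed below.
Matrix dense(const Matrix &in, const Matrix &W) {
   Matrix out(in.size(), std::vector<double>(W.size(), 0.0));
   for (size_t i = 0; i < in.size(); ++i)
      for (size_t j = 0; j < W.size(); ++j)
         for (size_t k = 0; k < in[i].size(); ++k)
            out[i][j] += in[i][k] * W[j][k];
   return out;
}

// Training-mode batch normalization: one (mu, var) pair per feature, computed
// over the batch, using the biased (divide-by-N) variance.
Matrix batchNorm(const Matrix &in, const std::vector<double> &gamma,
                 const std::vector<double> &beta, double eps = 1e-4) {
   size_t n = in.size(), f = in[0].size();
   Matrix out(n, std::vector<double>(f));
   for (size_t j = 0; j < f; ++j) {
      double mu = 0.0, var = 0.0;
      for (size_t i = 0; i < n; ++i) mu += in[i][j];
      mu /= n;
      for (size_t i = 0; i < n; ++i) var += (in[i][j] - mu) * (in[i][j] - mu);
      var /= n;
      for (size_t i = 0; i < n; ++i)
         out[i][j] = gamma[j] * (in[i][j] - mu) / std::sqrt(var + eps) + beta[j];
   }
   return out;
}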
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1: mu var0 = 0.496887
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      1.099    -0.02069 
   1 |     0.2035      0.4582 
   2 |      0.361      -1.211 
   3 |     -1.861       0.177 
   4 |     0.5023    -0.06627 
   5 |      1.715     0.03337 
   6 |     -0.224       1.926 
   7 |      1.011      -1.588 
   8 |      2.613      0.4263 
   9 |    -0.4504      0.1405 

output BN 
output DL feature 0 mean 0.496887	output DL std 1.23239
output DL feature 1 mean 0.0275593	output DL std 0.951067
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      0.515    -0.05348 
   1 |    -0.2509      0.4773 
   2 |    -0.1163      -1.373 
   3 |     -2.017      0.1656 
   4 |   0.004628      -0.104 
   5 |      1.042    0.006445 
   6 |    -0.6166       2.104 
   7 |     0.4394       -1.79 
   8 |       1.81      0.4419 
   9 |    -0.8102      0.1251 

output BN feature 0 mean 3.33067e-17	output BN std 1.05405
output BN feature 1 mean 8.32667e-18	output BN std 1.05403
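
As expected, the BN output means are zero to machine precision. The std prints as ~1.054 rather than 1, presumably because the normalization divides by the biased batch variance (N) while the printed statistic is the unbiased (N-1) estimator; for a batch of 10 the ratio is sqrt(10/9):

#include <cmath>
#include <cstdio>

int main() {
   // Unbiased-to-biased std ratio for batch size N = 10:
   // sqrt(N / (N - 1)) = sqrt(10/9) ~ 1.05409, matching the ~1.054 printed
   // above (the small residual gap is consistent with a nonzero BN epsilon).
   std::printf("%.5f\n", std::sqrt(10.0 / 9.0));
   return 0;
}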
Testing weight gradients for layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.3317     -0.6179     -0.1973       1.143 
   1 |       1.12       0.844       1.609       0.436 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9399      0.6576      0.5775      0.1825 
   1 |    0.09599     -0.6041     -0.4527      0.8431 

 training batch 2: mu var0 = 0.49689
compute loss for weight 0.939913 (nominal 0.939903): result 2.18534
 training batch 3: mu var0 = 0.496887
compute loss for weight 0.939893 (nominal 0.939903): result 2.18533
 training batch 4: mu var0 = 0.496888
compute loss for weight 0.939908 (nominal 0.939903): result 2.18534
 training batch 5: mu var0 = 0.496887
compute loss for weight 0.939898 (nominal 0.939903): result 2.18534
   --dy = 0.331727 dy_ref = 0.331727
 training batch 6: mu var0 = 0.496887
compute loss for weight 0.657635 (nominal 0.657625): result 2.18533
 training batch 7: mu var0 = 0.496887
compute loss for weight 0.657615 (nominal 0.657625): result 2.18534
 training batch 8: mu var0 = 0.496887
compute loss for weight 0.65763 (nominal 0.657625): result 2.18533
 training batch 9: mu var0 = 0.496887
compute loss for weight 0.65762 (nominal 0.657625): result 2.18534
   --dy = -0.617858 dy_ref = -0.617858
 training batch 10: mu var0 = 0.496887
compute loss for weight 0.577502 (nominal 0.577492): result 2.18534
 training batch 11: mu var0 = 0.496887
compute loss for weight 0.577482 (nominal 0.577492): result 2.18534
 training batch 12: mu var0 = 0.496887
compute loss for weight 0.577497 (nominal 0.577492): result 2.18534
 training batch 13: mu var0 = 0.496887
compute loss for weight 0.577487 (nominal 0.577492): result 2.18534
   --dy = -0.197318 dy_ref = -0.197318
 training batch 14: mu var0 = 0.496887
compute loss for weight 0.182494 (nominal 0.182484): result 2.18535
 training batch 15: mu var0 = 0.496887
compute loss for weight 0.182474 (nominal 0.182484): result 2.18533
 training batch 16: mu var0 = 0.496887
compute loss for weight 0.182489 (nominal 0.182484): result 2.18534
 training batch 17: mu var0 = 0.496887
compute loss for weight 0.182479 (nominal 0.182484): result 2.18533
   --dy = 1.14268 dy_ref = 1.14268
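
Each "--dy = ... dy_ref = ..." line compares a numerical derivative of the loss with respect to one weight (dy) against the backpropagated gradient (dy_ref). Judging by the perturbed weights printed above (nominal +/- 1e-5 and +/- 5e-6), the test uses central differences at two step sizes. A minimal sketch of such a check, where lossAt stands in for a full forward pass returning the batch loss (it is not the actual TMVA routine):

#include <algorithm>
#include <cmath>
#include <functional>

// Central-difference estimate of dL/dw: dy ~ (L(w + h) - L(w - h)) / (2h).
double centralDiff(const std::function<double(double)> &lossAt, double w, double h) {
   return (lossAt(w + h) - lossAt(w - h)) / (2.0 * h);
}

// Relative error between the numerical and backpropagated gradients,
// guarded against division by zero for vanishing gradients.
double relativeError(double dy, double dyRef) {
   double denom = std::max(std::abs(dy), std::abs(dyRef));
   return denom > 0.0 ? std::abs(dy - dyRef) / denom : 0.0;
}

For the first weight of layer 0 (nominal 0.939903, h = 1e-5), the perturbed losses shown above reproduce dy = 0.331727 only when the losses are compared at full double precision; the six digits printed here are far too coarse to show the difference.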
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.5892       3.781 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18: mu var0 = 0.496887
compute loss for weight 1.00001 (nominal 1): result 2.18534
 training batch 19: mu var0 = 0.496887
compute loss for weight 0.99999 (nominal 1): result 2.18533
 training batch 20: mu var0 = 0.496887
compute loss for weight 1.00001 (nominal 1): result 2.18534
 training batch 21: mu var0 = 0.496887
compute loss for weight 0.999995 (nominal 1): result 2.18534
   --dy = 0.589238 dy_ref = 0.589238
 training batch 22: mu var0 = 0.496887
compute loss for weight 1.00001 (nominal 1): result 2.18538
 training batch 23: mu var0 = 0.496887
compute loss for weight 0.99999 (nominal 1): result 2.1853
 training batch 24: mu var0 = 0.496887
compute loss for weight 1.00001 (nominal 1): result 2.18536
 training batch 25: mu var0 = 0.496887
compute loss for weight 0.999995 (nominal 1): result 2.18532
   --dy = 3.78144 dy_ref = 3.78144
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 | -6.939e-18  -2.776e-17 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26: mu var0 = 0.496887
compute loss for weight 1e-05 (nominal 0): result 2.18534
 training batch 27: mu var0 = 0.496887
compute loss for weight -1e-05 (nominal 0): result 2.18534
 training batch 28: mu var0 = 0.496887
compute loss for weight 5e-06 (nominal 0): result 2.18534
 training batch 29: mu var0 = 0.496887
compute loss for weight -5e-06 (nominal 0): result 2.18534
   --dy = 1.25825e-10 dy_ref = -6.93889e-18
 training batch 30: mu var0 = 0.496887
compute loss for weight 1e-05 (nominal 0): result 2.18534
 training batch 31: mu var0 = 0.496887
compute loss for weight -1e-05 (nominal 0): result 2.18534
 training batch 32: mu var0 = 0.496887
compute loss for weight 5e-06 (nominal 0): result 2.18534
 training batch 33: mu var0 = 0.496887
compute loss for weight -5e-06 (nominal 0): result 2.18534
   --dy = 0 dy_ref = -2.77556e-17
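
This block perturbs the second parameter set of the batch-norm layer, presumably the shift (beta), initialized to 0; the previous block, with weights of 1, would then be the scale (gamma). Both the backpropagated values (~1e-17, i.e. zero at double precision) and the finite-difference estimates (0 and ~1.3e-10) agree that this gradient vanishes for this batch, so the gap between 1.25825e-10 and -6.93889e-18 is round-off noise around zero rather than a backpropagation error.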
Testing weight gradients for layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    -0.8686      -2.637 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    -0.6784      -1.434 

 training batch 34: mu var0 = 0.496887
compute loss for weight -0.67834 (nominal -0.67835): result 2.18533
 training batch 35: mu var0 = 0.496887
compute loss for weight -0.67836 (nominal -0.67835): result 2.18535
 training batch 36: mu var0 = 0.496887
compute loss for weight -0.678345 (nominal -0.67835): result 2.18533
 training batch 37: mu var0 = 0.496887
compute loss for weight -0.678355 (nominal -0.67835): result 2.18534
   --dy = -0.868634 dy_ref = -0.868634
 training batch 38: mu var0 = 0.496887
compute loss for weight -1.43403 (nominal -1.43404): result 2.18531
 training batch 39: mu var0 = 0.496887
compute loss for weight -1.43405 (nominal -1.43404): result 2.18536
 training batch 40: mu var0 = 0.496887
compute loss for weight -1.43404 (nominal -1.43404): result 2.18532
 training batch 41: mu var0 = 0.496887
compute loss for weight -1.43405 (nominal -1.43404): result 2.18535
   --dy = -2.63691 dy_ref = -2.63691
Testing weight gradients: maximum relative error: 1.51193e-10
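
A maximum relative error of ~1.5e-10 means the backpropagated gradients agree with the central-difference estimates to roughly ten significant digits, about the best that double-precision losses and step sizes of 1e-5 can deliver, hence the Passed verdict.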