Execution Time: 0.06s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-fedora31-gcc9 (root-fedora-31-1.cern.ch) on 2019-11-15 00:48:29

Test Timing: Passed
Processors: 1

Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
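
The listing above is Dense(4->2, identity) -> BatchNorm(2) -> Dense(2->1, identity) applied to batches of 10 rows. A minimal standalone sketch of that forward pass, in plain C++ rather than the TMVA API; the input batch is a stand-in, the weights are taken from the matrices printed further down, and the epsilon default is an assumption:

    // A minimal standalone sketch (plain C++, not the TMVA API) of the forward
    // pass printed above: Dense(4->2, identity) -> BatchNorm(2) -> Dense(2->1).
    #include <cmath>
    #include <cstdio>
    #include <vector>

    using Matrix = std::vector<std::vector<double>>; // batchSize x features

    // identity-activation dense layer without bias: y[i][o] = sum_k w[o][k]*x[i][k]
    Matrix Dense(const Matrix &x, const Matrix &w) {
       Matrix y(x.size(), std::vector<double>(w.size(), 0.0));
       for (size_t i = 0; i < x.size(); ++i)
          for (size_t o = 0; o < w.size(); ++o)
             for (size_t k = 0; k < x[i].size(); ++k)
                y[i][o] += w[o][k] * x[i][k];
       return y;
    }

    // training-mode batch normalization: standardize each feature by the biased
    // batch variance, then scale (gamma) and shift (beta); eps = 1e-4 is an
    // assumption consistent with the stds printed later in this log
    Matrix BatchNorm(const Matrix &x, const std::vector<double> &gamma,
                     const std::vector<double> &beta, double eps = 1e-4) {
       const size_t n = x.size(), d = x[0].size();
       Matrix y(n, std::vector<double>(d));
       for (size_t j = 0; j < d; ++j) {
          double mu = 0, var = 0;
          for (size_t i = 0; i < n; ++i) mu += x[i][j] / n;
          for (size_t i = 0; i < n; ++i) var += (x[i][j] - mu) * (x[i][j] - mu) / n;
          for (size_t i = 0; i < n; ++i)
             y[i][j] = gamma[j] * (x[i][j] - mu) / std::sqrt(var + eps) + beta[j];
       }
       return y;
    }

    int main() {
       Matrix x(10, std::vector<double>(4, 0.5)); // stand-in for the input batch
       Matrix w0 = {{-0.3325, -0.7528, 0.4291, 0.161},   // layer-0 weights from
                    {-0.03255, 0.821, 0.3824, -0.2925}}; // the matrix below
       Matrix w2 = {{-1.547, -0.2813}};                  // layer-2 weights
       Matrix y = Dense(BatchNorm(Dense(x, w0), {1, 1}, {0, 0}), w2);
       std::printf("output shape: %zu x %zu\n", y.size(), y[0].size());
    }
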
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 0.149017
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.3258    -0.08172 
   1 |      -0.63      -0.396 
   2 |     0.7449      0.4826 
   3 |     0.4241      -1.183 
   4 |     0.4861     -0.2958 
   5 |     0.6712     -0.5025 
   6 |     0.6397      -1.015 
   7 |     -1.488       1.773 
   8 |     0.2666      0.2548 
   9 |    0.05018     -0.2368 

output BN 
output DL feature 0 mean 0.149017	output DL std 0.699255
output DL feature 1 mean -0.120079	output DL std 0.835696
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.2664     0.04838 
   1 |     -1.174     -0.3481 
   2 |     0.8982      0.7601 
   3 |     0.4146      -1.341 
   4 |     0.5081     -0.2217 
   5 |     0.7871     -0.4823 
   6 |     0.7396      -1.129 
   7 |     -2.468       2.387 
   8 |     0.1773      0.4729 
   9 |     -0.149     -0.1473 

output BN feature 0 mean 3.05311e-17	output BN std 1.05397
output BN feature 1 mean -2.498e-17	output BN std 1.05401
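
The BN output means are zero to machine precision, but the printed stds are ~1.054 rather than exactly 1. That is consistent with the layer dividing by the biased std, sqrt(var_n + eps), while the printout uses the unbiased (n-1) sample std: an inference from the numbers above, not taken from the test source. Reproducing both printed values with an assumed eps = 1e-4:

    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    int main() {
       const double n = 10, eps = 1e-4;
       for (double s : {0.699255, 0.835696}) {       // the two printed DL stds
          double varBiased = s * s * (n - 1) / n;    // unbiased -> biased variance
          std::printf("%.5f\n", s / std::sqrt(varBiased + eps)); // 1.05397, 1.05401
       }
    }
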
Testing weight gradients for layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |    0.04786      0.3645      0.7078    -0.07755 
   1 |      0.606      -3.738     -0.7843      0.4712 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |    -0.3325     -0.7528      0.4291       0.161 
   1 |   -0.03255       0.821      0.3824     -0.2925 

 training batch 2 mu var0 0.14902
compute loss for weight  -0.3325  -0.33251 result 1.9009
 training batch 3 mu var0 0.149017
compute loss for weight  -0.33252  -0.33251 result 1.90089
 training batch 4 mu var0 0.149018
compute loss for weight  -0.332505  -0.33251 result 1.9009
 training batch 5 mu var0 0.149017
compute loss for weight  -0.332515  -0.33251 result 1.90089
   --dy = 0.0478554 dy_ref = 0.0478554
 training batch 6 mu var0 0.149017
compute loss for weight  -0.752787  -0.752797 result 1.9009
 training batch 7 mu var0 0.149017
compute loss for weight  -0.752807  -0.752797 result 1.90089
 training batch 8 mu var0 0.149017
compute loss for weight  -0.752792  -0.752797 result 1.9009
 training batch 9 mu var0 0.149017
compute loss for weight  -0.752802  -0.752797 result 1.90089
   --dy = 0.364464 dy_ref = 0.364464
 training batch 10 mu var0 0.149017
compute loss for weight  0.429102  0.429092 result 1.9009
 training batch 11 mu var0 0.149017
compute loss for weight  0.429082  0.429092 result 1.90089
 training batch 12 mu var0 0.149017
compute loss for weight  0.429097  0.429092 result 1.9009
 training batch 13 mu var0 0.149017
compute loss for weight  0.429087  0.429092 result 1.90089
   --dy = 0.707836 dy_ref = 0.707836
 training batch 14 mu var0 0.149017
compute loss for weight  0.161059  0.161049 result 1.90089
 training batch 15 mu var0 0.149017
compute loss for weight  0.161039  0.161049 result 1.9009
 training batch 16 mu var0 0.149017
compute loss for weight  0.161054  0.161049 result 1.90089
 training batch 17 mu var0 0.149017
compute loss for weight  0.161044  0.161049 result 1.9009
   --dy = -0.0775504 dy_ref = -0.0775504
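
Each layer-0 weight above is perturbed four times (steps of +-1e-5 and +-5e-6 around the stored value, visible in the printed weight pairs), the loss is recomputed, and the finite difference dy is compared with the backpropagated dy_ref. A hedged sketch of such a check; lossAt is a placeholder for the full forward pass with one weight overridden, and combining the two step sizes by Richardson extrapolation is an assumption:

    #include <cstdio>
    #include <functional>

    // central difference at step h: (L(w+h) - L(w-h)) / (2h), O(h^2) accurate
    double CentralDiff(const std::function<double(double)> &lossAt, double w, double h) {
       return (lossAt(w + h) - lossAt(w - h)) / (2.0 * h);
    }

    // combine steps h and h/2 (matching the +-1e-5 / +-5e-6 evaluations above)
    // to cancel the leading O(h^2) error term
    double NumericalGradient(const std::function<double(double)> &lossAt, double w,
                             double h = 1e-5) {
       double d1 = CentralDiff(lossAt, w, h);
       double d2 = CentralDiff(lossAt, w, h / 2);
       return (4.0 * d2 - d1) / 3.0;
    }

    int main() { // toy usage: d/dw (w-1)^2 at w = -0.33251 is 2*(w-1) = -2.66502
       auto loss = [](double w) { return (w - 1.0) * (w - 1.0); };
       std::printf("%.6f\n", NumericalGradient(loss, -0.33251));
    }
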
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      4.215     -0.4132 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 0.149017
compute loss for weight  1.00001  1 result 1.90094
 training batch 19 mu var0 0.149017
compute loss for weight  0.99999  1 result 1.90085
 training batch 20 mu var0 0.149017
compute loss for weight  1.00001  1 result 1.90092
 training batch 21 mu var0 0.149017
compute loss for weight  0.999995  1 result 1.90087
   --dy = 4.21502 dy_ref = 4.21502
 training batch 22 mu var0 0.149017
compute loss for weight  1.00001  1 result 1.90089
 training batch 23 mu var0 0.149017
compute loss for weight  0.99999  1 result 1.9009
 training batch 24 mu var0 0.149017
compute loss for weight  1.00001  1 result 1.90089
 training batch 25 mu var0 0.149017
compute loss for weight  0.999995  1 result 1.9009
   --dy = -0.413232 dy_ref = -0.413232
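
The 1x2 parameter row just checked is the batch-norm scale gamma (initialized to 1); the block that follows checks the shift beta (initialized to 0). For reference, the textbook form of those two gradients, assuming dY is the gradient arriving from layer 2 and Xhat the normalized activations (a sketch of the standard formulas, not the TMVA source):

    #include <vector>

    using Matrix = std::vector<std::vector<double>>;

    // dL/dgamma_j = sum_i dY[i][j] * Xhat[i][j];  dL/dbeta_j = sum_i dY[i][j]
    void BNParamGradients(const Matrix &dY, const Matrix &Xhat,
                          std::vector<double> &dGamma, std::vector<double> &dBeta) {
       const size_t n = dY.size(), d = dY[0].size();
       dGamma.assign(d, 0.0);
       dBeta.assign(d, 0.0);
       for (size_t i = 0; i < n; ++i)
          for (size_t j = 0; j < d; ++j) {
             dGamma[j] += dY[i][j] * Xhat[i][j];
             dBeta[j] += dY[i][j];
          }
    }
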
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |  2.637e-16   5.378e-17 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 0.149017
compute loss for weight  1e-05  0 result 1.90089
 training batch 27 mu var0 0.149017
compute loss for weight  -1e-05  0 result 1.90089
 training batch 28 mu var0 0.149017
compute loss for weight  5e-06  0 result 1.90089
 training batch 29 mu var0 0.149017
compute loss for weight  -5e-06  0 result 1.90089
   --dy = 9.62193e-11 dy_ref = 2.63678e-16
 training batch 30 mu var0 0.149017
compute loss for weight  1e-05  0 result 1.90089
 training batch 31 mu var0 0.149017
compute loss for weight  -1e-05  0 result 1.90089
 training batch 32 mu var0 0.149017
compute loss for weight  5e-06  0 result 1.90089
 training batch 33 mu var0 0.149017
compute loss for weight  -5e-06  0 result 1.90089
   --dy = 0 dy_ref = 5.37764e-17
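
For beta (still at its initial value 0) the analytic gradients are O(1e-16) while the finite-difference estimates come out as 9.62e-11 and 0. All of these sit below what a central difference can resolve in double precision, so the mismatch between dy and dy_ref here is expected rounding noise, not a failure. A quick estimate of that noise floor (a sketch, not from the test):

    #include <cfloat>
    #include <cstdio>

    int main() {
       // with loss magnitude |L| ~ 1.9 and step h = 1e-5, rounding in the two
       // loss evaluations alone perturbs the central difference by about
       // eps_machine * |L| / h
       const double L = 1.90089, h = 1e-5;
       std::printf("noise floor ~ %.3g\n", DBL_EPSILON * L / h); // ~4.2e-11
    }
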
Testing weight gradients for layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     -2.724       1.469 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     -1.547     -0.2813 

 training batch 34 mu var0 0.149017
compute loss for weight  -1.54719  -1.5472 result 1.90087
 training batch 35 mu var0 0.149017
compute loss for weight  -1.54721  -1.5472 result 1.90092
 training batch 36 mu var0 0.149017
compute loss for weight  -1.54719  -1.5472 result 1.90088
 training batch 37 mu var0 0.149017
compute loss for weight  -1.5472  -1.5472 result 1.90091
   --dy = -2.7243 dy_ref = -2.7243
 training batch 38 mu var0 0.149017
compute loss for weight  -0.281337  -0.281347 result 1.90091
 training batch 39 mu var0 0.149017
compute loss for weight  -0.281357  -0.281347 result 1.90088
 training batch 40 mu var0 0.149017
compute loss for weight  -0.281342  -0.281347 result 1.9009
 training batch 41 mu var0 0.149017
compute loss for weight  -0.281352  -0.281347 result 1.90089
   --dy = 1.46876 dy_ref = 1.46876
Testing weight gradients: maximum relative error: 3.39054e-09
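
The summary line reduces all the dy/dy_ref comparisons above to a single worst case. A plausible form of that reduction; the skipping of near-zero pairs like the beta rows, and the threshold value, are assumptions, not the test's exact code:

    #include <algorithm>
    #include <cmath>
    #include <utility>
    #include <vector>

    // worst relative deviation between numerical dy and backpropagated dy_ref
    double MaxRelativeError(const std::vector<std::pair<double, double>> &checks) {
       double maxErr = 0.0;
       for (const auto &p : checks) {
          double dy = p.first, dyRef = p.second;
          double scale = std::max(std::fabs(dy), std::fabs(dyRef));
          if (scale < 1e-8) continue; // pairs at the numerical noise floor
                                      // (the beta rows above) are not comparable
          maxErr = std::max(maxErr, std::fabs(dy - dyRef) / scale);
       }
       return maxErr; // this run reports 3.39054e-09
    }
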