Execution Time: 0.61s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-mac1013-clang100 (macphsft16.dyndns.cern.ch) on 2019-11-15 00:49:52

Test Timing: Passed
Processors: 1


Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
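For orientation, the logged architecture (dense 4→2, batch norm over 2 features, dense 2→1, identity activations, batch size 10) can be sketched in NumPy. The weight values below are random stand-ins, not the values printed later in the log, and biases are omitted since the log only checks weight gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes follow the logged architecture; weights are random stand-ins.
X = rng.standard_normal((10, 4))      # batch of 10 inputs, 4 features
W0 = rng.standard_normal((2, 4))      # layer 0: dense 4 -> 2 ("weights for layer 0" is 2x4)
W2 = rng.standard_normal((1, 2))      # layer 2: dense 2 -> 1

h = X @ W0.T                              # dense layer 0, identity activation
mu, var = h.mean(axis=0), h.var(axis=0)   # per-feature batch statistics
hn = (h - mu) / np.sqrt(var + 1e-8)       # batch norm layer 1 with gamma=1, beta=0
out = hn @ W2.T                           # dense layer 2: shape (10, 1)
```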
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 0.100768
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.1277    -0.03469 
   1 |     0.3503     -0.5589 
   2 |    -0.9182      0.1916 
   3 |     0.2532       0.362 
   4 |     0.1042      0.1295 
   5 |     0.3354    -0.05497 
   6 |      1.809       1.083 
   7 |     -1.723      -1.496 
   8 |     0.5437    -0.08244 
   9 |     0.1252      0.1128 

output BN 
output DL feature 0 mean 0.100768	output DL std 0.920319
output DL feature 1 mean -0.0348476	output DL std 0.659881
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    0.03087   0.0002557 
   1 |     0.2858      -0.837 
   2 |     -1.167      0.3617 
   3 |     0.1746      0.6338 
   4 |    0.00388      0.2625 
   5 |     0.2687    -0.03214 
   6 |      1.957       1.785 
   7 |     -2.089      -2.334 
   8 |     0.5072    -0.07602 
   9 |    0.02798      0.2358 

output BN feature 0 mean 2.60209e-17	output BN std 1.05402
output BN feature 1 mean 2.22045e-17	output BN std 1.05396
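The statistics above can be reproduced from the printed "output DL" column for feature 0: batch norm normalizes with the *biased* batch variance, so the normalized output has an unbiased (ddof=1) std of sqrt(10/9) ≈ 1.054 rather than exactly 1, matching "output BN std" above. A sketch (the epsilon value is an assumption):

```python
import numpy as np

# Feature-0 column of the "output DL" 10x2 matrix, copied from the log.
x = np.array([0.1277, 0.3503, -0.9182, 0.2532, 0.1042,
              0.3354, 1.809, -1.723, 0.5437, 0.1252])

mu = x.mean()                        # ≈ 0.1008, cf. the logged 0.100768
s_unbiased = x.std(ddof=1)           # ≈ 0.9203, cf. "output DL std 0.920319"
var = x.var()                        # biased (ddof=0) variance, as BN uses
y = (x - mu) / np.sqrt(var + 1e-8)   # eps is an assumed value

# y has mean ~0; its ddof=1 std is sqrt(10/9) ≈ 1.054, cf. "output BN std".
```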
Testing weight gradients   for    layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |   -0.06297    -0.09301     0.04176    -0.05742 
   1 |     0.1307      0.4647    -0.08269    -0.02444 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.1006      -0.687     -0.3175      0.7709 
   1 |    -0.4888     -0.7305      0.1883      0.3721 

 training batch 2 mu var0 0.100771
compute loss for weight  0.100567  0.100557 result 0.890808
 training batch 3 mu var0 0.100768
compute loss for weight  0.100547  0.100557 result 0.890809
 training batch 4 mu var0 0.100769
compute loss for weight  0.100562  0.100557 result 0.890808
 training batch 5 mu var0 0.100768
compute loss for weight  0.100552  0.100557 result 0.890809
   --dy = -0.0629749 dy_ref = -0.0629749
 training batch 6 mu var0 0.100768
compute loss for weight  -0.687001  -0.687011 result 0.890808
 training batch 7 mu var0 0.100768
compute loss for weight  -0.687021  -0.687011 result 0.890809
 training batch 8 mu var0 0.100768
compute loss for weight  -0.687006  -0.687011 result 0.890808
 training batch 9 mu var0 0.100768
compute loss for weight  -0.687016  -0.687011 result 0.890809
   --dy = -0.0930111 dy_ref = -0.0930111
 training batch 10 mu var0 0.100768
compute loss for weight  -0.317528  -0.317538 result 0.890809
 training batch 11 mu var0 0.100768
compute loss for weight  -0.317548  -0.317538 result 0.890808
 training batch 12 mu var0 0.100768
compute loss for weight  -0.317533  -0.317538 result 0.890809
 training batch 13 mu var0 0.100768
compute loss for weight  -0.317543  -0.317538 result 0.890808
   --dy = 0.0417591 dy_ref = 0.0417591
 training batch 14 mu var0 0.100768
compute loss for weight  0.770913  0.770903 result 0.890808
 training batch 15 mu var0 0.100768
compute loss for weight  0.770893  0.770903 result 0.890809
 training batch 16 mu var0 0.100768
compute loss for weight  0.770908  0.770903 result 0.890808
 training batch 17 mu var0 0.100768
compute loss for weight  0.770898  0.770903 result 0.890809
   --dy = -0.0574219 dy_ref = -0.0574219
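Each block above perturbs one weight, evaluating the loss at w±h and w±h/2 (h = 1e-5), and compares the numerical slope dy against the backpropagated gradient dy_ref. A minimal sketch of the idea using a plain central difference and a hypothetical quadratic loss (the exact formula TMVA combines from its four evaluations is not shown in the log):

```python
# Second-order central difference, as used in finite-difference gradient checks.
def central_diff(loss, w, h=1e-5):
    return (loss(w + h) - loss(w - h)) / (2.0 * h)

# Hypothetical quadratic loss, used only to demonstrate the check;
# the real test differentiates the network's loss w.r.t. one weight.
def loss(w):
    return 0.5 * (w - 0.3) ** 2

w0 = 0.100557                  # first tested weight value in the log
dy = central_diff(loss, w0)    # numerical gradient
dy_ref = w0 - 0.3              # analytic gradient of the toy loss
rel_err = abs(dy - dy_ref) / max(abs(dy), abs(dy_ref))
# rel_err is tiny, of the same flavor as the 3.88733e-10 reported at the end.
```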
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.3073       1.474 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 0.100768
compute loss for weight  1.00001  1 result 0.890812
 training batch 19 mu var0 0.100768
compute loss for weight  0.99999  1 result 0.890805
 training batch 20 mu var0 0.100768
compute loss for weight  1.00001  1 result 0.89081
 training batch 21 mu var0 0.100768
compute loss for weight  0.999995  1 result 0.890807
   --dy = 0.307252 dy_ref = 0.307252
 training batch 22 mu var0 0.100768
compute loss for weight  1.00001  1 result 0.890823
 training batch 23 mu var0 0.100768
compute loss for weight  0.99999  1 result 0.890794
 training batch 24 mu var0 0.100768
compute loss for weight  1.00001  1 result 0.890816
 training batch 25 mu var0 0.100768
compute loss for weight  0.999995  1 result 0.890801
   --dy = 1.47437 dy_ref = 1.47437
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |  1.128e-17   6.939e-17 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 0.100768
compute loss for weight  1e-05  0 result 0.890809
 training batch 27 mu var0 0.100768
compute loss for weight  -1e-05  0 result 0.890809
 training batch 28 mu var0 0.100768
compute loss for weight  5e-06  0 result 0.890809
 training batch 29 mu var0 0.100768
compute loss for weight  -5e-06  0 result 0.890809
   --dy = 2.59052e-11 dy_ref = 1.12757e-17
 training batch 30 mu var0 0.100768
compute loss for weight  1e-05  0 result 0.890809
 training batch 31 mu var0 0.100768
compute loss for weight  -1e-05  0 result 0.890809
 training batch 32 mu var0 0.100768
compute loss for weight  5e-06  0 result 0.890809
 training batch 33 mu var0 0.100768
compute loss for weight  -5e-06  0 result 0.890809
   --dy = 0 dy_ref = 6.93889e-17
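For these zero-initialized shift (beta) parameters, the analytic and numerical gradients are both numerically zero (~1e-17 vs ~1e-11), so a pure relative error would be meaningless. A sketch of a comparison with an absolute floor; the floor value and scheme are assumptions here, not TMVA's actual tolerance code:

```python
def grad_error(dy, dy_ref, floor=1e-8):
    """Relative error with an absolute floor: when both gradients are
    below the floor, treat them as a numerical match."""
    denom = max(abs(dy), abs(dy_ref))
    if denom < floor:
        return 0.0            # both gradients numerically zero
    return abs(dy - dy_ref) / denom

# Values from the log above: huge relative gap, but both effectively zero.
err = grad_error(2.59052e-11, 1.12757e-17)
```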
Testing weight gradients   for    layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      1.607       1.872 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.1911      0.7876 

 training batch 34 mu var0 0.100768
compute loss for weight  0.191155  0.191145 result 0.890825
 training batch 35 mu var0 0.100768
compute loss for weight  0.191135  0.191145 result 0.890792
 training batch 36 mu var0 0.100768
compute loss for weight  0.19115  0.191145 result 0.890817
 training batch 37 mu var0 0.100768
compute loss for weight  0.19114  0.191145 result 0.890801
   --dy = 1.60742 dy_ref = 1.60742
 training batch 38 mu var0 0.100768
compute loss for weight  0.787568  0.787558 result 0.890827
 training batch 39 mu var0 0.100768
compute loss for weight  0.787548  0.787558 result 0.89079
 training batch 40 mu var0 0.100768
compute loss for weight  0.787563  0.787558 result 0.890818
 training batch 41 mu var0 0.100768
compute loss for weight  0.787553  0.787558 result 0.890799
   --dy = 1.87207 dy_ref = 1.87207
Testing weight gradients:      maximum relative error: 3.88733e-10