Execution Time: 0.50s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: PR-4279-x86_64-fedora27-gcc7-opt (sft-fedora-27-2.cern.ch) on 2019-11-14 21:00:11
Repository revision: b4342cbdd777caa8ed4342972de30ba66413ed7e

Test Timing: Passed
Processors: 1

Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
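For orientation, the network under test is: a 4-input/2-output dense layer with identity activation, a batch-normalization layer over those 2 features, and a 2-input/1-output dense layer, evaluated on batches of 10 rows. Below is a minimal, self-contained C++ sketch of the dense forward step (plain vectors and hypothetical helper names, not TMVA's actual TDeepNet API); the batch-norm step is sketched after the "output BN" section further down.

#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Dense layer with identity activation: out[i][j] = sum_k in[i][k]*W[j][k] + b[j].
// W has shape (width, inputDim), matching the 2x4 weight matrix printed below;
// in has shape (batchSize, inputDim), here 10x4.
Matrix DenseIdentity(const Matrix& in, const Matrix& W,
                     const std::vector<double>& b) {
    Matrix out(in.size(), std::vector<double>(W.size()));
    for (std::size_t i = 0; i < in.size(); ++i)
        for (std::size_t j = 0; j < W.size(); ++j) {
            double s = b[j];
            for (std::size_t k = 0; k < in[i].size(); ++k)
                s += in[i][k] * W[j][k];
            out[i][j] = s;
        }
    return out;
}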
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var0 = 0.0463646
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.4159     -0.1607 
   1 |     -1.242     -0.9626 
   2 |      0.944      0.7151 
   3 |     -2.093      -0.638 
   4 |     0.0365     -0.1589 
   5 |     0.1241     -0.5972 
   6 |     0.5064   -0.004569 
   7 |     0.4213      0.2139 
   8 |      1.726    -0.01621 
   9 |    -0.3767     -0.0931 

output BN 
output DL feature 0 mean 0.0463646	output DL std 1.082
output DL feature 1 mean -0.170224	output DL std 0.473477
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |       0.36     0.02122 
   1 |     -1.255      -1.764 
   2 |     0.8745        1.97 
   3 |     -2.084      -1.041 
   4 |  -0.009613     0.02526 
   5 |    0.07576     -0.9504 
   6 |     0.4481      0.3687 
   7 |     0.3652       0.855 
   8 |      1.637      0.3428 
   9 |    -0.4121      0.1717 

output BN feature 0 mean 1.66533e-17	output BN std 1.05404
output BN feature 1 mean -5.55112e-18	output BN std 1.05383
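The batch-norm layer standardizes each feature with the batch mean and the biased (1/n) variance estimate. Note the reported std of the normalized output is ~1.054 rather than exactly 1: for a batch of n = 10, recomputing the std with the unbiased 1/(n-1) estimator gives sqrt(n/(n-1)) = sqrt(10/9) ≈ 1.0541, which matches the figures above. A minimal sketch of the per-feature normalization (gamma = 1, beta = 0; the epsilon default is an assumption, not TMVA's actual setting):

#include <cmath>
#include <vector>

// Standardize one feature column: y = (x - mean) / sqrt(var + eps),
// using the biased 1/n variance, as batch normalization does in training.
std::vector<double> BatchNormColumn(const std::vector<double>& x,
                                    double eps = 1e-4) {
    const double n = static_cast<double>(x.size());
    double mu = 0.0;
    for (double v : x) mu += v;
    mu /= n;
    double var = 0.0;
    for (double v : x) var += (v - mu) * (v - mu);
    var /= n;                                 // biased (1/n) estimator
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = (x[i] - mu) / std::sqrt(var + eps);
    return y;
}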
Testing weight gradients for layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |   -0.07552     0.02036   -0.007764    -0.04374 
   1 |     0.5367     -0.3951     -0.4276     0.06915 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |    -0.2876      0.2644       1.063      0.4303 
   1 |    -0.5683     0.04114      0.5442     0.02098 

 training batch 2 mu var0 = 0.0463675
compute loss for weight  -0.287555  -0.287565 result 0.306776
 training batch 3 mu var0 = 0.0463646
compute loss for weight  -0.287575  -0.287565 result 0.306777
 training batch 4 mu var0 = 0.0463654
compute loss for weight  -0.28756  -0.287565 result 0.306776
 training batch 5 mu var0 = 0.0463646
compute loss for weight  -0.28757  -0.287565 result 0.306777
   --dy = -0.0755179 dy_ref = -0.0755179
 training batch 6 mu var0 = 0.0463642
compute loss for weight  0.264436  0.264426 result 0.306777
 training batch 7 mu var0 = 0.0463646
compute loss for weight  0.264416  0.264426 result 0.306776
 training batch 8 mu var0 = 0.0463645
compute loss for weight  0.264431  0.264426 result 0.306777
 training batch 9 mu var0 = 0.0463646
compute loss for weight  0.264421  0.264426 result 0.306776
   --dy = 0.0203625 dy_ref = 0.0203625
 training batch 10 mu var0 = 0.046365
compute loss for weight  1.06301  1.063 result 0.306776
 training batch 11 mu var0 = 0.0463646
compute loss for weight  1.06299  1.063 result 0.306777
 training batch 12 mu var0 = 0.0463648
compute loss for weight  1.06301  1.063 result 0.306776
 training batch 13 mu var0 = 0.0463646
compute loss for weight  1.063  1.063 result 0.306777
   --dy = -0.0077642 dy_ref = -0.0077642
 training batch 14 mu var0 = 0.0463646
compute loss for weight  0.43035  0.43034 result 0.306776
 training batch 15 mu var0 = 0.0463646
compute loss for weight  0.43033  0.43034 result 0.306777
 training batch 16 mu var0 = 0.0463646
compute loss for weight  0.430345  0.43034 result 0.306776
 training batch 17 mu var0 = 0.0463646
compute loss for weight  0.430335  0.43034 result 0.306777
   --dy = -0.043743 dy_ref = -0.043743
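Each "--dy = ... dy_ref = ..." pair above compares a numerical estimate of dLoss/dw against the analytic gradient from backpropagation. The four "compute loss" evaluations per weight (at w ± 1e-5 and w ± 5e-6) are consistent with a central-difference scheme refined by Richardson extrapolation; the following is a sketch under that assumption, taking a loss(w) functor:

#include <functional>

// Central difference: (L(w+h) - L(w-h)) / (2h).
double CentralDiff(const std::function<double(double)>& loss,
                   double w, double h) {
    return (loss(w + h) - loss(w - h)) / (2.0 * h);
}

// Richardson extrapolation over steps h and h/2 cancels the O(h^2) error
// term; this matches the four loss evaluations per weight seen in the log.
double EstimateGradient(const std::function<double(double)>& loss,
                        double w, double h = 1e-5) {
    const double d1 = CentralDiff(loss, w, h);
    const double d2 = CentralDiff(loss, w, h / 2.0);
    return (4.0 * d2 - d1) / 3.0;
}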
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      0.242      0.3715 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var0 = 0.0463646
compute loss for weight  1.00001  1 result 0.306779
 training batch 19 mu var0 = 0.0463646
compute loss for weight  0.99999  1 result 0.306774
 training batch 20 mu var0 = 0.0463646
compute loss for weight  1.00001  1 result 0.306778
 training batch 21 mu var0 = 0.0463646
compute loss for weight  0.999995  1 result 0.306775
   --dy = 0.24203 dy_ref = 0.24203
 training batch 22 mu var0 = 0.0463646
compute loss for weight  1.00001  1 result 0.30678
 training batch 23 mu var0 = 0.0463646
compute loss for weight  0.99999  1 result 0.306773
 training batch 24 mu var0 = 0.0463646
compute loss for weight  1.00001  1 result 0.306778
 training batch 25 mu var0 = 0.0463646
compute loss for weight  0.999995  1 result 0.306775
   --dy = 0.371523 dy_ref = 0.371523
Testing weight gradients for layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0  -1.171e-17 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var0 = 0.0463646
compute loss for weight  1e-05  0 result 0.306776
 training batch 27 mu var0 = 0.0463646
compute loss for weight  -1e-05  0 result 0.306776
 training batch 28 mu var0 = 0.0463646
compute loss for weight  5e-06  0 result 0.306776
 training batch 29 mu var0 = 0.0463646
compute loss for weight  -5e-06  0 result 0.306776
   --dy = -9.25186e-12 dy_ref = 0
 training batch 30 mu var0 = 0.0463646
compute loss for weight  1e-05  0 result 0.306776
 training batch 31 mu var0 = 0.0463646
compute loss for weight  -1e-05  0 result 0.306776
 training batch 32 mu var0 = 0.0463646
compute loss for weight  5e-06  0 result 0.306776
 training batch 33 mu var0 = 0.0463646
compute loss for weight  -5e-06  0 result 0.306776
   --dy = -9.25186e-13 dy_ref = -1.17094e-17
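For these batch-norm offset (beta) parameters both gradients are numerically zero (dy ≈ -9e-13, dy_ref ≈ -1e-17), so a naive relative error would divide by ~0. A robust comparison falls back to an absolute test when both values are tiny; a sketch, with illustrative thresholds rather than the test's actual settings:

#include <algorithm>
#include <cmath>

// Relative error with an absolute-tolerance fallback for near-zero gradients.
double GradError(double dy, double dyRef, double absTol = 1e-8) {
    const double scale = std::max(std::abs(dy), std::abs(dyRef));
    if (scale < absTol) return 0.0;  // both effectively zero: treat as a match
    return std::abs(dy - dyRef) / scale;
}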
Testing weight gradients for layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.9863       1.051 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.2454      0.3536 

 training batch 34 mu var0 = 0.0463646
compute loss for weight  0.245394  0.245384 result 0.306786
 training batch 35 mu var0 = 0.0463646
compute loss for weight  0.245374  0.245384 result 0.306767
 training batch 36 mu var0 = 0.0463646
compute loss for weight  0.245389  0.245384 result 0.306781
 training batch 37 mu var0 = 0.0463646
compute loss for weight  0.245379  0.245384 result 0.306772
   --dy = 0.986332 dy_ref = 0.986332
 training batch 38 mu var0 = 0.0463646
compute loss for weight  0.353586  0.353576 result 0.306787
 training batch 39 mu var0 = 0.0463646
compute loss for weight  0.353566  0.353576 result 0.306766
 training batch 40 mu var0 = 0.0463646
compute loss for weight  0.353581  0.353576 result 0.306782
 training batch 41 mu var0 = 0.0463646
compute loss for weight  0.353571  0.353576 result 0.306771
   --dy = 1.05076 dy_ref = 1.05076
Testing weight gradients: maximum relative error: 1.2582e-09
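The final figure is the worst case over all the per-weight comparisons above; with GradError as sketched earlier, the aggregation plausibly amounts to the following (a sketch, not the test's literal code):

#include <algorithm>
#include <vector>

// Worst-case relative error across all checked weights. The test passes
// when this stays below a fixed tolerance; this run reports 1.2582e-09.
double MaxGradError(const std::vector<double>& dy,
                    const std::vector<double>& dyRef) {
    double maxErr = 0.0;
    for (std::size_t i = 0; i < dy.size(); ++i)
        maxErr = std::max(maxErr, GradError(dy[i], dyRef[i]));
    return maxErr;
}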