Execution Time: 0.16s

Test: TMVA-DNN-BatchNormalization (Passed)
Build: master-x86_64-centos7-gcc62-opt-master (olsnba08.cern.ch) on 2019-11-14 23:14:32
Repository revision: 14de58de35eff907054671888ccc2de0f7f27e77

Test Timing: Passed
Processors: 1


Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
input 

10x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.9989     -0.4348      0.7818    -0.03005 
   1 |     0.8243    -0.05672     -0.9009     -0.0747 
   2 |   0.007912     -0.4108       1.391     -0.9851 
   3 |   -0.04894      -1.443      -1.061      -1.388 
   4 |     0.7674      -0.736      0.5797     -0.3821 
   5 |      2.061      -1.235       1.165     -0.4542 
   6 |    -0.1348     -0.4996     -0.1824       1.844 
   7 |    -0.2428       1.997    0.004806     -0.4222 
   8 |      1.541     0.09474       1.525       1.217 
   9 |    -0.1363     -0.1992     -0.2938     -0.1184 

 training batch 1 mu var00.87821
output DL 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      1.576      -1.182 
   1 |     0.3573      0.8102 
   2 |       1.12      -1.352 
   3 |     0.9762     -0.7821 
   4 |      1.611      -1.365 
   5 |       3.41      -2.424 
   6 |   0.003683      -1.512 
   7 |     -1.935        3.33 
   8 |      1.712      -1.453 
   9 |   -0.04867    -0.03806 

output BN 
output DL feature 0 mean 0.87821	output DL std 1.41437
output DL feature 1 mean -0.596893	output DL std 1.6385
output of BN 

10x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |     0.5197     -0.3766 
   1 |    -0.3882      0.9052 
   2 |     0.1801      -0.486 
   3 |    0.07305     -0.1191 
   4 |     0.5458      -0.494 
   5 |      1.887      -1.176 
   6 |    -0.6517     -0.5887 
   7 |     -2.096       2.526 
   8 |     0.6213      -0.551 
   9 |    -0.6908      0.3595 

output BN feature 0 mean -4.44089e-17	output BN std 1.05406
output BN feature 1 mean -2.22045e-17	output BN std 1.05407
Testing weight gradients   for    layer 0
weight gradient for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |  0.0003489   0.0003996   -1.95e-05  -0.0005422 
   1 | -2.184e-05    0.001121   -0.000417  -0.0006913 

weights for layer 0

2x4 matrix is as follows

     |      0    |      1    |      2    |      3    |
---------------------------------------------------------
   0 |     0.8393     -0.8966      0.4391     -0.1361 
   1 |     0.1438        1.59     -0.8297     -0.4607 

 training batch 2 mu var00.878213
compute loss for weight  0.83932  0.83931 result 0.550324
 training batch 3 mu var00.87821
compute loss for weight  0.8393  0.83931 result 0.550324
 training batch 4 mu var00.878211
compute loss for weight  0.839315  0.83931 result 0.550324
 training batch 5 mu var00.87821
compute loss for weight  0.839305  0.83931 result 0.550324
   --dy = 0.0003489 dy_ref = 0.0003489
 training batch 6 mu var00.878209
compute loss for weight  -0.89655  -0.89656 result 0.550324
 training batch 7 mu var00.87821
compute loss for weight  -0.89657  -0.89656 result 0.550324
 training batch 8 mu var00.87821
compute loss for weight  -0.896555  -0.89656 result 0.550324
 training batch 9 mu var00.87821
compute loss for weight  -0.896565  -0.89656 result 0.550324
   --dy = 0.000399572 dy_ref = 0.000399572
 training batch 10 mu var00.87821
compute loss for weight  0.439083  0.439073 result 0.550324
 training batch 11 mu var00.87821
compute loss for weight  0.439063  0.439073 result 0.550324
 training batch 12 mu var00.87821
compute loss for weight  0.439078  0.439073 result 0.550324
 training batch 13 mu var00.87821
compute loss for weight  0.439068  0.439073 result 0.550324
   --dy = -1.94965e-05 dy_ref = -1.94965e-05
 training batch 14 mu var00.87821
compute loss for weight  -0.136114  -0.136124 result 0.550324
 training batch 15 mu var00.87821
compute loss for weight  -0.136134  -0.136124 result 0.550324
 training batch 16 mu var00.87821
compute loss for weight  -0.136119  -0.136124 result 0.550324
 training batch 17 mu var00.87821
compute loss for weight  -0.136129  -0.136124 result 0.550324
   --dy = -0.000542238 dy_ref = -0.000542238
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |  -0.002777       1.103 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          1           1 

 training batch 18 mu var00.87821
compute loss for weight  1.00001  1 result 0.550324
 training batch 19 mu var00.87821
compute loss for weight  0.99999  1 result 0.550324
 training batch 20 mu var00.87821
compute loss for weight  1.00001  1 result 0.550324
 training batch 21 mu var00.87821
compute loss for weight  0.999995  1 result 0.550324
   --dy = -0.00277724 dy_ref = -0.00277724
 training batch 22 mu var00.87821
compute loss for weight  1.00001  1 result 0.550335
 training batch 23 mu var00.87821
compute loss for weight  0.99999  1 result 0.550313
 training batch 24 mu var00.87821
compute loss for weight  1.00001  1 result 0.55033
 training batch 25 mu var00.87821
compute loss for weight  0.999995  1 result 0.550319
   --dy = 1.10343 dy_ref = 1.10343
Testing weight gradients   for    layer 1
weight gradient for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |   1.22e-19   8.327e-17 

weights for layer 1

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |          0           0 

 training batch 26 mu var00.87821
compute loss for weight  1e-05  0 result 0.550324
 training batch 27 mu var00.87821
compute loss for weight  -1e-05  0 result 0.550324
 training batch 28 mu var00.87821
compute loss for weight  5e-06  0 result 0.550324
 training batch 29 mu var00.87821
compute loss for weight  -5e-06  0 result 0.550324
   --dy = 3.14563e-11 dy_ref = 1.21973e-19
 training batch 30 mu var00.87821
compute loss for weight  1e-05  0 result 0.550324
 training batch 31 mu var00.87821
compute loss for weight  -1e-05  0 result 0.550324
 training batch 32 mu var00.87821
compute loss for weight  5e-06  0 result 0.550324
 training batch 33 mu var00.87821
compute loss for weight  -5e-06  0 result 0.550324
   --dy = -2.96059e-11 dy_ref = 8.32667e-17
Testing weight gradients   for    layer 2
weight gradient for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |      -1.28       1.484 

weights for layer 2

1x2 matrix is as follows

     |      0    |      1    |
-------------------------------
   0 |    0.00217      0.7437 

 training batch 34 mu var00.87821
compute loss for weight  0.00217978  0.00216978 result 0.550311
 training batch 35 mu var00.87821
compute loss for weight  0.00215978  0.00216978 result 0.550337
 training batch 36 mu var00.87821
compute loss for weight  0.00217478  0.00216978 result 0.550318
 training batch 37 mu var00.87821
compute loss for weight  0.00216478  0.00216978 result 0.550331
   --dy = -1.27996 dy_ref = -1.27996
 training batch 38 mu var00.87821
compute loss for weight  0.743737  0.743727 result 0.550339
 training batch 39 mu var00.87821
compute loss for weight  0.743717  0.743727 result 0.550309
 training batch 40 mu var00.87821
compute loss for weight  0.743732  0.743727 result 0.550332
 training batch 41 mu var00.87821
compute loss for weight  0.743722  0.743727 result 0.550317
   --dy = 1.48364 dy_ref = 1.48364
Testing weight gradients:      maximum relative error: 1.18594e-07