Test: TMVA-DNN-BatchNormalization-Cpu (Passed)
Build: master-x86_64-fedora29-gcc8-dbg (root-fedora29-2.cern.ch) on 2019-11-14 11:43:11

Execution Time: 0.25s
Test Timing: Passed
Processors: 1


Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
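
The net under test is therefore Dense(4 -> 2, identity) -> BatchNorm over 2 features -> Dense(2 -> 1, identity), run on batches of 10 rows; the "R" tag is TMVA's ELossFunction code for mean squared error. A minimal self-contained sketch of that forward pass, with made-up weights and no TMVA types (purely illustrative, not the test's own code):

   #include <cmath>
   #include <cstdio>
   #include <vector>

   using Matrix = std::vector<std::vector<double>>; // rows = batch samples, cols = features

   // Dense layer, identity activation: y[i][j] = b[j] + sum_k x[i][k] * w[j][k]
   Matrix dense(const Matrix &x, const Matrix &w, const std::vector<double> &b)
   {
      Matrix y(x.size(), std::vector<double>(w.size()));
      for (size_t i = 0; i < x.size(); ++i)
         for (size_t j = 0; j < w.size(); ++j) {
            y[i][j] = b[j];
            for (size_t k = 0; k < x[i].size(); ++k)
               y[i][j] += x[i][k] * w[j][k];
         }
      return y;
   }

   // Batch normalization in training mode with gamma = 1, beta = 0:
   // per-feature standardization using the biased (divide-by-n) variance.
   Matrix batchnorm(Matrix x, double eps = 1e-4)
   {
      const size_t n = x.size(), d = x[0].size();
      for (size_t j = 0; j < d; ++j) {
         double mu = 0., var = 0.;
         for (size_t i = 0; i < n; ++i) mu += x[i][j] / n;
         for (size_t i = 0; i < n; ++i) var += (x[i][j] - mu) * (x[i][j] - mu) / n;
         for (size_t i = 0; i < n; ++i) x[i][j] = (x[i][j] - mu) / std::sqrt(var + eps);
      }
      return x;
   }

   // Mean squared error against targets (the "R" loss)
   double mse(const Matrix &y, const std::vector<double> &t)
   {
      double s = 0.;
      for (size_t i = 0; i < y.size(); ++i) s += (y[i][0] - t[i]) * (y[i][0] - t[i]);
      return s / y.size();
   }

   int main()
   {
      Matrix x(10, std::vector<double>(4, 0.1));           // one 10x4 input batch (made up)
      Matrix w0 = {{.1, .2, .3, .4}, {.5, .6, .7, .8}};    // layer 0: 4 -> 2
      Matrix w2 = {{.9, -.9}};                             // layer 2: 2 -> 1
      std::vector<double> b0(2, 0.), b2(1, 0.), t(10, 1.); // biases, targets
      double loss = mse(dense(batchnorm(dense(x, w0, b0)), w2, b2), t);
      std::printf("loss = %g\n", loss);
      return 0;
   }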
input 
training batch 1 mu var0 = 0.0519883
output DL 
output BN 
output DL feature 0 mean 0.0519883	output DL std 1.1268
output DL feature 1 mean -0.679137	output DL std 1.19792
output of BN 
output BN feature 0 mean 1.11022e-17	output BN std 1.05405
output BN feature 1 mean 6.66134e-17	output BN std 1.05405
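
The BN output means (~1e-17) are zero up to floating-point rounding. The stds come out as 1.05405 rather than exactly 1, which is what one expects if the layer normalizes with the biased variance (divide by n) while the test prints the unbiased sample std (divide by n-1) of the result: sqrt(n/(n-1)) = sqrt(10/9) ~ 1.0541, pulled down slightly by BN's epsilon. A quick check, assuming the printed dense-layer stds are unbiased estimates and epsilon is TMVA's default 1e-4 (both assumptions):

   #include <cmath>
   #include <cstdio>

   int main()
   {
      const double n = 10., eps = 1e-4;          // batch size; assumed BN epsilon
      const double stdDL[2] = {1.1268, 1.19792}; // dense-output stds from the log
      for (double s : stdDL) {
         double vu = s * s;              // unbiased variance of the feature
         double vb = vu * (n - 1.) / n;  // biased variance used inside batch norm
         std::printf("expected BN std = %.6f\n", std::sqrt(vu / (vb + eps)));
      }
      return 0; // prints ~1.054046 and ~1.054052, matching the logged 1.05405
   }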
Testing weight gradients for layer 0
weight gradient for layer 0
weights for layer 0
training batch 2 mu var0 = 0.0519911
compute loss for weight  -0.52962  -0.52963 result 2.67968
training batch 3 mu var0 = 0.0519883
compute loss for weight  -0.52964  -0.52963 result 2.67968
training batch 4 mu var0 = 0.051989
compute loss for weight  -0.529625  -0.52963 result 2.67968
training batch 5 mu var0 = 0.0519883
compute loss for weight  -0.529635  -0.52963 result 2.67968
   --dy = -0.459945 dy_ref = -0.459945
training batch 6 mu var0 = 0.0519878
compute loss for weight  -0.116127  -0.116137 result 2.67969
training batch 7 mu var0 = 0.0519883
compute loss for weight  -0.116147  -0.116137 result 2.67967
training batch 8 mu var0 = 0.0519881
compute loss for weight  -0.116132  -0.116137 result 2.67968
training batch 9 mu var0 = 0.0519883
compute loss for weight  -0.116142  -0.116137 result 2.67968
   --dy = 0.576903 dy_ref = 0.576903
training batch 10 mu var0 = 0.0519886
compute loss for weight  1.20049  1.20048 result 2.67968
training batch 11 mu var0 = 0.0519883
compute loss for weight  1.20047  1.20048 result 2.67968
training batch 12 mu var0 = 0.0519884
compute loss for weight  1.20048  1.20048 result 2.67968
training batch 13 mu var0 = 0.0519883
compute loss for weight  1.20047  1.20048 result 2.67968
   --dy = -0.358942 dy_ref = -0.358942
training batch 14 mu var0 = 0.0519882
compute loss for weight  0.563419  0.563409 result 2.67968
training batch 15 mu var0 = 0.0519883
compute loss for weight  0.563399  0.563409 result 2.67968
training batch 16 mu var0 = 0.0519883
compute loss for weight  0.563414  0.563409 result 2.67968
training batch 17 mu var0 = 0.0519883
compute loss for weight  0.563404  0.563409 result 2.67968
   --dy = 0.451409 dy_ref = 0.451409
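
Each "--dy = ... dy_ref = ..." pair compares a numerical derivative of the loss against the analytic gradient from backpropagation. The four loss evaluations logged per weight, at the nominal value +/-1e-5 and +/-5e-6, match a five-point central-difference stencil with h = 5e-6; that stencil choice is inferred from the perturbation pattern, not read from the test source. A sketch:

   #include <cstdio>

   // Five-point central-difference stencil:
   //   f'(w) ~ [f(w-2h) - 8 f(w-h) + 8 f(w+h) - f(w+2h)] / (12 h)
   template <typename F>
   double numericalDerivative(F loss, double w, double h = 5e-6)
   {
      return (loss(w - 2 * h) - 8 * loss(w - h) + 8 * loss(w + h) - loss(w + 2 * h)) / (12 * h);
   }

   int main()
   {
      // Toy stand-in for "loss as a function of one weight", centered on the
      // first tested weight of layer 0; its exact slope there is -0.459945.
      auto loss = [](double w) {
         double u = w + 0.52963;
         return 2.67968 - 0.459945 * u + 0.5 * u * u;
      };
      double dy = numericalDerivative(loss, -0.52963);
      std::printf("dy = %.6f  (analytic dy_ref = -0.459945)\n", dy);
      return 0;
   }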
Testing weight gradients for layer 1
weight gradient for layer 1
weights for layer 1
training batch 18 mu var0 = 0.0519883
compute loss for weight  1.00001  1 result 2.67968
training batch 19 mu var0 = 0.0519883
compute loss for weight  0.99999  1 result 2.67968
training batch 20 mu var0 = 0.0519883
compute loss for weight  1.00001  1 result 2.67968
training batch 21 mu var0 = 0.0519883
compute loss for weight  0.999995  1 result 2.67968
   --dy = 0.295245 dy_ref = 0.295245
training batch 22 mu var0 = 0.0519883
compute loss for weight  1.00001  1 result 2.67973
training batch 23 mu var0 = 0.0519883
compute loss for weight  0.99999  1 result 2.67963
training batch 24 mu var0 = 0.0519883
compute loss for weight  1.00001  1 result 2.67971
training batch 25 mu var0 = 0.0519883
compute loss for weight  0.999995  1 result 2.67965
   --dy = 5.06411 dy_ref = 5.06411
Testing weight gradients for layer 1
weight gradient for layer 1
weights for layer 1
training batch 26 mu var0 = 0.0519883
compute loss for weight  1e-05  0 result 2.67968
training batch 27 mu var0 = 0.0519883
compute loss for weight  -1e-05  0 result 2.67968
training batch 28 mu var0 = 0.0519883
compute loss for weight  5e-06  0 result 2.67968
training batch 29 mu var0 = 0.0519883
compute loss for weight  -5e-06  0 result 2.67968
   --dy = 7.40149e-12 dy_ref = 1.04083e-16
training batch 30 mu var0 = 0.0519883
compute loss for weight  1e-05  0 result 2.67968
training batch 31 mu var0 = 0.0519883
compute loss for weight  -1e-05  0 result 2.67968
training batch 32 mu var0 = 0.0519883
compute loss for weight  5e-06  0 result 2.67968
training batch 33 mu var0 = 0.0519883
compute loss for weight  -5e-06  0 result 2.67968
   --dy = 7.40149e-12 dy_ref = 3.33067e-16
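
The two passes over layer 1 correspond, presumably, to batch norm's two parameter rows: the scale gamma (initialized to 1, tested above) and the shift beta (initialized to 0, tested here). For beta both the numeric dy (~7e-12, cancellation noise from subtracting nearly equal losses) and the analytic dy_ref (~1e-16) are zero for all practical purposes, so a bare relative error would explode; the comparison needs an absolute floor for near-zero gradients. An illustrative guard (the floor value and exact policy are assumptions, not the test's actual helper):

   #include <algorithm>
   #include <cmath>
   #include <cstdio>

   // Relative error with an absolute floor: when both gradients are below the
   // floor, report their absolute difference instead, so two values that are
   // both numerical noise do not register as a huge relative discrepancy.
   double gradError(double dy, double dyRef, double absFloor = 1e-8)
   {
      double diff  = std::fabs(dy - dyRef);
      double scale = std::max(std::fabs(dy), std::fabs(dyRef));
      return scale < absFloor ? diff : diff / scale;
   }

   int main()
   {
      std::printf("%g\n", gradError(7.40149e-12, 1.04083e-16)); // ~7.4e-12, below tolerance
      std::printf("%g\n", gradError(-0.459945, -0.459945));     // 0: numeric matches analytic
      return 0;
   }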
Testing weight gradients for layer 2
weight gradient for layer 2
weights for layer 2
training batch 34 mu var0 = 0.0519883
compute loss for weight  -0.319689  -0.319699 result 2.67967
training batch 35 mu var0 = 0.0519883
compute loss for weight  -0.319709  -0.319699 result 2.67969
training batch 36 mu var0 = 0.0519883
compute loss for weight  -0.319694  -0.319699 result 2.67968
training batch 37 mu var0 = 0.0519883
compute loss for weight  -0.319704  -0.319699 result 2.67968
   --dy = -0.92351 dy_ref = -0.92351
training batch 38 mu var0 = 0.0519883
compute loss for weight  -1.57696  -1.57697 result 2.67965
training batch 39 mu var0 = 0.0519883
compute loss for weight  -1.57698  -1.57697 result 2.67971
training batch 40 mu var0 = 0.0519883
compute loss for weight  -1.57696  -1.57697 result 2.67966
training batch 41 mu var0 = 0.0519883
compute loss for weight  -1.57697  -1.57697 result 2.6797
   --dy = -3.2113 dy_ref = -3.2113
Testing weight gradients: maximum relative error: 2.43123e-10
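
Across all the (dy, dy_ref) pairs above, the worst relative disagreement is 2.43123e-10, which for double-precision losses differentiated with h = 5e-6 sits comfortably at the round-off floor; this is why the test is reported as Passed.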