Execution Time: 0.06s

Test: TMVA-DNN-BatchNormalization-Cpu (Passed)
Build: master-x86_64-fedora31-gcc9 (root-fedora-31-1.cern.ch) on 2019-11-14 00:48:30

Test Timing: Passed
Processors: 1


Test output
Testing Backpropagation:
DEEP NEURAL NETWORK:   Depth = 3  Input = ( 1, 10, 4 )  Batch size = 10  Loss function = R
	Layer 0	 DENSE Layer: 	 ( Input =     4 , Width =     2 ) 	Output = (  1 ,    10 ,     2 ) 	 Activation Function = Identity
	Layer 1	 BATCH NORM Layer: 	 ( Input =     2 ) 
	Layer 2	 DENSE Layer: 	 ( Input =     2 , Width =     1 ) 	Output = (  1 ,    10 ,     1 ) 	 Activation Function = Identity
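
(For reference, the architecture printed above is dense(4 -> 2) -> batch norm over 2 features -> dense(2 -> 1), all with identity activations. The following is a minimal, self-contained C++ sketch of that forward pass under standard batch-norm conventions; it is illustrative only, not the TMVA::DNN implementation, and all names in it are invented for the example.)

#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

using Matrix = std::vector<std::vector<double>>; // [batch][feature]

// Fully connected layer with identity activation: y = x * W^T + b.
Matrix dense(const Matrix& x, const Matrix& W, const std::vector<double>& b) {
    Matrix y(x.size(), std::vector<double>(W.size(), 0.0));
    for (std::size_t i = 0; i < x.size(); ++i)
        for (std::size_t j = 0; j < W.size(); ++j) {
            y[i][j] = b[j];
            for (std::size_t k = 0; k < x[i].size(); ++k)
                y[i][j] += x[i][k] * W[j][k];
        }
    return y;
}

// Per-feature batch normalization in training mode:
// xhat = (x - mu) / sqrt(var + eps), y = gamma * xhat + beta,
// with mu and var computed over the current batch (biased variance, divide by N).
Matrix batchnorm(const Matrix& x, const std::vector<double>& gamma,
                 const std::vector<double>& beta, double eps = 1e-5) {
    const std::size_t n = x.size(), d = x[0].size();
    Matrix y(n, std::vector<double>(d, 0.0));
    for (std::size_t j = 0; j < d; ++j) {
        double mu = 0.0, var = 0.0;
        for (std::size_t i = 0; i < n; ++i) mu += x[i][j];
        mu /= n;
        for (std::size_t i = 0; i < n; ++i) var += (x[i][j] - mu) * (x[i][j] - mu);
        var /= n;
        for (std::size_t i = 0; i < n; ++i)
            y[i][j] = gamma[j] * (x[i][j] - mu) / std::sqrt(var + eps) + beta[j];
    }
    return y;
}

int main() {
    Matrix x(10, std::vector<double>(4, 0.5));       // batch size 10, 4 inputs
    Matrix W1(2, std::vector<double>(4, 0.1));       // dense(4 -> 2)
    Matrix W2(1, std::vector<double>(2, 0.1));       // dense(2 -> 1)
    std::vector<double> b1(2, 0.0), b2(1, 0.0);
    std::vector<double> gamma(2, 1.0), beta(2, 0.0); // BN init seen in this log
    Matrix out = dense(batchnorm(dense(x, W1, b1), gamma, beta), W2, b2);
    std::printf("output[0][0] = %g\n", out[0][0]);
    return 0;
}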
input 
 training batch 1 mu var0 0.244167
output DL 
output BN 
output DL feature 0 mean 0.244167	output DL std 1.42145
output DL feature 1 mean 0.106441	output DL std 0.368346
output of BN 
output BN feature 0 mean 3.33067e-17	output BN std 1.05406
output BN feature 1 mean 3.33067e-17	output BN std 1.05366
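
(Note on the statistics above: the normalized outputs have mean ~0 as expected, but the reported std is ~1.054 rather than 1. This is consistent with normalizing by the biased batch variance (divide by N, as in the batchnorm sketch above) while printing the unbiased sample std (divide by N-1): sqrt(N/(N-1)) = sqrt(10/9) ~ 1.0541 for batch size 10, with the small remaining deviation plausibly due to the eps term in the denominator. This is an inference from the printed values, not a statement about the test source.)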
Testing weight gradients for layer 0
weight gradient for layer 0
weights for layer 0
 training batch 2 mu var0 0.24417
compute loss for weight  0.349951  0.349941 result 0.429239
 training batch 3 mu var0 0.244167
compute loss for weight  0.349931  0.349941 result 0.429235
 training batch 4 mu var0 0.244167
compute loss for weight  0.349946  0.349941 result 0.429238
 training batch 5 mu var0 0.244167
compute loss for weight  0.349936  0.349941 result 0.429236
   --dy = 0.190953 dy_ref = 0.190953
 training batch 6 mu var0 0.244166
compute loss for weight  0.510954  0.510944 result 0.429238
 training batch 7 mu var0 0.244167
compute loss for weight  0.510934  0.510944 result 0.429235
 training batch 8 mu var0 0.244167
compute loss for weight  0.510949  0.510944 result 0.429238
 training batch 9 mu var0 0.244167
compute loss for weight  0.510939  0.510944 result 0.429236
   --dy = 0.156956 dy_ref = 0.156956
 training batch 10 mu var0 0.244167
compute loss for weight  0.251041  0.251031 result 0.42924
 training batch 11 mu var0 0.244167
compute loss for weight  0.251021  0.251031 result 0.429234
 training batch 12 mu var0 0.244167
compute loss for weight  0.251036  0.251031 result 0.429238
 training batch 13 mu var0 0.244167
compute loss for weight  0.251026  0.251031 result 0.429236
   --dy = 0.266237 dy_ref = 0.266237
 training batch 14 mu var0 0.244167
compute loss for weight  -1.52066  -1.52067 result 0.429238
 training batch 15 mu var0 0.244167
compute loss for weight  -1.52068  -1.52067 result 0.429236
 training batch 16 mu var0 0.244167
compute loss for weight  -1.52067  -1.52067 result 0.429238
 training batch 17 mu var0 0.244167
compute loss for weight  -1.52068  -1.52067 result 0.429236
   --dy = 0.140613 dy_ref = 0.140613
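
(The block above shows the checking pattern used throughout this log: each stored weight is perturbed to four nearby values, visible as offsets of +-1e-5 and +-5e-6, the loss is recomputed at each point, and the finite-difference slope dy is compared with the backpropagated gradient dy_ref. A minimal C++ sketch of the idea using a plain two-point central difference follows; the four evaluations per weight in the log suggest the actual test uses a higher-order stencil, and the quadratic lossFn below is a stand-in toy, not the network.)

#include <cstdio>
#include <functional>

// Central difference: f'(w) ~ (f(w + h) - f(w - h)) / (2h).
double numericalGradient(const std::function<double(double)>& lossFn,
                         double w, double h = 1e-5) {
    return (lossFn(w + h) - lossFn(w - h)) / (2.0 * h);
}

int main() {
    // Toy quadratic loss with a known analytic gradient.
    auto loss  = [](double w) { double u = 3.0 * w - 1.0; return 0.5 * u * u; };
    double w     = 0.349941;                 // a weight value from this log
    double dy    = numericalGradient(loss, w);
    double dyRef = 3.0 * (3.0 * w - 1.0);    // analytic ("backprop") gradient
    std::printf("--dy = %g dy_ref = %g\n", dy, dyRef);
    return 0;
}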
Testing weight gradients for layer 1
weight gradient for layer 1
weights for layer 1
 training batch 18 mu var0 0.244167
compute loss for weight  1.00001  1 result 0.429242
 training batch 19 mu var0 0.244167
compute loss for weight  0.99999  1 result 0.429232
 training batch 20 mu var0 0.244167
compute loss for weight  1.00001  1 result 0.429239
 training batch 21 mu var0 0.244167
compute loss for weight  0.999995  1 result 0.429235
   --dy = 0.461753 dy_ref = 0.461753
 training batch 22 mu var0 0.244167
compute loss for weight  1.00001  1 result 0.429241
 training batch 23 mu var0 0.244167
compute loss for weight  0.99999  1 result 0.429233
 training batch 24 mu var0 0.244167
compute loss for weight  1.00001  1 result 0.429239
 training batch 25 mu var0 0.244167
compute loss for weight  0.999995  1 result 0.429235
   --dy = 0.396721 dy_ref = 0.396721
Testing weight gradients for layer 1
weight gradient for layer 1
weights for layer 1
 training batch 26 mu var0 0.244167
compute loss for weight  1e-05  0 result 0.429237
 training batch 27 mu var0 0.244167
compute loss for weight  -1e-05  0 result 0.429237
 training batch 28 mu var0 0.244167
compute loss for weight  5e-06  0 result 0.429237
 training batch 29 mu var0 0.244167
compute loss for weight  -5e-06  0 result 0.429237
   --dy = 7.40149e-12 dy_ref = 6.93889e-18
 training batch 30 mu var0 0.244167
compute loss for weight  1e-05  0 result 0.429237
 training batch 31 mu var0 0.244167
compute loss for weight  -1e-05  0 result 0.429237
 training batch 32 mu var0 0.244167
compute loss for weight  5e-06  0 result 0.429237
 training batch 33 mu var0 0.244167
compute loss for weight  -5e-06  0 result 0.429237
   --dy = 0 dy_ref = 2.08167e-17
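
(Layer 1 appears twice because a batch-normalization layer has two trainable parameter vectors, entering as y = gamma * xhat + beta: the scale gamma, initialized to 1 and checked in the first layer-1 block, and the shift beta, initialized to 0 and checked in this second block. For beta, both dy (~7e-12, finite-difference noise) and dy_ref (~1e-17) are effectively zero, so the comparison presumably treats near-zero pairs with an absolute guard rather than a raw ratio; see the sketch after the final summary line.)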
Testing weight gradients for layer 2
weight gradient for layer 2
weights for layer 2
 training batch 34 mu var0 0.244167
compute loss for weight  0.540452  0.540442 result 0.429245
 training batch 35 mu var0 0.244167
compute loss for weight  0.540432  0.540442 result 0.429228
 training batch 36 mu var0 0.244167
compute loss for weight  0.540447  0.540442 result 0.429241
 training batch 37 mu var0 0.244167
compute loss for weight  0.540437  0.540442 result 0.429233
   --dy = 0.854398 dy_ref = 0.854398
 training batch 38 mu var0 0.244167
compute loss for weight  0.509675  0.509665 result 0.429245
 training batch 39 mu var0 0.244167
compute loss for weight  0.509655  0.509665 result 0.429229
 training batch 40 mu var0 0.244167
compute loss for weight  0.50967  0.509665 result 0.429241
 training batch 41 mu var0 0.244167
compute loss for weight  0.50966  0.509665 result 0.429233
   --dy = 0.778396 dy_ref = 0.778396
Testing weight gradients: maximum relative error: 1.14655e-10
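
(The summary line reports the maximum relative error between dy and dy_ref over all checked parameters; the test passes while it stays below a tolerance. Below is a guarded relative-error sketch that is consistent with these numbers, assuming pairs in which both gradients are effectively zero, like the batch-norm shift parameters above, count as matching; the exact formula and tolerance used by the test are assumptions.)

#include <algorithm>
#include <cmath>
#include <cstdio>

// Relative error with an absolute guard for near-zero gradient pairs
// (the guard value is an assumption for illustration).
double relativeError(double dy, double dyRef, double eps = 1e-10) {
    if (std::fabs(dy) < eps && std::fabs(dyRef) < eps) return 0.0;
    return std::fabs(dy - dyRef) / std::max(std::fabs(dy), std::fabs(dyRef));
}

int main() {
    std::printf("%g\n", relativeError(0.190953, 0.190953));       // dense weight: 0
    std::printf("%g\n", relativeError(7.40149e-12, 6.93889e-18)); // BN shift: guarded to 0
    return 0;
}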