Execution Time: 3.42s

Test: TMVA-DNN-CNN-Backpropagation-CPU (Passed)
Build: master-x86_64-fedora29-gcc8-dbg (root-fedora29-2.cern.ch) on 2019-11-14 11:43:11

Test Timing: Passed
Processors: 1


Test output
Testing CNN Backward Pass:
Test1, backward pass with linear activation network - compare with finite difference
added Conv layer 2 x 5 x 5
added Conv layer 2 x 3 x 3
added MaxPool layer 2 x 2 x 2
Do Forward Pass 
Do Backward Pass 
Testing weight gradients:      layer: 0 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 2
Layer 0 :  output  D x H x W 2  5  5	 input D x H x W 2  4  4
layer output size 2
Evaluate the Derivatives with Finite difference and compare with BP for Layer 0
0 - 0 , 0 : -7.18855 from BP -7.18855   2.93887e-11
0 - 0 , 1 : -18.557 from BP -18.557   2.13343e-11
0 - 0 , 2 : 31.0615 from BP 31.0615   2.94845e-11
Testing weight gradients:      layer: 1 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 2
Layer 1 :  output  D x H x W 2  3  3	 input D x H x W 2  5  5
layer output size 2
Evaluate the Derivatives with Finite difference and compare with BP for Layer 1
0 - 0 , 0 : 5.04341 from BP 5.04341   1.79592e-10
0 - 0 , 1 : -54.9362 from BP -54.9362   6.48186e-12
0 - 0 , 2 : 22.3178 from BP 22.3178   1.59399e-11
Testing weight gradients:      layer: 2 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 2
Layer 2 :  output  D x H x W 2  2  2	 input D x H x W 2  3  3
layer output size 2
Evaluate the Derivatives with Finite difference and compare with BP for Layer 2
0 - 0 , 0 : 0 from BP 0   0
0 - 0 , 1 : 0 from BP 0   0
0 - 0 , 2 : 0 from BP 0   0
Testing weight gradients:      layer: 3 / 6
Layer 3 has no weights 
Activation gradient from back-propagation  - vector size is 1
Layer 3 :  output  D x H x W 1  1  8	 input D x H x W 2  2  2
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 3
Testing weight gradients:      layer: 4 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 1
Layer 4 :  output  D x H x W 1  1  3	 input D x H x W 1  1  8
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 4
0 - 0 , 0 : -14.3381 from BP -14.3381   2.36358e-11
0 - 0 , 1 : -15.986 from BP -15.986   9.62775e-12
0 - 0 , 2 : -14.3381 from BP -14.3381   2.36358e-11
Testing weight gradients:      layer: 5 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 1
Layer 5 :  output  D x H x W 1  1  1	 input D x H x W 1  1  3
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 5
0 - 0 , 0 : 52.6036 from BP 52.6036   5.42298e-12
0 - 0 , 1 : -14.99 from BP -14.99   6.89533e-12
0 - 0 , 2 : -2.0514 from BP -2.0514   7.69301e-11
Testing weight gradients:      maximum relative error: 6.06754e-10
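
(For reference: what the rows above check, in miniature. Each "x from BP y   e" line compares a finite-difference derivative of the loss with respect to one weight against the back-propagated gradient, and the third column is their relative error; the "maximum relative error" line summarizes the whole network. Below is a minimal, self-contained C++ sketch of such a check. It is not the TMVA test code: the toy loss, the step size eps, and the exact relative-error definition are illustrative assumptions.)

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Toy model: loss(w) = sum_i (w[i] * x[i])^2, so dLoss/dw[i] = 2 * w[i] * x[i]^2.
double evaluateLoss(const std::vector<double>& w, const std::vector<double>& x) {
    double loss = 0.0;
    for (size_t i = 0; i < w.size(); ++i) {
        double y = w[i] * x[i];
        loss += y * y;
    }
    return loss;
}

int main() {
    std::vector<double> w = {0.5, -1.2, 2.0};
    std::vector<double> x = {1.0, 3.0, -2.0};
    const double eps = 1e-6;   // finite-difference step (assumed value)
    double maxRelError = 0.0;

    for (size_t i = 0; i < w.size(); ++i) {
        // Analytic gradient, standing in for the back-propagated one.
        double gradBP = 2.0 * w[i] * x[i] * x[i];

        // Central finite difference: (L(w + eps) - L(w - eps)) / (2 * eps).
        double saved = w[i];
        w[i] = saved + eps;
        double lossPlus = evaluateLoss(w, x);
        w[i] = saved - eps;
        double lossMinus = evaluateLoss(w, x);
        w[i] = saved;
        double gradFD = (lossPlus - lossMinus) / (2.0 * eps);

        // Relative error, guarding against division by zero (compare the
        // "0 from BP 0   0" rows at the pooling layer in the log above).
        double denom = std::max(std::abs(gradFD), std::abs(gradBP));
        double relError = denom > 0.0 ? std::abs(gradFD - gradBP) / denom : 0.0;
        maxRelError = std::max(maxRelError, relError);

        std::printf("0 - 0 , %zu : %g from BP %g   %g\n", i, gradFD, gradBP, relError);
    }
    std::printf("maximum relative error: %g\n", maxRelError);
    return 0;
}

(Compiled with e.g. g++ -std=c++11, this prints rows in the same shape as the log above; a small maximum relative error indicates the analytic and numerical gradients agree.)
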
Test2, more complex network architecture no dropout
added Conv layer 12 x 7 x 7
added Conv layer 6 x 5 x 5
added MaxPool layer 6 x 3 x 3
Do Forward Pass 
Do Backward Pass 
Testing weight gradients:      layer: 0 / 6
Weight gradient from back-propagation - vector size is 1
BP Weight Gradient ( 12 x 16 ) , ...... skip printing (too many elements ) 
Activation gradient from back-propagation  - vector size is 4
Activation Gradient ( 12 x 49 ) , ...... skip printing (too many elements ) 
Layer 0 :  output  D x H x W 12  7  7	 input D x H x W 1  8  8
layer output size 4
Layer Output ( 12 x 49 ) , ...... skip printing (too many elements ) 
Evaluate the Derivatives with Finite difference and compare with BP for Layer 0
0 - 0 , 0 : -4.57912 from BP -4.57912   3.04381e-10
0 - 0 , 1 : 2.29655 from BP 2.29655   2.04321e-10
0 - 0 , 2 : 9.06929 from BP 9.06929   1.602e-11
Testing weight gradients:      layer: 1 / 6
Weight gradient from back-propagation - vector size is 1
BP Weight Gradient ( 6 x 108 ) , ...... skip printing (too many elements ) 
Activation gradient from back-propagation  - vector size is 4
Activation Gradient ( 6 x 25 ) , ...... skip printing (too many elements ) 
Layer 1 :  output  D x H x W 6  5  5	 input D x H x W 12  7  7
layer output size 4
Layer Output ( 6 x 25 ) , ...... skip printing (too many elements ) 
Evaluate the Derivatives with Finite difference and compare with BP for Layer 1
0 - 0 , 0 : 0.314135 from BP 0.314135   2.46366e-10
0 - 0 , 1 : -2.54656 from BP -2.54656   6.43818e-11
0 - 0 , 2 : -4.51668 from BP -4.51668   2.05629e-10
Testing weight gradients:      layer: 2 / 6
Weight gradient from back-propagation - vector size is 1
BP Weight Gradient ( 6 x 54 ) , ...... skip printing (too many elements ) 
Activation gradient from back-propagation  - vector size is 4
Layer 2 :  output  D x H x W 6  3  3	 input D x H x W 6  5  5
layer output size 4
Evaluate the Derivatives with Finite difference and compare with BP for Layer 2
0 - 0 , 0 : 0 from BP 0   0
0 - 0 , 1 : 0 from BP 0   0
0 - 0 , 2 : 0 from BP 0   0
Testing weight gradients:      layer: 3 / 6
Layer 3 has no weights 
Activation gradient from back-propagation  - vector size is 1
Activation Gradient ( 4 x 54 ) , ...... skip printing (too many elements ) 
Layer 3 :  output  D x H x W 1  1  54	 input D x H x W 6  3  3
layer output size 1
Layer Output ( 4 x 54 ) , ...... skip printing (too many elements ) 
Evaluate the Derivatives with Finite difference and compare with BP for Layer 3
Testing weight gradients:      layer: 4 / 6
Weight gradient from back-propagation - vector size is 1
BP Weight Gradient ( 20 x 54 ) , ...... skip printing (too many elements ) 
Activation gradient from back-propagation  - vector size is 1
Layer 4 :  output  D x H x W 1  1  20	 input D x H x W 1  1  54
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 4
0 - 0 , 0 : 5.07252 from BP 5.07252   8.81259e-11
0 - 0 , 1 : 4.70495 from BP 4.70495   7.70506e-12
0 - 0 , 2 : 5.58944 from BP 5.58944   1.11448e-10
Testing weight gradients:      layer: 5 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 1
Layer 5 :  output  D x H x W 1  1  2	 input D x H x W 1  1  20
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 5
0 - 0 , 0 : 20.763 from BP 20.763   1.34535e-11
0 - 0 , 1 : 1.82507 from BP 1.82507   5.62853e-12
0 - 0 , 2 : 1.7897 from BP 1.7897   1.75695e-10
Testing weight gradients:      maximum relative error: 1.45951e-07
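
(Aside: the layer shapes printed in this log follow the standard convolution/pooling size arithmetic. With input height H_in, filter height F, zero padding P and stride S, the output height is

\[ H_{\text{out}} = \left\lfloor \frac{H_{\text{in}} + 2P - F}{S} \right\rfloor + 1 \]

and likewise for the width. The filter and stride values are not printed in this log, so the concrete numbers are inferred: Test2's first layer maps a 1 x 8 x 8 input to a 12 x 7 x 7 output, consistent with a 2 x 2 filter at stride 1 with no padding (8 - 2 + 1 = 7), and the reshape at layer 3 simply flattens the 6 x 3 x 3 pooled output into 1 x 1 x 54, since 6 * 3 * 3 = 54.)
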