Execution Time: 4.89s

Test: TMVA-DNN-CNN-Backpropagation-CPU (Passed)
Build: master-x86_64-centos7-gcc62-opt-no-rt-cxxmodules (olsnba08.cern.ch) on 2019-11-14 01:02:24
Repository revision: 32b17abcda23e44b64218a42d0ca69cb30cda7e0

Test Timing: Passed
Processors: 1


Test output
Testing CNN Backward Pass:
Test1, backward pass with linear activation network - compare with finite difference
added Conv layer 2 x 5 x 5
added Conv layer 2 x 3 x 3
added MaxPool layer 2 x 2 x 2
Do Forward Pass 
Do Backward Pass 
Testing weight gradients:      layer: 0 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 2
Layer 0 :  output  D x H x W 2  5  5	 input D x H x W 2  4  4
layer output size 2
Evaluate the Derivatives with Finite difference and compare with BP for Layer 0
0 - 0 , 0 : -7.18855 from BP -7.18855   2.00335e-11
0 - 0 , 1 : -18.557 from BP -18.557   2.1896e-12
0 - 0 , 2 : 31.0615 from BP 31.0615   1.80472e-11
Testing weight gradients:      layer: 1 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 2
Layer 1 :  output  D x H x W 2  3  3	 input D x H x W 2  5  5
layer output size 2
Evaluate the Derivatives with Finite difference and compare with BP for Layer 1
0 - 0 , 0 : 5.04341 from BP 5.04341   1.72622e-10
0 - 0 , 1 : -54.9362 from BP -54.9362   1.29486e-11
0 - 0 , 2 : 22.3178 from BP 22.3178   4.77348e-11
Testing weight gradients:      layer: 2 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 2
Layer 2 :  output  D x H x W 2  2  2	 input D x H x W 2  3  3
layer output size 2
Evaluate the Derivatives with Finite difference and compare with BP for Layer 2
0 - 0 , 0 : 0 from BP 0   0
0 - 0 , 1 : 0 from BP 0   0
0 - 0 , 2 : 0 from BP 0   0
Testing weight gradients:      layer: 3 / 6
Layer 3 has no weights 
Activation gradient from back-propagation  - vector size is 1
Layer 3 :  output  D x H x W 1  1  8	 input D x H x W 2  2  2
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 3
Testing weight gradients:      layer: 4 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 1
Layer 4 :  output  D x H x W 1  1  3	 input D x H x W 1  1  8
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 4
0 - 0 , 0 : -14.3381 from BP -14.3381   2.59206e-11
0 - 0 , 1 : -15.986 from BP -15.986   9.62831e-12
0 - 0 , 2 : -14.3381 from BP -14.3381   2.59206e-11
Testing weight gradients:      layer: 5 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 1
Layer 5 :  output  D x H x W 1  1  1	 input D x H x W 1  1  3
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 5
0 - 0 , 0 : 52.6036 from BP 52.6036   1.33035e-12
0 - 0 , 1 : -14.99 from BP -14.99   6.89569e-12
0 - 0 , 2 : -2.0514 from BP -2.0514   7.69305e-11
Testing weight gradients:      maximum relative error: 7.48414e-10
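
Each comparison line above lists an index triple identifying the sampled weight, the finite-difference estimate of the gradient, the corresponding value obtained from back-propagation ("from BP"), and their relative error in the last column. Below is a minimal sketch of that check, assuming a central difference and a symmetric relative-error definition; the perturbation size and the exact error formula used by the TMVA test are assumptions, not read from its source.

    #include <cmath>
    #include <algorithm>
    #include <functional>

    // Central finite-difference estimate of dL/dw for a single weight:
    // perturb the weight by +/- eps, re-run the forward pass, and take
    // the symmetric difference quotient of the loss.
    double FiniteDifferenceGradient(const std::function<double(double)> &lossAtWeight,
                                    double w, double eps = 1e-5)
    {
       return (lossAtWeight(w + eps) - lossAtWeight(w - eps)) / (2.0 * eps);
    }

    // Relative error between the numerical estimate and the back-propagated
    // gradient; 0/0 is reported as 0, as in the layer-2 rows above where
    // both values vanish.
    double RelativeError(double numerical, double bp)
    {
       double scale = std::max(std::abs(numerical), std::abs(bp));
       return scale == 0.0 ? 0.0 : std::abs(numerical - bp) / scale;
    }
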
Test2, more complex network architecture no dropout
added Conv layer 12 x 7 x 7
added Conv layer 6 x 5 x 5
added MaxPool layer 6 x 3 x 3
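
The printed layer shapes follow the usual valid-convolution size relation. A small helper illustrating the arithmetic; the 2x2 and 3x3 filter sizes, stride 1, and zero padding are assumptions inferred from the shapes above, not read from the test configuration.

    // Output spatial size of a convolution or pooling window:
    //   out = (in - filter + 2 * padding) / stride + 1
    int OutputSize(int in, int filter, int stride = 1, int padding = 0)
    {
       return (in - filter + 2 * padding) / stride + 1;
    }
    // e.g. OutputSize(8, 2) == 7 and OutputSize(7, 3) == 5, matching the
    // 1 x 8 x 8 -> 12 x 7 x 7 -> 6 x 5 x 5 Conv shapes printed for Test2.
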
Do Forward Pass 
Do Backward Pass 
Testing weight gradients:      layer: 0 / 6
Weight gradient from back-propagation - vector size is 1
BP Weight Gradient ( 12 x 16 ) , ...... skip printing (too many elements ) 
Activation gradient from back-propagation  - vector size is 4
Activation Gradient ( 12 x 49 ) , ...... skip printing (too many elements ) 
Layer 0 :  output  D x H x W 12  7  7	 input D x H x W 1  8  8
layer output size 4
Layer Output ( 12 x 49 ) , ...... skip printing (too many elements ) 
Evaluate the Derivatives with Finite difference and compare with BP for Layer 0
0 - 0 , 0 : -4.57912 from BP -4.57912   1.6113e-10
0 - 0 , 1 : 2.29655 from BP 2.29655   1.05075e-10
0 - 0 , 2 : 9.06929 from BP 9.06929   9.43657e-11
Testing weight gradients:      layer: 1 / 6
Weight gradient from back-propagation - vector size is 1
BP Weight Gradient ( 6 x 108 ) , ...... skip printing (too many elements ) 
Activation gradient from back-propagation  - vector size is 4
Activation Gradient ( 6 x 25 ) , ...... skip printing (too many elements ) 
Layer 1 :  output  D x H x W 6  5  5	 input D x H x W 12  7  7
layer output size 4
Layer Output ( 6 x 25 ) , ...... skip printing (too many elements ) 
Evaluate the Derivatives with Finite difference and compare with BP for Layer 1
0 - 0 , 0 : 0.314135 from BP 0.314135   1.37732e-09
0 - 0 , 1 : -2.54656 from BP -2.54656   2.03892e-10
0 - 0 , 2 : -4.51668 from BP -4.51668   1.09001e-10
Testing weight gradients:      layer: 2 / 6
Weight gradient from back-propagation - vector size is 1
BP Weight Gradient ( 6 x 54 ) , ...... skip printing (too many elements ) 
Activation gradient from back-propagation  - vector size is 4
Layer 2 :  output  D x H x W 6  3  3	 input D x H x W 6  5  5
layer output size 4
Evaluate the Derivatives with Finite difference and compare with BP for Layer 2
0 - 0 , 0 : 0 from BP 0   0
0 - 0 , 1 : 0 from BP 0   0
0 - 0 , 2 : 0 from BP 0   0
Testing weight gradients:      layer: 3 / 6
Layer 3 has no weights 
Activation gradient from back-propagation  - vector size is 1
Activation Gradient ( 4 x 54 ) , ...... skip printing (too many elements ) 
Layer 3 :  output  D x H x W 1  1  54	 input D x H x W 6  3  3
layer output size 1
Layer Output ( 4 x 54 ) , ...... skip printing (too many elements ) 
Evaluate the Derivatives with Finite difference and compare with BP for Layer 3
Testing weight gradients:      layer: 4 / 6
Weight gradient from back-propagation - vector size is 1
BP Weight Gradient ( 20 x 54 ) , ...... skip printing (too many elements ) 
Activation gradient from back-propagation  - vector size is 1
Layer 4 :  output  D x H x W 1  1  20	 input D x H x W 1  1  54
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 4
0 - 0 , 0 : 5.07252 from BP 5.07252   5.19509e-11
0 - 0 , 1 : 4.70495 from BP 4.70495   7.70506e-12
0 - 0 , 2 : 5.58944 from BP 5.58944   4.78862e-11
Testing weight gradients:      layer: 5 / 6
Weight gradient from back-propagation - vector size is 1
Activation gradient from back-propagation  - vector size is 1
Layer 5 :  output  D x H x W 1  1  2	 input D x H x W 1  1  20
layer output size 1
Evaluate the Derivatives with Finite difference and compare with BP for Layer 5
0 - 0 , 0 : 20.763 from BP 20.763   2.07679e-11
0 - 0 , 1 : 1.82507 from BP 1.82507   5.62671e-12
0 - 0 , 2 : 1.7897 from BP 1.7897   1.75692e-10
Testing weight gradients:      maximum relative error: 1.74945e-07
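
The maximum relative error printed at the end of each test is the largest per-weight error over all layers: Test1 stays around 1e-10, while the deeper Test2 network reaches about 1.7e-7, which the test still accepts. A sketch of how such an aggregate could be accumulated; the tolerance value is an assumption, as the threshold actually applied by the test is not shown in this log.

    #include <cmath>
    #include <algorithm>

    // Hypothetical accumulator: feed it every (finite-difference, BP)
    // gradient pair and query the worst relative error at the end.
    struct GradientCheck {
       double maxRelError = 0.0;

       void Compare(double numerical, double bp)
       {
          double scale = std::max(std::abs(numerical), std::abs(bp));
          double err = (scale == 0.0) ? 0.0 : std::abs(numerical - bp) / scale;
          maxRelError = std::max(maxRelError, err);
       }

       // Assumed tolerance; both 7.48414e-10 (Test1) and
       // 1.74945e-07 (Test2) would pass here.
       bool Passed(double tol = 1e-5) const { return maxRelError < tol; }
    };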