[MKL-DNN] Integrate Conv3d and Pool3d/1d (#17884)
Conversation
  return (dtype == mshadow::kFloat32 || dtype == mshadow::kBfloat16) &&
-        (ndim == 1 || ndim == 2 || ndim == 4);
+        (ndim >= 1 && ndim <= 5);
Please fix the indentation, and make sure you also want to enable ndim == 3 here.
Maybe it's better not to touch this function. I will enable the 5D layout in SupportPooling instead.
  const int D = (ndim == 5) ? 2 : 1;
  const int N = 0, C = 1, H = D + 1, W = D + 2;
Let's be more descriptive here:
- const int D = (ndim == 5) ? 2 : 1;
- const int N = 0, C = 1, H = D + 1, W = D + 2;
+ int N = 0, C = 1, H = 2, W = 3;
+ int D = -1;
+ if (ndim == 5) {
+   D = 2;
+   H = 3;
+   W = 4;
+ }
          param.pad[2], param.pad[2], param.kernel[2], param.stride[2]));
    case 4:
      is_symmetric = is_symmetric && (param.pad[1] == GetPaddingSizeFull(dshape[3],
          param.pad[1], param.pad[1], param.kernel[1], param.stride[1]));
I see both pad[0] and pad[1] are checked in the previous code.
Could you please show me where you saw these checks? Thanks.
Okay, I didn't realize there is no break for each case, so the switch falls through: if ndim == 4, both pad[1] and pad[0] are checked.
void InitPoolingPrimitiveParams(const PoolingParam &param,
                                const mkldnn::memory::desc &data_md,
                                mkldnn::memory::dims *new_kernel,
Force-pushed from 8f7fc5b to 3bb434f.
  // Pooling does not currently support working with views
  if (inputs[0].IsView() || outputs[0].IsView()) {
    std::cout << "Fall back to Pooling backward pass..." << std::endl;
Is this intended? We didn't have any log message for fallback execution.
Sorry, I forgot to remove this line. Will fix.
@mxnet-bot run ci [centos-cpu, unix-gpu, windows-gpu]
Jenkins CI successfully triggered: [windows-gpu, unix-gpu, centos-cpu]
@wuxun-zhang please take a look at CI.
Force-pushed from bdb94de to 94acedd.
It seems the CI system is experiencing failures. The errors are unrelated to this PR.
The CI issue should already be addressed. Please rebase your PR and resolve the comments. Thanks.
Force-pushed from 94acedd to 437622f.
The DNNL version in the master branch has been wrongly changed (from v1.2.2 to v1.1.2). This PR is now waiting for DNNL v1.2 or a higher version to come back.
PR #17084 updated 3rdparty/mkldnn from 8e96ef to cb2cc7 [1.1.0 to 1.1.2]
Thank you @ChaiBapchya for investigating this. 8e96ef should be OneDNN v1.2.2. We are working on confirming with the author of PR #17084 whether the downgrade was intended or a mistake.
Force-pushed from 437622f to 4cc8f9b.
CI has finally passed. @pengzhao-intel @TaoLv please help review again.
@wuxun-zhang Thanks a lot for the work.
  return (dtype == mshadow::kFloat32 || dtype == mshadow::kBfloat16) &&
         (ndim == 1 || ndim == 2 || ndim == 4);
What's the issue with 3D tensors?
This is not needed for this PR. We already enabled MKL-DNN conv with 3D tensors previously.
Can you point me to where it's handled? I didn't understand the separate treatment of 3D.
Thanks for the improvement. Merging now. We will share the performance data soon. |
* Integrate MKl-DNN conv3d and pool3d/1d
* fix UT & address comments
* clean code
* rebase against latest master
* [MKLDNN] apply MKLDNNRun to quantized_act/transpose (#17689)
* apply MKLDNNRun to quantized_act/transpose ops
* run CI
* [MKL-DNN] Integrate Conv3d and Pool3d/1d (#17884)
* Integrate MKl-DNN conv3d and pool3d/1d
* fix UT & address comments
* clean code
* rebase against latest master
* fix conflicts
* fix CI
* rebase
@pengzhao-intel @wuxun-zhang Gentle ping. Can we get the perf numbers for this PR's changes?
Performance numbers for the Conv3d op:
Test script:
Some more performance numbers for Conv3d:
The test I used:
Sorry for the delay.
…nd Fix Sanity pipeline in 1.6.x (#18206) * [MKL-DNN] Integrate Conv3d and Pool3d/1d (#17884) * Integrate MKl-DNN conv3d and pool3d/1d * fix UT & address comments * clean code * rebase against latest master * pylint astroid sanity issue * astroid and pylint versions only supported in py3 * remove kBFloat16 as its not supported int 1.6 * added missing definition GetPaddingSizeFull * Remove dilation restriction for conv3d (#17491) * Remove conv3d dilation restriction * Remove comment * fix unix-gpu test for num_outputs and inputs Co-authored-by: Wuxun Zhang <wuxun.zhang@intel.com> Co-authored-by: reminisce <wujun.nju@gmail.com>
Description
This PR aims to integrate the MKL-DNN 3D conv/pool primitives, including the FP32 and quantization passes.
@pengzhao-intel @TaoLv @ciyongch @rongzha1 @zixuanweeei
Checklist
Essentials
Changes