Tan_pytorch_segmentation/pytorch_segmentation/PV_Attention/DANet Attention.py

"""
Dual Attention Network for Scene Segmentation---CVPR2019
论文地址https://arxiv.org/pdf/1809.02983.pdf
这是CVPR2019的文章思想上非常简单就是将self-attention用到场景分割的任务中
不同的是self-attention是关注每个position之间的注意力而本文将self-attention做了一个拓展
还做了一个通道注意力的分支操作上和self-attention一样不同的通道attention中把生成QKV的三个Linear去掉了。最后将两个attention之后的特征进行element-wise sum。
"""
from attention.DANet import DAModule
import torch

# Dummy input: batch of 50 feature maps with 512 channels and a 7x7 spatial size.
x = torch.randn(50, 512, 7, 7)
danet = DAModule(d_model=512, kernel_size=3, H=7, W=7)
# Expected output shape: torch.Size([50, 512, 7, 7]) (the module preserves the input shape).
print(danet(x).shape)
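
# ---------------------------------------------------------------------------
# Minimal sketch of the two branches described in the docstring above, for
# illustration only. This is NOT the implementation imported from
# attention.DANet; all class names below are hypothetical.
# ---------------------------------------------------------------------------
import torch
import torch.nn as nn


class PositionAttentionSketch(nn.Module):
    """Self-attention over spatial positions (Q/K/V from 1x1 convolutions)."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual scale

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).transpose(1, 2)   # (B, N, C')
        k = self.key(x).view(b, -1, h * w)                     # (B, C', N)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)          # (B, N, N)
        v = self.value(x).view(b, c, h * w)                    # (B, C, N)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttentionSketch(nn.Module):
    """Same idea over channels, but without the Q/K/V projections."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.view(b, c, -1)                                # (B, C, N)
        attn = torch.softmax(torch.bmm(feat, feat.transpose(1, 2)), dim=-1)  # (B, C, C)
        out = torch.bmm(attn, feat).view(b, c, h, w)
        return self.gamma * out + x


class DualAttentionSketch(nn.Module):
    """Element-wise sum of the position and channel branches."""

    def __init__(self, channels):
        super().__init__()
        self.pam = PositionAttentionSketch(channels)
        self.cam = ChannelAttentionSketch()

    def forward(self, x):
        return self.pam(x) + self.cam(x)


# Quick shape check with the same dummy input shape as above.
sketch = DualAttentionSketch(512)
print(sketch(torch.randn(50, 512, 7, 7)).shape)  # torch.Size([50, 512, 7, 7])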