ISTR: Mask-Embedding-Based Instance Segmentation Transformer
IEEE Transactions on Image Processing (IF 10.6), Pub Date: 2024-04-12, DOI: 10.1109/tip.2024.3385980
Jie Hu, Yao Lu, Shengchuan Zhang, Liujuan Cao

Transformer-based instance-level recognition has recently attracted increasing research attention due to its superior performance. However, although attempts have been made to encode masks as embeddings in Transformer-based frameworks, how to combine mask embeddings with spatial information in such frameworks remains underexplored. In this paper, we revisit the design of mask-embedding-based pipelines and propose an Instance Segmentation TRansformer (ISTR) with Mask Meta-Embeddings (MME), which leverages the strength of Transformer models in encoding embedding information while incorporating the spatial information carried by mask embeddings. ISTR uses a recurrent refining head that consists of a Dynamic Box Predictor (DBP), a Mask Information Generator (MIG), and a Mask Meta-Decoder (MMD). To improve the quality of mask embeddings, MME interprets the mask encoding-decoding process as a mutual information maximization problem, which unifies the objective functions of different decoding schemes, such as Principal Component Analysis (PCA) and Discrete Cosine Transform (DCT), under a single meta-formulation. Under this meta-formulation, we further propose a learnable Spatial Mask Tuner (SMT) that fuses the spatial and embedding information produced by MIG and significantly boosts segmentation performance. The resulting variants, i.e., ISTR-PCA, ISTR-DCT, and ISTR-SMT, demonstrate the effectiveness and efficiency of incorporating mask embeddings into query-based instance segmentation pipelines. On the COCO dataset, ISTR surpasses all predominant mask-embedding-based models by a large margin and achieves competitive performance compared with concurrent state-of-the-art models. On the Cityscapes dataset, ISTR also outperforms several strong baselines. Our code is available at: https://github.com/hujiecpp/ISTR.
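
The abstract names PCA and DCT as two mask decoding schemes unified under MME's meta-formulation. To make the general idea concrete, below is a minimal sketch of DCT-based mask embedding in the spirit of the ISTR-DCT variant; the mask resolution, the coefficient count K, and the helper names are illustrative assumptions, not the authors' implementation. A PCA variant would analogously project flattened masks onto components fitted on a set of training masks.

```python
# Minimal sketch: compress a binary instance mask into a short embedding
# of low-frequency 2-D DCT coefficients, then reconstruct it by inverse
# DCT. Illustrative only; MASK_SIZE, K, and function names are assumed.
import numpy as np
from scipy.fft import dctn, idctn

MASK_SIZE = 28   # assumed fixed mask resolution
K = 8            # keep the K x K block of low-frequency coefficients

def encode_mask(mask: np.ndarray) -> np.ndarray:
    """Encode a binary mask as a K*K-dim vector of the top-left
    (low-frequency) block of its 2-D DCT."""
    coeffs = dctn(mask.astype(np.float32), norm="ortho")
    return coeffs[:K, :K].reshape(-1)

def decode_mask(embedding: np.ndarray) -> np.ndarray:
    """Reconstruct an approximate binary mask by zero-padding the
    embedding back to full resolution and applying the inverse DCT."""
    coeffs = np.zeros((MASK_SIZE, MASK_SIZE), dtype=np.float32)
    coeffs[:K, :K] = embedding.reshape(K, K)
    return (idctn(coeffs, norm="ortho") > 0.5).astype(np.uint8)

# Round-trip example: a centered square mask survives compression well.
mask = np.zeros((MASK_SIZE, MASK_SIZE), dtype=np.uint8)
mask[8:20, 8:20] = 1
recon = decode_mask(encode_mask(mask))
print("IoU:", (mask & recon).sum() / (mask | recon).sum())
```

The design intuition is that the low-frequency coefficients capture the coarse shape of an object, so a short embedding (here K*K = 64 values instead of 28*28 = 784 pixels) is a compact regression target for a query-based head, at the cost of fine boundary detail.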
