DiverGen: Improving Instance Segmentation by Learning Wider Data Distribution with More Diverse Generative Data (2024)

Chengxiang Fan1, Muzhi Zhu1, Hao Chen1, Yang Liu1, Weijia Wu1, Huaqi Zhang2, Chunhua Shen1
1 Zhejiang University, China   2 vivo Mobile Communication Co.
Equal contribution. Correspondence should be addressed to HC and CS.

Abstract

Instance segmentation is data-hungry, and as model capacity increases, data scale becomes crucial for improving the accuracy. Most instance segmentation datasets today require costly manual annotation, limiting their data scale. Models trained on such data are prone to overfitting on the training set, especially for rare categories. While recent works have delved into exploiting generative models to create synthetic datasets for data augmentation, these approaches do not efficiently harness the full potential of generative models.

To address these issues, we introduce a more efficient strategy to construct generative datasets for data augmentation, termed DiverGen. Firstly, we provide an explanation of the role of generative data from the perspective of distribution discrepancy. We investigate the impact of different data on the distribution learned by the model. We argue that generative data can expand the data distribution that the model can learn, thus mitigating overfitting. Additionally, we find that the diversity of generative data is crucial for improving model performance and enhance it through various strategies, including category diversity, prompt diversity, and generative model diversity. With these strategies, we can scale the data to millions while maintaining the trend of model performance improvement. On the LVIS dataset, DiverGen significantly outperforms the strong model X-Paste, achieving +1.1 box AP and +1.1 mask AP across all categories, and +1.9 box AP and +2.5 mask AP for rare categories. Our codes are available at https://github.com/aim-uofa/DiverGen.

1 Introduction

Instance segmentation [9, 4, 2] is one of the challenging tasks in computer vision, requiring the prediction of masks and categories for instances in an image, which serves as the foundation for numerous visual applications. As models' learning capabilities improve, the demand for training data increases. However, current datasets for instance segmentation heavily rely on manual annotation, which is time-consuming and costly, and the dataset scale cannot meet the training needs of models. Despite the recent emergence of the automatically annotated dataset SA-1B [12], it lacks category annotations, failing to meet the requirements of instance segmentation. Meanwhile, the ongoing development of generative models has largely improved the controllability and realism of generated samples. For example, recent text2image diffusion models [22, 24] can generate high-quality images corresponding to input prompts. Therefore, current methods [34, 28, 27] use generative models for data augmentation by generating datasets to supplement the training of models on real datasets and improve model performance. Although current methods have proposed various strategies to enable generative data to boost model performance, there are still some limitations: 1) Existing methods have not fully exploited the potential of generative models. First, some methods [34] not only use generative data but also need to crawl images from the internet, which makes it significantly challenging to obtain large-scale data. Meanwhile, the content of data crawled from the internet is uncontrollable and needs extra checking. Second, existing methods do not fully use the controllability of generative models. Current methods often adopt manually designed templates to construct prompts, limiting the potential output of generative models. 2) Existing methods [28, 27] often explain the role of generative data from the perspective of class imbalance or data scarcity, without considering the discrepancy between real-world data and generative data. Moreover, these methods typically show improved model performance only in scenarios with a limited number of real samples, and the effectiveness of generative data on existing large-scale real datasets, like LVIS [8], is not thoroughly investigated.

In this paper, we first explore the role of generative data from the perspective of distribution discrepancy, addressing two main questions: 1) Why does generative data augmentation enhance model performance? 2) What types of generative data are beneficial for improving model performance? First, we find that there exist discrepancies between the distribution the model learns from the limited real training data and the distribution of real-world data. We visualize the data and find that, compared to the real-world data, generative data can expand the data distribution that the model can learn. Furthermore, we find that the role of adding generative data is to alleviate the bias of the real training data, effectively mitigating overfitting on the training data. Second, we find that there are also discrepancies between the distribution of the generative data and the real-world data distribution. If these discrepancies are not handled properly, the full potential of the generative model cannot be utilized. Through several experiments, we find that using diverse generative data enables models to better adapt to these discrepancies, improving model performance.

Based on the above analysis, we propose an efficient strategy for enhancing data diversity, namely, Generative Data Diversity Enhancement. We design various diversity enhancement strategies to increase data diversity from the perspectives of category diversity, prompt diversity, and generative model diversity.

For category diversity, we observe that models trained with generative data covering all categories adapt better to distribution discrepancy than models trained with only partial categories. Therefore, we introduce not only categories from LVIS [8] but also extra categories from ImageNet-1K [23] to enhance category diversity in data generation, thereby reinforcing the model's adaptability to distribution discrepancy.

For prompt diversity, we find that as the scale of the generative dataset increases, manually designed prompts cannot scale up to the corresponding level, limiting the diversity of output images from the generative model. Thus, we design a set of diverse prompt generation strategies that use large language models, like ChatGPT, for prompt generation, requiring the large language model to output maximally diverse prompts under constraints. By combining manually designed prompts and ChatGPT-designed prompts, we effectively enrich prompt diversity and further improve generative data diversity.

For generative model diversity, we find that data from different generative models also exhibit distribution discrepancies. Exposing models to data from different generative models during training can enhance their adaptability to different distributions. Therefore, we employ Stable Diffusion [22] and DeepFloyd-IF [24] to generate images for all categories separately and mix the two types of data during training to increase data diversity.

At the same time, we optimize the data generation workflow and propose a four-stage generative pipeline consisting of instance generation, instance annotation, instance filtration, and instance augmentation. In the instance generation stage, we employ our proposed Generative Data Diversity Enhancement to enhance data diversity, producing diverse raw data. In the instance annotation stage, we introduce an annotation strategy called SAM-background. This strategy obtains high-quality annotations by using background points as input prompts for SAM [12], obtaining the annotations of the raw data. In the instance filtration stage, we introduce a metric called CLIP inter-similarity. Utilizing the CLIP [21] image encoder, we extract embeddings from generative and real data, and then compute their similarity. A lower similarity indicates lower data quality. After filtration, we obtain the final generative dataset. In the instance augmentation stage, we use the instance paste strategy [34] to increase model learning efficiency on generative data.

Experiments demonstrate that our designed data diversity strategies can effectively improve model performance and maintain the trend of performance gains as the data scale increases to the million level, which enables large-scale generative data for data augmentation. On the LVIS dataset, DiverGen significantly outperforms the strong model X-Paste [34], achieving +1.1 box AP [8] and +1.1 mask AP across all categories, and +1.9 box AP and +2.5 mask AP for rare categories.

In summary, our main contributions are as follows:

  • We explain the role of generative data from the perspective of distribution discrepancy. We find that generative data can expand the data distribution that the model can learn, mitigating overfitting on the training set, and that the diversity of generative data is crucial for improving model performance.

  • We propose the Generative Data Diversity Enhancement strategy to increase data diversity from the aspects of category diversity, prompt diversity, and generative model diversity. By enhancing data diversity, we can scale the data to millions while maintaining the trend of model performance improvement.

  • We optimize the data generation pipeline. We propose an annotation strategy SAM-background to obtain higher-quality annotations. We also introduce a filtration metric called CLIP inter-similarity to filter data and further improve the quality of the generative dataset.

2 Related Work

Instance segmentation. Instance segmentation is an important task in the field of computer vision and has been extensively studied. Unlike semantic segmentation, instance segmentation not only classifies each pixel but also distinguishes different instances of the same category. Previously, the focus of instance segmentation research has primarily been on the design of model structures. Mask R-CNN [9] unifies the tasks of object detection and instance segmentation. Subsequently, Mask2Former [4] further unifies the tasks of semantic segmentation and instance segmentation by leveraging the structure of DETR [2].


Orthogonal to these studies focusing on model architecture, our work primarily investigates how to better utilize generated data for this task. We focus on the challenging long-tail dataset LVIS [8] because it is the long-tailed categories that face the issue of limited real data and require generative images for augmentation, making this setting more practically meaningful.

Generative data augmentation. The use of generative models to synthesize training data for assisting perception tasks such as classification [6, 32], detection [34, 3], and segmentation [14, 28, 27] has received widespread attention from researchers. In the field of segmentation, early works [33, 13] utilize generative adversarial networks (GANs) to synthesize additional training samples. With the rise of diffusion models, there have been numerous efforts [34, 14, 28, 27, 30] to utilize text2image diffusion models, such as Stable Diffusion [22], to boost segmentation performance. Li et al. [14] combine the Stable Diffusion model with a novel grounding module and establish an automatic pipeline for constructing a segmentation dataset. DiffuMask [28] exploits the potential of cross-attention maps between text and images to synthesize accurate semantic labels. More recently, FreeMask [30] uses a mask-to-image generation model to generate images conditioned on the provided semantic masks. However, the aforementioned works are only applicable to semantic segmentation. The most relevant work to ours is X-Paste [34], which promotes instance segmentation by copy-pasting generative images with a filter strategy based on CLIP [21].

In summary, most methods only demonstrate significant advantages when training data is extremely limited. They consider generative data as a means to compensate for data scarcity or class imbalance. However, in this work, we take a further step to examine and analyze this problem from the perspective of data distribution. We propose a pipeline that enhances diversity at multiple levels to alleviate the impact of data distribution discrepancies. This provides new insights and inspiration for further advancements in this field.

3 Our Proposed DiverGen

3.1 Analysis of Data Distribution

Existing methods [34, 28, 29] often attribute the role of generative data to addressing class imbalance or data scarcity. In this paper, we provide an explanation for two main questions from the perspective of distribution discrepancy.

Why does generative data augmentation enhance model performance? We argue that there exist discrepancies between the distribution the model learns from the limited real training data and the distribution of real-world data. The role of adding generative data is to alleviate the bias of the real training data, effectively mitigating overfitting on the training data.

First, to intuitively understand the discrepancies between different data sources, we use the CLIP [21] image encoder to extract the embeddings of images from different data sources, and then use UMAP [18] to reduce dimensions for visualization. The visualization of data distributions of different sources is shown in Figure 1. Real-world data (LVIS [8] train and LVIS val) cluster near the center, while generative data (Stable Diffusion [22] and IF [24]) are more dispersed, indicating that generative data can expand the data distribution that the model can learn.

Then, to characterize the distribution learned by the model, we employ the free energy formulation used by Joseph et al. [10]. This formulation transforms the logits output by the classification head into an energy function. The formulation is shown below:

F(\bm{q}; h) = -\tau \log \sum_{c=1}^{n} \exp\left( \frac{h_c(\bm{q})}{\tau} \right).   (1)

Here, $\bm{q}$ is the feature of an instance, $h_c(\bm{q})$ is the $c$-th logit output by the classification head $h(\cdot)$, $n$ is the number of categories, and $\tau$ is the temperature parameter. We train one model using only the LVIS train set ($\theta_{\text{train}}$), and another model using LVIS train with generative data ($\theta_{\text{gen}}$). Both models are evaluated on the LVIS val set, and we use instances that are successfully matched by both models to obtain energy values. Additionally, we train another model using LVIS val ($\theta_{\text{val}}$), treating it as representative of the real-world data distribution. Then, we further fit Gaussian distributions to the histograms of energy values to obtain the mean $\mu$ and standard deviation $\sigma$ for each model and compute the KL divergence [11] between them. $D_{KL}(p_{\theta_{\text{train}}} \| p_{\theta_{\text{val}}})$ is 0.063, and $D_{KL}(p_{\theta_{\text{gen}}} \| p_{\theta_{\text{val}}})$ is 0.019. The latter is lower, indicating that using generative data mitigates the bias of the limited real training data.
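This analysis can be summarized with a short sketch. It is a minimal illustration rather than the released evaluation code, assuming per-instance classification logits are already available; the closed-form Gaussian KL stands in for explicitly fitting histograms, and the random logits are placeholders.

```python
import numpy as np
import torch

def free_energy(logits: torch.Tensor, tau: float = 0.9) -> torch.Tensor:
    # Eq. (1): F(q; h) = -tau * log sum_c exp(h_c(q) / tau)
    return -tau * torch.logsumexp(logits / tau, dim=-1)

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    # Closed-form KL( N(mu_p, sigma_p^2) || N(mu_q, sigma_q^2) )
    return np.log(sigma_q / sigma_p) + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2) - 0.5

# Logits of matched val instances produced by two models (placeholders here).
logits_a = torch.randn(1000, 1203)
logits_b = torch.randn(1000, 1203)
e_a = free_energy(logits_a).numpy()
e_b = free_energy(logits_b).numpy()

# Fit a Gaussian to each set of energies and compare the two distributions.
kl = gaussian_kl(e_a.mean(), e_a.std(), e_b.mean(), e_b.std())
print(f"KL divergence between energy distributions: {kl:.3f}")
```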

Moreover, we also analyze the role of generative data from a metric perspective. We randomly select up to five images per category to form a minitrain set and then conduct inference using $\theta_{\text{train}}$ and $\theta_{\text{gen}}$. Then, we define a metric, termed train-val gap (TVG), which is formulated as follows:

\text{TVG}_w^k = \text{AP}_w^k(\textit{minitrain}) - \text{AP}_w^k(\textit{val}).   (2)

Here, $\text{TVG}_w^k$ is the train-val gap of category group $w$ on task $k$, $\text{AP}_w^k(d)$ is the AP [8] of group $w$ on task $k$ obtained on dataset $d$, $w \in \{f, c, r\}$, with $f$, $c$, $r$ standing for frequent, common, and rare [8], respectively, and $k \in \{box, mask\}$, with $box$ and $mask$ referring to object detection and instance segmentation. The train-val gap serves as a measure of the disparity in the model's performance between the training and validation sets. A larger gap indicates a higher degree of overfitting on the training set. The results, presented in Table 1, show that the metrics for the rare categories consistently surpass those of the frequent and common ones. This observation suggests that the model tends to overfit more on the rare categories that have fewer examples. With the augmentation of generative data, all TVGs of $\theta_{\text{gen}}$ are lower than those of $\theta_{\text{train}}$, showing that adding generative data can effectively alleviate overfitting on the training data.

Table 1: Train-val gap (TVG) of models trained on LVIS only and on LVIS with generative data.

| Data Source | TVG_f^box | TVG_f^mask | TVG_c^box | TVG_c^mask | TVG_r^box | TVG_r^mask |
|---|---|---|---|---|---|---|
| LVIS | 13.16 | 10.71 | 21.80 | 16.80 | 39.59 | 31.68 |
| LVIS + Gen | 9.64 | 8.38 | 15.64 | 12.69 | 29.39 | 22.49 |

What types of generative data are beneficial for improving model performance? We argue that there are also discrepancies between the distribution of the generative data and the real-world data distribution. If these discrepancies are not properly addressed, the full potential of the generative model cannot be attained.

We divide the generative data into 'frequent', 'common', and 'rare' [8] groups, and train three models using each group of data as the instance paste source. The inference results are shown in Table 2. We find that the metrics on the corresponding category subset are lowest when training with only one group of data. We consider model performance to be primarily influenced by the quality and diversity of data. Given that the quality of generative data is relatively consistent, we contend that insufficient diversity in the data can mislead the distribution the model learns, whereas a diverse set of data gives the model a more comprehensive understanding. Therefore, we believe that using diverse generative data enables models to better adapt to these discrepancies, improving model performance.

Table 2: Results of training with generative data from a single category group or all groups as the paste source.

| # Gen Category | AP_f^box | AP_f^mask | AP_c^box | AP_c^mask | AP_r^box | AP_r^mask |
|---|---|---|---|---|---|---|
| none | 50.14 | 43.84 | 47.54 | 43.12 | 41.39 | 36.83 |
| f | 50.81 | 44.24 | 47.96 | 43.51 | 41.51 | 37.92 |
| c | 51.86 | 45.22 | 47.69 | 42.79 | 42.32 | 37.30 |
| r | 51.46 | 44.90 | 48.24 | 43.51 | 32.67 | 29.04 |
| all | 52.10 | 45.45 | 50.29 | 44.87 | 46.03 | 41.86 |

3.2 Generative Data Diversity Enhancement

Through the analysis above, we find that the diversity of generative data is crucial for improving model performance. Therefore, we design a series of strategies to enhance data diversity at three levels: category diversity, prompt diversity, and generative model diversity, which help the model to better adapt to the distribution discrepancy between generative data and real data.

Category diversity. The above experiments show that including data from partial categories results in lower performance than incorporating data from all categories. We believe that, akin to human learning, the model can learn features beneficial to the current category from some other categories. Therefore, we consider increasing the diversity of data by adding extra categories. First, we select extra categories besides LVIS from the ImageNet-1K [23] categories based on WordNet [5] similarity. Then, the generative data from LVIS and extra categories are mixed for training, requiring the model to learn to distinguish all categories. Finally, we truncate the parameters in the classification head corresponding to the extra categories during inference, ensuring that the inferred category range remains within LVIS.
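A minimal sketch of the inference-time truncation is given below. It assumes a linear classification head whose first 1,203 output slots correspond to LVIS categories and whose remaining slots correspond to the extra categories; the feature dimension and the number of extra categories are illustrative, not the exact values used in training.

```python
import torch
import torch.nn as nn

NUM_LVIS, NUM_EXTRA, FEAT_DIM = 1203, 250, 1024   # illustrative sizes

# Head trained to distinguish LVIS categories plus the extra ImageNet-1K categories.
full_head = nn.Linear(FEAT_DIM, NUM_LVIS + NUM_EXTRA)

# At inference, keep only the LVIS rows so the model can never predict an extra category.
lvis_head = nn.Linear(FEAT_DIM, NUM_LVIS)
with torch.no_grad():
    lvis_head.weight.copy_(full_head.weight[:NUM_LVIS])
    lvis_head.bias.copy_(full_head.bias[:NUM_LVIS])
```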

Prompt diversity. The output images of the text2image generative model typically rely on the input prompts. Existing methods [34] usually generate prompts with manually designed templates, such as "a photo of a single {category_name}." When the data scale is small, designing prompts manually is convenient and fast. However, when generating data at a large scale, it is challenging to scale the number of manually designed prompts correspondingly. Intuitively, it is essential to diversify the prompts to enhance data diversity. To easily generate a large number of prompts, we use a large language model, like ChatGPT, to enhance prompt diversity. We have three requirements for the large language model: 1) each prompt should be as different as possible; 2) each prompt should ensure that there is only one object in the image; 3) prompts should describe different attributes of the category. For example, if the category is food, prompts should cover attributes like color, brand, size, freshness, packaging type, packaging color, etc. Limited by the inference cost of ChatGPT, we use the manually designed prompts as the base and only use ChatGPT to enhance the prompt diversity for a subset of categories. Moreover, we also leverage the controllability of the generative model, adding the constraint "in a white background" after each prompt to make the background of output images simple and clear, which reduces the difficulty of mask annotation.

Generative model diversity. The quality and style of output images vary across generative models, and the data distribution learned solely from one generative model's data is limited. Therefore, we introduce multiple generative models to enhance the diversity of data, allowing the model to learn from wider data distributions. We select two commonly used generative models, Stable Diffusion [22] (SD) and DeepFloyd-IF [24] (IF). We use Stable Diffusion V1.5, generating images with a resolution of 512×512, and use images output from Stage II of IF with a resolution of 256×256. For each category in LVIS, we generate 1k images with each of the two models separately. Examples from different generative models are shown in Figure 2.


3.3 Generative Pipeline

The generative pipeline of DiverGen is built upon X-Paste [34]. It can be divided into four stages: instance generation, instance annotation, instance filtration, and instance augmentation. The overview of DiverGen is illustrated in Figure 3.

Instance generation. Instance generation is a crucial stage for enhancing data diversity. In this stage, we employ our proposed Generative Data Diversity Enhancement (GDDE), as mentioned in Sec. 3.2. In category diversity enhancement, we utilize the category information from LVIS [8] categories and extra categories selected from ImageNet-1K [23]. In prompt diversity enhancement, we utilize manually designed prompts and ChatGPT-designed prompts to enhance prompt diversity. In model diversity enhancement, we employ two generative models, SD and IF.

Instance annotation. We employ SAM [12] as our annotation model. SAM is a class-agnostic promptable segmenter that outputs corresponding masks based on input prompts, such as points, boxes, etc. In instance generation, by leveraging the controllability of the generative model, the generative images have two characteristics: 1) each image predominantly contains only one foreground object; 2) the background of the images is relatively simple. Therefore, we introduce a SAM-background (SAM-bg) annotation strategy. SAM-bg takes the four corner points of an image as input prompts for SAM to obtain the background mask, then inverts the background mask as the mask of the foreground object. Due to the conditional constraints during the instance generation stage, this strategy is simple but effective in producing high-quality masks.
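A minimal sketch of SAM-bg with the official segment-anything API is shown below; the checkpoint path is a placeholder and error handling is omitted.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

def sam_background_mask(image: np.ndarray, predictor: SamPredictor) -> np.ndarray:
    """Prompt SAM with the four image corners, then invert the background mask."""
    h, w = image.shape[:2]
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], dtype=np.float32)
    labels = np.ones(4, dtype=np.int32)  # corner points mark the background region to segment
    predictor.set_image(image)
    masks, _, _ = predictor.predict(point_coords=corners, point_labels=labels,
                                    multimask_output=False)
    return ~masks[0]  # foreground mask = inverted background mask

# sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")  # placeholder path
# predictor = SamPredictor(sam)
# foreground = sam_background_mask(generated_image, predictor)
```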

Instance filtration. In the instance filtration stage, X-Paste utilizes the CLIP score (similarity between images and text) as the metric for image filtering. However, we observe that the CLIP score is ineffective in filtering low-quality images. In contrast to the similarity between images and text, we think the similarity between images can better filter out low-quality images. Therefore, we propose a new metric called CLIP inter-similarity. We use the image encoder of CLIP [21] to extract image embeddings for objects in the training set and for generative images, then calculate the similarity between them. If the similarity is too low, it indicates a significant disparity between the generative and real images, suggesting that it is probably a poor-quality image and needs to be filtered.

Instance augmentation. We use the augmentation strategy proposed by X-Paste [34], but do not use data retrieved from the network or instances from the LVIS [8] training set as the paste data source; we only use the generative data as the paste data source.

4 Experiments

4.1 Settings

Datasets. We choose LVIS [8] for our experiments. LVIS is a large-scale instance segmentation dataset, containing 164k images with approximately two million high-quality annotations for instance segmentation and object detection. The LVIS dataset uses images from the COCO 2017 [15] dataset, but redefines the train/val/test splits, with around 100k images in the training set and around 20k images in the validation set. The annotations in LVIS cover 1,203 categories with a typical long-tailed distribution, so LVIS further divides the categories into frequent, common, and rare based on the frequency of each category in the dataset. We use the official LVIS training split and validation split.

Evaluation metrics. The evaluation metrics are LVIS box average precision (AP^box) and mask average precision (AP^mask). We also report the average precision of rare categories (AP_r^box and AP_r^mask). The maximum number of detections per image is 300.

Implementation details. We use CenterNet2 [35] as the baseline and Swin-L [16] as the backbone. In the training process, we initialize the parameters with the pre-trained Swin-L weights provided by Liu et al. [16]. The training size is 896 and the batch size is 16. The maximum number of training iterations is 180,000, with an initial learning rate of 0.0001. We use the instance paste strategy provided by Zhao et al. [34].

4.2 Main Results

Data diversity is more important than quantity. To investigate the impact of different scales of generative data, we use generative data of varying scales as paste data sources. We construct three datasets using only DeepFloyd-IF [24] with manually designed prompts, all containing the original LVIS 1,203 categories, but with per-category quantities of 0.25k, 0.5k, and 1k, resulting in total dataset scales of 300k, 600k, and 1,200k. As shown in Table 3, we find that using generative data improves model performance compared to the baseline. However, as the dataset scale increases, the model performance initially improves but then declines: the model performance using 1,200k data is lower than that using 600k data. Due to the limited number of manually designed prompts, the generative model produces similar data, as shown in Figure 4(a). Consequently, the model cannot gain benefits from more data. However, when using our proposed Generative Data Diversity Enhancement (GDDE), due to the increased data diversity, the model trained with 1,200k images achieves better results than the one using 600k images, with an improvement of 1.21 box AP and 1.04 mask AP. Moreover, at the same data scale of 600k, the mask AP increases by 0.64 AP and the box AP increases by 0.55 AP when using GDDE compared to not using it. The results demonstrate that data diversity is more important than quantity. When the scale of data is small, increasing the quantity of data can improve model performance, which we consider an indirect way of increasing data diversity. However, this simplistic approach of solely increasing quantity to increase diversity has an upper limit. When it reaches this limit, explicit data diversity enhancement strategies become necessary to maintain the trend of model performance improvement.

Table 3: Results with different scales of generative data, with and without GDDE.

| # Gen Data | GDDE | AP^box | AP^mask | AP_r^box | AP_r^mask |
|---|---|---|---|---|---|
| 0 | | 47.50 | 42.32 | 41.39 | 36.83 |
| 300k | | 49.65 | 44.01 | 45.68 | 41.11 |
| 600k | | 50.03 | 44.44 | 47.15 | 41.96 |
| 1200k | | 49.44 | 43.75 | 42.96 | 37.91 |
| 600k | ✓ | 50.67 | 44.99 | 48.52 | 43.63 |
| 1200k | ✓ | 51.24 | 45.48 | 50.07 | 45.85 |


Comparison with previous methods. We compare DiverGen with previous data-augmentation-related methods in Table 4. Compared to the baseline CenterNet2 [35], our method improves significantly, increasing box AP by +3.7 and mask AP by +3.2. Regarding rare categories, our method surpasses the baseline by +8.7 in box AP and +9.0 in mask AP. Compared to the previous strong model X-Paste [34], we outperform it by +1.1 in box AP and +1.1 in mask AP for all categories, and +1.9 in box AP and +2.5 in mask AP for rare categories. It is worth mentioning that X-Paste utilizes both generative data and web-retrieved data as paste data sources during training, while our method exclusively uses generative data as the paste data source. We achieve this by designing diversity enhancement strategies, further unlocking the potential of generative models.

Table 4: Comparison with previous methods on LVIS.

| Method | Backbone | AP^box | AP^mask | AP_r^box | AP_r^mask |
|---|---|---|---|---|---|
| Copy-Paste [7] | EfficientNet-B7 | 41.6 | 38.1 | - | 32.1 |
| Tan et al. [26] | ResNeSt-269 | - | 41.5 | - | 30.0 |
| Detic [36] | Swin-B | 46.9 | 41.7 | 45.9 | 41.7 |
| CenterNet2 [35] | Swin-L | 47.5 | 42.3 | 41.4 | 36.8 |
| X-Paste [34] | Swin-L | 50.1 | 44.4 | 48.2 | 43.3 |
| DiverGen (Ours) | Swin-L | 51.2 | 45.5 | 50.1 | 45.8 |
| | | (+1.1) | (+1.1) | (+1.9) | (+2.5) |

4.3 Ablation Studies

We analyze the effects of the proposed strategies in DiverGen through a series of ablation studies using the Swin-L [16] backbone.

Effect of category diversity. We select 50, 250, and 566 extra categories from ImageNet-1K [23] and generate 0.5k images for each category, which are added to the baseline. The baseline only uses the 1,203 categories of LVIS [8] to generate data. We show the results in Table 5. Increasing the number of extra categories initially improves and then degrades model performance, peaking at 250 extra categories. The trend suggests that using extra categories to enhance category diversity can improve the model's generalization capabilities, but too many extra categories may mislead the model, leading to a decrease in performance.

Table 5: Effect of the number of extra categories.

| # Extra Category | AP^box | AP^mask | AP_r^box | AP_r^mask |
|---|---|---|---|---|
| 0 | 49.44 | 43.75 | 42.96 | 37.91 |
| 50 | 49.92 | 44.17 | 44.94 | 39.86 |
| 250 | 50.59 | 44.77 | 47.99 | 42.91 |
| 566 | 50.35 | 44.63 | 47.68 | 42.53 |

Effect of prompt diversity. We select a subset of categories and use ChatGPT to generate 32 and 128 prompts for each category, with each prompt being used to generate 8 and 2 images, respectively, ensuring that the image count for each category is 0.25k. The baseline uses only one prompt per category to generate 0.25k images. The regenerated images replace those of the corresponding categories in the baseline to ensure that the final data scale is consistent. The results are presented in Table 6. With the increase in prompt diversity, there is a continuous improvement in model performance, indicating that prompt diversity is indeed beneficial for enhancing model performance.

Table 6: Effect of the number of prompts per category.

| # Prompt | AP^box | AP^mask | AP_r^box | AP_r^mask |
|---|---|---|---|---|
| 1 | 49.65 | 44.01 | 45.68 | 41.11 |
| 32 | 50.03 | 44.39 | 45.83 | 41.32 |
| 128 | 50.27 | 44.50 | 46.49 | 41.25 |

Effect of generative model diversity. We choose two commonly used generative models, Stable Diffusion [22] (SD) and DeepFloyd-IF [24] (IF). We generate 1k images per category with each generative model, totaling 1,200k each. When using a mixed dataset (SD + IF), we take 600k images from SD and 600k from IF to ensure that the total dataset scale is consistent. The baseline does not use any generative data (none). As shown in Table 7, using data generated by either SD or IF alone can improve performance, and further mixing the generative data of both leads to significant performance gains. This demonstrates that increasing generative model diversity is beneficial for improving model performance.

Table 7: Effect of generative model diversity.

| Model | AP^box | AP^mask | AP_r^box | AP_r^mask |
|---|---|---|---|---|
| none | 47.50 | 42.32 | 41.39 | 36.83 |
| SD [22] | 48.13 | 42.82 | 43.68 | 39.15 |
| IF [24] | 49.44 | 43.75 | 42.96 | 37.91 |
| SD + IF | 50.78 | 45.27 | 48.94 | 44.35 |

Effect of annotation strategy. X-Paste [34] uses four models (U2Net [20], SelfReformer [31], UFO [25] and CLIPseg [17]) to generate masks and selects the one with the highest CLIP score. We compare our proposed annotation strategy (SAM-bg) to that proposed by X-Paste (max CLIP). In Table 8, SAM-bg outperforms the max CLIP strategy across all metrics, indicating that our proposed strategy can produce better annotations, improving model performance. As shown in Figure 5, SAM-bg unlocks the potential capability of SAM, obtaining precise and refined masks.


Table 8: Comparison of annotation strategies.

| Strategy | AP^box | AP^mask | AP_r^box | AP_r^mask |
|---|---|---|---|---|
| max CLIP [34] | 49.10 | 43.45 | 42.75 | 37.55 |
| SAM-bg | 49.44 | 43.75 | 42.96 | 37.91 |

Effect of CLIP inter-similarity. We compare our proposed CLIP inter-similarity to the CLIP score [34]. The results are shown in Table 9. The performance with data filtered by CLIP inter-similarity is higher than that with the CLIP score, demonstrating that CLIP inter-similarity can filter low-quality images more effectively.

Table 9: Comparison of filtration strategies.

| Strategy | AP^box | AP^mask | AP_r^box | AP_r^mask |
|---|---|---|---|---|
| none | 49.44 | 43.75 | 42.96 | 37.91 |
| CLIP score [34] | 49.84 | 44.27 | 44.83 | 40.82 |
| CLIP inter-similarity | 50.07 | 44.44 | 45.53 | 41.16 |

5 Conclusions

In this paper, we explain the role of generative data augmentation from the perspective of data distribution discrepancies and find that generative data can expand the data distribution that the model can learn, mitigating overfitting on the training set. Furthermore, we find that the diversity of generative data is crucial for improving model performance. Therefore, we design an efficient data diversity enhancement strategy, Generative Data Diversity Enhancement, which increases data diversity from the aspects of category diversity, prompt diversity, and generative model diversity. Finally, we optimize the data generation pipeline by designing the annotation strategy SAM-background to obtain higher-quality annotations and by introducing the metric CLIP inter-similarity to filter data, which further improves the quality of the generative dataset. Through these strategies, our proposed method significantly outperforms existing strong models. We hope DiverGen can provide new insights and inspiration for future research on the effectiveness and efficiency of generative data augmentation.

Acknowledgments

This work was in part supported by the National Key R&D Program of China (No. 2022ZD0118700).

References

  • Arthur and Vassilvitskii [2007] David Arthur and Sergei Vassilvitskii. k-means++: The advantages of careful seeding. In Proc. Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027–1035, 2007.
  • Carion et al. [2020] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Proc. Eur. Conf. Comp. Vis. Springer, 2020.
  • Chen et al. [2023] Kai Chen, Enze Xie, Zhe Chen, Lanqing Hong, Zhenguo Li, and Dit-Yan Yeung. Integrating geometric control into text-to-image diffusion models for high-quality detection data generation via text prompt. arXiv: Comp. Res. Repository, 2023.
  • Cheng et al. [2022] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 1290–1299, 2022.
  • Fellbaum [2010] Christiane Fellbaum. WordNet. In Theory and Applications of Ontology: Computer Applications, pages 231–243. Springer, 2010.
  • Feng et al. [2023] Chun-Mei Feng, Kai Yu, Yong Liu, Salman Khan, and Wangmeng Zuo. Diverse data augmentation with diffusions for effective test-time prompt tuning. In Proc. IEEE Int. Conf. Comp. Vis., pages 2704–2714, 2023.
  • Ghiasi et al. [2021] Golnaz Ghiasi, Yin Cui, Aravind Srinivas, Rui Qian, Tsung-Yi Lin, Ekin D. Cubuk, Quoc V. Le, and Barret Zoph. Simple copy-paste is a strong data augmentation method for instance segmentation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2918–2928, 2021.
  • Gupta et al. [2019] Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A dataset for large vocabulary instance segmentation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5356–5364, 2019.
  • He et al. [2017] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proc. IEEE Int. Conf. Comp. Vis., pages 2961–2969, 2017.
  • Joseph et al. [2021] K. J. Joseph, Salman Khan, Fahad Shahbaz Khan, and Vineeth N. Balasubramanian. Towards open world object detection. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5830–5840, 2021.
  • Joyce [2011] James M. Joyce. Kullback-Leibler divergence. In International Encyclopedia of Statistical Science, pages 720–722. Springer, 2011.
  • Kirillov et al. [2023] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander Berg, Wan-Yen Lo, et al. Segment anything. In Proc. IEEE Int. Conf. Comp. Vis., pages 4015–4026, 2023.
  • Li et al. [2022] Daiqing Li, Huan Ling, Seung Wook Kim, Karsten Kreis, Sanja Fidler, and Antonio Torralba. BigDatasetGAN: Synthesizing ImageNet with pixel-wise annotations. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 21330–21340, 2022.
  • Li et al. [2023] Ziyi Li, Qinye Zhou, Xiaoyun Zhang, Ya Zhang, Yanfeng Wang, and Weidi Xie. Open-vocabulary object segmentation with diffusion models. In Proc. IEEE Int. Conf. Comp. Vis., pages 7667–7676, 2023.
  • Lin et al. [2014] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proc. Eur. Conf. Comp. Vis., pages 740–755. Springer, 2014.
  • Liu et al. [2021] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proc. IEEE Int. Conf. Comp. Vis., pages 10012–10022, 2021.
  • Lüddecke and Ecker [2022] Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 7086–7096, 2022.
  • McInnes et al. [2018] Leland McInnes, John Healy, and James Melville. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv: Comp. Res. Repository, 2018.
  • Oquab et al. [2023] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. DINOv2: Learning robust visual features without supervision. Trans. Mach. Learn. Research, 2023.
  • Qin et al. [2020] Xuebin Qin, Zichen Zhang, Chenyang Huang, Masood Dehghan, Osmar R. Zaiane, and Martin Jagersand. U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognition, 106:107404, 2020.
  • Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In Proc. Int. Conf. Mach. Learn., pages 8748–8763. PMLR, 2021.
  • Rombach et al. [2022] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 10684–10695, 2022.
  • Russakovsky et al. [2015] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vision, 115:211–252, 2015.
  • Shonenkov et al. [2023] Alex Shonenkov, Misha Konstantinov, Daria Bakshandaeva, Christoph Schuhmann, Ksenia Ivanova, and Nadiia Klokova. DeepFloyd-IF, 2023.
  • Su et al. [2023] Yukun Su, Jingliang Deng, Ruizhou Sun, Guosheng Lin, Hanjing Su, and Qingyao Wu. A unified transformer framework for group-based segmentation: Co-segmentation, co-saliency detection and video salient object detection. IEEE Trans. Multimedia, 2023.
  • Tan et al. [2020] Jingru Tan, Gang Zhang, Hanming Deng, Changbao Wang, Lewei Lu, Quanquan Li, and Jifeng Dai. 1st place solution of LVIS challenge 2020: A good box is not a guarantee of a good mask. arXiv: Comp. Res. Repository, 2020.
  • Wu et al. [2023a] Weijia Wu, Yuzhong Zhao, Hao Chen, Yuchao Gu, Rui Zhao, Yefei He, Hong Zhou, Mike Zheng Shou, and Chunhua Shen. DatasetDM: Synthesizing data with perception annotations using diffusion models. Proc. Advances in Neural Inf. Process. Syst., 2023a.
  • Wu et al. [2023b] Weijia Wu, Yuzhong Zhao, Mike Zheng Shou, Hong Zhou, and Chunhua Shen. DiffuMask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models. Proc. IEEE Int. Conf. Comp. Vis., 2023b.
  • Xie et al. [2023] Jiahao Xie, Wei Li, Xiangtai Li, Ziwei Liu, Yew Soon Ong, and Chen Change Loy. MosaicFusion: Diffusion models as data augmenters for large vocabulary instance segmentation. arXiv: Comp. Res. Repository, 2023.
  • Yang et al. [2023] Lihe Yang, Xiaogang Xu, Bingyi Kang, Yinghuan Shi, and Hengshuang Zhao. FreeMask: Synthetic images with dense annotations make stronger segmentation models. Proc. Advances in Neural Inf. Process. Syst., 2023.
  • Yun and Lin [2022] Yi Ke Yun and Weisi Lin. SelfReformer: Self-refined network with transformer for salient object detection. arXiv: Comp. Res. Repository, 2022.
  • Zhang et al. [2023] Renrui Zhang, Xiangfei Hu, Bohao Li, Siyuan Huang, Hanqiu Deng, Yu Qiao, Peng Gao, and Hongsheng Li. Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 15211–15222, 2023.
  • Zhang et al. [2021] Yuxuan Zhang, Huan Ling, Jun Gao, Kangxue Yin, Jean-Francois Lafleche, Adela Barriuso, Antonio Torralba, and Sanja Fidler. DatasetGAN: Efficient labeled data factory with minimal human effort. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 10145–10155, 2021.
  • Zhao et al. [2023] Hanqing Zhao, Dianmo Sheng, Jianmin Bao, Dongdong Chen, Dong Chen, Fang Wen, Lu Yuan, Ce Liu, Wenbo Zhou, Qi Chu, Weiming Zhang, and Nenghai Yu. X-Paste: Revisiting scalable copy-paste for instance segmentation using CLIP and StableDiffusion. Proc. Int. Conf. Mach. Learn., 2023.
  • Zhou et al. [2021] Xingyi Zhou, Vladlen Koltun, and Philipp Krähenbühl. Probabilistic two-stage detection. arXiv: Comp. Res. Repository, 2021.
  • Zhou et al. [2022] Xingyi Zhou, Rohit Girdhar, Armand Joulin, Philipp Krähenbühl, and Ishan Misra. Detecting twenty-thousand classes using image-level supervision. In Proc. Eur. Conf. Comp. Vis., pages 350–368. Springer, 2022.

Appendix

Appendix A Implementation Details

A.1 Data Distribution Analysis

We use the image encoder of CLIP [21] ViT-L/14 to extract image embeddings. For objects in the LVIS [8] dataset, we extract embeddings from the object regions instead of the whole images. First, we blur the regions outside the object masks using a normalized box filter with a kernel size of (10, 10). Then, to prevent objects from being too small, we pad around the object boxes to ensure the minimum width of the padded boxes is 80 pixels, and crop the images according to the padded boxes. Finally, the cropped images are fed into the CLIP image encoder to extract embeddings. For generative images, the whole images are fed into the CLIP image encoder to extract embeddings. At last, we use UMAP [18] to reduce dimensions for visualization. $\tau$ is set to 0.9 in the energy function.
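The extraction step can be sketched as follows, assuming the Hugging Face transformers CLIP ViT-L/14 weights and the umap-learn package; the exact preprocessing in the released code may differ.

```python
import cv2
import numpy as np
import torch
import umap
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

def object_embedding(image, mask, box, min_width=80):
    """Blur outside the object mask, pad the box to a minimum width, crop, and encode with CLIP."""
    blurred = cv2.blur(image, (10, 10))                    # normalized box filter, kernel (10, 10)
    fused = np.where(mask[..., None] > 0, image, blurred)  # object stays sharp, background is blurred
    x1, y1, x2, y2 = box
    pad = max(0, (min_width - (x2 - x1)) // 2)             # enforce a minimum crop width of 80 px
    x1, x2 = max(0, x1 - pad), min(image.shape[1], x2 + pad)
    crop = fused[y1:y2, x1:x2]
    inputs = processor(images=crop, return_tensors="pt")
    with torch.no_grad():
        return clip.get_image_features(**inputs)[0].numpy()

# embeddings = np.stack([object_embedding(img, m, b) for img, m, b in lvis_objects])
# coords_2d = umap.UMAP(n_components=2).fit_transform(embeddings)  # layout used for Figure 1
```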

To investigate the potential impact of noise in the rare classes on the TVG metrics, we conduct additional experiments to demonstrate the validity of TVG. We randomly take five different models each for the LVIS and LVIS + Gen data sources, compute the mean (μ) and standard deviation (σ) of their TVGs, and calculate the 3-sigma range (μ + 3σ and μ − 3σ), which we think represents the maximum fluctuation that potential noise could induce. As shown in Table 10, we find that: 1) The TVGs of LVIS all exceed the 3-sigma upper bound of LVIS + Gen, while the TVGs of LVIS + Gen are all below the 3-sigma lower bound of LVIS, and there is no overlap between the 3-sigma ranges of LVIS and LVIS + Gen; 2) For both LVIS + Gen and LVIS, there is no overlap between the 3-sigma ranges of different groups, e.g., frequent and common, or common and rare. These two findings suggest that even in the presence of potential noise, the results cannot be attributed to such fluctuations. Therefore, we think our proposed TVG metrics are reasonable and can support the conclusions. A small sketch of this check is given after the tables below.

Table 10 (part 1): statistics of TVG over five LVIS + Gen models, with the LVIS TVG shown for comparison.

| | TVG_f^box | TVG_f^mask | TVG_c^box | TVG_c^mask | TVG_r^box | TVG_r^mask |
|---|---|---|---|---|---|---|
| μ | 9.98 | 8.60 | 16.59 | 13.36 | 30.23 | 24.22 |
| σ | 0.24 | 0.18 | 0.56 | 0.44 | 1.12 | 1.18 |
| μ + 3σ | 10.70 | 9.15 | 18.26 | 14.69 | 33.58 | 27.77 |
| μ − 3σ | 9.25 | 8.06 | 14.91 | 12.04 | 26.88 | 20.68 |
| LVIS | 13.16 | 10.71 | 21.80 | 16.80 | 39.59 | 31.68 |

Table 10 (part 2): statistics of TVG over five LVIS models, with the LVIS + Gen TVG shown for comparison.

| | TVG_f^box | TVG_f^mask | TVG_c^box | TVG_c^mask | TVG_r^box | TVG_r^mask |
|---|---|---|---|---|---|---|
| μ | 13.95 | 11.40 | 22.53 | 17.16 | 43.46 | 35.10 |
| σ | 0.41 | 0.35 | 0.43 | 0.33 | 1.98 | 1.75 |
| μ + 3σ | 15.17 | 12.45 | 23.81 | 18.14 | 49.39 | 40.37 |
| μ − 3σ | 12.73 | 10.34 | 21.25 | 16.17 | 37.53 | 29.84 |
| LVIS + Gen | 9.64 | 8.38 | 15.64 | 12.69 | 29.39 | 22.49 |
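The 3-sigma band itself is simple to reproduce; a small numpy sketch over the per-run TVG values (placeholders) is shown below.

```python
import numpy as np

def three_sigma_band(tvg_runs: np.ndarray):
    """Mean, std, and mu +/- 3*sigma over TVGs from independently trained models."""
    mu = tvg_runs.mean(axis=0)
    sigma = tvg_runs.std(axis=0)
    return mu, sigma, mu + 3 * sigma, mu - 3 * sigma

# tvg_runs: shape (5, 6) = 5 runs x {f, c, r} x {box, mask} metrics
# mu, sigma, upper, lower = three_sigma_band(tvg_runs)
```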

A.2 Category Diversity

We compute the path similarity of WordNet [5] synsets between the 1,000 categories in ImageNet-1K [23] and the 1,203 categories in LVIS [8]. For each of the 1,000 categories in ImageNet-1K, if the highest similarity for that category is below 0.4, we consider the category to be non-existent in LVIS and designate it as an extra category. Based on this method, 566 categories can serve as extra categories. The names of these 566 categories are presented in Table LABEL:tab:imgnet_category.
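A sketch of this selection with NLTK's WordNet interface is given below; real ImageNet and LVIS categories carry synset identifiers, so matching by plain category name here is a simplification.

```python
from nltk.corpus import wordnet as wn

def max_path_similarity(name_a: str, name_b: str) -> float:
    """Highest WordNet path similarity over all synset pairs of two category names."""
    best = 0.0
    for sa in wn.synsets(name_a.replace(" ", "_")):
        for sb in wn.synsets(name_b.replace(" ", "_")):
            best = max(best, sa.path_similarity(sb) or 0.0)
    return best

def select_extra_categories(imagenet_names, lvis_names, threshold=0.4):
    """An ImageNet-1K class is an extra category if its best LVIS match is below the threshold."""
    return [name for name in imagenet_names
            if max(max_path_similarity(name, lvis) for lvis in lvis_names) < threshold]
```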

A.3 Prompt Diversity

Limited by the inference cost of ChatGPT, we use the manually designed prompts as the base and only use ChatGPT to enhance the prompt diversity for a subset of categories. For manually designed prompts, the template is "a photo of a single {category_name}, {category_def}, in a white background". category_name and category_def are from the LVIS [8] category information. For ChatGPT-designed prompts, we select a subset of categories and use ChatGPT to enhance prompt diversity for these categories. The names of the 144 categories in this subset are shown in Table LABEL:tab:chatgpt_category. We use GPT-3.5-turbo and have three requirements for ChatGPT: 1) each prompt should be as different as possible; 2) each prompt should ensure that there is only one object in the image; 3) prompts should describe different attributes of the category. Therefore, the input prompts to ChatGPT contain these three requirements. Examples of input prompts and the corresponding responses from ChatGPT are illustrated in Figure 8. To conserve output token length, there is no strict requirement for ChatGPT-designed prompts to end with "in a white background"; this constraint is added when generating images.
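A sketch of the prompt-generation request with the OpenAI Python client is shown below; the request wording is an illustrative paraphrase of the three requirements above, not the exact prompt used in the paper.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REQUEST = (
    "Write {n} prompts for a text-to-image model for the category '{name}' ({definition}). "
    "1) Each prompt should be as different as possible. "
    "2) Each prompt must describe exactly one object. "
    "3) The prompts should cover different attributes of the category, "
    "such as color, brand, size, freshness, or packaging. Return one prompt per line."
)

def chatgpt_prompts(name: str, definition: str, n: int = 32) -> list:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": REQUEST.format(n=n, name=name, definition=definition)}],
    )
    lines = [p.strip() for p in response.choices[0].message.content.splitlines() if p.strip()]
    # The background constraint is appended here rather than asked of ChatGPT, to save output tokens.
    return [f"{p}, in a white background" for p in lines]
```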

A.4 Generative Model Diversity

We select two commonly used generative models, Stable Diffusion [22] and DeepFloyd-IF [24]. For Stable Diffusion, we use Stable Diffusion V1.5 with 50 inference steps and a guidance scale of 7.5; all other parameters are set to their defaults. For DeepFloyd-IF, we use the output images from stage II, with stage I using the weight IF-I-XL-v1.0 and stage II using IF-II-L-v1.0; all parameters are set to their defaults.
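For illustration, a minimal sketch of generating one image with these Stable Diffusion settings through Hugging Face diffusers; the hub id runwayml/stable-diffusion-v1-5 and the diffusers-based workflow are assumptions, and the DeepFloyd-IF stage I/II weights can be driven analogously through their two-stage diffusers pipelines:

```python
# Minimal sketch: Stable Diffusion V1.5 generation with 50 steps and guidance scale 7.5.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a single broccoli, ..., in a white background"  # illustrative prompt
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sd_broccoli.png")
```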

A.5 Instance Annotation

We employ SAM [12] ViT-H as the annotation model. We explore two annotation strategies, namely SAM-foreground and SAM-background. SAM-foreground uses points sampled from foreground objects as input prompts. Specifically, we first obtain the approximate region of the foreground object from the cross-attention map of the generative model using a threshold. Then, we use k-means++ [1] clustering to transform dense points within the foreground region into cluster centers. Next, we randomly select some points from the cluster centers as inputs to SAM. We use various metrics to evaluate the quality of the output masks and select the mask with the highest score as the final mask. However, although SAM-foreground is intuitive, it has some limitations. First, cross-attention maps of different categories require different thresholds to obtain foreground regions, making it cumbersome to choose the optimal threshold for each category. Second, the number of points required for SAM to output a mask varies across foreground objects: complex objects need more points than simple ones, making it challenging to control the number of points. Additionally, the position of the points significantly influences the quality of SAM’s output mask; if the points are poorly placed, this strategy is prone to generating incomplete masks.

Therefore, we discard SAM-foreground and propose a simpler and more effective annotation strategy, SAM-background. Because we leverage the controllability of the generative model during instance generation, the generative images have two characteristics: 1) each image predominantly contains only one foreground object; 2) the background of the images is relatively simple. SAM-background directly uses the four corner points of the image as input prompts for SAM to obtain the background mask, then inverts the background mask to obtain the mask of the foreground object. The illustrations of point selection for SAM-foreground and SAM-background are shown in Figure 6. By using SAM-background for annotation, more refined masks can be obtained. Examples of annotations from SAM-foreground and SAM-background are shown in Figure 7.
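For illustration, a minimal sketch of SAM-background with the segment-anything package; the checkpoint path and keeping the highest-scoring mask are assumptions, and the essential step is prompting SAM with the four image corners and inverting the resulting background mask:

```python
# Minimal sketch: SAM-background annotation via the segment-anything package.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # assumed checkpoint
predictor = SamPredictor(sam.to("cuda"))

def sam_background_mask(image):                       # image: HxWx3 uint8 RGB array
    h, w = image.shape[:2]
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], dtype=np.float32)
    labels = np.ones(4, dtype=np.int32)               # corners prompt SAM to segment the background
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(point_coords=corners, point_labels=labels,
                                         multimask_output=True)
    background = masks[np.argmax(scores)]              # keep the highest-scoring background mask
    return ~background                                 # foreground = inverted background
```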


To further validate the effectiveness of SAM-background, we manually annotate masks for some images as ground truth. We apply both strategies to annotate these images and calculate the mIoU between the resulting masks and the ground truth. The results in Table 11 indicate that SAM-background achieves better annotation quality.

| Strategy | mIoU |
|---|---|
| SAM-foreground | 0.8163 |
| SAM-background | 0.9418 |
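For reference, a minimal sketch of the mask mIoU used in this comparison (standard intersection-over-union, averaged over images):

```python
# Minimal sketch: mean IoU between predicted and ground-truth binary masks.
import numpy as np

def mask_iou(pred, gt):                      # boolean HxW masks
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def miou(preds, gts):
    return float(np.mean([mask_iou(p, g) for p, g in zip(preds, gts)]))
```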

A.6 Instance Filtration

We use the image encoder of CLIP [21] ViT-L/14 to extract image embeddings. The embedding extraction process is consistent with Sec. A.1. We then calculate the cosine similarity between the embeddings of objects in the LVIS training set and the embeddings of generative images. For each generative image, the final CLIP inter-similarity is its average similarity with all objects of the same category in the training set. Through experiments, we find that a filtering threshold of 0.6 achieves the best performance and strikes a balance between data diversity and quality, so we set the threshold to 0.6.
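For illustration, a minimal sketch of the CLIP inter-similarity computation with the Hugging Face CLIP ViT-L/14 weights; batching, cropping of LVIS objects, and the exact preprocessing are omitted and assumed to match Sec. A.1:

```python
# Minimal sketch: CLIP inter-similarity between a generated image and same-category LVIS objects.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def embed(images):
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

def inter_similarity(gen_image, lvis_object_crops):
    gen = embed([gen_image])                 # 1 x D
    ref = embed(lvis_object_crops)           # N x D, crops of the same category
    return (gen @ ref.T).mean().item()       # average cosine similarity

# keep the generated image only if inter_similarity(...) >= 0.6
```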

Furthermore, we also explore other filtration strategies. In our experiments, using a purely image-trained model such as DINOv2 [19] as the image encoder, or combining the CLIP score with the CLIP inter-similarity, is not as good as using the CLIP inter-similarity alone, as shown in Table 12. Therefore, we ultimately opt to use only the CLIP inter-similarity.

| Strategy | AP^box | AP^mask | AP_r^box | AP_r^mask |
|---|---|---|---|---|
| DINOv2 | 48.02 | 42.39 | 40.31 | 35.27 |
| CLIP score + CLIP inter-similarity | 49.82 | 44.30 | 45.26 | 40.92 |
| CLIP inter-similarity | 50.07 | 44.44 | 45.53 | 41.16 |

A.7 Instance Augmentation

In instance augmentation, we use the instance paste strategy proposed by Zhao et al. [34] to increase model learning efficiency on generative data. Each image contains at most 20 pasted instances.
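For intuition only, a heavily simplified sketch of instance pasting; the actual strategy follows X-Paste [34], and the scale range, placement rule, and annotation updates here are illustrative assumptions:

```python
# Heavily simplified sketch: paste up to 20 RGBA instance cutouts onto a background image.
import random
from PIL import Image

def paste_instances(background, instances, max_instances=20):
    """instances: list of (RGBA cutout, category) pairs; background: RGB PIL image."""
    canvas = background.copy()
    k = random.randint(1, min(max_instances, len(instances)))
    for cutout, _category in random.sample(instances, k=k):
        scale = random.uniform(0.1, 0.5)                        # illustrative scale range
        w = max(1, int(canvas.width * scale))
        h = max(1, int(cutout.height * w / cutout.width))
        inst = cutout.resize((w, h))
        x = random.randint(0, max(0, canvas.width - w))
        y = random.randint(0, max(0, canvas.height - h))
        canvas.paste(inst, (x, y), mask=inst)                   # alpha channel acts as paste mask
        # the corresponding instance mask / box annotations would be updated here
    return canvas
```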

The parameters not specified in the paper are consistent with X-Paste [34].

Appendix B Visualization

B.1 Prompt Diversity

We find that images generated from ChatGPT designed prompts have diverse textures, styles, patterns, etc., greatly enhancing data diversity. The ChatGPT designed prompts and the corresponding generative images are shown in Figure 9. Compared to manually designed prompts, ChatGPT designed prompts yield significantly more diverse images. A visual comparison between images generated from manually designed prompts and ChatGPT designed prompts is shown in Figure 10.

B.2 Generative Model Diversity

The images generated by Stable Diffusion and DeepFloyd-IF differ, even within the same category, which significantly enhances data diversity. Both Stable Diffusion and DeepFloyd-IF are capable of producing images belonging to the target categories, but the images generated by DeepFloyd-IF appear more photorealistic and more consistent with the prompt texts, indicating DeepFloyd-IF’s superiority in image generation quality and controllability through text prompts. Examples from Stable Diffusion and DeepFloyd-IF are shown in Figure 11 and Figure 12, respectively.

B.3 Instance Annotation

In terms of annotation quality, masks generated by max CLIP [34] tend to be incomplete, while our proposed SAM-background produces more refined and complete masks across images of multiple categories. As shown in Figure 13, our annotation strategy outputs more precise and refined masks than max CLIP.

B.4 Instance Augmentation

Instance augmentation helps alleviate the limitation that generative data contain relatively simple scenes and improves the efficiency of model learning on the generative data. Examples of augmented data are shown in Figure 14.

Table LABEL:tab:imgnet_category. The 566 extra categories selected from ImageNet-1K.

tench, great_white_shark, tiger_shark, electric_ray
stingray, brambling, goldfinch, house_finch
junco, indigo_bunting, American_robin, bulbul
jay, magpie, chickadee, American_dipper
kite_(bird_of_prey), fire_salamander, smooth_newt, newt
spotted_salamander, axolotl, American_bullfrog, loggerhead_sea_turtle
leatherback_sea_turtle, banded_gecko, green_iguana, Carolina_anole
desert_grassland_whiptail_lizard, agama, frilled-necked_lizard, alligator_lizard
Gila_monster, European_green_lizard, chameleon, Komodo_dragon
Nile_crocodile, triceratops, worm_snake, ring-necked_snake
eastern_hog-nosed_snake, smooth_green_snake, kingsnake, garter_snake
water_snake, vine_snake, night_snake, boa_constrictor
African_rock_python, Indian_cobra, green_mamba, Saharan_horned_viper
eastern_diamondback_rattlesnake, sidewinder_rattlesnake, trilobite, harvestman
scorpion, tick, centipede, black_grouse
ptarmigan, ruffed_grouse, prairie_grouse, peafowl
quail, partridge, sulphur-crested_cockatoo, lorikeet
coucal, bee_eater, hornbill, jacamar
toucan, red-breasted_merganser, black_swan, tusker
echidna, platypus, wallaby, wombat
jellyfish, sea_anemone, brain_coral, flatworm
nematode, conch, snail, slug
sea_slug, chiton, chambered_nautilus, American_lobster
crayfish, hermit_crab, isopod, white_stork
black_stork, spoonbill, great_egret, crane_bird
limpkin, common_gallinule, American_coot, bustard
ruddy_turnstone, dunlin, common_redshank, dowitcher
oystercatcher, albatross, grey_whale, dugong
sea_lion, Chihuahua, Japanese_Chin, Maltese
Pekingese, Shih_Tzu, King_Charles_Spaniel, Papillon
toy_terrier, Rhodesian_Ridgeback, Afghan_Hound, Basset_Hound
Beagle, Bloodhound, Bluetick_Coonhound, Black_and_Tan_Coonhound
Treeing_Walker_Coonhound, English_foxhound, Redbone_Coonhound, borzoi
Irish_Wolfhound, Italian_Greyhound, Whippet, Ibizan_Hound
Norwegian_Elkhound, Otterhound, Saluki, Scottish_Deerhound
Weimaraner, Staffordshire_Bull_Terrier, American_Staffordshire_Terrier, Bedlington_Terrier
Border_Terrier, Kerry_Blue_Terrier, Irish_Terrier, Norfolk_Terrier
Norwich_Terrier, Yorkshire_Terrier, Wire_Fox_Terrier, Lakeland_Terrier
Sealyham_Terrier, Airedale_Terrier, Cairn_Terrier, Australian_Terrier
Dandie_Dinmont_Terrier, Boston_Terrier, Miniature_Schnauzer, Giant_Schnauzer
Standard_Schnauzer, Scottish_Terrier, Tibetan_Terrier, Australian_Silky_Terrier
Soft-coated_Wheaten_Terrier, West_Highland_White_Terrier, Lhasa_Apso, Flat-Coated_Retriever
Curly-coated_Retriever, Golden_Retriever, Labrador_Retriever, Chesapeake_Bay_Retriever
German_Shorthaired_Pointer, Vizsla, English_Setter, Irish_Setter
Gordon_Setter, Brittany_dog, Clumber_Spaniel, English_Springer_Spaniel
Welsh_Springer_Spaniel, Cocker_Spaniel, Sussex_Spaniel, Irish_Water_Spaniel
Kuvasz, Schipperke, Groenendael_dog, Malinois
Dobermann, Miniature_Pinscher, Greater_Swiss_Mountain_Dog, Bernese_Mountain_Dog
Appenzeller_Sennenhund, Entlebucher_Sennenhund, Boxer, Bullmastiff
Tibetan_Mastiff, Great_Dane, St._Bernard, husky
Alaskan_Malamute, Siberian_Husky, Affenpinscher, Samoyed
Pomeranian, Chow_Chow, Keeshond, brussels_griffon
Pembroke_Welsh_Corgi, Cardigan_Welsh_Corgi, Toy_Poodle, Miniature_Poodle
Standard_Poodle, dingo, dhole, African_wild_dog
hyena, red_fox, kit_fox, Arctic_fox
grey_fox, tabby_cat, tiger_cat, Persian_cat
Siamese_cat, Egyptian_Mau, lynx, leopard
snow_leopard, jaguar, cheetah, mongoose
meerkat, dung_beetle, rhinoceros_beetle, fly
bee, ant, grasshopper, cricket_insect
stick_insect, praying_mantis, cicada, leafhopper
lacewing, damselfly, red_admiral_butterfly, monarch_butterfly
small_white_butterfly, sea_urchin, sea_cucumber, hare
fox_squirrel, guinea_pig, wild_boar, warthog
ox, water_buffalo, bison, bighorn_sheep
Alpine_ibex, hartebeest, impala_(antelope), llama
weasel, mink, black-footed_ferret, otter
skunk, badger, armadillo, three-toed_sloth
orangutan, chimpanzee, gibbon, siamang
guenon, patas_monkey, macaque, langur
black-and-white_colobus, proboscis_monkey, marmoset, white-headed_capuchin
howler_monkey, titi_monkey, Geoffroy’s_spider_monkey, common_squirrel_monkey
ring-tailed_lemur, indri, red_panda, snoek_fish
eel, rock_beauty_fish, clownfish, sturgeon
gar_fish, lionfish, academic_gown, accordion
aircraft_carrier, altar, apiary, assault_rifle
bakery, balance_beam, baluster_or_handrail, barbershop
barn, barometer, bassinet, bassoon
lighthouse, bell_tower, baby_bib, boathouse
bookstore, breakwater, breastplate, butcher_shop
carousel, tool_kit, automated_teller_machine, cassette_player
castle, catamaran, cello, chain
chain-link_fence, chainsaw, chiffonier, Christmas_stocking
church, movie_theater, cliff_dwelling, cloak
clogs, spiral_or_coil, candy_store, cradle
construction_crane, croquet_ball, cuirass, dam
desktop_computer, disc_brake, dock, dome
drilling_rig, electric_locomotive, entertainment_center, face_powder
fire_screen, flute, fountain, French_horn
gas_pump, golf_ball, gong, greenhouse
radiator_grille, grocery_store, guillotine, hair_spray
half-track, hand-held_computer, hard_disk_drive, harmonica
harp, combine_harvester, holster, home_theater
honeycomb, hook, gymnastic_horizontal_bar, jigsaw_puzzle
knot, lens_cap, library, lifeboat
lighter, lipstick, lotion, loupe_magnifying_glass
sawmill, messenger_bag, maraca, marimba
mask, matchstick, maypole, maze
megalith, military_uniform, missile, mobile_home
modem, monastery, monitor, moped
mortar_and_pestle, mosque, mosquito_net, tent
mousetrap, moving_van, muzzle, metal_nail
neck_brace, notebook_computer, obelisk, oboe
ocarina, odometer, oil_filter, pipe_organ
oscilloscope, oxygen_mask, palace, pan_flute
parallel_bars, patio, pedestal, photocopier
plectrum, Pickelhaube, picket_fence, pier
pirate_ship, block_plane, planetarium, plastic_bag
plate_rack, plunger, police_van, prayer_rug
prison, hockey_puck, punching_bag, purse
radio, radio_telescope, rain_barrel, fishing_casting_reel
restaurant, rugby_ball, safe, scabbard
schooner, CRT_monitor, seat_belt, shoe_store
shoji_screen_or_room_divider, balaclava_ski_mask, slide_rule, sliding_door
slot_machine, snorkel, keyboard_space_bar, spatula
motorboat, spider_web, spindle, stage
steam_locomotive, through_arch_bridge, steel_drum, stethoscope
stone_wall, tram, stretcher, stupa
submarine, sundial, sunglasses, sunscreen
suspension_bridge, swing, tape_player, television
thatched_roof, threshing_machine, throne, tile_roof
tobacco_shop, toilet_seat, torch, totem_pole
toy_store, trimaran, triumphal_arch, trombone
turnstile, typewriter_keyboard, vaulted_or_arched_ceiling, velvet_fabric
vestment, viaduct, sink, whiskey_jug
whistle, window_screen, window_shade, airplane_wing
wool, split-rail_fence, shipwreck, sailboat
yurt, website, crossword, dust_jacket
menu, plate, guacamole, trifle
baguette, cabbage, broccoli, spaghetti_squash
acorn_squash, butternut_squash, cardoon, mushroom
Granny_Smith_apple, jackfruit, cherimoya_(custard_apple), pomegranate
hay, carbonara, chocolate_syrup, dough
meatloaf, pot_pie, red_wine, espresso
tea_cup, eggnog, mountain, bubble
cliff, coral_reef, geyser, lakeshore
promontory, sandbar, beach, valley
volcano, baseball_player, bridegroom, scuba_diver
rapeseed, daisy, yellow_lady’s_slipper, corn
acorn, rose_hip, horse_chestnut_seed, coral_fungus
gyromitra, stinkhorn_mushroom, earth_star_fungus, hen_of_the_woods_mushroom
bolete, corn_cob
Table LABEL:tab:chatgpt_category. The 144 categories with ChatGPT designed prompts.

Bible, pirate_flag, bookmark, bow_(weapon)
bubble_gum, elevator_car, chocolate_mousse, compass
corkboard, cougar, cream_pitcher, cylinder
dollar, dolphin, eyepatch, fruit_juice
golf_club, handcuff, hockey_stick, popsicle
pan_(metal_container), pew_(church_bench), piggy_bank, pistol
road_map, satchel, sawhorse, shawl
sparkler_(fireworks), spider, string_cheese, Tabasco_sauce
turtleneck_(clothing), violin, waffle_iron, whistle
wind_chime, headstall_(for_horses), fishing_rod, coat_hanger
clasp, crab_(animal), flamingo, stirrup
machine_gun, pin_(non_jewelry), spear, drumstick
cornet, bottle_opener, easel, dumbbell
garden_hose, money, saddle_(on_an_animal), garbage
windshield_wiper, needle, liquor, bamboo
armor, pretzel, tongs, ski_pole
frog, hairpin, tripod, flagpole
hose, belt_buckle, streetlight, coleslaw
antenna, hook, Lego, thumbtack
coatrack, plow_(farm_equipment), vinegar, strap
poker_(fire_stirring_tool), cufflink, chopstick, salad
dragonfly, musical_instrument, sharpener, bat_(animal)
lanyard, mat_(gym_equipment), gargoyle, underdrawers
paperback_book, razorblade, earring, sword
shovel, turkey_(food), ambulance, pencil
weathervane, trampoline, applesauce, jam
ski, tray, tissue_paper, lamppost
clipboard, router_(computer_equipment), battery, lollipop
crayon, latch, fig_(fruit), sunglasses
toothpick, business_card, padlock, asparagus
shot_glass, sled, key, bolt
pipe, steering_wheel, deck_chair, green_bean
pouch, telephone_pole, fire_hose, ladle
pliers, hair_curler, handle, screwdriver
dining_table, cart, oar, wolf
envelope, legume, shopping_cart, trench_coat

References
