diff --git "a/6NAyT4oBgHgl3EQfpfik/content/tmp_files/load_file.txt" "b/6NAyT4oBgHgl3EQfpfik/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/6NAyT4oBgHgl3EQfpfik/content/tmp_files/load_file.txt" @@ -0,0 +1,600 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf,len=599 +page_content='Diffusion Probabilistic Models for Scene-Scale 3D Categorical Data Jumin Lee Woobin Im Sebin Lee Sung-Eui Yoon Korea Advanced Institute of Science and Technology (KAIST) {jmlee,iwbn,seb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content='lee,sungeui}@kaist.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content='kr Abstract In this paper, we learn a diffusion model to generate 3D data on a scene-scale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content=' Specifically, our model crafts a 3D scene consisting of multiple objects, while recent diffu- sion research has focused on a single object.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content=' To realize our goal, we represent a scene with discrete class labels, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content=', categorical distribution, to assign multiple objects into se- mantic categories.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content=' Thus, we extend discrete diffusion mod- els to learn scene-scale categorical distributions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content=' In addi- tion, we validate that a latent diffusion model can reduce computation costs for training and deploying.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content=' To the best of our knowledge, our work is the first to apply discrete and latent diffusion for 3D categorical data on a scene- scale.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content=' We further propose to perform semantic scene com- pletion (SSC) by learning a conditional distribution using our diffusion model, where the condition is a partial ob- servation in a sparse point cloud.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content=' In experiments, we em- pirically show that our diffusion models not only generate reasonable scenes, but also perform the scene completion task better than a discriminative model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/6NAyT4oBgHgl3EQfpfik/content/2301.00527v1.pdf'} +page_content=' Our code and mod- els are available at https://github.' 
1. Introduction

Learning to generate 3D data has received much attention thanks to its high performance and promising downstream tasks. For instance, a 3D generative model with a diffusion probabilistic model [2] has shown its effectiveness in 3D completion [2] and text-to-3D generation [1,3]. While recent models have focused on 3D object generation, we aim beyond a single object by generating a 3D scene with multiple objects. In Fig. 1b, we show a sample scene from our generative model, where we observe the plausible placement of the objects, as well as their correct shapes. Compared to the existing object-scale model [1] (Fig. 1a), our scene-scale model can be used in broader applications, such as semantic scene completion (Sec. 4.3), where we complete a scene given a sparse LiDAR point cloud.

Figure 1. Comparison of object-scale generation and scene-scale generation (ours): (a) object-scale generation, obtained by Point-E [1]; (b) scene-scale generation (ours). Our result includes multiple objects in a generated scene, while object-scale generation crafts one object at a time.
We base our scene-scale 3D generation method on a diffusion model, which has shown remarkable performance in modeling complex real-world data, such as realistic 2D images [4–6] and 3D objects [1–3]. We develop and evaluate diffusion models that learn a scene-scale 3D categorical distribution.

First, we utilize categorical data for each voxel entity since, in contrast to the existing work [1–3], we have multiple objects, so each label tells which category a voxel belongs to. Thus, we extend discrete diffusion models for 2D categorical data [7,8] to 3D categorical data (Sec. 3.1). Second, we validate the latent diffusion model for 3D scene-scale generation, which can reduce training and testing computational cost (Sec. 3.2). Third, we propose to perform semantic scene completion (SSC) by learning a conditional distribution using our generative models, where the condition is a partial observation of the scene (Sec. 3.1). That is, we demonstrate that our model can complete a reasonable scene in a realistic scenario with a sparse and partial observation. Lastly, we show the effectiveness of our method on the unconditional and conditional (SSC) generation tasks on the CarlaSC dataset [9] (Sec. 4).
In particular, we show that our generative model can outperform a discriminative model in the SSC task.

2. Related Work

2.1. Semantic Scene Completion

Leveraging 3D data for semantic segmentation has been studied from different perspectives. Vision sensors (e.g., RGB-D cameras and LiDAR) provide depth information from a single viewpoint, giving more information about the world. One of the early approaches uses an RGB-D (i.e., color and depth) image with a 2D segmentation map [10]. In addition, using data in a 3D coordinate system has been extensively studied. 3D semantic segmentation is the extension of 2D segmentation, where a classifier is applied to point clouds or voxel data in 3D coordinates [11,12].

One of the recent advances in 3D semantic segmentation is semantic scene completion (SSC), where a partially observable space, observed via an RGB-D image or point clouds, should be densely filled with class labels [13–16]. In SSC, a model gets the point cloud obtained from one viewpoint; thus, it contains multiple partial objects (e.g., one side of a car).
Then, the model not only reconstructs the unobserved shape of the car but also labels it as a car. Here, the predictions of the occupancy and the semantic labels can mutually benefit [17]. Due to the partial observation, filling in occluded and sparse areas is the biggest hurdle. Thus, a generative model is effective for 3D scene completion, as in 2D completion tasks [18,19]. Chen et al. [20] demonstrate that generative adversarial networks (GANs) can be used to improve the plausibility of a completion result. However, a diffusion-based generative model has yet to be explored for 3D semantic segmentation maps. We speculate that using a diffusion model has good prospects, thanks to the larger size of the latent and the capability to deal with high-dimensional data. In this work, we explore a diffusion model in the context of 3D semantic scene completion.

Diffusion models have been growing rapidly, and they perform remarkably well on real-world 2D images [21]. We therefore delve into diffusion to generate 3D semantic segmentation maps, and we hope to provide the research community a useful road map towards generating 3D semantic scene maps.

2.2. Diffusion Models

Recent advances in diffusion models have shown that a deep model can learn a more diverse data distribution through a diffusion process [5].
A diffusion process is introduced so that a simple distribution (e.g., Gaussian) can be adopted to learn a complex distribution [4]. In particular, diffusion models show impressive results for image generation [6] and conditional generation [22,23] at high resolution compared to GANs. GANs are known to suffer from the mode collapse problem and struggle to capture complex scenes with multiple objects [24]. On the other hand, diffusion models have the capacity to escape mode collapse [6] and generate complex scenes [23,25], since likelihood-based methods achieve better coverage of the full data distribution.

Diffusion models have been studied to a large extent on high-dimensional continuous data. However, they often lack the capacity to deal with discrete data (e.g., text and segmentation maps), since the discreteness of the data is not fully covered by continuous representations. To tackle such discreteness, discrete diffusion models have been studied for various applications, such as text generation [7,8] and low-dimensional segmentation map generation [7].

Since both continuous and discrete diffusion models estimate the density of image pixels, a higher image resolution means higher computation. To address this issue, latent diffusion models [23,26] operate the diffusion process on a latent space of lower dimension. To work on the compressed latent space, the Vector-Quantized Variational Auto-Encoder (VQ-VAE) [27] is employed. Latent diffusion models consist of two stages: VQ-VAE and diffusion.
The VQ-VAE trains an encoder to compress the image into a latent space. Equipped with VQ-VAE, autoregressive models [28,29] have shown impressive performance. Recent advances in latent diffusion models further improve the generative performance by ameliorating the unidirectional bias and accumulated prediction error of existing models [23,26].

Our work introduces an extension of discrete diffusion models to high-resolution 3D categorical voxel data. Specifically, we show the effectiveness of a diffusion model on unconditional and conditional generation tasks, where the condition is a partial observation of a scene (i.e., SSC). Further, we propose a latent diffusion model for 3D categorical data to reduce the computation load caused by high-resolution segmentation maps.

2.3. Diffusion Models for 3D Data

Diffusion models have been used for 3D data. Until recently, research had mainly been conducted on 3D point clouds with xyz-coordinates. PVD [2] applies continuous diffusion to point-voxel representations for object shape generation and completion without additional shape encoders. LION [3] uses latent diffusion for object shape completion (i.e., conditional generation) with additional shape encoders.

Figure 2. Overview of (a) discrete diffusion models and (b) latent diffusion models. Discrete diffusion models conduct the diffusion process in voxel space, whereas latent diffusion models operate the diffusion process in latent space, in two stages (Stage 1: VQ-VAE with a codebook, Stage 2: latent diffusion).
In this paper, we aim to learn 3D categorical data (i.e., 3D semantic segmentation maps) with a diffusion model. The study of object generation has shown promising results, but as far as we know, our work is the first to generate a 3D scene with multiple objects using a diffusion model. Concretely, our work explores discrete and latent diffusion models to learn a distribution of volumetric semantic scene segmentation maps. We develop the models for unconditional and conditional generation; the latter can be used directly for the SSC task.

3. Method

Our goal is to learn a data distribution p(x) using diffusion models, where each data point x ∼ p(x) represents a 3D segmentation map described in the one-hot representation. 3D segmentation maps are samples from the data distribution p(x), which is the categorical distribution Cat(k_0, k_1, ..., k_M) with M+1 probabilities for the free label k_0 and the M main categories. Discrete diffusion models can learn the data distribution by recovering noised data, which is destroyed through successive transitions of the labels [8].
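To make this representation concrete, here is a minimal sketch (our own PyTorch illustration, not the released code; the grid size and label count K = 11 follow the CarlaSC setup in Sec. 4.1):

```python
import torch

# A scene is a dense voxel grid of class indices, with 0 as the free label k_0;
# the diffusion model consumes its one-hot representation x.
labels = torch.randint(0, 11, (128, 128, 8))             # (X, Y, Z) class ids
x = torch.nn.functional.one_hot(labels, num_classes=11)  # (X, Y, Z, K) one-hot
x = x.permute(3, 0, 1, 2).float()                        # (K, X, Y, Z), channels first
```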
Our method aims to learn a distribution of voxelized 3D segmentation maps with discrete diffusion (Sec. 3.1). Specifically, it covers unconditional and conditional generation, where the latter corresponds to the SSC task. In addition, we explore a latent diffusion model for 3D segmentation maps (Sec. 3.2).

3.1. Discrete Diffusion Models

Fig. 2a summarizes the overall process of discrete diffusion, consisting of a forward process and a reverse process; the former gradually adds noise to the data, and the latter learns to denoise the noised data.

In the forward process of discrete diffusion, an original segmentation map x_0 is gradually corrupted into a t-step noised segmentation map x_t with 1 ≤ t ≤ T. Each forward step can be defined by a Markov uniform transition matrix Q_t [8] as x_t = x_{t-1} Q_t. Based on the Markov property, we can derive the t-step noised segmentation map x_t directly from the original segmentation map x_0, q(x_t | x_0), with a cumulative transition matrix Q̄_t = Q_1 Q_2 ⋯ Q_t:

    q(x_t | x_0) = Cat(x_t; p = x_0 Q̄_t).    (1)
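For concreteness, a minimal sketch of the uniform transition matrix and of sampling from Eq. 1 (assuming PyTorch; the noise schedule `betas` and all names are our own illustrative assumptions, not the released code):

```python
import torch

def uniform_transition(beta_t: float, K: int) -> torch.Tensor:
    # Q_t keeps a label with probability (1 - beta_t) and resamples it
    # uniformly over the K classes with probability beta_t.
    return (1.0 - beta_t) * torch.eye(K) + beta_t * torch.full((K, K), 1.0 / K)

# Cumulative matrix Q_bar_t = Q_1 Q_2 ... Q_t for a T = 100 step schedule (Sec. 4.1).
K, t = 11, 50
betas = torch.linspace(0.01, 0.5, 100)   # assumed schedule, for illustration only
Q_bar = torch.eye(K)
for beta in betas[:t]:
    Q_bar = Q_bar @ uniform_transition(beta.item(), K)

# Eq. 1: q(x_t | x_0) = Cat(x_t; p = x_0 Q_bar_t), sampled independently per voxel.
x0 = torch.nn.functional.one_hot(torch.randint(0, K, (4096,)), K).float()
xt = torch.distributions.Categorical(probs=x0 @ Q_bar).sample()  # noised labels
```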
In the reverse process, parametrized by θ, a learnable model is used to reverse a noised segmentation map via p_θ(x_{t-1} | x_t). Specifically, we use a reparametrization trick [5] to make the model predict a denoised map x̃_0 and subsequently obtain the reverse process p_θ(x_{t-1} | x_t):

    p_θ(x_{t-1} | x_t) = q(x_{t-1} | x_t, x̃_0) p_θ(x̃_0 | x_t),    (2)

    q(x_{t-1} | x_t, x̃_0) = q(x_t | x_{t-1}, x̃_0) q(x_{t-1} | x̃_0) / q(x_t | x̃_0).    (3)

We optimize a joint loss that consists of the KL divergence of the forward process q(x_{t-1} | x_t, x_0) from the reverse process p_θ(x_{t-1} | x_t), plus the KL divergence of the original segmentation map q(x_0) from the reconstructed one p_θ(x̃_0 | x_t) as an auxiliary loss:

    L = D_KL( q(x_{t-1} | x_t, x_0) ∥ p_θ(x_{t-1} | x_t) ) + w_0 D_KL( q(x_0) ∥ p_θ(x̃_0 | x_t) ),    (4)

where w_0 is an auxiliary loss weight.

Unlike existing discrete diffusion models [7,8], our goal is to learn the distribution of 3D data. Thus, to better handle 3D data, we use a point cloud segmentation network [30] with modifications for discrete data and time embedding.

Conditional generation. We propose discrete diffusion for semantic scene completion (SSC) with conditional generation. SSC jointly estimates a scene's complete geometry and semantics, given a sparse occupancy map s. Thus, it introduces a condition into Eq. 2, resulting in:

    p_θ(x_{t-1} | x_t, s) = q(x_{t-1} | x_t, x̃_0) p_θ(x̃_0 | x_t, s),    (5)

where s is a sparse occupancy map. We give the condition by concatenating the sparse occupancy map s with the corrupted input x_t.
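A sketch of one conditional reverse step, combining Eqs. 2, 3, and 5 (our own illustration: `denoiser` stands in for the modified Cylinder3D network [30], and plugging the predicted probabilities x̃_0 into the posterior of Eq. 3 is a common implementation choice rather than a detail stated above):

```python
import torch
import torch.nn.functional as F

def reverse_step(denoiser, xt_onehot, s_occ, t, Q_t, Q_bar_t, Q_bar_tm1):
    # Eq. 5: condition by concatenating the sparse occupancy map s
    # with the corrupted input x_t along the channel axis.
    net_in = torch.cat([xt_onehot, s_occ], dim=1)        # (B, K+1, X, Y, Z)
    x0_probs = F.softmax(denoiser(net_in, t), dim=1)     # p_theta(x~_0 | x_t, s)

    # Eq. 3: per-voxel posterior q(x_{t-1} | x_t, x~_0); flatten spatial dims
    # so each row is a categorical distribution over the K classes.
    K = x0_probs.shape[1]
    x0 = x0_probs.permute(0, 2, 3, 4, 1).reshape(-1, K)
    xt = xt_onehot.permute(0, 2, 3, 4, 1).reshape(-1, K)
    fact1 = xt @ Q_t.T               # q(x_t | x_{t-1}) as a function of x_{t-1}
    fact2 = x0 @ Q_bar_tm1           # q(x_{t-1} | x~_0)
    denom = ((x0 @ Q_bar_t) * xt).sum(-1, keepdim=True)  # q(x_t | x~_0)
    probs = fact1 * fact2 / denom.clamp_min(1e-12)       # Eq. 2, given x~_0
    return torch.distributions.Categorical(probs=probs).sample()  # x_{t-1}
```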
3.2. Latent Diffusion Models

Fig. 2b provides an overview of latent diffusion on 3D segmentation maps. Latent diffusion models project the 3D segmentation maps into a smaller latent space and operate the diffusion process on the latent space instead of the high-dimensional input space. Latent diffusion takes advantage of a lower training computational cost and a faster inference by running diffusion in a lower-dimensional space.

To encode a 3D segmentation map into a latent representation, we use the Vector-Quantized Variational AutoEncoder (VQ-VAE) [27]. VQ-VAE extends the VAE by adding a discrete learnable codebook E = {e_n}_{n=1}^N ∈ R^{N×d}, where N is the size of the codebook and d is the dimension of the codes. The encoder E encodes 3D segmentation maps x into a latent z = E(x), and the quantizer VQ(·) maps the latent z to a quantized latent z_q, which is the closest codebook entry e_n. Note that the latent z ∈ R^{h×w×z×d} has a smaller spatial resolution than the segmentation map x. Then the decoder D reconstructs the 3D segmentation map from the quantized latent, x̃ = D(VQ(E(x))). The encoder E, the decoder D, and the codebook E can be trained end-to-end using the following loss function:

    L_VQVAE = −Σ_k w_k x_k log(x̃_k) + ∥sg(z) − z_q∥_2^2 + ∥z − sg(z_q)∥_2^2,    (6)

where w_k is a class weight and sg(·) is the stop-gradient operation.

Training the latent diffusion model is similar to that of discrete diffusion. Discrete diffusion models diffuse between labels, whereas latent diffusion models diffuse between codebook indices using the Markov uniform transition matrix Q_t [8].
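For concreteness, a minimal sketch of the vector quantizer and of the loss in Eq. 6 (assuming PyTorch; the Cylinder3D-based encoder E and decoder D are omitted, and the straight-through gradient trick is standard VQ-VAE practice [27] rather than a detail spelled out above):

```python
import torch
import torch.nn.functional as F

class VectorQuantizer(torch.nn.Module):
    def __init__(self, num_codes: int = 1100, dim: int = 11):
        super().__init__()
        self.codebook = torch.nn.Embedding(num_codes, dim)  # E = {e_n}

    def forward(self, z):
        # z: (B, d, X, Y, Z) latents from the encoder E; map each position
        # to its nearest codebook entry to obtain the quantized latent z_q.
        B, d = z.shape[0], z.shape[1]
        flat = z.permute(0, 2, 3, 4, 1).reshape(-1, d)      # (N, d)
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        zq = self.codebook(idx).view(B, *z.shape[2:], d).permute(0, 4, 1, 2, 3)
        zq_st = z + (zq - z).detach()   # straight-through estimator
        return zq_st, zq, idx

def vqvae_loss(logits, labels, z, zq, class_weights):
    # Eq. 6: weighted cross-entropy reconstruction term, plus the codebook
    # term ||sg(z) - z_q||^2 and the commitment term ||z - sg(z_q)||^2.
    rec = F.cross_entropy(logits, labels, weight=class_weights)
    return rec + F.mse_loss(zq, z.detach()) + F.mse_loss(z, zq.detach())
```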
4. Experiments

In this section, we empirically study the effectiveness of the diffusion models on 3D voxel segmentation maps. We divide the following subsections into learning the unconditional data distribution p(x) (Sec. 4.2) and the conditional data distribution p(x|s) given a sparse occupancy map s (Sec. 4.3); note that the latter corresponds to semantic scene completion (SSC).

4.1. Implementation Details

Dataset. Following prior work [9], we employ the CarlaSC dataset, a synthetic outdoor driving dataset, for training and evaluation. The dataset consists of 24 scenes in 8 dynamic maps under low, medium, and high traffic conditions.

Model         Resolution   Training (time/epoch)   Sampling (time/img)
D-Diffusion   128×128×8    19m 48s                 0.883s
L-Diffusion   32×32×2      7m 37s                  0.499s
              16×16×2      4m 41s                  0.230s
              8×8×2        4m 40s                  0.202s

Table 1. Computation time comparison between discrete diffusion models and latent diffusion models for 3D segmentation map generation. 'D-Diffusion' and 'L-Diffusion' denote discrete diffusion models and latent diffusion models, respectively. 'Resolution' means the resolution of the space in which the diffusion process operates. Latent diffusion models run diffusion on a lower-dimensional latent space and, as a result, show faster training and sampling times.
The splits of the dataset contain 18 training, 3 validation, and 3 test scenes, which are annotated with 10 semantic classes and a free label. Each scene has a resolution of 128×128×8 and covers a range of 25.6 m ahead of and behind the car, 25.6 m to each side, and 3 m in height.

Metrics. Since SSC requires predicting the semantic label of a voxel and its occupancy state together, we use mIoU and IoU as the SSC and VQ-VAE metrics. The mIoU measures the intersection over union averaged over all classes, and the IoU evaluates scene completion quality, regardless of the predicted semantic labels.

Experimental settings. Experiments are deployed on two NVIDIA GTX 3090 GPUs with a batch size of 8 for the diffusion models and 4 for VQ-VAE. Our models follow the same training strategy as multinomial diffusion [7]. We set the number of time steps of the diffusion models to T = 100. For VQ-VAE, we set the codebook E = {e_n}_{n=1}^N ∈ R^{N×d} with codebook size N = 1100 and code dimension d = 11, so the quantized latent lies in R^{32×32×2×d}. For the diffusion architecture, we slightly modify the encoder-decoder structure of Cylinder3D [30] for time embedding and the discreteness of the data. For the VQ-VAE architecture, we also use the encoder-decoder structure of Cylinder3D [30], but with the vector quantizer module.
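For reference, a sketch of how the two metrics above can be computed (our own illustrative implementation; the free label is assumed to be index 0):

```python
import numpy as np

def ssc_metrics(pred, gt, num_classes=11, free=0):
    # mIoU: intersection-over-union averaged over the semantic classes.
    ious = []
    for c in range(num_classes):
        if c == free:
            continue
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    # IoU: completion quality, occupied vs. free regardless of the class.
    occ_p, occ_g = pred != free, gt != free
    iou = np.logical_and(occ_p, occ_g).sum() / np.logical_or(occ_p, occ_g).sum()
    return float(iou), float(np.mean(ious))
```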
4.2. 3D Segmentation Map Generation

We use the discrete and the latent diffusion models for 3D segmentation map generation. Fig. 3 shows the qualitative results of the generation. As seen in the figure, both the discrete and latent models learn the categorical distribution, as they produce a variety of reasonable scenes. Note that our models are learned on a large-scale data distribution, i.e., 3D scenes with multiple objects; this is worth noting since recent 3D diffusion models for point clouds have operated on an object scale [2,3,31,32].

In Tab. 1, we compare the training and sampling times of the models for the different resolutions on which each diffusion process operates.

Codebook size (N)   Resolution (h×w×z)   IoU    mIoU
220                 8×8×2                72.5   27.3
                    16×16×2              78.7   36.9
                    32×32×2              84.6   56.5
550                 8×8×2                67.7   25.7
                    16×16×2              79.4   39.7
                    32×32×2              85.8   58.4
1,100               8×8×2                70.3   25.7
                    16×16×2              79.3   35.0
                    32×32×2              89.1   65.1
2,200               8×8×2                70.2   26.5
                    16×16×2              77.7   37.9
                    32×32×2              89.2   64.2

Table 2. Ablation study on VQ-VAE hyper-parameters. We compare different codebook sizes N and resolutions of the latent space h×w×z.
Compared to the discrete diffusion, the latent diffusion tends to show shorter training and inference times. This is because the latent diffusion models compress the data into a smaller latent, so the time decreases as the compression rate increases. In particular, compared to discrete diffusion, which performs the diffusion process in voxel space, the 32×32×2 latent diffusion has a 2.6 times faster training time per epoch and a 1.8 times faster sampling time for generating one image.
Ablation study on VQ-VAE. Latent diffusion models consist of two stages: the VQ-VAE compresses 3D segmentation maps into the latent space, and then the discrete diffusion model is applied to the codebook indices of the latent. Therefore, the performance of the VQ-VAE may set the upper bound for the final generation quality. We thus conduct an ablation study on the VQ-VAE, adjusting the resolution of the latent space h×w×z and the codebook capacity N while keeping the code dimension d fixed. Concretely, we compress the 3D segmentation maps from 128×128×8 to 32×32×2, 16×16×2, and 8×8×2 with four different codebook sizes N ∈ {220, 550, 1100, 2200}.

The quantitative comparison is shown in Tab. 2. The bigger the codebook size, the higher the performance, but it saturates around 1,100. That is because most of the codes are not updated, and the update of the codebook can lapse into a local optimum [33]. The resolution of the latent space has a significant impact on performance: as the resolution becomes smaller, the latent cannot contain all the information of the 3D segmentation map. Setting the resolution to 32×32×2 with a codebook size of 1,100 strikes a good balance between efficiency and fidelity.
Methods                     IoU     mIoU
LMSCNet SS [16]             85.98   42.53
SSCNet Full [17]            80.69   41.91
MotionSC (T=1) [9]          86.46   46.31
Our network w/o Diffusion   80.70   39.94
Discrete Diffusion (Ours)   80.61   45.83

Table 3. Semantic scene completion results on the test set of CarlaSC.

4.3. Semantic Scene Completion

We use a discrete diffusion model for conditional 3D segmentation map generation (i.e., SSC). As a baseline against the diffusion model, we train a network with an identical architecture by discriminative learning without a diffusion process. We optimize the baseline with the loss term L = −Σ_k w_k x_k log(x̃_k), where w_k is a weight for each semantic class.

We visualize results from the baseline and our discrete diffusion model in Fig. 4. Despite the complexities of the networks being identical, our discrete diffusion model improves mIoU (i.e., class-wise IoU) by up to 5.89%p over the baseline model, as shown in Tab. 4.
We visualize results from the baseline and our discrete diffusion model in Fig. 4. Despite the complexities of the networks being identical, our discrete diffusion model improves mIoU (i.e., class-wise IoU) by up to 5.89%p over the baseline model, as shown in Tab. 4. In particular, our method achieves outstanding results on small objects and less frequent categories such as 'pedestrian', 'pole', 'vehicles', and 'other'. The qualitative results in Fig. 4 further demonstrate the improvement.

In Tab. 3, we compare our model with existing SSC models whose network architectures and training strategies are specifically built for the SSC task. Nonetheless, our diffusion model outperforms LMSCNet [16] and SSCNet [17], in spite of its simpler architecture and training strategy. Although MotionSC [9] shows a slightly better result, we speculate that the diffusion probabilistic model can be improved by extensive future research dedicated to this field.

5. Conclusion

In this work, we demonstrate the extension of the diffusion model to scene-scale 3D categorical data beyond generating a single object. We empirically show that our models have impressive generative power to craft various scenes through a discrete and latent diffusion process. Additionally, our method provides an alternative view of the SSC task, showing superior performance compared to a discriminative counterpart. We believe that our work can serve as a useful road map for generating 3D data with diffusion models.
Figure 3. Samples from our unconditional diffusion models (columns: training datasets, latent diffusion models, discrete diffusion models). The first column shows samples from the training datasets. From the second column, we show samples from our discrete diffusion and latent diffusion models. We observe that our diffusion models learn the 3D categorical distribution well, so that they are capable of generating a variety of plausible maps. Color assignment for each class is available in Tab. 4.

Method                     mIoU  | Free   Building  Barrier  Other  Pedestrian  Pole   Road   Ground  Sidewalk  Vegetation  Vehicles | IoU
w/o Diffusion              39.94 | 96.40  27.72     3.15     8.77   22.15       37.14  89.02  18.22   59.25     29.74       47.72    | 80.70
Discrete Diffusion (Ours)  45.83 | 96.00  31.75     3.42     25.43  46.22       43.32  84.57  13.01   67.50     37.45       55.46    | 80.61

Table 4. Semantic scene completion results on the test set of CarlaSC (middle block: class-wise IoU). The discriminative learning result with the diffusion model architecture is denoted as 'w/o Diffusion'. Values with a difference equal to or greater than 0.5%p are bold.

Figure 4. Qualitative comparison of a deterministic model (w/o diffusion) and ours (discrete diffusion) on the test split of CarlaSC (rows: input, w/o diffusion, discrete diffusion (ours), ground truth). The first row shows the sparse inputs for the scene completion task, and the last row shows the corresponding ground truth. Compared to the deterministic model, our probabilistic model produces more plausible shape and class inference, as highlighted by the red circles. Note that both models (w/o diffusion and discrete diffusion) use the same network architecture. Color assignment for each class is available in Tab. 4.

References

[1] A. Nichol, H. Jun, P. Dhariwal, P. Mishkin, and M. Chen, "Point-E: A system for generating 3D point clouds from complex prompts," arXiv preprint arXiv:2212.08751, 2022.

[2] L. Zhou, Y. Du, and J. Wu, "3D shape generation and completion through point-voxel diffusion," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 5826–5835.

[3] X. Zeng, A. Vahdat, F. Williams, Z. Gojcic, O. Litany, S. Fidler, and K. Kreis, "LION: Latent point diffusion models for 3D shape generation," arXiv preprint arXiv:2210.06978, 2022.
[4] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli, "Deep unsupervised learning using nonequilibrium thermodynamics," in International Conference on Machine Learning. PMLR, 2015, pp. 2256–2265.

[5] J. Ho, A. Jain, and P. Abbeel, "Denoising diffusion probabilistic models," Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851, 2020.

[6] P. Dhariwal and A. Nichol, "Diffusion models beat GANs on image synthesis," Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794, 2021.
[7] E. Hoogeboom, D. Nielsen, P. Jaini, P. Forré, and M. Welling, "Argmax flows and multinomial diffusion: Learning categorical distributions," Advances in Neural Information Processing Systems, vol. 34, pp. 12454–12465, 2021.

[8] J. Austin, D. D. Johnson, J. Ho, D. Tarlow, and R. van den Berg, "Structured denoising diffusion models in discrete state-spaces," Advances in Neural Information Processing Systems, vol. 34, pp. 17981–17993, 2021.

[9] J. Wilson, J. Song, Y. Fu, A. Zhang, A. Capodieci, P. Jayakumar, K. Barton, and M. Ghaffari, "MotionSC: Data set and network for real-time semantic mapping in dynamic environments," arXiv preprint arXiv:2203.07060, 2022.
[10] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.

[11] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, "PointNet: Deep learning on point sets for 3D classification and segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 652–660.
[12] G. Riegler, A. Osman Ulusoy, and A. Geiger, "OctNet: Learning deep 3D representations at high resolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3577–3586.

[13] R. Cheng, C. Agia, Y. Ren, X. Li, and L. Bingbing, "S3CNet: A sparse semantic scene completion network for LiDAR point clouds," in Conference on Robot Learning. PMLR, 2021, pp. 2148–2161.

[14] X. Yan, J. Gao, J. Li, R. Zhang, Z. Li, R. Huang, and S. Cui, "Sparse single sweep LiDAR point cloud segmentation via learning contextual shape priors from scene completion," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 4, 2021, pp. 3101–3109.
[15] C. B. Rist, D. Emmerichs, M. Enzweiler, and D. M. Gavrila, "Semantic scene completion using local deep implicit functions on LiDAR data," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 10, pp. 7205–7218, 2021.

[16] L. Roldao, R. de Charette, and A. Verroust-Blondet, "LMSCNet: Lightweight multiscale 3D semantic completion," in 2020 International Conference on 3D Vision (3DV). IEEE, 2020, pp. 111–119.

[17] S. Song, F. Yu, A. Zeng, A. X. Chang, M. Savva, and T. Funkhouser, "Semantic scene completion from a single depth image," in Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[18] C. H. Jo, W. B. Im, and S.-E. Yoon, "In-N-Out: Towards good initialization for inpainting and outpainting," in The 32nd British Machine Vision Conference, BMVC 2021. British Machine Vision Association (BMVA), 2021.

[19] A. Lugmayr, M. Danelljan, A. Romero, F. Yu, R. Timofte, and L. Van Gool, "RePaint: Inpainting using denoising diffusion probabilistic models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 11461–11471.
[20] Y.-T. Chen, M. Garbade, and J. Gall, "3D semantic scene completion from a single depth image using adversarial training," in 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019, pp. 1835–1839.

[21] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen, "Hierarchical text-conditional image generation with CLIP latents," arXiv preprint arXiv:2204.06125, 2022.

[22] C. Saharia, W. Chan, H. Chang, C. Lee, J. Ho, T. Salimans, D. Fleet, and M. Norouzi, "Palette: Image-to-image diffusion models," in ACM SIGGRAPH 2022 Conference Proceedings, 2022, pp. 1–10.
[23] S. Gu, D. Chen, J. Bao, F. Wen, B. Zhang, D. Chen, L. Yuan, and B. Guo, "Vector quantized diffusion model for text-to-image synthesis," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10696–10706.

[24] S.-H. Shim, S. Hyun, D. Bae, and J.-P. Heo, "Local attention pyramid for scene image generation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7774–7782.
[25] W.-C. Fan, Y.-C. Chen, D. Chen, Y. Cheng, L. Yuan, and Y.-C. F. Wang, "Frido: Feature pyramid diffusion for complex scene image synthesis," arXiv preprint arXiv:2208.13753, 2022.

[26] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10684–10695.
[27] A. Van Den Oord, O. Vinyals et al., "Neural discrete representation learning," Advances in Neural Information Processing Systems, vol. 30, 2017.

[28] P. Esser, R. Rombach, and B. Ommer, "Taming transformers for high-resolution image synthesis," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12873–12883.

[29] A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, M. Chen, and I. Sutskever, "Zero-shot text-to-image generation," in International Conference on Machine Learning. PMLR, 2021, pp. 8821–8831.

[30] X. Zhu, H. Zhou, T. Wang, F. Hong, Y. Ma, W. Li, H. Li, and D. Lin, "Cylindrical and asymmetrical 3D convolution networks for LiDAR segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9939–9948.
[31] M. Xu, L. Yu, Y. Song, C. Shi, S. Ermon, and J. Tang, "GeoDiff: A geometric diffusion model for molecular conformation generation," arXiv preprint arXiv:2203.02923, 2022.

[32] S. Luo and W. Hu, "Diffusion probabilistic models for 3D point cloud generation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2837–2845.
[33] M. Hu, Y. Wang, T.-J. Cham, J. Yang, and P. N. Suganthan, "Global context with discrete diffusion in vector quantised modelling for image generation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 11502–11511.