SIGGRAPH Asia 2023 Technical Communications: Sydney, NSW, Australia
- June Kim, Rajesh Sharma: SIGGRAPH Asia 2023 Technical Communications, SA Technical Communications 2023, Sydney, NSW, Australia, December 12-15, 2023. ACM 2023
- Bingyi Chen, Zengyu Liu, Li Yuan, Zhitao Liu, Yi Li, Guan Wang, Ning Xie: Monte Carlo Denoising via Multi-scale Auxiliary Feature Fusion Guided Transformer. 1:1-1:4
- Ishaan Nikhil Shah, Aakash KT, P. J. Narayanan: Combining Resampled Importance and Projected Solid Angle Samplings for Many Area Light Rendering. 2:1-2:4
- Edoardo Alberto Dominici, Emanuel Schrade, Basile Fraboni, Luke Emrose, Curtis Black: Focus Range: Production Ray Tracing of Depth of Field. 3:1-3:4
- Kenta Eto, Sylvain Meunier, Takahiro Harada, Guillaume Boissé: Real-time Rendering of Glossy Reflections using Ray Tracing and Two-level Radiance Caching. 4:1-4:4
- Vanessa Tan, Junghyun Nam, Juhan Nam, Junyong Noh: Motion to Dance Music Generation using Latent Diffusion Model. 5:1-5:4
- Megani Rajendran, Chek Tien Tan, Indriyati Atmosukarto, Aik Beng Ng, Zhihua Zhou, Andrew Grant, Simon See: SynthDa: Exploiting Existing Real-World Data for Usable and Accessible Synthetic Data Generation. 6:1-6:4
- Charles de Malefette, Anran Qi, Amal Dev Parakkat, Marie-Paule Cani, Takeo Igarashi: PerfectDart: Automatic Dart Design for Garment Fitting. 7:1-7:4
- Ryota Koiso, Tatsuya Kobayashi, Keisuke Nonaka, Kyoji Matsushima: High-quality Color-animated CGH Using a Motor-driven Photomask. 8:1-8:4
- Trinity Suma, Birate Sonia, Kwame Agyemang Baffour, Oyewole Oyekoya: The Effects of Avatar Voice and Facial Expression Intensity on Emotional Recognition and User Perception. 9:1-9:4
- Zhiyuan Yu, Cheng-Hung Lo, Mutian Niu, Hai-Ning Liang: Comparing Cinematic Conventions through Emotional Responses in Cinematic VR and Traditional Mediums. 10:1-10:4
- Divya Kothandaraman, Tianyi Zhou, Ming C. Lin, Dinesh Manocha: Aerial Diffusion: Text Guided Ground-to-Aerial View Synthesis from a Single Image using Diffusion Models. 11:1-11:4
- Pengzhi Li, Qinxuan Huang, Yikang Ding, Zhiheng Li: LayerDiffusion: Layered Controlled Image Editing with Diffusion Models. 12:1-12:4
- Kenta Eto, Yusuke Tokuyoshi: Bounded VNDF Sampling for Smith-GGX Reflections. 13:1-13:4
- André Mazzone, Chris Rydalch: Standard Shader Ball: A Modern and Feature-Rich Render Test Scene. 14:1-14:3
- Birate Sonia, Trinity Suma, Kwame Agyemang Baffour, Oyewole Oyekoya: Mapping and Recognition of Facial Expressions on Another Person's Look-Alike Avatars. 15:1-15:4
- Bo Li, Lingchen Yang, Barbara Solenthaler: Efficient Incremental Potential Contact for Actuated Face Simulation. 16:1-16:4
- Yiqin Zhao, Rohit Pandey, Yinda Zhang, Ruofei Du, Feitong Tan, Chetan Ramaiah, Tian Guo, Sean Fanello: Portrait Expression Editing With Mobile Photo Sequence. 17:1-17:4
- Wataru Kawabe, Taisuke Hashimoto, Fabrice Matulic, Takeo Igarashi, Keita Higuchi: Interactive Material Annotation on 3D Scanned Models leveraging Color-Material Correlation. 18:1-18:4
- Xiaojuan Gu, Junliang Chen, Bo Li, Jun Chen: Footstep Detection for Film Sound Production. 19:1-19:4
- Neil Anthony Dodgson, Kathleen Griffin: Training Orchestral Conductors in Beating Time. 20:1-20:4
- Andrew Chalmers, Junhong Zhao, Weng Khuan Hoh, James Drown, Simon Finnie, Richard Yao, James Lin, James Wilmott, Arindam Dey, Mark Billinghurst, Taehyun Rhee: A Motion-Simulation Platform to Generate Synthetic Motion Data for Computer Vision Tasks. 21:1-21:4
- Zhen Xu, Tao Xie, Sida Peng, Haotong Lin, Qing Shuai, Zhiyuan Yu, Guangzhao He, Jiaming Sun, Hujun Bao, Xiaowei Zhou: EasyVolcap: Accelerating Neural Volumetric Video Research. 22:1-22:4
- Yifeng Zhou, Shuheng Wang, Wenfa Li, Chao Zhang, Li Rao, Pu Cheng, Yi Xu, Jinle Ke, Wenduo Feng, Wen Zhou, Hao Xu, Yukang Gao, Yang Ding, Weixuan Tang, Shaohui Jiao: Live4D: A Real-time Capture System for Streamable Volumetric Video. 23:1-23:4
- Stevo Rackovic, Cláudia Soares, Dusan Jakovetic: Distributed Solution of the Blendshape Rig Inversion Problem. 24:1-24:4
- Rinat Abdrashitov, Kim Raichstat, Jared Monsen, David Hill: Robust Skin Weights Transfer via Weight Inpainting. 25:1-25:4
- Ran Dong, Soichiro Ikuno, Xi Yang: Learning Multivariate Empirical Mode Decomposition for Spectral Motion Editing. 26:1-26:4
- Toby Chong, Alina Chadwick, I-Chao Shen, Haoran Xie, Takeo Igarashi: MicroGlam: Microscopic Skin Image Dataset with Cosmetics. 27:1-27:4
- Zhongfei Qing, Zhongang Cai, Zhitao Yang, Lei Yang: Story-to-Motion: Synthesizing Infinite and Controllable Character Animation from Long Text. 28:1-28:4
- Pranav Manu, Astitva Srivastava, Avinash Sharma: CLIP-Head: Text-Guided Generation of Textured Neural Parametric 3D Head Models. 29:1-29:4
- Soorya Narayan Jayaraman Mohan: Hair Tubes: Stylized Hair from Polygonal Meshes of Arbitrary Topology. 30:1-30:4