About

The 2nd Autonomous Vehicle Vision (AVVision) Workshop aims to bring together industry professionals and academics to brainstorm and exchange ideas on the advancement of computer vision techniques for autonomous driving. In this one-day workshop, seven keynote talks and regular paper presentations (oral and poster) will cover the state of the art as well as the open challenges in autonomous driving.

Speakers

Cordelia Schmid

INRIA

Raquel Urtasun

University of Toronto

Andreas Geiger

University of Tübingen

Fisher Yu

ETH Zürich

Laura Leal-Taixé

Technical University of Munich

Matthew Johnson-Roberson

University of Michigan

Carl Wellington

Aurora

Organizers

General Chairs

Rui (Ranger) Fan

UC San Diego

Nemanja Djuric

Aurora

Rowan McAllister

Toyota Research Institute

Ioannis Pitas

Aristotle University of Thessaloniki

Program Committee

David J. Kriegman, UC San Diego
Qijun Chen, Tongji University
Walterio Mayol-Cuevas, University of Bristol & Amazon
Xinchen Yan, Uber ATG
Xiang Gao, Idriverplus
Ming Liu, HKUST
Jianping He, SJTU
Junhao Xiao, NUDT
Kai Han, University of Bristol
Hesham Eraqi, American University in Cairo
Wenshuo Wang, McGill University
Yue Wang, Zhejiang University
Joshua Manela, Waymo
Dequan Wang, UC Berkeley
Sen Jia, University of Waterloo
Yi Zhou, HKUST
Mohammud J. Bocus, University of Bristol
Lei Qiao, SJTU
Peng Yun, HKUST
Meng Fan, Aurora
Hengli Wang, HKUST
Yuan Wang, SmartMore
Henggang Cui, Motional
Zhuwen Li, Nuro Inc.
Meet Shah, Waymo
Shangxuan, Waymo
Lingyao Zhang, Aurora
Carl Wellington, Aurora
Huaiyang Huang, HKUST
Shivam Gautam, Aurora
Weikai Chen, Tencent America
Peide Cai, HKUST
Bohuan Xue, HKUST
Slobodan Vucetic, Temple University
Zhaoen Su, Aurora
Fang-Chieh Chou, Aurora
Nick Rhinehart, UC Berkeley

Submission

Call for papers
With a number of breakthroughs in autonomous system technology over the past decade, the race to commercialize self-driving cars has become fiercer than ever. The integration of advanced sensing, computer vision, signal/image processing, and machine/deep learning into autonomous vehicles enables them to perceive the environment intelligently and navigate safely. Autonomous driving must ensure safe, reliable, and efficient automated mobility in complex, uncontrolled, real-world environments, with applications ranging from automated transportation and farming to public safety and environment exploration.

Visual perception is a critical component of autonomous driving. Enabling technologies include:
a) affordable sensors that can acquire useful data under varying environmental conditions;
b) reliable simultaneous localization and mapping;
c) machine learning that can effectively handle varying real-world conditions and unforeseen events, as well as "machine-learning friendly" signal processing to enable more effective classification and decision making;
d) hardware and software co-design for efficient real-time performance;
e) resilient and robust platforms that can withstand adversarial attacks and failures; and
f) end-to-end system integration of sensing, computer vision, signal/image processing, and machine/deep learning.

The 2nd AVVision workshop will cover all of these topics. Research papers are solicited in, but not limited to, the following topics:

Important Dates
Submission Guidelines
Regular papers: Authors are encouraged to submit high-quality, original research (i.e., work not previously published or accepted for publication in substantially similar form in any peer-reviewed venue, including a journal, conference, or workshop). The paper template is identical to that of the ICCV 2021 main conference. Papers are limited to eight pages, including figures and tables, in the ICCV style. Additional pages containing only cited references are allowed. Please refer to the following files for detailed formatting instructions:

Papers that are not properly anonymized, do not use the template, or exceed eight pages (excluding references) will be rejected without review. The submission site is now open.

Extended abstracts: We encourage participants to submit preliminary, unpublished ideas as extended abstracts. These submissions would benefit from the additional exposure and discussion, which can shape a better future publication. We also invite papers that have been published at other venues, to spark discussions and foster new collaborations. Submissions may consist of up to four pages, plus one additional page solely for references (using the template detailed above). Extended abstracts will NOT be published in the workshop proceedings.
Accepted Papers
  1. Monocular 3D Localization of Vehicles in Road Scenes
    Haotian Zhang, Haorui Ji, Aotian Zheng, Jenq-Neng Hwang, Ren-Hung Hwang
    paper
  2. DriPE: A Dataset for Human Pose Estimation in Real-World Driving Settings
    Romain Guesdon, Carlos Crispim-Junior, Laure Tougne
    paper
  3. On the Road to Large-Scale 3D Monocular Scene Reconstruction using Deep Implicit Functions
    Thomas Roddick, Benjamin Biggs, Daniel Olmeda Reino, Roberto Cipolla
    paper | supplementary material
  4. Weakly Supervised Approach for Joint Object and Lane Marking Detection
    Pranjay Shyam, Kuk-Jin Yoon, Kyung-Soo Kim
    paper
  5. Speak2Label: Using Domain Knowledge for Creating a Large Scale Driver Gaze Zone Estimation Dataset
    Shreya Ghosh, Abhinav Dhall, Garima Sharma, Sarthak Gupta, Nicu Sebe
    paper | supplementary material
  6. Multi-weather city: Adverse weather stacking for autonomous driving
    Valentina Musat, Ivan Fursa, Paul Newman, Fabio Cuzzolin, Andrew Bradley
    paper
  7. YOLinO: Generic Single Shot Polyline Detection in Real Time
    Annika Meyer, Jan-Hendrik Pauls, Christoph Stiller
    paper | supplementary material
  8. Frustum-PointPillars: A Multi-Stage Approach for 3D Object Detection using RGB Camera and LiDAR
    Anshul Paigwar, David Sierra-Gonzalez, Özgür Erkent, Christian Laugier
    paper
  9. Occupancy Grid Mapping with Cognitive Plausibility for Autonomous Driving Applications
    Alice Plebe, Julian F. P. Kooij, Gastone Pietro Rosati Papini, Mauro Da Lio
    paper
  10. A Computer Vision-Based Attention Generator using DQN
    Jordan Chipka, Shuqing Zeng, Thanura Elvitigala, Priyantha Mudalige
    paper
  11. RaidaR: A Rich Annotated Image Dataset of Rainy Street Scenes
    Jiongchao Jin, Arezou Fatemi, Wallace Michel Pinto Lira, Fenggen Yu, Biao Leng, Rui Ma, Ali Mahdavi-Amiri, Hao Zhang
    paper | supplementary material
  12. CDAda: A Curriculum Domain Adaptation for Nighttime Semantic Segmentation
    Qi Xu, Yinan Ma, Jing Wu, Chengnian Long, Xiaoling Huang
    paper
  13. Causal BERT: Improving object detection by searching for challenging groups
    Cinjon Resnick, Or Litany, Amlan Kar, Karsten Kreis, James Lucas, Kyunghyun Cho, Sanja Fidler
    paper | supplementary material
  14. CenterPoly: real-time instance segmentation using bounding polygons
    Hughes Perreault, Guillaume-Alexandre Bilodeau, Nicolas Saunier, Maguelonne Héritier
    paper
  15. It’s All Around You: Range-Guided Cylindrical Network for 3D Object Detection
    Meytal Rapoport-Lavie, Dan Raviv
    paper
  16. SCARF: A Semantic Constrained Attention Refinement Network for Semantic Segmentation
    Xiaofeng Ding, Chaomin Shen, Zhengping Che, Tieyong Zeng, Yaxin Peng
    paper | supplementary material
  17. SDVTracker: Real-Time Multi-Sensor Association and Tracking for Self-Driving
    Shivam Gautam, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Brian C. Becker
    paper
  18. SA-Det3D: Self-Attention Based Context-Aware 3D Object Detection
    Prarthana Bhattacharyya, Chengjie Huang, Krzysztof Czarnecki
    paper
  19. Semantics-aware Multi-modal Domain Translation: From LiDAR Point Clouds to Panoramic Color Images
    Tiago Cortinhal, Fatih Kurnaz, Eren Erdal Aksoy
    paper
  20. SS-SFDA: Self-Supervised Source-Free Domain Adaptation for Road Segmentation in Hazardous Environments
    Divya Kothandaraman, Rohan Chandra, Dinesh Manocha
    paper
  21. Graph Convolutional Networks for 3D Object Detection on Radar Data
    Michael Meyer, Georg Kuschk, Sven Tomforde
    paper
  22. Few-Shot Batch Incremental Road Object Detection via Detector Fusion
    Anuj Tambwekar, Kshitij Agrawal, Anay Majee, Anbumani Subramanian
    paper
  23. Synthetic Data Generation using Imitation Training
    Aman Kishore, Tae Eun Choe, Junghyun Kwon, Minwoo Park, Pengfei Hao, Akshita Mittel
    paper
  24. Efficient Uncertainty Estimation in Semantic Segmentation via Distillation
    Christopher J. Holder, Muhammad Shafique
    paper
  25. Visual Reasoning using Graph Convolutional Networks for Predicting Pedestrian Crossing Intention
    Tina Chen, Renran Tian, Zhengming Ding
    paper
  26. Cross-modal Matching CNN for Autonomous Driving Sensor Data Monitoring
    Yiqiang Chen, Feng Liu, Ke Pei
    paper
  27. Multi-Stage Fusion for Multi-Class 3D Lidar Detection
    Zejie Wang, Zhen Zhao, Zhao Jin, Zhengping Che, Jian Tang, Chaomin Shen, Yaxin Peng
    paper

Contact

Phone: +1 (412) 710-6868
