Call for Papers
With a number of breakthroughs in autonomous system technology
over the past decade, the race to commercialize self-driving
cars has become fiercer than ever.
The integration of advanced sensing, computer vision,
signal/image processing, and machine/deep learning into autonomous
vehicles enables them to perceive the environment
intelligently and navigate safely. Autonomous driving systems
must deliver safe, reliable, and efficient automated
mobility in complex, uncontrolled real-world environments.
Applications range from automated transportation
and farming to public safety and environmental exploration.
Visual perception is a critical component of autonomous driving.
Enabling technologies include:
- affordable sensors that can acquire useful data under varying environmental conditions;
- reliable simultaneous localization and mapping;
- machine learning that can effectively handle varying real-world conditions and unforeseen events, as well as “machine-learning-friendly” signal processing that enables more effective classification and decision making;
- hardware and software co-design for efficient real-time performance;
- resilient and robust platforms that can withstand adversarial attacks and failures;
- end-to-end system integration of sensing, computer vision, signal/image processing, and machine/deep learning.
The 2nd AVVision workshop will cover all of these topics. Research
papers are solicited on, but not limited to, the following topics:
- 3D road/environment reconstruction and understanding;
- Mapping and localization for autonomous cars;
- Semantic/instance driving scene segmentation and semantic mapping;
- Self-supervised/unsupervised visual environment perception;
- Car/pedestrian/object/obstacle detection/tracking and 3D localization;
- Car/license plate/road sign detection and recognition;
- Driver status monitoring and human-car interfaces;
- Deep/machine learning and image analysis for car perception;
- Adversarial domain adaptation for autonomous driving;
- On-board embedded visual perception systems;
- Bio-inspired vision sensing for car perception;
- Real-time deep learning inference.
Important Dates
- Submission deadline: Jul. 25, 2021
- Review feedback release date: Aug. 11, 2021
- Camera-ready submission: Aug. 16, 2021
Submission Guidelines
Regular papers: Authors are encouraged to submit high-quality, original
research, i.e., work that has not been previously published or accepted
for publication in substantially similar form in any peer-reviewed venue,
including journals, conferences, and workshops.
The paper template is identical to that of the
ICCV 2021 main conference.
Papers are limited to eight pages, including figures and tables,
in the ICCV style. Additional pages containing only cited
references are allowed. Please refer to the following files for
detailed formatting instructions:
- Example submission paper with detailed instructions;
- LaTeX templates (zip): iccv2021AuthorKit.zip
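For orientation, the sketch below shows a minimal document skeleton in the ICCV style. It assumes the standard macros of the ICCV author kit (iccv.sty, \iccvfinalcopy, \iccvPaperID); please verify the exact names against the example file in the downloaded iccv2021AuthorKit.zip.

    \documentclass[10pt,twocolumn,letterpaper]{article}
    \usepackage{iccv}     % style file shipped in iccv2021AuthorKit.zip
    \usepackage{times}
    \usepackage{graphicx}

    % Keep \iccvfinalcopy commented out so the submission stays in
    % anonymized review mode; uncomment it only for the camera-ready.
    %\iccvfinalcopy

    \def\iccvPaperID{****} % replace with the paper ID assigned at submission

    \begin{document}

    \title{Your AVVision Workshop Paper Title}
    \author{Anonymous ICCV submission\\Paper ID \iccvPaperID}
    \maketitle

    % Main text: at most eight pages including figures and tables;
    % additional pages containing only cited references are allowed.

    {\small
    \bibliographystyle{ieee_fullname}
    \bibliography{egbib}
    }

    \end{document}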
Papers that are not properly anonymized, do not use the template, or exceed eight pages (excluding references) will be rejected without review.
The submission site is now open.
Extended abstracts: We encourage participants to submit preliminary ideas that have not been published before as
extended abstracts. These submissions will benefit from additional exposure and discussion that can help shape a
stronger future publication. We also invite papers that have been published at other venues, to spark discussion and foster new collaborations.
Submissions may consist of up to four pages, plus one additional page solely for references (using the template detailed above). Extended abstracts will
NOT be published in the workshop proceedings.
Accepted Papers
- Monocular 3D Localization of Vehicles in Road Scenes
Haotian Zhang, Haorui Ji, Aotian Zheng, Jenq-Neng Hwang, Ren-Hung Hwang
paper
- DriPE: A Dataset for Human Pose Estimation in Real-World Driving Settings
Romain Guesdon, Carlos Crispim-Junior, Laure Tougne
paper
- On the Road to Large-Scale 3D Monocular Scene Reconstruction using Deep Implicit Functions
Thomas Roddick, Benjamin Biggs, Daniel Olmeda Reino, Roberto Cipolla
paper | supplementary material
- Weakly Supervised Approach for Joint Object and Lane Marking Detection
Pranjay Shyam, Kuk-Jin Yoon, Kyung-Soo Kim
paper
- Speak2Label: Using Domain Knowledge for Creating a Large Scale Driver Gaze Zone Estimation Dataset
Shreya Ghosh, Abhinav Dhall, Garima Sharma, Sarthak Gupta, Nicu Sebe
paper | supplementary material
- Multi-weather city: Adverse weather stacking for autonomous driving
Valentina Musat, Ivan Fursa, Paul Newman, Fabio Cuzzolin, Andrew Bradley
paper
- YOLinO: Generic Single Shot Polyline Detection in Real Time
Annika Meyer, Jan-Hendrik Pauls, Christoph Stiller
paper | supplementary material
- Frustum-PointPillars: A Multi-Stage Approach for 3D Object Detection using RGB Camera and LiDAR
Anshul Paigwar, David Sierra-Gonzalez, Özgür Erkent, Christian Laugier
paper
- Occupancy Grid Mapping with Cognitive Plausibility for Autonomous Driving Applications
Alice Plebe, Julian F. P. Kooij, Gastone Pietro Rosati Papini, Mauro Da Lio
paper
- A Computer Vision-Based Attention Generator using DQN
Jordan Chipka, Shuqing Zeng, Thanura Elvitigala, Priyantha Mudalige
paper
- RaidaR: A Rich Annotated Image Dataset of Rainy Street Scenes
Jiongchao Jin, Arezou Fatemi, Wallace Michel Pinto Lira, Fenggen Yu, Biao Leng, Rui Ma, Ali Mahdavi-Amiri, Hao Zhang
paper | supplementary material
- CDAda: A Curriculum Domain Adaptation for Nighttime Semantic Segmentation
Qi Xu, Yinan Ma, Jing Wu, Chengnian Long, Xiaoling Huang
paper
- Causal BERT: Improving object detection by searching for challenging groups
Cinjon Resnick, Or Litany, Amlan Kar, Karsten Kreis, James Lucas, Kyunghyun Cho, Sanja Fidler
paper | supplementary material
- CenterPoly: real-time instance segmentation using bounding polygons
Hughes Perreault, Guillaume-Alexandre Bilodeau, Nicolas Saunier, Maguelonne Héritier
paper
- It’s All Around You: Range-Guided Cylindrical Network for 3D Object Detection
Meytal Rapoport-Lavie, Dan Raviv
paper
- SCARF: A Semantic Constrained Attention Refinement Network for Semantic Segmentation
Xiaofeng Ding, Chaomin Shen, Zhengping Che, Tieyong Zeng, Yaxin Peng
paper | supplementary material
- SDVTracker: Real-Time Multi-Sensor Association and Tracking for Self-Driving
Shivam Gautam, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Brian C. Becker
paper
- SA-Det3D: Self-Attention Based Context-Aware 3D Object Detection
Prarthana Bhattacharyya, Chengjie Huang, Krzysztof Czarnecki
paper
- Semantics-aware Multi-modal Domain Translation: From LiDAR Point Clouds to Panoramic Color Images
Tiago Cortinhal, Fatih Kurnaz, Eren Erdal Aksoy
paper
- SS-SFDA: Self-Supervised Source-Free Domain Adaptation for Road Segmentation in Hazardous Environments
Divya Kothandaraman, Rohan Chandra, Dinesh Manocha
paper
- Graph Convolutional Networks for 3D Object Detection on Radar Data
Michael Meyer, Georg Kuschk, Sven Tomforde
paper
- Few-Shot Batch Incremental Road Object Detection via Detector Fusion
Anuj Tambwekar, Kshitij Agrawal, Anay Majee, Anbumani Subramanian
paper
- Synthetic Data Generation using Imitation Training
Aman Kishore, Tae Eun Choe, Junghyun Kwon, Minwoo Park, Pengfei Hao, Akshita Mittel
paper
- Efficient Uncertainty Estimation in Semantic Segmentation via Distillation
Christopher J. Holder, Muhammad Shafique
paper
- Visual Reasoning using Graph Convolutional Networks for Predicting Pedestrian Crossing Intention
Tina Chen, Renran Tian, Zhengming Ding
paper
- Cross-modal Matching CNN for Autonomous Driving Sensor Data Monitoring
Yiqiang Chen, Feng Liu, Ke Pei
paper
- Multi-Stage Fusion for Multi-Class 3D Lidar Detection
Zejie Wang, Zhen Zhao, Zhao Jin, Zhengping Che, Jian Tang, Chaomin Shen, Yaxin Peng
paper