* Instructions can be found here.
CalmCar, a future-mobility parts supplier, specializes in
deep learning-based embedded vision products
and data services. CalmCar's core product, a
multi-camera active surround-view perception system,
is built on an automotive-grade computing
platform and enables applications such as smart
parking, L2+ autonomous driving, and crowdsourced mapping.
Multi-target multi-camera (MTMC) tracking systems automatically
track multiple vehicles using an array of cameras.
In this challenge, participants are required to design
robust MTMC tracking algorithms targeted at vehicles,
such that the same vehicle captured by different cameras
is assigned the same tracking ID. Competitors will have access
to four large-scale training datasets, each of which includes
around 1200 annotated RGB images, with labels covering
vehicle type, tracking ID, and 2D bounding box.
Identification precision (IDP) and identification recall (IDR)
will be used as the metrics to evaluate the performance of
the implemented algorithms. Competitors are required
to submit their pretrained models as well as the corresponding
Docker image files via the
CMT submission system for
algorithm evaluation (in terms of both speed and accuracy).
The winner of the competition will receive a monetary prize
(US$5000) and will give a keynote presentation at the workshop.
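For reference, IDP and IDR follow the identity-level measures of Ristani et al. (2016): after a global one-to-one matching between ground-truth and predicted identities, they are computed from ID true positives (IDTP), ID false positives (IDFP), and ID false negatives (IDFN). The sketch below shows only that final step; the function name and arguments are illustrative, not part of the official evaluation kit.

```python
def id_precision_recall(idtp: int, idfp: int, idfn: int):
    """Identification precision/recall from identity-level match counts.

    idtp: detections matched to the correct identity
    idfp: detections assigned to a wrong or spurious identity
    idfn: ground-truth detections left unmatched
    """
    idp = idtp / (idtp + idfp) if (idtp + idfp) else 0.0
    idr = idtp / (idtp + idfn) if (idtp + idfn) else 0.0
    return idp, idr
```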
* Instructions can be found here.
HKUST is widely regarded as one of the fastest-growing
universities in the world. In 2019, the university
was ranked seventh in Asia by QS and third by The Times,
and around the top 40 internationally.
It was ranked 27th in the world and second in Hong
Kong in the QS 2021 rankings.
UDI
is committed to making autonomous systems ubiquitous. UDI
focuses on creating safe, stable, mass-produced autonomous
vehicles and providing integrated, efficient, and
reproducible intelligent logistics solutions. UDI is
leading the development of logistics automation in the
era of Industry 4.0.
Deep neural networks excel at learning from large amounts
of data, but they can be inefficient at
generalizing and applying learned knowledge to new datasets
or environments. In this competition, participants need
to develop an unsupervised domain adaptation (UDA) framework
that allows a model trained on a large synthetic dataset
to generalize to real-world imagery. The tasks in this
competition are: 1) UDA for monocular depth prediction
and 2) UDA for semantic driving-scene segmentation.
Competitors will have access to the Ready to Drive (R2D)
dataset, a large-scale synthetic driving-scene dataset
collected under different weather/illumination conditions
using the CARLA simulator. In addition, competitors
will also have access to a small amount of real-world data.
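As a concrete illustration of one common UDA strategy (the challenge does not prescribe any particular method), the sketch below shows a DANN-style training step in PyTorch: a gradient-reversal layer lets a domain classifier be trained adversarially so that the feature extractor learns domain-invariant representations. All module names (feat_net, task_head, dom_head) are hypothetical placeholders, and the per-pixel tasks in this challenge would apply the same idea to feature maps rather than pooled vectors.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def train_step(feat_net, task_head, dom_head, opt, src_x, src_y, tgt_x, lambd=0.1):
    """One DANN-style update: supervised loss on labeled synthetic (source) data
    plus an adversarial domain loss computed on source and target features."""
    src_feat, tgt_feat = feat_net(src_x), feat_net(tgt_x)

    # Supervised task loss uses source labels only (the target set is unlabeled).
    task_loss = F.cross_entropy(task_head(src_feat), src_y)

    # The domain head tries to tell source (0) from target (1); gradient
    # reversal makes feat_net maximize that loss, i.e., confuse the domains.
    feats = GradReverse.apply(torch.cat([src_feat, tgt_feat]), lambd)
    dom_y = torch.cat([torch.zeros(len(src_x), dtype=torch.long, device=src_x.device),
                       torch.ones(len(tgt_x), dtype=torch.long, device=tgt_x.device)])
    dom_loss = F.cross_entropy(dom_head(feats), dom_y)

    opt.zero_grad()
    (task_loss + dom_loss).backward()
    opt.step()
    return task_loss.item(), dom_loss.item()
```

In practice, the reversal weight lambd is usually ramped up over training so the domain loss does not dominate before the task head has learned anything useful.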
The mean absolute relative error (mAbsRel)
and the mean intersection-over-union (mIoU) score will be
used as the metrics to evaluate UDA for monocular
depth prediction and UDA for semantic driving-scene segmentation,
respectively. Competitors will be required to submit
their pretrained models and Docker image files via the
CMT submission system.
The winner of the competition will give a keynote presentation at the workshop.
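For reference, plain-vanilla renditions of the two metrics are sketched below; the official evaluation may differ in details such as valid-pixel masks, depth caps, and the class list, which are defined by the organizers.

```python
import numpy as np

def abs_rel(pred, gt, eps=1e-6):
    """Mean absolute relative depth error: mean(|pred - gt| / gt) over valid pixels."""
    mask = gt > eps                          # ignore invalid / zero-depth pixels
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean intersection-over-union across classes for semantic segmentation."""
    valid = gt != ignore_index
    ious = []
    for c in range(num_classes):
        p, g = (pred == c) & valid, (gt == c) & valid
        union = np.logical_or(p, g).sum()
        if union:                            # skip classes absent from both maps
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```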
Researchers behind top-ranked object detection algorithms submitted to the KITTI Object Detection Benchmarks will have the opportunity to present their work at the 1st AVVision workshop, subject to space availability and approval by the workshop organizers. Note that only algorithms submitted before 12/20/2020 are eligible for presentation at the 1st AVVision workshop.