Sportlogiq’s research team, SLiQ Labs, will be attending and presenting their work at the 2020 Winter Conference on Applications of Computer Vision (WACV), held March 1st to 5th in Colorado. WACV is the premier international conference on applications of computer vision.
The papers touch on different aspects of computer vision and illustrate how parts of our vision systems work. More specifically, one of the research papers describes our most recent approach to camera self-calibration for sports field registration. This work was done in collaboration with Dr. Kwang Moo Yi from the University of Victoria. The other two research papers describe our methods for person re-identification in tracking systems, developed in collaboration with Prof. Eric Granger at École de technologie supérieure and Prof. Alex Wong at the University of Waterloo.
Here’s what you can expect to see from the SLiQ Labs team at the conference:
Title: Optimizing Through Learned Errors for Accurate Sports Field Registration (Patent Pending)
Authors: Wei Jiang, Juan Camilo Gamboa Higuera, Baptiste Angles, Weiwei Sun, Mehrsan Javan, Kwang Moo Yi.
We propose an optimization-based framework to register sports field templates onto broadcast videos. For accurate registration we go beyond the prevalent feed-forward paradigm. Instead, we propose to train a deep network that regresses the registration error, and then register images by finding the registration parameters that minimize the regressed error. We demonstrate the effectiveness of our method by applying it to real-world sports broadcast videos, outperforming the state of the art. We further apply our method on a synthetic toy example and demonstrate that our method brings significant gains even when the problem is simplified and unlimited training data is available.
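The core idea, register by searching for the parameters that minimize a learned error estimate rather than predicting the parameters directly, can be sketched in a few lines. Here a toy quadratic stands in for the trained error-regression network, and the registration parameters are reduced to a 2D translation; the function names, target values, and finite-difference optimizer are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Toy stand-in for the trained error-regression network: given candidate
# registration parameters (here just a 2D translation), return a predicted
# registration error. In the paper this is a deep network conditioned on
# the broadcast frame; the target below is an arbitrary ground truth.
def predicted_error(params, target=np.array([3.0, -1.5])):
    return float(np.sum((params - target) ** 2))

def register(init_params, err_fn, lr=0.1, steps=200, eps=1e-4):
    """Register by descending the regressed error, using finite-difference
    gradients as a simple stand-in for backpropagating through the network."""
    p = np.asarray(init_params, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(p)
        for i in range(p.size):
            d = np.zeros_like(p)
            d[i] = eps
            grad[i] = (err_fn(p + d) - err_fn(p - d)) / (2 * eps)
        p -= lr * grad  # move toward parameters with lower predicted error
    return p

params = register(np.zeros(2), predicted_error)
```

On this toy objective the loop converges to the target translation, which mirrors the paper's framing: the quality of the final registration depends on how well the learned error surface reflects the true one, not on a single feed-forward guess.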
Title: Pose Guided Gated Fusion for Person Re-identification (Patent Pending)
Authors: Amran Bhuiyan, Yang Liu, Parthipan Siva, Mehrsan Javan, Ismail Ben Ayed, Eric Granger
Person re-identification is an important yet challenging problem in visual recognition. Despite the recent advances with deep learning (DL) models for spatio-temporal and multi-modal fusion, re-identification approaches often fail to leverage the contextual information (e.g., pose and illumination) to dynamically select the most discriminant convolutional filters (i.e., appearance features) for feature representation and inference. State-of-the-art techniques for gated fusion employ complex dedicated part-or attention-based architectures for late fusion, and do not incorporate pose and appearance information to train the backbone network.
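A minimal sketch of what "gated fusion" means in this context: contextual (e.g., pose) features produce per-channel gates in (0, 1) that dynamically re-weight the appearance features before matching. The feature sizes, random weights, and function names below are illustrative assumptions, not the architecture from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pose_gated_fusion(appearance_feat, pose_feat, W, b):
    """Pose-guided gating: pose features yield one gate per appearance
    channel, selecting the most discriminant channels for this input."""
    gates = sigmoid(W @ pose_feat + b)   # gates lie in (0, 1)
    return gates * appearance_feat       # dynamically re-weighted features

rng = np.random.default_rng(0)
app = rng.normal(size=8)    # appearance (convolutional) features
pose = rng.normal(size=4)   # pose / contextual features
W = rng.normal(size=(8, 4))
b = np.zeros(8)
fused = pose_gated_fusion(app, pose, W, b)
```

Because the gates are bounded in (0, 1), each appearance channel can only be attenuated or passed through, which is how context selects filters rather than overwriting them; in a trained model, W and b would be learned jointly with the backbone.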
Authors: Devinder Kumar, Parthipan Siva, Paul Marchwica, Alexander Wong
An ongoing major challenge in computer vision is the task of person re-identification, where the goal is to match individuals across different, non-overlapping camera views. While recent success has been achieved via supervised learning using deep neural networks, such methods have limited widespread adoption due to the need for large-scale, customized data annotation. As such, there has been a recent focus on unsupervised learning approaches to mitigate the data annotation issue; however, current approaches in literature have limited performance compared to supervised learning approaches as well as limited applicability for adoption in new environments.
To connect with us at the conference, get in touch at firstname.lastname@example.org.