Welcome to Ty Nguyen's Homepage
About Me
I am a Ph.D. candidate in Computer and Information Science at the University of Pennsylvania, working in Dr. Vijay Kumar's lab. I work on deep learning and machine perception for robotics. My recent focus is on developing real-time algorithms for vision tasks on platforms subject to size, weight, and power constraints.
I received a B.Eng. degree in Electrical Engineering from Hanoi University of Science and Technology, Hanoi, Vietnam, in 2012, followed by an M.S. in Computer Engineering from Ulsan National Institute of Science and Technology, Ulsan, South Korea, in 2016.

UPenn Quadrangle Dormitories

Education
Since 2016
- Ph.D. Candidate in Computer & Information Science
- At the University of Pennsylvania
- Research topic: Machine Perception for Micro Aerial Vehicle Robot Systems
Research

C. Qu, T. Nguyen, C.J. Taylor. "Depth Completion via Deep Basis Fitting," in the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV 2020)
(pdf)
- We consider the task of image-guided depth completion where our system must infer the depth at every pixel of an input image based on the image content and a sparse set of depth measurements
- We propose a novel approach that replaces the final 1 × 1 convolutional layer employed in most depth completion networks with a least-squares fitting module that computes weights by fitting the implicit depth bases to the given sparse depth measurements (a minimal sketch of this idea follows this list)
- Our proposed method can be naturally extended to a multi-scale formulation for improved self-supervised training
- We demonstrate through extensive experiments on various datasets that our approach achieves consistent improvements over state-of-the-art baseline methods with small computational overhead
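
The basis-fitting step can be pictured as a small per-image least-squares solve. The following NumPy sketch illustrates the general idea only; it is not the authors' implementation, and the function name, damping term, and toy data are all illustrative.

# Hypothetical sketch of the basis-fitting idea. Given per-pixel basis features
# produced by a network and sparse depth measurements at a subset of pixels,
# solve for the mixing weights by least squares instead of using a fixed
# learned 1x1 convolution, then predict dense depth as bases @ weights.
import numpy as np

def fit_depth_bases(bases, sparse_depth, valid_mask, damping=1e-4):
    """Fit weights w so that bases[valid] @ w approximates sparse_depth[valid].

    bases:        (H, W, K) implicit depth bases from the network
    sparse_depth: (H, W)    measured depth, arbitrary values where invalid
    valid_mask:   (H, W)    boolean, True where a measurement exists
    damping:      Tikhonov regularization to keep the solve well conditioned
    """
    H, W, K = bases.shape
    A = bases[valid_mask]              # (N, K) bases at measured pixels
    b = sparse_depth[valid_mask]       # (N,)   corresponding measurements
    # Regularized normal equations: (A^T A + damping * I) w = A^T b
    AtA = A.T @ A + damping * np.eye(K)
    Atb = A.T @ b
    w = np.linalg.solve(AtA, Atb)      # (K,) fitted mixing weights
    dense_depth = bases.reshape(-1, K) @ w
    return dense_depth.reshape(H, W)

# Toy usage: random bases and roughly 10% sparse measurements on a 64x80 image.
rng = np.random.default_rng(0)
bases = rng.standard_normal((64, 80, 16))
gt = np.abs(rng.standard_normal((64, 80))) + 1.0
mask = rng.random((64, 80)) < 0.1
pred = fit_depth_bases(bases, gt, mask)
print(pred.shape)  # (64, 80)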

- We address the localization of robots in a multi-MAV system where external infrastructure like GPS or motion capture systems may not be available.
- Our framework fuses the onboard VIO with anonymous, vision-based robot-to-robot detections to estimate all robot poses in one common frame, addressing three main challenges: 1) the initial configuration of the robot team is unknown, 2) the data association between each vision-based detection and the robot targets is unknown, and 3) the vision-based detection yields false negatives and false positives and provides only noisy, inaccurate bearing and distance measurements of other robots (a toy data-association sketch follows this entry).
(*) Authors contributed equally
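
One sub-problem above, associating anonymous detections with robot targets, can be illustrated with a gated Hungarian assignment. This is only a toy sketch under the assumption that detections and predicted positions live in a common frame; the function and gate parameter are hypothetical and not the paper's estimator.

# Illustrative sketch of data association between anonymous detections and
# predicted robot positions, using SciPy's Hungarian solver. The gating
# threshold is a made-up parameter used here to reject likely false positives.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_detections(predicted_positions, detections, gate=1.5):
    """Return a list of (robot_index, detection_index) matches.

    predicted_positions: (R, 3) predicted robot positions in the common frame
    detections:          (D, 3) anonymous detections in the same frame
    gate:                reject matches farther apart than this distance (m)
    """
    if len(predicted_positions) == 0 or len(detections) == 0:
        return []
    # Pairwise Euclidean distance cost matrix between robots and detections.
    cost = np.linalg.norm(
        predicted_positions[:, None, :] - detections[None, :, :], axis=-1
    )
    rows, cols = linear_sum_assignment(cost)
    # Keep only assignments that pass the gate (handles false positives).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]

# Toy usage: three robots, two detections (one robot is missed).
preds = np.array([[0.0, 0.0, 1.0], [2.0, 0.0, 1.0], [4.0, 0.0, 1.0]])
dets = np.array([[0.1, 0.05, 1.0], [4.2, -0.1, 1.1]])
print(associate_detections(preds, dets))  # e.g. [(0, 0), (2, 1)]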

T. Nguyen, T. Ozaslan, I.D. Miller, J. Keller, G. Loianno, C.J. Taylor, D.D. Lee, V. Kumar, J.H. Harwood, J. Wozencraft. "MAVNet: an Effective Semantic Segmentation Micro-Network for MAV-based Tasks," in IEEE Robotics and Automation Letters (RA-L), presented at IROS 2019
- We focus on fast and lightweight CNNs for semantic segmentation on SWaP-constrained platforms
- We propose a CNN with only about 4,000 parameters that runs at 19 Hz on a Jetson TX2 with 512x640x3 input images (a toy small-network sketch follows this list)
- We demonstrate the effectiveness of the network on two datasets:
  - MAV segmentation dataset for surveillance applications
  - Penstock inspection dataset for autonomous inspection applications
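
To give a sense of how small such a network can be, here is a hedged PyTorch sketch of a tiny fully convolutional segmentation model. It is not the MAVNet architecture; the layer layout, width, and class count are illustrative only.

# A minimal segmentation network sketch with very few parameters, in the spirit
# of the ~4k-parameter model described above (NOT the actual MAVNet design).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=2, width=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1, groups=width),  # depthwise
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 1), nn.ReLU(inplace=True),    # pointwise
        )
        self.classifier = nn.Conv2d(width, num_classes, 1)

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinySegNet()
n_params = sum(p.numel() for p in model.parameters())
print(n_params)  # a few hundred parameters at width=8
x = torch.randn(1, 3, 512, 640)
print(model(x).shape)  # torch.Size([1, 2, 512, 640])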

S.S. Shivakumar, T. Nguyen, S.W. Chen, C.J. Taylor, V. Kumar. "DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image-Guided Dense Depth Completion," in IEEE Intelligent Transportation Systems Conference (ITSC 2019)
- We propose a CNN architecture that can be used to upsample sparse range data using the available high-resolution intensity imagery (a simplified two-branch fusion sketch follows this list)
- We demonstrate performance on relevant datasets
- We also propose our own dataset for additional validation
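
As a rough illustration of the RGB plus sparse-depth fusion idea, the sketch below processes the two inputs in separate branches and fuses them by concatenation. This is a simplification under stated assumptions, not the DFuseNet architecture; the layer widths and fusion scheme are placeholders.

# Two-branch fusion sketch: encode RGB and sparse depth separately, concatenate
# the feature maps, and regress a dense depth map (illustrative only).
import torch
import torch.nn as nn

class TwoBranchDepthNet(nn.Module):
    def __init__(self, width=16):
        super().__init__()
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.depth_branch = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fuse by concatenation, then regress a single-channel dense depth map.
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 1, 1),
        )

    def forward(self, rgb, sparse_depth):
        fused = torch.cat(
            [self.rgb_branch(rgb), self.depth_branch(sparse_depth)], dim=1
        )
        return self.decoder(fused)

net = TwoBranchDepthNet()
rgb = torch.randn(1, 3, 64, 80)
sparse = torch.zeros(1, 1, 64, 80)  # all-zero stand-in for the sparse depth input
print(net(rgb, sparse).shape)  # torch.Size([1, 1, 64, 80])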


- We propose an unsupervised learning algorithm that trains a deep convolutional neural network to estimate planar homographies (a minimal sketch of the photometric objective follows this list)
- We compare the proposed algorithm to traditional feature-based and direct methods, as well as a corresponding supervised learning algorithm
- Our empirical results demonstrate that compared to traditional approaches, the unsupervised algorithm achieves faster inference speed, while maintaining comparable or better accuracy and robustness to illumination variation
- Our unsupervised method has superior adaptability and performance compared to the corresponding supervised deep learning method
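
The unsupervised objective can be pictured as a photometric error between one image and the other warped by the predicted homography. Below is a minimal NumPy/OpenCV sketch under that assumption; the "predicted" homography here is a hand-written placeholder rather than a network output.

# Photometric-error sketch for homography estimation: warp image A by a
# candidate homography and compare it to image B over the valid region.
import numpy as np
import cv2

def photometric_loss(img_a, img_b, H):
    """Mean absolute photometric error between warp(img_a, H) and img_b."""
    h, w = img_b.shape[:2]
    warped = cv2.warpPerspective(img_a, H, (w, h))
    # Only compare pixels that land inside the warped image support.
    mask = cv2.warpPerspective(np.ones_like(img_a, dtype=np.float32), H, (w, h)) > 0.5
    diff = np.abs(warped.astype(np.float32) - img_b.astype(np.float32))
    return float(diff[mask].mean())

# Toy usage: shift an image by a known translation and check that the loss is
# near zero for the true homography and larger for the identity.
img = (np.random.rand(120, 160) * 255).astype(np.uint8)
H_true = np.array([[1, 0, 5], [0, 1, 3], [0, 0, 1]], dtype=np.float32)
img_shifted = cv2.warpPerspective(img, H_true, (160, 120))
print(photometric_loss(img, img_shifted, H_true))                        # near 0
print(photometric_loss(img, img_shifted, np.eye(3, dtype=np.float32)))   # larger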

