Smart glass

Face recognition

Face recognition is one of the most popular research problems across platforms. New research issues arise on resource-constrained devices, such as smart glasses, because accurate face recognition methods have overwhelming computation and energy requirements. In this paper, we propose a robust and efficient sensor-assisted face recognition system on smart glasses that explores the power of multimodal sensors, including the camera and Inertial Measurement Unit (IMU) sensors. The system is based on a novel face recognition algorithm, Multi-view Sparse Representation Classification (MVSRC), which exploits the rich information available across multi-view face images. To improve the efficiency of MVSRC on smart glasses, we propose a novel sampling optimization strategy that uses the inexpensive inertial sensors. Our evaluations on public and private datasets show that the proposed method is up to 10% more accurate than state-of-the-art multi-view face recognition methods, while its computation cost is of the same order as that of an efficient benchmark method (e.g., Eigenfaces). Finally, extensive real-world experiments show that our proposed system improves recognition accuracy by up to 15% over an existing face recognition system (based on OpenCV algorithms) on smart glasses, at the same level of system overhead.
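
The core of MVSRC is the classic Sparse Representation Classification (SRC) decision rule applied across several camera views. Below is a minimal Python sketch of that rule, assuming a simple residual-sum fusion across views and a gyroscope-gated frame sampler; the fusion rule, the yaw threshold, and all function names are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of the SRC decision rule that MVSRC builds on, plus a
# simple IMU-gated frame sampler. The residual-sum fusion across views,
# the yaw threshold, and all names here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def src_residuals(dictionary, labels, probe, alpha=0.01):
    """Per-class reconstruction residuals for one probe face.

    dictionary : (d, n) matrix, one vectorised training face per column
    labels     : (n,) array of subject IDs, one per column
    probe      : (d,) vectorised probe face
    """
    # Sparse coding: probe ~ dictionary @ x with few non-zero entries.
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(dictionary, probe)
    x = lasso.coef_
    # Residual per class: explain the probe using only that class's atoms.
    return {c: np.linalg.norm(probe - dictionary @ np.where(labels == c, x, 0.0))
            for c in np.unique(labels)}

def mvsrc_classify(dictionary, labels, probe_views, alpha=0.01):
    """Fuse views by summing per-class residuals; return the best class."""
    totals = {}
    for probe in probe_views:
        for c, r in src_residuals(dictionary, labels, probe, alpha).items():
            totals[c] = totals.get(c, 0.0) + r
    return min(totals, key=totals.get)

def sample_views(frames, yaw_deg, min_yaw_change=10.0):
    """Keep a frame only when head yaw (from the IMU) has moved enough
    since the last kept frame, so the selected views actually differ."""
    kept, last = [], None
    for frame, yaw in zip(frames, yaw_deg):
        if last is None or abs(yaw - last) >= min_yaw_change:
            kept.append(frame)
            last = yaw
    return kept
```

Classification then runs only on the frames the sampler keeps, which reflects the intuition of using cheap inertial readings to cut down the expensive sparse-coding work.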

  • [TMC] Weitao Xu, Yiran Shen*, Neil Bergmann, Wen Hu. "Sensor-assisted Multi-view Face Recognition System on Smart Glass", IEEE Transactions on Mobile Computing, volume 17, issue 1, pp. 197-210, Jan 2018. (SCI IF: 3.822, CCF A, CAS Zone 2)
  • [IPSN' 2016] Weitao Xu, Yiran Shen, Neil Bergmann, Wen Hu. "Sensor-assisted Face Recognition System on Smart Glass via Multi-view Sparse Representation Classification". In Proceedings of the 15th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), pp. 1-12, Vienna, Austria, April 11-16, 2016. (CCF B)
Eye tracking

Smart head-worn or head-mounted devices, including smart glasses and Virtual Reality (VR) headsets, are gaining popularity. Online shopping and in-app purchases from such headsets present new e-commerce opportunities to app developers. For convenience, users of these headsets may store account logins, bank account and credit card details in order to perform quick in-app purchases. If the device is left unattended, an attacker, including an insider, can use the stored account and banking details to make in-app purchases at the expense of the legitimate owner. To better protect legitimate users of VR headsets (or head-mounted displays in general) from such threats, in this paper we propose to use eye movement to continuously authenticate the current wearer of the VR headset. We built a prototype device that allows us to apply visual stimuli to the wearer and record video of the wearer's eye movements at the same time. We use implicit visual stimuli (the contents of existing apps) that evoke eye movements from the headset wearer without distracting them from their normal activities, so that we can continuously authenticate the wearer without them being aware of the authentication running in the background. We evaluated our proposed system experimentally with 30 subjects. Our results showed that the achievable authentication accuracy for implicit visual stimuli is comparable to that of explicit visual stimuli. We also tested the time stability of our proposed method by collecting eye movement data on two different days two weeks apart. Our authentication method achieved an Equal Error Rate of 6.9% (resp. 9.7%) when data collected on the same day (resp. two weeks apart) were used for testing. In addition, we considered active impersonation attacks, in which attackers try to imitate legitimate users' eye movements. We found that for a simple (resp. complex) eye tracking scene, a successful attack could be realised after 5.67 (resp. 13.50) attempts on average, and our proposed authentication algorithm gave a false acceptance rate of 14.17% (resp. 3.61%). These results show that active impersonation attacks can be prevented using complex scenes and an appropriate limit on the number of authentication attempts. Lastly, we carried out a survey to study user acceptance of our proposed implicit stimuli. We found that on a 5-point Likert scale, at least 60% of the respondents either agreed or strongly agreed that our proposed implicit stimuli were non-intrusive.
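
The Equal Error Rate figures above are the operating point where the false acceptance and false rejection rates coincide. A small self-contained sketch of that computation, with synthetic placeholder scores standing in for the per-wearer similarity scores an authenticator would produce:

```python
# Sketch of Equal Error Rate (EER) computation from genuine (legitimate
# wearer) and impostor similarity scores; the scores below are synthetic
# placeholders, not data from the paper.
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep a threshold; return the EER and the threshold achieving it."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer, eer_t = np.inf, 1.0, thresholds[0]
    for t in thresholds:
        far = np.mean(impostor >= t)   # false acceptance rate
        frr = np.mean(genuine < t)     # false rejection rate
        if abs(far - frr) < best_gap:
            best_gap, eer, eer_t = abs(far - frr), (far + frr) / 2.0, t
    return eer, eer_t

rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.5, 200)    # same-wearer scores (synthetic)
impostor = rng.normal(0.0, 0.5, 200)   # other-user scores (synthetic)
eer, threshold = equal_error_rate(genuine, impostor)
print(f"EER = {eer:.1%} at threshold {threshold:.2f}")
```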

  • [UbiComp' 2018] Yongtuo Zhang, Wen Hu, Weitao Xu, Chun Tung Chou. "Continuous Authentication Using Eye Movement Response of Implicit Visual Stimuli", UbiComp, Singapore, October 9-11, 2018. (CCF A)
Localization

Smart glasses (e.g., Google Glass) are a class of wearable embedded devices with both inertial sensors and a camera onboard. This paper proposes a smart-glasses-based indoor localisation method called NaviGlass. Because of the high energy consumption of vision sensors, NaviGlass relies predominantly on inertial sensors and uses camera images to correct the drift in position estimates caused by the accumulated errors of the inertial sensors. Owing to the limited computational resources on smart glasses, the image matching needed to correct the position estimate is time-consuming. We propose a feature reduction method that significantly reduces the computation time for image matching with little compromise on accuracy. We compare our method against Travi-Navi, a state-of-the-art localisation system that uses both inertial and image sensors. Our evaluations show that our proposed method achieves a mean localisation error of 3.3 m, which is 64% lower than that of Travi-Navi.
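
The drift correction matches the current camera frame against pre-surveyed reference images, and feature reduction is what keeps that matching affordable on-device. A rough sketch using OpenCV's ORB with top-k response selection as a stand-in for the paper's actual feature-reduction method (k, the matcher, and the ratio threshold are assumptions):

```python
# Rough sketch of reduced-feature image matching for drift correction.
# ORB with top-k response selection stands in for NaviGlass's actual
# feature-reduction method; k and the ratio threshold are assumptions.
import cv2
import numpy as np

def reduced_features(gray, k=100):
    """Detect ORB features and keep only the k strongest responses."""
    orb = cv2.ORB_create(nfeatures=500)
    kps, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return [], None
    order = np.argsort([-kp.response for kp in kps])[:k]
    return [kps[i] for i in order], desc[order]

def match_score(desc_query, desc_ref, ratio=0.75):
    """Count Lowe-ratio-test matches between two descriptor sets."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = bf.knnMatch(desc_query, desc_ref, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)
```

The reference image with the highest match score pins the wearer to a known surveyed location, which resets the accumulated error of the inertial dead-reckoning estimate.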

  • [EWSN' 2016] Yongtuo Zhang, Wen Hu, Weitao Xu, Hongkai Wen, Chun Tung Chou. "NaviGlass: Indoor Localisation Using Smart Glasses". In Proceedings of the International Conference on Embedded Wireless Systems and Networks (EWSN), TU Graz, Austria, February 15-17, 2016.