Research

Our research seeks to improve the accessibility of interactive technologies. Accessibility not only benefits people with disabilities but also empowers everyone. Recent advances in AI have enabled new forms of human-computer interaction characterized by greater adaptability and closer human-machine symbiosis. Our research uses AI to accelerate the development of assistive technology and to enhance the accessibility of user interfaces. We are looking for graduate and undergraduate students to participate in these areas:

  • Disability, Usability, and Digital Accessibility
  • Adaptive Human-Technology Interaction
  • Explainable AI and Visualization
  • Computer Science (STEAM) Education

You are welcome to visit the CSUN Human-Computing Lab @ JD2222 and our design space @ ADC410.

Research Projects

Investigating How People with Disabilities Disclose Difficulties on YouTube

[Image: a YouTube video in which magnifiers are used to read comments]

People with disabilities (PWDs) use video-sharing platforms such as YouTube to share experiences and information, including the challenges and difficulties they encounter with social circles, technologies, and the environment. Human-Computer Interaction (HCI) researchers have studied online videos to understand PWDs’ particular needs and challenges in daily activities, their barriers in video interaction, and their ways of creating playful experiences. This work presents a preliminary analysis of 257 video clips that mention disability challenges to understand how and why PWDs use YouTube to disclose and discuss disabilities.
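The study’s actual sampling pipeline is not reproduced here; as a rough sketch of how a corpus of such clips could be gathered, the snippet below queries the public YouTube Data API v3 search endpoint. The query string, the API-key placeholder, and the `search_videos` helper are illustrative assumptions, not the study’s method.

```python
"""Sketch: gathering candidate videos for a disability-disclosure corpus.

Minimal illustration using the public YouTube Data API v3 (search.list);
not the authors' actual pipeline.
"""
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; obtain a real key from Google Cloud Console
SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"

def search_videos(query: str, max_results: int = 25) -> list[dict]:
    """Return (video id, title) records for videos matching `query`."""
    params = {
        "part": "snippet",       # snippet carries title, channel, description
        "q": query,
        "type": "video",
        "maxResults": max_results,
        "key": API_KEY,
    }
    resp = requests.get(SEARCH_URL, params=params, timeout=30)
    resp.raise_for_status()
    return [
        {"id": item["id"]["videoId"], "title": item["snippet"]["title"]}
        for item in resp.json().get("items", [])
    ]

if __name__ == "__main__":
    # Example query terms only; the study's sampling keywords are not public here.
    for video in search_videos("low vision YouTube accessibility challenges"):
        print(video["id"], video["title"])
```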

People

Dr. Shuo Niu, Clark University
Dr. Li Liu, CSUN
Jaime Garcia, CSUN (Arts & Visual Design)
Summayah Waseem, CSUN (Computer Science)

Publication

Shuo Niu, Jaime Garcia, Summayah Waseem, and Li Liu. 2022. Investigating How People with Disabilities Disclose Difficulties on YouTube. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’22). Association for Computing Machinery, New York, NY, USA, Article 58, 1–5. https://doi.org/10.1145/3517428.3550383

A Case Study on Apple Watch Usability

[Image: an Apple Watch worn on a wrist]

This case study explores the accessibility features of the Apple Watch, evaluating its usability for individuals with varying abilities. The study investigates features such as AssistiveTouch, VoiceOver, haptic feedback, larger text options, and customized display settings. Through testing and user feedback, we examine how these features enable more inclusive interactions, assess their effectiveness, and identify areas for improvement that would make wearable technology more accessible to everyone.
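As one common way to quantify the kind of user feedback this study collects, the sketch below scores a ten-item System Usability Scale (SUS) questionnaire. SUS is a standard usability instrument, but its use here, and the sample responses, are assumptions for illustration rather than the study’s actual protocol.

```python
"""Sketch: scoring usability feedback with the System Usability Scale (SUS).

Illustrative only; not the instrument or data used in the case study.
"""

def sus_score(responses: list[int]) -> float:
    """Compute a 0-100 SUS score from ten 1-5 Likert responses.

    Standard SUS scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the sum is scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical responses from one participant testing VoiceOver on the watch.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```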

People

Megan Ngo, CSUN

Recent Graduate Research Projects

  • AI Visualization for Multi-Agent Robotics by Gage Aschenbrenner and Ramin Roufeh
  • Automated Speech Recognition for Instructional Content by Timothy Spengler
  • Visual Biofeedback in Speech Rehabilitation by Luan Ta
  • Behavioral Biometrics Classification by Shen Huang
  • Web Accessibility Suggestions through Short-text Classification by Gerardo Rodriguez
  • Deep Convolutional GANs in Procedural Terrain Generation Systems by Edgar Lopez-Garcia
  • Procedural Terrain Generation in Virtual Reality by Ryan Vitacion