Research
My recent research focuses on the intersection of machine learning, robotics, computer vision, and human-centered systems.
In the area of AI for digital health, I develop systems that leverage audio, video, physiological sensing, and multimodal data fusion
to support safety, wellbeing, and early detection of health-related conditions.
I am also actively researching human-robot interaction, particularly in assistive contexts for older adults,
with an emphasis on physical collaboration and human perception.
A key direction in this space is 3D human motion generation, where we explore AI methods for synthesizing
realistic human movements across different environments, work that informs and complements our efforts in assistive robotics.
In the domain of AI for diet and food understanding, I investigate novel approaches that combine visual and linguistic cues,
applying computer vision, unsupervised learning, and generative AI to model natural processes such as food degradation.
My earlier work includes contributions to 3D human shape reconstruction, cross-cultural anthropometric analysis,
and deep learning methods for medical image analysis. Across these domains, my goal is to develop intelligent systems that enhance
real-world usability, safety, and quality of life.
Updates
- April 2025: Paper accepted to IEEE Transactions on Instrumentation and Measurement.
- January 2025: Started new project on generative motion models.