Experience

Junior Software Developer

Neuroimaging Probe Placement - Dalhousie University

Jan 2024 – Apr 2024 · Halifax, NS

Prototyped an iOS app using LiDAR and facial landmark detection to position non-invasive brain imaging probes.

What I Did

I built an iOS prototype that captures 3D face scans with the iPhone's LiDAR sensor, processes them with OpenCV and MediaPipe to detect facial landmarks, and uses those landmarks to identify standardized positions for placing brain imaging probes.
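To give a sense of the capture step, here is a minimal ARKit sketch of how a LiDAR depth frame can be pulled alongside the color camera image; the class name and wiring are illustrative rather than the project's actual code.

```swift
import ARKit

// Illustrative sketch: run an ARKit session that exposes the LiDAR depth map
// next to the color camera frame. ProbeScanSession is a hypothetical name.
final class ProbeScanSession: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Scene depth requires a LiDAR-equipped device.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics = .sceneDepth
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // capturedImage feeds the face-mesh model; sceneDepth holds per-pixel
        // LiDAR depth in meters for the same frame.
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        let colorImage = frame.capturedImage
        // Hand both buffers to the landmark-detection pipeline here.
        _ = (depthMap, colorImage)
    }
}
```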

Impact

The prototype demonstrated that facial landmark-based probe positioning is feasible with consumer hardware, potentially replacing more expensive 3-camera setups. The work informed future research directions for the lab.

What I Learned

I gained experience with 3D sensing technologies (LiDAR point cloud capture, depth maps) and how to integrate them with computer vision pipelines. I learned MediaPipe's face mesh model and how to map 2D facial landmarks to 3D coordinates. The iOS work taught me Swift, RealityKit scene geometry, and ARKit world tracking.
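The core of that 2D-to-3D mapping is a standard pinhole-camera unprojection: take a landmark's pixel coordinates, read the LiDAR depth at that pixel, and lift it into camera space using the intrinsics ARKit reports. The helper below is a hedged sketch of that math, not the lab's exact code; in practice the depth map is lower resolution than the color image, so the coordinates and intrinsics have to be rescaled first.

```swift
import simd
import CoreVideo

// Hypothetical helper: lift a 2D face-mesh landmark into 3D camera space
// using the LiDAR depth map and the camera intrinsics matrix.
func unproject(landmark pixel: SIMD2<Float>,
               depthMap: CVPixelBuffer,
               intrinsics K: simd_float3x3) -> SIMD3<Float>? {
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let x = Int(pixel.x), y = Int(pixel.y)
    guard x >= 0, x < CVPixelBufferGetWidth(depthMap),
          y >= 0, y < CVPixelBufferGetHeight(depthMap),
          let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

    // ARKit's sceneDepth buffer stores one 32-bit float depth value (meters) per pixel.
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
    let depth = row[x]
    guard depth > 0 else { return nil }

    // Pinhole unprojection: (u, v, depth) -> camera-space (X, Y, Z).
    let fx = K[0][0], fy = K[1][1]
    let cx = K[2][0], cy = K[2][1]
    return SIMD3<Float>((pixel.x - cx) * depth / fx,
                        (pixel.y - cy) * depth / fy,
                        depth)
}
```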

Key Highlights

  • Prototyped a non-invasive brain mapping method to identify probe placement sites by researching existing techniques and implementing a facial landmark-based approach.

  • Built a prototype tool using RealityKit to capture LiDAR data from the iPhone, then processed it with OpenCV and MediaPipe to detect facial landmarks for probe positioning.

  • Explored transitioning from a 3-camera mapping technique to a facial landmark-based method, testing spatial consistency and precision improvements (a small repeatability sketch follows this list).
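As a rough illustration of that consistency testing, repeatability can be summarized by re-estimating the same probe site across several scans and reporting the spread around the mean. This is a sketch of that metric under my own naming, not the evaluation code we ran in the lab.

```swift
import simd

// Illustrative repeatability metric: mean position and RMS deviation of the
// same probe-site estimate across repeated scans. Lower RMS = more consistent.
func repeatability(of estimates: [SIMD3<Float>]) -> (mean: SIMD3<Float>, rmsDeviation: Float)? {
    guard !estimates.isEmpty else { return nil }
    let mean = estimates.reduce(SIMD3<Float>(repeating: 0), +) / Float(estimates.count)
    let meanSquaredError = estimates
        .map { simd_length_squared($0 - mean) }
        .reduce(0, +) / Float(estimates.count)
    return (mean, meanSquaredError.squareRoot())
}
```

Comparing this number for the 3-camera rig and the landmark-based method is one way to quantify whether the cheaper setup holds up.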

Tech Stack

Swift · OpenCV · LiDAR · RealityKit · ARKit · MediaPipe

Tags

cv · ar · ios · research
