Signapse

A demo of the Signapse application in action at VIVES.

Presenting the tech stack used in the Signapse project.

Signapse application home page.
Project Overview
Signapse is an accessibility solution developed for VIVES Project Experience that bridges the communication gap between deaf and hard-of-hearing people and hearing individuals. The application uses computer vision and machine learning to recognize sign language in real time through the phone's camera and convert it to text and speech.
The project features a multi-model AI pipeline combining PyTorch LSTM networks for sequential analysis with MediaPipe for hand and pose landmark extraction. It supports both ASL (American Sign Language) and VGT (Flemish Sign Language), recognizing individual letters and complete words with high accuracy.
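As a rough illustration of this pipeline, the sketch below (not the project's actual code) extracts hand landmarks per frame with MediaPipe and feeds the resulting sequence into a small PyTorch LSTM classifier. The window length, layer sizes, and class count are illustrative assumptions.

```python
# Minimal sketch: MediaPipe hand-landmark extraction feeding a PyTorch LSTM.
# Window length, hidden size, and class count are assumptions, not project values.
import cv2
import mediapipe as mp
import torch
import torch.nn as nn

NUM_LANDMARKS = 21            # MediaPipe's hand model returns 21 landmarks per hand
FEATURES = NUM_LANDMARKS * 3  # x, y, z per landmark
SEQ_LEN = 30                  # assumed number of frames per gesture window
NUM_CLASSES = 26              # e.g. the letters of a fingerspelling alphabet

class SignLSTM(nn.Module):
    """LSTM over per-frame landmark vectors, followed by a linear classifier."""
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(FEATURES, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, NUM_CLASSES)

    def forward(self, x):                  # x: (batch, SEQ_LEN, FEATURES)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # classify from the last time step

def extract_landmarks(frame_bgr, hands):
    """Return a flat (FEATURES,) tensor of hand landmarks, or zeros if no hand is found."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return torch.zeros(FEATURES)
    lm = results.multi_hand_landmarks[0].landmark
    return torch.tensor([v for p in lm for v in (p.x, p.y, p.z)])

if __name__ == "__main__":
    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
    model = SignLSTM().eval()              # untrained here; stands in for the trained model
    cap = cv2.VideoCapture(0)
    window = []
    while len(window) < SEQ_LEN:
        ok, frame = cap.read()
        if not ok:
            break
        window.append(extract_landmarks(frame, hands))
    cap.release()
    if len(window) == SEQ_LEN:
        batch = torch.stack(window).unsqueeze(0)   # (1, SEQ_LEN, FEATURES)
        pred = model(batch).argmax(dim=1).item()
        print(f"Predicted class index: {pred}")
```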
Built with a modular architecture, the solution consists of a React Native mobile app with TypeScript for the frontend, a FastAPI-based Python backend for AI processing, and a robust DevOps setup using Docker containers and Kubernetes for production deployment. The custom smart_gestures package, published on PyPI, enables feature extraction and gesture recognition across different components of the system.
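To give an idea of how the mobile app and the AI backend might talk to each other, the sketch below shows a FastAPI inference endpoint that accepts a sequence of landmark vectors and returns a predicted label. The route name, request schema, stand-in model, and label list are illustrative assumptions, not the project's actual API.

```python
# Hedged sketch of a FastAPI inference endpoint; names and schema are assumptions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import torch

app = FastAPI(title="Signapse inference (sketch)")

LABELS = ["hello", "thanks", "yes", "no"]   # placeholder vocabulary
FEATURES = 63                               # 21 landmarks x (x, y, z), matching the sketch above

# Stand-in for the trained LSTM: a randomly initialized linear layer over
# the mean-pooled frame features, so the example stays self-contained.
model = torch.nn.Linear(FEATURES, len(LABELS))

class LandmarkSequence(BaseModel):
    # One flattened landmark vector per frame, sent by the mobile client.
    frames: list[list[float]]

@app.post("/predict")
def predict(seq: LandmarkSequence):
    if not seq.frames:
        raise HTTPException(status_code=400, detail="Empty landmark sequence")
    x = torch.tensor(seq.frames, dtype=torch.float32)     # (T, FEATURES)
    if x.shape[1] != FEATURES:
        raise HTTPException(status_code=422, detail="Unexpected feature length")
    with torch.no_grad():
        scores = model(x.mean(dim=0))                     # mean-pool frames over time
    return {"label": LABELS[int(scores.argmax())], "scores": scores.tolist()}
```

Saved as main.py, this could be run locally with `uvicorn main:app --reload` and exercised with a POST to /predict; in a setup like the one described above, the same container image would be deployed to Kubernetes behind the mobile client.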
Technologies Used
- AI
- Python
- Kubernetes
- PyTorch
- React Native
- CI/CD
- FastAPI
- Docker
- TypeScript