Personal Projects 

NNodely - Structured Neural Networks for Mechanical Systems
Framework - under development

Modeling, control, and estimation of physical systems are central to many engineering disciplines. While data-driven methods such as neural networks are powerful, they often struggle to incorporate prior domain knowledge, which limits their interpretability, generalizability, and safety. The framework's goal is to allow fast prototyping of MS-NNs for modeling, control, and estimation of physical systems by embedding this prior knowledge directly into the network architecture.
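As a toy illustration of the idea (hypothetical code, not the NNodely API): prior knowledge, here a mass-spring-damper law, fixes the structure of the model, and only its physical parameters are left to be learned from data. All function and variable names below are illustrative.

```python
import random

# Toy sketch of a physics-structured model (not the NNodely API): Newton's
# law for a mass-spring-damper fixes the model's structure, and only the
# physical parameters (1/m, c, k) are fitted to data.
random.seed(1)

def acceleration(params, x, v, f):
    inv_m, c, k = params
    # structure imposed by the physics: a = (f - c*v - k*x) / m
    return inv_m * (f - c * v - k * x)

# synthetic data from a "true" system with m=2, c=0.5, k=3
true_params = (0.5, 0.5, 3.0)
data = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        for _ in range(50)]
targets = [acceleration(true_params, x, v, f) for x, v, f in data]

def loss(p):
    return sum((acceleration(p, x, v, f) - t) ** 2
               for (x, v, f), t in zip(data, targets)) / len(data)

# fit the physical parameters by finite-difference gradient descent
init = [0.5, 0.0, 1.0]
params = list(init)
eps, lr = 1e-5, 0.02
for _ in range(300):
    for i in range(3):
        params[i] += eps; up = loss(params)
        params[i] -= 2 * eps; down = loss(params)
        params[i] += eps
        params[i] -= lr * (up - down) / (2 * eps)

print("learned (1/m, c, k):", [round(p, 2) for p in params])
```

Because the structure matches the physics, the fitted parameters keep a direct mechanical meaning, which is the source of the interpretability that a generic black-box network lacks.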

Neu4mes - A Neural Network Framework
Master Thesis

Structured neural networks (SNNs) are a new class of neural networks whose architecture is derived from the laws of mechanics and control theory. The framework's goal is to let users quickly model and control a mechanical system such as an autonomous vehicle: starting from a conceptual representation of the mechanical system, the framework generates the corresponding structured neural network model.

Textual Prompts Object Removal for Video Inpainting
Deep Learning

In this project we proposed an efficient way to automate the generation of video-sequence masks using state-of-the-art pre-trained models. The models used to achieve text-guided video inpainting are "You Only Look Once" (YOLO), "Contrastive Language-Image Pre-training" (CLIP), the "Segment Anything Model" (SAM), and "Improving Propagation and Transformer for Video Inpainting" (ProPainter).
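The pipeline above can be sketched as the following orchestration skeleton. This is a hypothetical sketch: the four stage functions are stand-ins for the real YOLO, CLIP, SAM, and ProPainter interfaces, which are far more involved.

```python
# Hypothetical skeleton of the text-prompted object-removal pipeline.
# Each stage function is a stand-in, not the real model API.

def detect_objects(frame):
    """Stand-in for YOLO: return candidate boxes with class labels."""
    return [{"box": (10, 10, 50, 50), "label": "person"},
            {"box": (60, 20, 90, 70), "label": "dog"}]

def match_prompt(detections, prompt):
    """Stand-in for CLIP: keep detections whose label matches the prompt."""
    return [d for d in detections if d["label"] in prompt.lower()]

def segment(frame, detections):
    """Stand-in for SAM: turn each selected box into a pixel mask."""
    return [{"mask": d["box"]} for d in detections]

def inpaint(frames, masks_per_frame):
    """Stand-in for ProPainter: remove masked regions across the sequence."""
    return [f"inpainted({f})" for f in frames]

def remove_object(frames, prompt):
    # per-frame mask generation, then sequence-level inpainting
    masks = []
    for frame in frames:
        detections = detect_objects(frame)
        selected = match_prompt(detections, prompt)
        masks.append(segment(frame, selected))
    return inpaint(frames, masks)

print(remove_object(["frame0", "frame1"], "remove the dog"))
```

The design point the sketch captures is the division of labor: detection proposes regions, language-image matching selects the ones named in the prompt, segmentation refines them into masks, and the inpainter propagates the fill across frames.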

Learning Prompts for Transfer Learning
Featured Project

Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks.
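A minimal sketch of what "learning prompts" means in this setting, assuming a CoOp-style setup: a few learnable context vectors are prepended to a frozen class-token embedding, and only those vectors are optimized to align the resulting text feature with an image feature. The encoders are replaced by toy stand-ins, so every name and shape here is illustrative.

```python
import math
import random

random.seed(0)
dim, n_ctx = 6, 3

# frozen pieces (stand-ins for CLIP's token embedding and image encoder)
class_embed = [random.gauss(0, 1) for _ in range(dim)]
image_feat = [random.gauss(0, 1) for _ in range(dim)]
# learnable context vectors: the only thing being optimized
ctx = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_ctx)]

def text_feature():
    # toy "text encoder": mean-pool the context vectors and the class token
    tokens = ctx + [class_embed]
    return [sum(t[j] for t in tokens) / len(tokens) for j in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-8)

def score():
    return cosine(text_feature(), image_feat)

before = score()
eps, lr = 1e-4, 0.5
for _ in range(200):
    for i in range(n_ctx):
        for j in range(dim):
            # finite-difference gradient ascent on image-text similarity,
            # updating the context only: the "CLIP weights" stay frozen
            ctx[i][j] += eps; up = score()
            ctx[i][j] -= 2 * eps; down = score()
            ctx[i][j] += eps
            ctx[i][j] += lr * (up - down) / (2 * eps)
after = score()
print(f"image-text similarity: {before:.3f} -> {after:.3f}")
```

The appeal of the approach is that the entire pre-trained model stays frozen; only a handful of context vectors are tuned per downstream task, which is cheap and preserves the transferable representations.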