Amanda Duarte

Position | Email |
---|---|
PhD Candidate | amanda.duarte@upc.edu |

Office | Phone |
---|---|
C6 E201 | +34 684165796 |
Biography
Amanda Duarte is a Ph.D. candidate at the Universitat Politècnica de Catalunya jointly with the Barcelona Supercomputing Center under the supervision of Prof. Jordi Torres and Prof. Xavier Giró.
Thanks to the INPhINIT “La Caixa” Doctoral fellowship, she is also a Marie Skłodowska-Curie fellow.
Her research aims to give people with special needs and sign language users further access to information.
Specifically, her recent work focuses on developing systems that give sign language users automatic ways of translating online content (e.g., the speech in videos or written text) into sign language representations.
To that end, as part of her Ph.D. studies, she leads, together with Prof. Xavier Giró, the Speech2Signs project, which tackles the task of automatic speech-to-sign-language translation.
Because no data for learning such a system were available, the project introduced How2Sign, the first large-scale continuous American Sign Language dataset, which will be publicly available soon.
During her Ph.D., she interned at Johns Hopkins University and at Carnegie Mellon University.
Before starting her Ph.D. studies, she earned a master’s degree in Computer Engineering at the Federal University of Rio Grande (FURG) in Brazil and a degree in Systems Analysis at the Instituto Federal Sul-rio-grandense (IFSul).
Her past research projects span a wide variety of areas and involve multimodal data collection and annotation, speech-conditioned image generation, underwater robot localization, navigation, and underwater image restoration.
Portuguese is her first language but she is also fluent in English and Spanish.
She is able to understand Catalan, but be aware of possible misunderstandings. Expect even more misunderstandings when using American Sign Language (ASL), but it's also worth trying. :)
Besides research, she is passionate about travel, photography, and art.
For more information and recent news, visit her personal webpage.
Latest News:
- [08/2020] Presenting, together with Lucas Ventura, two extended abstracts at the SLRTP workshop. Check out our large-scale American Sign Language dataset, "How2Sign", and our study on how the Deaf community perceives generated sign language videos, presented in our work "Can Everybody Sign Now?"
- [05/2020] Grounded Sequence to Sequence Transduction accepted at IEEE Journal of Selected Topics in Signal Processing.
- [10/2019] Presenting my research proposal titled Cross-modal Neural Sign Language Translation at the ACM Multimedia 2019 Doctoral Symposium.
- [Spring 2019] Visiting Student at Carnegie Mellon University.
- [05/2019] Our paper Wav2Pix: Speech-conditioned Face Generation using Generative Adversarial Networks was accepted at ICASSP 2019.
- [10/2018] Received a Marie Skłodowska-Curie fellowship through the INPhINIT - “La Caixa” Doctoral fellowship.
- [Summer 2018] Participating in the Frederick Jelinek Memorial Summer Workshop 2018.
- [12/2017] Presenting our work Temporal-aware Cross-modal Embeddings for Video and Audio Retrieval at the Women in Machine Learning (WiML) Workshop 2017.
- [09/2017] Awarded a Caffe2 Research Grant by Facebook.
Journal Articles
“Grounded Sequence to Sequence Transduction”, IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 3, pp. 577-591, 2020.
Conference Papers
“Sign Language Translation from Instructional Videos”, in CVPR 2023 Women in Computer Vision Workshop, Vancouver, Canada, In Press.
“Sign Language Video Retrieval with Free-Form Textual Queries”, in CVPR 2022 - CVF/IEEE Conference on Computer Vision and Pattern Recognition, 2022.
“How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language”, in CVPR 2021, 2021.
“Can Everybody Sign Now? Exploring Sign Language Video Generation from 2D Poses”, in ECCV 2020 Workshop on Sign Language Recognition, Translation and Production (SLRTP), 2020.
“Wav2Pix: Speech-conditioned Face Generation using Generative Adversarial Networks”, in ICASSP, Brighton, UK, 2019.
Theses
Other
“Towards Sign Language Translation and Production”, 2022. (Presentation)
“2D-to-3D Lifting of Sign Language Body Poses with Recurrent Neural Networks”, UPC ETSETB TelecomBCN, Barcelona, 2021. (Report)
“Wav2Pix: Enhancement and Evaluation of a Speech-conditioned Image Generator”, 2019. (MS Thesis)
“Block-based Speech-to-Speech Translation”, 2018. (MS Thesis)
Projects

Title | Type | Start | End |
---|---|---|---|
Cross-modal Deep Learning between Vision, Language, Audio and Speech | European | Oct 2018 | Sep 2021 |
Speech2Signs: Spoken to Sign Language Translation using Neural Networks | Other | Nov 2017 | Nov 2018 |
Research Areas

Title | Type | Start | End |
---|---|---|---|
Sign Language Recognition, Translation and Production | Internal | May 2017 | May 2024 |
Teaching
Acronym | Title | Level | College |
---|---|---|---|
AIDL | Artificial Intelligence with Deep Learning | Postgraduate | UPC School |