My group, the MLLP research group, participated jointly with the i6 group from RWTH Aachen University. I was the proud presenter of our work at IberSPEECH 2018 in Barcelona, where I presented our speech-to-text systems for the IberSpeech-RTVE Challenge. We won both tracks of this international challenge on TV show transcription!
This competition aims to evaluate state-of-the-art Automatic Speech Recognition (ASR) systems for broadcast speech transcription. It comprised two conditions:
Closed condition: Systems may only be trained with the RTVE2018 database provided by the organizers.
Open condition: Systems may use any data to train their models.
Some of these shows are really challenging, as their content involves different speakers talking at the same time, very noisy field reports, and all kinds of similar real-life situations. In the case of the closed condition, the data cleaning and filtering was a tedious headache, to the point that most teams did not submit their systems for evaluation.
This is the poster where we introduced our systems:
The PDF version is available at this link, together with the paper that we submitted to the conference, where our system is described in detail.
This is the second poster that I presented during EuroPython 2018. The gist is how to deploy the required infrastructure to create a TensorFlow cluster, and then provision the software to train a deep learning model. To do this, I used the Infrastructure Manager (http://www.grycap.upv.es/im/index.php), which supports the APIs of different virtualization platforms, making user applications cloud-agnostic.
IM also integrates a contextualization system, based on Ansible, to install and configure all the required applications, providing a fully functional deep learning infrastructure on whichever cloud provider we need.
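The IM templates themselves are beyond the scope of this summary, but once the nodes are provisioned, distributed TensorFlow needs each node to know the cluster layout. One common mechanism is the TF_CONFIG environment variable, which the distribution strategies read on start-up. A minimal sketch (the hostnames and ports are placeholders, not real machines):

```python
import json
import os

# Sketch of the cluster specification each provisioned node would receive;
# hostnames and ports below are illustrative placeholders.
cluster = {
    "cluster": {
        "chief": ["node-0.example:2222"],
        "worker": ["node-1.example:2222", "node-2.example:2222"],
        "ps": ["node-3.example:2222"],
    },
    # Each node declares its own role; here we pretend to be worker 0.
    "task": {"type": "worker", "index": 0},
}

# TensorFlow reads this JSON from the TF_CONFIG environment variable.
os.environ["TF_CONFIG"] = json.dumps(cluster)
print(os.environ["TF_CONFIG"])
```

In a real deployment, the contextualization step would render one such variable per node, with the `task` entry varying across machines.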
After PyCon US, I visited Edinburgh to attend EuroPython 2018, the largest Python conference in Europe, presenting a poster about part of my work during my M.Sc. in Parallel and Distributed Computing.
I wanted to introduce the SLEPc library, developed at the Universidad Politécnica de València, and SLEPc4py, the bindings for using this library from Python, as well as MPI and MPI4py. All the information is gathered in the poster, which can be found here.
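SLEPc's strength is solving very large sparse eigenproblems in parallel, which a blog snippet cannot reproduce; but the core idea it scales up can be illustrated with a tiny pure-Python power iteration. This is a toy sketch of the underlying concept, not SLEPc4py's actual API:

```python
def power_iteration(A, iters=100):
    """Approximate the dominant eigenvalue/eigenvector of a small dense
    matrix A (given as a list of rows) by repeated multiplication -- the
    simplest relative of the Krylov methods SLEPc provides at scale."""
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)  # arbitrary starting vector
    lam = 0.0
    for _ in range(iters):
        # w = A @ v
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        # Normalize by the max-norm; the norm converges to |lambda_max|.
        norm = max(abs(x) for x in w) or 1.0
        v = [x / norm for x in w]
        lam = norm
    return lam, v

# The eigenvalues of this matrix are 3 and 1, so the dominant one is 3.
lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(round(lam, 6))  # -> 3.0
```

SLEPc4py exposes far more robust solvers (Krylov-Schur, Lanczos, ...) over PETSc's distributed matrices, with MPI4py handling the parallel setup.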
These days I’m enjoying an amazing experience at PyCon 2018 in Cleveland. A lot of firsts: first time in the USA, first time at an international PyCon, first time presenting a poster at a conference like this, and so many more, but this is a great experience that I will remember forever.
Thanks a lot to all the people who came to visit my poster and showed interest in it!!!
You guys make this so big and awesome!!
After some requests, I’m going to make my poster available (and the LaTeX sources when I have the time, if someone is interested) at the following link: exploring-generative-models-pycon2018.
And the low-res JPEG version as an image here:
And, one more time, THANK YOU for being so kind to me :).
This is the third part of the tutorial on installing and configuring SLURM on Azure (part I, part II). With this post, we are going to complete the process and show an example of the execution of one task.
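As a taste of that final step, a minimal SLURM batch script looks like the following config fragment (the job name, node counts, and output pattern are placeholders for whatever your Azure cluster is configured with; it only runs on a machine with SLURM installed):

```bash
#!/bin/bash
#SBATCH --job-name=hello        # name shown by squeue
#SBATCH --nodes=2               # number of nodes to allocate
#SBATCH --ntasks-per-node=1     # one task per node
#SBATCH --output=hello_%j.out   # %j expands to the job id

# Run one copy of the command on every allocated task.
srun hostname
```

It would be submitted with `sbatch hello.sbatch` and monitored with `squeue`.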
This is the second post of the SLURM configuration and installation guide on Azure (part I is here). In this part, we are going to configure the NFS system; finally, in the third post, we will set up the SLURM environment.
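As a preview of the NFS step, the server side boils down to a single line in /etc/exports (the shared path and the subnet below are placeholders for your own cluster layout):

```
# /etc/exports on the NFS server: share /home with the cluster subnet,
# read-write, with synchronous writes.
/home 10.0.0.0/24(rw,sync,no_subtree_check)
```

After editing the file, `exportfs -ra` reloads the export table, and the compute nodes mount the share.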
As I’m very busy these days, I just want to publish a quick post about something that could be useful in some contexts: the use of vector instructions, in particular the Advanced Vector Extensions (AVX) instruction set from Intel.
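Since not every x86 CPU supports AVX, a program should check for it before relying on those instructions. A quick Linux-only sketch in Python, reading the flags the kernel exposes in /proc/cpuinfo (on other systems it simply reports no support):

```python
def cpu_supports_avx():
    """Return True if the kernel reports the 'avx' CPU flag.
    Linux-specific: reads /proc/cpuinfo; returns False elsewhere."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    # The flags line is a space-separated token list.
                    return "avx" in line.split()
    except OSError:
        return False
    return False

print("AVX supported:", cpu_supports_avx())
```

In C, the equivalent check is usually done with the CPUID instruction, and the intrinsics themselves live in the `immintrin.h` header.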
The project that I want to introduce in this post is a minigame developed with Unity, a powerful game engine, and Vuforia, a library for creating Augmented Reality (AR) apps. This was the final assignment of the course “Augmented and Virtual Reality”.
In this game, the idea is to throw stones at the skeletons that rise from their graves. It is a very straightforward game, but developing it was enjoyable. The following video shows a short gameplay clip.
As for Unity, it proved to be a helpful tool that lets you develop your ideas quickly, producing a “multiplatform” app that you can run and test. Considering the number of assets available thanks to both the store and the community, it could be the best option for evaluating whether a game concept works or not.
Regarding Vuforia, it was integrated with Unity through its plugin and worked very well, providing a fast response even with marker occlusion.
I don’t know if the situation is the same after the updates that both tools have received since this project was developed, around 2014, but I strongly recommend this combination (Unity + Vuforia) for creating your AR apps, especially considering the attention that AR is receiving nowadays.