Our vision is to build the next-generation platform for fast and easy creation of audio and video content. In October 2020, we took a huge leap forward by launching a cloud-based collaborative video editor and a screen recorder. We have built and shipped key technologies such as voice cloning and one-click speech enhancement to help us realize our vision. We're used by some of the world's top podcasters and influencers, as well as businesses such as the BBC, ESPN, HubSpot, Shopify, and the Washington Post, for communicating via video. We've raised $50M from some of the world's best investors, including Andreessen Horowitz, Redpoint Ventures, and Spark Capital.
We need great people to help us build these cutting-edge technologies and guide their development. In particular, we're always looking to hire smart applied research scientists. You will join a team of around a dozen researchers specializing in generative models and deep learning.
Some of our research publications:
- SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
- Char2Wav: End-to-End Speech Synthesis
- ObamaNet: Photo-realistic Lip-sync from Text
- MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis
- Chunked Autoregressive GAN for Conditional Waveform Synthesis
- Wav2CLIP: Learning Robust Audio Representations From CLIP
Responsibilities
- Oversee the research process end to end, from problem definition to running and analyzing concrete research experiments.
- Collaborate and communicate clearly and efficiently with the rest of the team about the status, results, and challenges of your current tasks.
- Contribute to designing the research roadmap of the company.
- Train and mentor other members of the team.
- Own the research function of specific product features.
Challenges
As a core member of our research team, you'll play an integral role in challenges such as:
- Using deep learning (including NLP, speech processing, and computer vision) to solve problems in media creation and editing.
- Creating realistic voice doubles using only a few minutes of audio.
- Creating tools to synthesize photo-realistic videos that match our Overdub (personalized speech synthesis) feature.
- Designing and developing new algorithms for media synthesis, anomaly detection, speech recognition, speech enhancement, filler word detection, audio and video tagging, and more.
- Coming up with new research directions to improve our product.
Requirements
- Proven experience in designing and implementing deep learning algorithms.
- PhD or Master's degree with a specialization in deep learning, or equivalent experience.
- Track record of developing new ideas in machine learning, as demonstrated by one or more first-author publications or projects.
- Good programming skills and experience with deep learning frameworks.
- Ability to generate more ideas than you can implement.
- You implement ideas quickly and efficiently: once the experimental setup is established, you're able to implement and evaluate many ideas per day, and you organize your time to stay productive.
- You wish you had more GPUs to run all the experiments that you wanted!
- You know PyTorch/TensorFlow inside out.
- We do not require domain-specific knowledge in computer vision or speech processing.
At least one of the following must be true for the applicant to be considered:
- Lead author of an accepted publication at one of the top conferences: ICLR, ICML, NeurIPS, ICASSP, ICCV, CVPR, Interspeech, etc.
- Played a key role in shipping a production feature that uses deep learning as a core component.