At the anime deep learning group, we explore using neural networks to build tools for anime production. Students generally work with our in-house anime dataset and develop new deep learning methods trained on it. We meet every two weeks (online during the current crisis) and are in direct contact with artists working in Japanese studios.
The specific problems we are trying to solve can change quickly but, at the time of writing, we are working towards two main directions:
- Assisted Drawing: we have taught AI to draw anime, and are developing assistants that can work alongside artists.
- Automated Coloring: we are working to automate the repetitive coloring tasks in keyframe and in-between character animation.
Topics and Tasks
Multiple topics are generally open to applicants, but which ones are available changes quite quickly. The topic will be chosen after discussion with the supervisor, according to the applicant's qualifications and interests. The exact tasks will depend on the topic, but all generally involve Deep Learning and/or Image Processing with our dataset.
Here are some examples of what students have worked on:
- Metadata Estimation from Single Image: We have explored estimating the series, genre, studio, etc. from a single user-provided image.
- Optimized Input: Due to the nature of 2D animation, using the raw production images might not be the optimal input for learning. We explored alternative input representations, such as contour lines and contour distance fields.
- Feature Extraction: Before developing some drawing tools, we need to be able to extract specific features from images in the dataset. We explored developing networks capable of identifying such features.
- Feature Translation: Character and scene features are often drawn at different levels of quality, depending on factors such as whether they are in focus or their distance from the camera. We have explored translating drawings between these quality levels.
- Frame Relevancy Estimation: Not all frames from a show are equally characteristic (for example, a black screen). We have explored classifying the importance of images in the dataset, so that the most characteristic images of a show can be identified, and irrelevant ones can be excluded from other learning algorithms.
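To give a concrete feel for one of the alternative input representations mentioned above, here is a minimal sketch of turning line art into a contour distance field, where each pixel stores its Euclidean distance to the nearest contour pixel. This is only an illustration under assumed conventions (grayscale line art with dark lines on a light background; the function name and threshold are our own), not the group's actual pipeline, and the brute-force computation is only practical for small images.

```python
import numpy as np

def contour_distance_field(line_art, threshold=128):
    """Map a grayscale line drawing (dark lines on light paper) to a
    distance field: each pixel holds the Euclidean distance to the
    nearest contour pixel. Brute force, for illustration only."""
    ys, xs = np.nonzero(line_art < threshold)      # coordinates of contour pixels
    contour = np.stack([ys, xs], axis=1)           # shape (N, 2)
    h, w = line_art.shape
    grid = np.stack(np.mgrid[0:h, 0:w], axis=-1)   # (h, w, 2) pixel coordinates
    # Pairwise distances from every pixel to every contour point, then min.
    d = np.linalg.norm(grid[:, :, None, :] - contour[None, None, :, :], axis=-1)
    return d.min(axis=2)

# Toy 5x5 "drawing" with a single vertical line in the middle column.
img = np.full((5, 5), 255, dtype=np.uint8)
img[:, 2] = 0
field = contour_distance_field(img)
```

Unlike raw pixels, such a field varies smoothly away from the lines, which can give a network a more informative signal than the sparse contour image itself.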
Requirements
- Fluent in English, spoken and written (the supervisor speaks English; code and reports must be written in English).
- Familiar with Machine Learning (e.g., has taken a course on the subject), or willing to learn new concepts independently before starting work on the thesis.
- Basic understanding of the media domain in question.
- Available to meet with the group bi-weekly (remotely).
- Proficiency in Python and PyTorch.
- Previous Machine Learning work.