This coding project from Nao Tokui uses neural networks to analyze a short recorded sample, which is broken down and classified into sound types; it then improvises various rhythms from those segments. The video above was put together by Nao himself and shared on his Twitter profile.
Neural Beatbox? RNN-based Rhythm Generation + Audio Classification = FUN!
Built with TensorFlow.js, Magenta.js and p5.js. The rhythm generation part was based on Magenta's DrumsRNN and teropa's Neural Drum Machine!
I use my own Keras model (converted to a TensorFlow.js model) to classify the sound segments.
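The "broken down into sound types" step implies the recording is first sliced into segments before each slice is classified. A minimal sketch of one common way to do this, energy-based onset segmentation, is below; the function name, frame size, and threshold are all assumptions for illustration, not the project's actual code.

```javascript
// Hypothetical sketch: split a mono audio buffer into segments wherever
// the frame energy jumps above a threshold. Names and values are assumed.
function segmentByEnergy(samples, { frameSize = 512, threshold = 0.1 } = {}) {
  const onsets = [];
  let prevEnergy = 0;
  for (let start = 0; start + frameSize <= samples.length; start += frameSize) {
    let energy = 0;
    for (let i = start; i < start + frameSize; i++) {
      energy += samples[i] * samples[i];
    }
    energy /= frameSize; // mean squared amplitude of this frame
    // Mark an onset where energy rises above the threshold from below it.
    if (energy > threshold && prevEnergy <= threshold) {
      onsets.push(start);
    }
    prevEnergy = energy;
  }
  // Slice the buffer between consecutive onsets.
  return onsets.map((start, i) =>
    samples.slice(start, i + 1 < onsets.length ? onsets[i + 1] : samples.length)
  );
}
```

Each returned slice could then be fed to the classifier (e.g. a converted Keras model loaded in the browser) to decide which drum sound it most resembles.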
The project runs in-browser, and you can try it out for yourself here.
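For context on the rhythm-generation side, Magenta's drum models consume a quantized `NoteSequence` whose notes use General MIDI drum pitches (36 = kick, 38 = snare, 42 = closed hi-hat). The helper below is an illustrative sketch of building such a seed sequence in plain JavaScript; the function name and pattern are assumptions, not taken from the project.

```javascript
// Sketch of the quantized NoteSequence shape that Magenta's drum RNNs accept.
// pattern maps a MIDI drum pitch to the 16th-note steps on which it plays.
function drumPatternToNoteSequence(pattern, totalSteps = 16, stepsPerQuarter = 4) {
  const notes = [];
  for (const [pitch, steps] of Object.entries(pattern)) {
    for (const step of steps) {
      notes.push({
        pitch: Number(pitch),
        quantizedStartStep: step,
        quantizedEndStep: step + 1,
        isDrum: true,
      });
    }
  }
  return {
    notes,
    quantizationInfo: { stepsPerQuarter },
    totalQuantizedSteps: totalSteps,
  };
}

// One bar: kick on beats 1 and 3, snare on 2 and 4, steady 8th-note hats.
const seed = drumPatternToNoteSequence({
  36: [0, 8],
  38: [4, 12],
  42: [0, 2, 4, 6, 8, 10, 12, 14],
});
```

With magenta.js loaded in the page, a seed like this could then be continued by a drum model, e.g. `new mm.MusicRNN(checkpointUrl).continueSequence(seed, 16, 1.1)`, though the exact model and parameters the project uses are not stated in the post.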