MA Music, Science and Technology

The Effects of Timbre and Musical Training on Vocal Pitch-Matching Accuracy

Mar 2013 – Present

Vocal Pitch-matching Research, PsychoPy, Praat, MATLAB, Audacity

Abstract – Pitch control is one of the most essential skills in singing. Previous studies have indicated that poor-pitch singers match pitch more accurately when the timbre of the reference tone is vocal rather than non-vocal. However, it has not been consistently shown whether this vocal timbre advantage holds across musically trained and untrained individuals. The present study investigated how vocal pitch-matching is influenced by the timbre of the reference tone and the type of musical training. Pure tones and recorded voice were used as reference stimuli in a single-tone pitch-matching task, with vocalists, instrumentalists and non-musicians as participants. Acoustic analysis of the vocal responses revealed that pitch accuracy was significantly better in response to voice across all groups, while vocalists did not differ from instrumentalists. This suggests that the vocal timbre advantage is universal and robust, and is not altered by intensive vocal practice. Pitch variability across trials was significantly larger in non-musicians than in the two musician groups, and non-musicians also showed greater heterogeneity across individuals.
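
A minimal sketch of the kind of accuracy measure implied here, assuming the analysis converts each sung response's estimated f0 (e.g. from Praat) into a signed error in cents against the reference tone; the function and values below are illustrative, not the study's actual pipeline.

```python
import numpy as np

def cents_error(f0_response_hz, f0_reference_hz):
    """Signed pitch-matching error in cents (100 cents = 1 semitone)."""
    return 1200.0 * np.log2(np.asarray(f0_response_hz) / f0_reference_hz)

# Hypothetical trials: sung f0 estimates (e.g. from Praat) vs. a 220 Hz reference.
responses_hz = [221.5, 218.0, 224.3, 219.2]
errors = cents_error(responses_hz, 220.0)

accuracy = np.mean(np.abs(errors))    # mean absolute error, in cents
variability = np.std(errors, ddof=1)  # across-trial standard deviation
print(f"accuracy: {accuracy:.1f} cents, variability: {variability:.1f} cents")
```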

Shekar P. & Fujioka T. (2014, August). The Effects of Timbre and Musical Training on Vocal Pitch-Matching Accuracy. In 13th International Conference on Music Perception and Cognition.

Slide Deck | Paper


Piano Teacher from Hell

Jun 2014 – Final Project, Neuroplasticity and Musical Gaming

Musical Virtual Reality Game, Unity 3D, Oculus Rift, Midi Keyboard

Survival horror game that beckons the student (user) to take a piano lesson with a nightmarish teacher (“Slender Man”). The motivation was to leverage neuroplasticity to teach keyboard sight-reading skills. Practice, practice, practice! Poster designed by me.

Photos


Rose Bud Thorn

Jun 2014 – Final Project, Interaction Design Studio

Web App (for mobile), Handlebars.js, node.js, MySQL

Social networking app for small, intimate circles of friends to share how they’re really doing through structured and periodic life updates. Rose – what was your highlight? Bud – what are you looking forward to? Thorn – what was your lowlight? “Give sunshine” feedback to others’ updates.

Video | Slide Deck


Kickstart Your {Sex} Life

Jun 2014 – Final Project, From Play To Innovation

Female Contraceptive Adoption – Design Proposal

d.school project tackling the uncomfortable issue of contraception for young women. A Kickstarter for safe sex – incentivizing the adoption of long-term contraceptives (IUDs and implants) with crowdfunded assistance. A flier (designed by me) pointing to a launching-soon site drew interest from 150 unique visitors.

Video | Slide Deck


Kollabo

Mar 2014 – Final Project, Mobile Music

iOS App, Real-time Audio, Mobile Music Toolkit (MoMu), OpenGL ES

Collaborative music visualization builder. Playlist music and tactile interactions animate graphical visualizations. Saved visualizations can be viewed and extended by others listening to the same song, encouraging “creative collaboration”.

Webpage | Slide Deck


Gong

Feb 2014 – Mobile Music

iOS App, Real-time Audio, Mobile Music Toolkit (MoMu), OpenGL ES

Musical take on the classic arcade game Pong. Balls generate reverbed mandolin plucks when they collide with surfaces. The user controls a paddle to deflect balls, sounding them prominently over the colliding chimes of the other balls.
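
A minimal sketch of the collision-to-pluck idea, assuming a simple Karplus-Strong string in place of the MoMu/STK pluck actually used; the event times and notes are made up for illustration.

```python
import numpy as np

SR = 44100

def karplus_strong_pluck(freq_hz, dur_s=1.0, decay=0.996):
    """Very simple plucked-string tone: noise burst through a feedback delay line."""
    n = int(SR * dur_s)
    delay = int(round(SR / freq_hz))
    buf = np.random.uniform(-1, 1, delay)
    out = np.zeros(n)
    for i in range(n):
        out[i] = buf[i % delay]
        # Averaging low-pass in the feedback loop gives the decaying pluck.
        buf[i % delay] = decay * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

# On each ball/surface collision, mix a pluck into the output at the collision time.
mix = np.zeros(SR * 2)
for t_s, midi_note in [(0.25, 64), (1.10, 67)]:    # hypothetical collision events
    tone = karplus_strong_pluck(440.0 * 2 ** ((midi_note - 69) / 12))
    start = int(t_s * SR)
    end = min(start + len(tone), len(mix))
    mix[start:end] += tone[: end - start]
```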

Webpage


iDrums

Feb 2014 – Mobile Music

iOS App, Real-time Audio, Mobile Music Toolkit (MoMu), OpenGL ES

Virtual drum kit – a hitting motion (acceleration) with the iPhone triggers a sample-based mid tom, snare, ride or crash cymbal sound. Be generous – drumming works best with one device in each hand!
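
A hedged sketch of how acceleration-triggered drumming can work: threshold the accelerometer magnitude with a short refractory window so each swing fires one sample. The threshold, sample rate and function names are assumptions, not the app's actual values.

```python
import numpy as np

def detect_hits(accel_xyz, threshold_g=2.0, refractory_samples=10):
    """Return indices where acceleration magnitude crosses the threshold upward,
    suppressing re-triggers inside a short refractory window."""
    mag = np.linalg.norm(accel_xyz, axis=1)
    hits, last = [], -refractory_samples
    for i in range(1, len(mag)):
        if mag[i] >= threshold_g > mag[i - 1] and i - last >= refractory_samples:
            hits.append(i)
            last = i
    return hits

# Hypothetical 100 Hz accelerometer stream; each detected hit would trigger a drum sample.
stream = np.random.normal(0, 0.3, (200, 3))
stream[50] = [0.1, 3.5, 0.2]     # a simulated "hit" swing
print(detect_hits(stream))
```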

Webpage


Helvetica and Nebulae

Dec 2013 – Final Project, Software Paradigms of Computer Music

Interactive App, Real-time Audio, C++, Synthesis Toolkit (STK), RtAudio, OpenGL

Users can explore their current music playlists by exploding tracks into grain-like constellations, and re-imagine them by chaining grains with a sample-based step synthesizer.
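
A rough sketch of the two core ideas as described, under the assumption that grains are fixed-length slices re-sequenced on a step grid; the real app works on playlist audio with OpenGL visuals, which this omits.

```python
import numpy as np

SR = 44100

def slice_into_grains(track, grain_ms=80):
    """Chop a mono track into equal-length grains (the 'constellation')."""
    n = int(SR * grain_ms / 1000)
    return [track[i:i + n] for i in range(0, len(track) - n, n)]

def step_sequence(grains, pattern, bpm=120, steps_per_beat=4):
    """Lay chosen grains onto a step grid; None leaves a silent step."""
    step_len = int(SR * 60 / bpm / steps_per_beat)
    out = np.zeros(step_len * len(pattern))
    for step, idx in enumerate(pattern):
        if idx is None:
            continue
        g = grains[idx][:step_len]
        out[step * step_len: step * step_len + len(g)] += g
    return out

track = np.random.uniform(-1, 1, SR * 4)          # stand-in for a playlist track
grains = slice_into_grains(track)
loop = step_sequence(grains, [0, None, 7, 3, 0, None, 12, 3])
```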

Webpage


Sonic Shape Sorter

Nov 2014 – Software Paradigms of Computer Music

Visualizer, Real-time Audio and Graphics, C++, RtAudio, OpenGL

Desktop music visualizer for system audio. In real time, it renders an FFT waterfall of sorts, along with spectral chroma energy (the energy in each pitch class, indicating tonality).
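
A minimal sketch of the chroma-energy computation described above (folding FFT bin energies into 12 pitch classes); the app itself is real-time C++, so this Python version only illustrates the math.

```python
import numpy as np

SR = 44100

def chroma_energy(frame):
    """Fold FFT bin energies into 12 pitch classes (C, C#, ..., B)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / SR)
    chroma = np.zeros(12)
    for f, e in zip(freqs[1:], spectrum[1:]):          # skip DC
        if 27.5 <= f <= 8000:                          # audible pitch range
            midi = 69 + 12 * np.log2(f / 440.0)
            chroma[int(round(midi)) % 12] += e
    return chroma / (chroma.sum() + 1e-12)

frame = np.sin(2 * np.pi * 440.0 * np.arange(2048) / SR)   # an A4 test tone
print(np.argmax(chroma_energy(frame)))                     # pitch class 9 = A
```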

Webpage


Modeling the Harmonic Exciter

Mar 2014 – Final Project, Software for Sound Synthesis and Audio Effects

Audio Effects Modeling, FAUST, MATLAB

Abstract – A harmonic exciter is an audio effects signal processor applied to enhance the brightness and clarity of a sound, particularly used for vocals. This is achieved by inducing a measured amount of high-frequency distortion. In this paper, an exciter is digitally modeled and implemented as a standalone application (or plugin) using the FAUST (Functional AUdio STream) programming language for real-time audio. The model is based on the Aural Exciter by Aphex Ltd., an analog hardware unit. Technical specifications of the Aural Exciter are drawn from the original 1979 patent. The digital model performs as expected, recreating a “vintage” style audio effect.
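
A hedged sketch of the basic exciter structure the abstract describes (isolate the top of the spectrum, distort it gently to generate upper harmonics, mix a small amount back with the dry signal). The project itself is written in FAUST and follows the Aural Exciter patent; the filter order, drive and mix values below are illustrative.

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def exciter(x, cutoff_hz=3000.0, drive=4.0, mix=0.15):
    """Illustrative exciter: high-passed band -> soft-clipping nonlinearity -> blend."""
    b, a = butter(2, cutoff_hz / (SR / 2), btype="highpass")
    highs = lfilter(b, a, x)
    harmonics = np.tanh(drive * highs)     # odd-harmonic distortion of the top end
    return x + mix * harmonics

x = np.sin(2 * np.pi * 2500.0 * np.arange(SR) / SR) * 0.5
y = exciter(x)
```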

Shekar P. & Smith III J.O. (2013, October). Modeling the Harmonic Exciter. In Audio Engineering Society Convention 135. Audio Engineering Society.

Paper | Samples


Sweetspot

Jul 2013 – Stanford Graduate School of Business Ignite program

3D Binaural Headset and App – Business Proposal

Where today’s concert crowd “throws their smartphones in the air”, Sweetspot lets you capture these soundscapes in 3D and relive them as if you were there – think bootlegging 2.0. My initial venture idea was selected for the program out of 140 entries. I led a six-person team to develop the business case for our final venture, which was well received by VCs.

Final Video | Initial Video | Initial Writeup | Photos


Synthesizing the Vocal Textures of Female Pop Singers

Mar 2013 – Final Project, Advanced Acoustics

Vocals, C++, Synthesis Toolkit (STK), MATLAB

A brief investigation into the acoustics of the singing voice, particularly the vocal fry (pulse) register frequently exploited in pop music. The Liljencrants-Fant model is adapted to a “best guess” of the derivative glottal excitation waveform during fry, and compared with live recordings.
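
A minimal sketch of a two-segment LF-style derivative glottal pulse, assuming hand-picked shape parameters rather than the area-balance solve a full LF fit would use; the parameter names follow common LF notation (Tp, Te, Ta, Ee), but the values are illustrative, not those estimated in the paper.

```python
import numpy as np

def lf_pulse(T0=1/80.0, Tp=0.42, Te=0.55, Ta=0.02, Ee=1.0, alpha=60.0, sr=44100):
    """One period of an LF-style derivative glottal flow (Tp, Te, Ta given as
    fractions of the period T0). The open phase is a growing sinusoid; the
    return phase is an exponential recovery back toward zero."""
    t = np.arange(int(T0 * sr)) / sr
    tp, te, ta = Tp * T0, Te * T0, Ta * T0
    wg = np.pi / tp
    # Scale the open phase so it reaches -Ee at the instant of main excitation te.
    E0 = -Ee / (np.exp(alpha * te) * np.sin(wg * te))
    open_phase = E0 * np.exp(alpha * t) * np.sin(wg * t)
    # Simple exponential return phase (a full LF fit also enforces zero net area).
    eps = 1.0 / ta
    ret = -Ee * np.exp(-eps * (t - te))
    return np.where(t < te, open_phase, ret)

pulse = lf_pulse()           # one cycle at ~80 Hz, a fry-like low pitch
```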

Paper


Dough Re Mi

Dec 2012 – Final Project, Physical Interaction Design for Music

Computer Instrument, Interaction Design, Play Dough, BeagleBoard, Arduino Nano, PureData

First computer-instrument toy for audio manipulation via electrically conductive play dough control! It seeks to lower the barrier between esoteric electronics and a general audience. Showcased by a Wired correspondent at Pop-Up Magazine in Davies Symphony Hall, San Francisco, and demoed at Maker Faire Bay Area 2013.

Video | Paper | Photos


Skins Reprise

Dec 2012 – Final Project, Fundamentals of Computer-Generated Sound

Computer Instrument, Interaction Design, Apple Magic Trackpad, ChucK, Wekinator

Improvisational computer instrument accompanied by a fixed, algorithmically generated backing track (Lucid Dream). A trackpad controls synth instruments via machine-learning-trained models, mapping finger position and velocity to several parameters (e.g. stereo pan, timbre, melodic line).
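
A toy sketch of the gesture-to-parameter mapping idea, assuming a least-squares linear map from (x, y, velocity) features to two synth parameters; Wekinator trains richer models interactively, so this only shows the shape of the technique.

```python
import numpy as np

# Hypothetical training examples: trackpad features -> desired synth parameters.
features = np.array([   # [x, y, finger_velocity]
    [0.1, 0.2, 0.0],
    [0.9, 0.8, 0.5],
    [0.5, 0.5, 1.0],
])
targets = np.array([    # [stereo_pan, filter_cutoff_norm]
    [-1.0, 0.1],
    [ 1.0, 0.8],
    [ 0.0, 1.0],
])

# Least-squares linear map with a bias term.
X = np.hstack([features, np.ones((len(features), 1))])
W, *_ = np.linalg.lstsq(X, targets, rcond=None)

def map_gesture(x, y, vel):
    pan, cutoff = np.array([x, y, vel, 1.0]) @ W
    return float(pan), float(np.clip(cutoff, 0.0, 1.0))

print(map_gesture(0.7, 0.6, 0.3))   # e.g. drives pan and timbre of the synth
```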

Webpage


The Day of Madness

Dec 2012 – Fundamentals of Computer-Generated Sound

Composition, Vocals, ChucK, Audacity

A study in voice, highlighting three themes – free, rhythm and score. A cappella recordings of Voi Che Sapete (Mozart’s The Marriage of Figaro) were captured with a DIY capsule mic and fed into an FFT-based real-time pitch follower, which was then used as a control signal to manipulate Moog-like swept-filter synths, formant filters and live samplers.
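
A minimal sketch of an FFT-based pitch follower of the kind described: pick the strongest spectral peak in the vocal range and refine it by parabolic interpolation. The piece's ChucK implementation may differ in detail.

```python
import numpy as np

SR = 44100

def fft_pitch(frame, fmin=80.0, fmax=1000.0):
    """Estimate f0 as the strongest FFT peak in [fmin, fmax], refined by
    parabolic interpolation around the peak bin."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / SR)
    band = (freqs >= fmin) & (freqs <= fmax)
    k = np.argmax(np.where(band, spec, 0.0))
    if 0 < k < len(spec) - 1:
        a, b, c = spec[k - 1], spec[k], spec[k + 1]
        k = k + 0.5 * (a - c) / (a - 2 * b + c)     # fractional peak offset
    return k * SR / len(frame)

frame = np.sin(2 * np.pi * 261.6 * np.arange(4096) / SR)    # middle C test tone
print(fft_pitch(frame))     # the estimate becomes a control signal for the synths
```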

Webpage


Lucid Dream

Oct 2012 – Fundamentals of Computer-Generated Sound

Algorithmic Composition, Binaural, ChucK

Sitar physical model intertwined with a live-sampled track, Dreams on Fire (A.R. Rahman). Voices are algorithmically controlled to produce deterministic, random or periodic forms.
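
A toy sketch of what deterministic, random and periodic voice control can look like as a note scheduler; the scale, repeating cell and durations are made up, and the actual ChucK score differs.

```python
import random

def schedule(mode, n_notes=16, scale=(0, 2, 4, 7, 9), base=60):
    """Generate (midi_note, duration_beats) pairs in one of three forms."""
    events = []
    for i in range(n_notes):
        if mode == "deterministic":          # fixed ascending pattern
            degree, dur = i % len(scale), 0.5
        elif mode == "random":               # independent random choices
            degree, dur = random.randrange(len(scale)), random.choice([0.25, 0.5, 1.0])
        elif mode == "periodic":             # short repeating cell
            degree, dur = [0, 2, 4, 2][i % 4], 0.5
        else:
            raise ValueError(mode)
        events.append((base + scale[degree], dur))
    return events

print(schedule("periodic")[:4])
```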

Webpage


Oh Boi

Sep 2012 – Fundamentals of Computer-Generated Sound

Composition, Binaural, ChucK, Audacity

Binaurally rendered radio play, borrowing dialogue from the New Zealand film Boy (Taika Waititi). A 4-channel mix was also produced.

Webpage


Wormholes

Oct 2012 – Final Project, Composition for Electronic Musicians

Computer Instrument, Vocals, Max

Improvisational computer instrument for exploring real-time vocal effects. A virtual choir generated by formant synthesis accompanies (pitch-follows) the live vocalist; these sources are then granularized.
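
A minimal formant-synthesis sketch, assuming a pulse-train source through parallel band-pass filters at textbook "ah" formant frequencies; the Max patch's actual formant data and granular stage are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def formant_voice(f0=220.0, dur_s=1.0, formants=(700, 1220, 2600), bw=120.0):
    """Pulse-train source shaped by parallel band-pass filters at formant centers."""
    n = int(SR * dur_s)
    src = np.zeros(n)
    src[::int(SR / f0)] = 1.0                      # simple glottal pulse train
    out = np.zeros(n)
    for fc in formants:
        b, a = butter(2, [(fc - bw) / (SR / 2), (fc + bw) / (SR / 2)], btype="bandpass")
        out += lfilter(b, a, src)
    return out / np.max(np.abs(out))

# The choir voice would track the pitch follower's f0 estimate of the live vocalist.
choir_note = formant_voice(f0=246.9)               # e.g. pitch-following a sung B3
```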
