Here's a simple synth I wrote some years ago. https://github.com/rikusalminen/jamtoysynth/blob/master/src/...
It was originally intended for a 4k intro (i.e. demoscene) which I never finished. The synth was written in x86 assembler using 16.16 fixed-point arithmetic, because grabbing the lower 16-bit half (AX) of a 32-bit register (EAX) has a very short instruction encoding. The old assembler synth was under 1k in size, uncompressed.
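For anyone unfamiliar with the format: 16.16 fixed point stores a number in a 32-bit integer with 16 integer bits and 16 fractional bits. A minimal Python sketch of the idea (illustrative only, not the original assembler; function names are mine):

```python
# 16.16 fixed point: value = integer / 65536.
ONE = 1 << 16  # represents 1.0

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def to_float(x: int) -> float:
    return x / ONE

def fmul(a: int, b: int) -> int:
    # Multiplying two 16.16 numbers gives a result with 32 fractional
    # bits; shift right 16 to renormalize back to 16.16.
    return (a * b) >> 16

print(to_float(fmul(to_fixed(0.5), to_fixed(3.0))))  # 1.5
```

On x86 the renormalizing shift is essentially free because the high and low halves of the product land in separate registers.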
This version uses floats and is written in easy-to-read C, but it's essentially the same logic.
I also enjoy using the keyboard as a piano-like control device, like tracker software back in the day. Here's an excellent example of playing music using the qwerty keyboard: https://www.youtube.com/watch?v=3JQkW6BgUYU
He was right. I had a delay bug.
There's a small demo [2], based on an older version of the lib, which can be played using the keyboard.
[1]: https://github.com/zenoamaro/audiokit [2]: http://zenoamaro.github.io/audiokit/
Not very hacky, but you can download the free demo of FL and have a go at playing with many synthesizers and audio samples. It's too bad my keyboard (and, I suspect, a lot of keyboards) doesn't support every possible key combination; a lot of even 3-note chords are impossible to play.
Anyone who hasn't tried it should give it a whirl - it's got the hexadecimal-happy vertical sequencing of a tracker, combined with an insanely flexible modular sound workspace. I switched to FL 'full time' eight or so years ago, but I've never found anything as intuitive for sound design as Buzz.
factor = 2**(1.0 * n / 12.0)
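Assuming `n` is the number of semitones above some reference note, a quick sketch of what that factor does to frequencies (A4 = 440 Hz is my assumed reference):

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12),
# so 12 semitones is exactly a factor of 2 (one octave).
def semitone_factor(n: int) -> float:
    return 2.0 ** (n / 12.0)

A4 = 440.0  # assumed reference pitch
print(A4 * semitone_factor(12))  # one octave up: exactly 880.0
print(A4 * semitone_factor(3))   # three semitones up (C5)
```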
Aargh, equal temperament! b^)http://thesynthesizersympathizer.blogspot.com/2014/03/buying...
The phase vocoder does use an FFT on each window internally, so that it can ensure the phases remain continuous when everything is merged back together. There are variants that let you monkey around with the FFT coefficients before the merge, so you can pitch-shift that way, but I believe when you stitch it back together you end up with the exact same artifacts as the two-step way. I think people have concentrated on perfecting time-stretching since pitch-shifting can be derived from it.
The problem with doing an FFT on the entire length of the sample is that shifting all frequencies would then simply speed it up as well as changing the pitch ;) Chopping it up into bits is key to separating the fundamental frequencies that we perceive as the general "pitch" and all the time-varying harmonics that we perceive as "timbre".
edit: what is maybe a bit hackish is the crude resampling here - when going to the trouble of building a phase vocoder at least some linear interpolation might be appropriate rather than just dropping/repeating samples ;)
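A minimal sketch of the difference, under my own illustrative names (`ratio` > 1 shifts pitch up): crude resampling just indexes the nearest input sample, while linear interpolation blends the two neighbors:

```python
def resample_crude(samples, ratio):
    # Drops or repeats samples: output[i] = input[floor(i * ratio)].
    n_out = int(len(samples) / ratio)
    return [samples[int(i * ratio)] for i in range(n_out)]

def resample_linear(samples, ratio):
    # Linear interpolation between the two nearest input samples.
    n_out = int(len(samples) / ratio)
    out = []
    for i in range(n_out):
        pos = i * ratio
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]  # clamp at the end
        out.append(a * (1.0 - frac) + b * frac)
    return out
```

The crude version introduces a stair-step error that shows up as extra high-frequency noise; linear interpolation is only a few more operations per sample and audibly cleaner.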
Furthermore, the "proper" way would pass the sound through an anti-aliasing filter before creating a higher pitch tone. However, at 48kHz (as in the post), this isn't really an issue for audio.
The most common causes of audio problems I've found are mismatched sample rates and failure to meet the deadline of the audio interrupt, causing choppy or distorted audio. A mismatched audio format (i.e. number of channels or bits per sample) might also cause problems.
Also, knowing what OS/Audio api backend is being used is vital information for debugging.
I can't afford to "nerd into" the actual problem right now. Hence the witty and non-constructive comment. Apologies.