Audio Frameworks
I ended the last installment of the Well Tempered behind-the-scenes story by saying that the 1.x versions were live from early 2009 to mid 2015. I never intended for version 2.0 to take that long. Well Tempered was my first released app as an iOS developer, and I've lost count of how many applications, in different versions and variations, I released before version 2.0 of Well Tempered finally made it into the App Store, so it wasn't because I had stopped developing for the platform. Far from it. If anything, it was probably that I wanted too much: second-system syndrome was strong with this project.
I really wanted to deliver on making a tuner that would listen to what people played and tell them how flat or sharp they were. I had already done the calculations to know where the target pitch was for the pitch pipe, and I had implemented my own spectral tuner back in 2006, so how hard could it be?
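The target math itself is the easy part. Here is a minimal C sketch of the kind of calculation involved, assuming equal temperament and A4 = 440 Hz; the function name and structure are my own illustration, not code from Well Tempered:

```c
#include <math.h>
#include <stdio.h>

/* Nearest equal-tempered note and deviation in cents, relative to A4 = 440 Hz.
   A hypothetical sketch of the target-pitch math, not Well Tempered's code. */
static double cents_off(double frequency, int *midi_note)
{
    /* MIDI note number as a real value: 69 is A4, 12 notes per octave */
    double exact = 69.0 + 12.0 * log2(frequency / 440.0);
    *midi_note = (int)lround(exact);
    /* Deviation from the nearest note, in cents (100 cents per semitone) */
    return 100.0 * (exact - *midi_note);
}

int main(void)
{
    int note;
    double cents = cents_off(446.0, &note);  /* a slightly sharp A4 */
    printf("nearest MIDI note %d, %+.1f cents\n", note, cents);
    return 0;
}
```

Given a detected frequency, that's all it takes to tell the player how flat or sharp they are. The hard part is getting a reliable frequency estimate out of the microphone in the first place.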
Well, the original iPhone wasn't a great place to calculate FFTs in real time. I believe the Accelerate framework (which I seem to remember was called vecLib at the time) was available with iPhone OS 2.0, but I could not get it to perform well on my iPhone 3G, which ran essentially the original iPhone's hardware with a 3G modem added.
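To give a sense of what that work looked like, here is a minimal sketch of the real-input FFT path through vDSP, the part of Accelerate such a tuner would lean on; the function name and buffer handling are my own illustration, not code from the app:

```c
#include <Accelerate/Accelerate.h>
#include <math.h>

/* A minimal sketch of a real-input FFT with vDSP. 'samples' is assumed to
   hold n time-domain floats; n must be a power of two. */
void magnitude_spectrum(const float *samples, float *magnitudes, vDSP_Length n)
{
    vDSP_Length log2n = (vDSP_Length)log2((double)n);
    FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

    float real[n / 2], imag[n / 2];
    DSPSplitComplex split = { real, imag };

    /* Pack the real signal into split-complex form (even/odd interleave). */
    vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);

    /* In-place forward FFT of the real input. */
    vDSP_fft_zrip(setup, &split, 1, log2n, kFFTDirection_Forward);

    /* Squared magnitudes for the n/2 frequency bins. */
    vDSP_zvmags(&split, 1, magnitudes, 1, n / 2);

    vDSP_destroy_fftsetup(setup);
}
```

In practice you would create the FFTSetup once and reuse it per buffer; even so, doing this fast enough per audio callback was a real strain on that hardware.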
When the iPhone 3GS came around, I did have a rudimentary implementation that performed acceptably, but AudioQueue wasn't a nice framework to play ball with, and I was never able to get a precision I could accept. The code I wrote for this was hard to maintain, and debugging it and experimenting with it to try to increase precision drained my energy and interest so much that I abandoned the code, not wanting to build a new release on top of it.
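For a taste of why AudioQueue felt like a chore, here is a hypothetical sketch of the input-side boilerplate it demanded; every name and parameter choice below is illustrative, not Well Tempered's actual code:

```c
#include <AudioToolbox/AudioToolbox.h>

/* Samples arrive in a plain C callback; analysis has to happen here (or be
   handed off), and the buffer must be re-enqueued so the queue can refill it. */
static void input_callback(void *user_data, AudioQueueRef queue,
                           AudioQueueBufferRef buffer,
                           const AudioTimeStamp *start_time,
                           UInt32 packet_count,
                           const AudioStreamPacketDescription *packets)
{
    /* buffer->mAudioData holds buffer->mAudioDataByteSize bytes of samples */
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}

static OSStatus start_recording(AudioQueueRef *queue)
{
    /* Every field of the stream format is filled in by hand. */
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                          | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    OSStatus err = AudioQueueNewInput(&fmt, input_callback, NULL,
                                      NULL, kCFRunLoopCommonModes, 0, queue);
    if (err != noErr) return err;

    /* Prime a few buffers before starting the queue. */
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(*queue, 1024 * fmt.mBytesPerFrame, &buffer);
        AudioQueueEnqueueBuffer(*queue, buffer, 0, NULL);
    }
    return AudioQueueStart(*queue, NULL);
}
```

None of it is hard in isolation, but it is all manual bookkeeping, and any analysis code ends up tangled into the callback plumbing.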
AudioQueue was written in C to avoid the overhead of Objective-C. That overhead wasn't much, but there was little room to spare. Since there was no real alternative to AudioQueue, I built my own Objective-C wrapper around it, but I was never happy with the performance. That is why I experimented with every new open source audio framework I saw for the iOS platform. Three in particular drew my attention:
libPD, a Pure Data engine for iOS, got a lot of my attention. My original work had been done in Pure Data and Max/MSP, so this was a world I knew, and I was hoping I could bring over my earlier work and base version 2.0 on it. While I still find the project very interesting, I must admit I have yet to ship anything built on it: I have used it for demos, on my own and with friends and co-workers, but nothing more.
The second project I found amazing was novocaine, which provided a nice block interface on top of Core Audio, striking a good compromise between performance and convenience. But alas, I could never compute the frequencies quite right with it, and after a bit of testing I discovered that its examples wouldn't even play a clean sine tone. In the end, I abandoned it.
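I never dug into why those examples distorted, but one classic cause of an impure sine in callback-driven audio is resetting the oscillator phase on every buffer, which produces clicks and a smeared spectrum. A generic sketch of doing it right, my own illustration and unrelated to novocaine's internals:

```c
#include <math.h>

/* A clean sine must carry its phase across buffer boundaries. */
static double phase = 0.0;  /* persists between callbacks */

/* Fill 'out' with 'frames' mono samples of a sine at 'frequency' Hz. */
void render_sine(float *out, unsigned frames,
                 double frequency, double sample_rate)
{
    double increment = 2.0 * M_PI * frequency / sample_rate;
    for (unsigned i = 0; i < frames; i++) {
        out[i] = (float)(0.5 * sin(phase));
        phase += increment;
        if (phase > 2.0 * M_PI)
            phase -= 2.0 * M_PI;  /* keep the phase accumulator bounded */
    }
}
```

Whatever the actual cause was, a framework whose own examples couldn't produce a pure tone was not something I wanted to base a tuner on.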
The third project that got me excited was AudioKit, which looked like a really nice framework on top of Csound. I was surprised to find I could use it in real time, as I had always seen people use Csound to render sounds and music for playback afterwards. And with the nice examples that came along with it, I could quickly prototype enough of what I wanted to gain confidence that I could build my version 2.0 on it.
AudioKit has a great maintainer and a very inclusive community with an active mailing list. If you want to write audio software for iOS, you really should check it out. I hope to use it in many future projects, and I plan to keep it as the basis for Well Tempered's audio and audio analysis.
Come back for installment three of the Well Tempered behind-the-scenes story.