The pitch class shifter is a new feature in SoundPrism Pro for the iPad; it is not yet available on the iPhone. It is located directly to the right of the BassSection.
Using it is quite simple: Touch the pitch class shifter at the pitch class you want to change (e.g. C) and slide your finger upwards. The pitch class will be raised by one semitone (e.g. from C to C#). If you instead move your finger downwards, the pitch class will be lowered by one semitone (e.g. from C to Cb).
Changing a pitch class means that all pitches of that pitch class (one horizontal stripe) are changed. The bright bar to the right of the c# in the next figure indicates that this pitch class has been raised by one semitone.
To neutralize the change of a pitch class simply tap the pitch class shifter at the shifted pitch class again.
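Under the hood, a shifter like this only needs to remember one semitone offset per pitch class. The following is a minimal Swift sketch of that idea; it is an illustration under assumptions, not SoundPrism's actual implementation, and all names in it are hypothetical.

```swift
// Hypothetical sketch of the state a pitch class shifter could keep:
// one semitone offset per pitch class, toggled by the gestures described above.
struct PitchClassShifter {
    // offsets[pitchClass] is -1, 0 or +1 semitones; pitch classes are 0...11 with C = 0.
    var offsets = [Int](repeating: 0, count: 12)

    // Sliding up raises the pitch class by one semitone, sliding down lowers it.
    mutating func slide(pitchClass: Int, up: Bool) {
        offsets[pitchClass] = up ? 1 : -1
    }

    // Tapping the shifted pitch class again neutralizes the change.
    mutating func reset(pitchClass: Int) {
        offsets[pitchClass] = 0
    }

    // Applies the shift to a concrete pitch (here: a MIDI note number).
    func shifted(midiPitch: Int) -> Int {
        midiPitch + offsets[midiPitch % 12]
    }
}
```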
This blog post is part of a series about the musical theory which SoundPrism is based upon. Check out the previous one: SoundPrism Pitch Layout I
During my dissertation I had to wait several weeks until a certain book, which I had ordered from another library, finally arrived at my university's library. I was hoping to be able to quote interesting passages from it. I was given white wool gloves that I had to wear just to touch its pages. I was not allowed to photocopy it or even take a photo of some of the illustrations.
The book was written by Wilhelm M. Drobisch and its title was „Über musikalische Tonbestimmung und Temperatur“ (German for "On musical pitch determination and temperament"). It was written in 1855 and I finally had the honor to hold one of the original prints in my hands. One of the theoretical models described in this book is an important part of SoundPrism.
I’m talking about the pitch class - pitch height space. This blog post is dedicated to it.
The pitch class - pitch height space
If we want to create instruments based on musical circles like the circle of fifths, a chromatic circle, a diatonic one or a circle of thirds, we have one problem: these circles tell us nothing about how to assign different octaves to the pitch classes they contain. The reason is that these circles do not contain pitches; instead, they contain pitch classes. A pitch class is a kind of pitch for which it is not specified at which octave it has to be played. Or in other words: all Cs on the piano have the same pitch class. To solve this problem we need a model that tells us how to locate the different octaves of a given pitch class on our instrument's surface. Such a model is the pitch class - pitch height space (or relation), first proposed by M. W. Drobisch in 1855.
Figure 1: M.W. Drobisch‘s pitch class - pitch height space (taken from )
As you can see in Figure 1, the model is based on a spiral (a helix). It is three-dimensional: in the (horizontal) xy-plane you see the chroma circle, which consists of the 12 chromatic pitch classes c, c#, d, …, b. The circle at the bottom of the spiral can therefore be called the „pitch class dimension“.
To locate different octaves of a pitch class, Drobisch introduces a third dimension, the (vertical) z-axis. According to its frequency, every pitch is assigned a unique z-position: pitches with low frequencies are assigned smaller z-values and pitches with high frequencies are assigned larger z-values.
E.g. if you follow the pitches c1, c#1, d1, d#1 etc., you will find that the z-position slightly increases as you continually climb the spiral upwards. Therefore the z-axis can also be called the „pitch height dimension“.
The relationship between the pitch class and pitch height dimensions can be demonstrated as follows: imagine you have climbed up Drobisch's spiral and are now at the pitch „C3“, holding a tennis ball in your hand. If you open your hand, the tennis ball will hit the ground at the location of the pitch class „c“. In other words: Drobisch's model assigns a corresponding pitch class to every pitch.
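To make the geometry concrete, here is a minimal Swift sketch of the helix, assuming pitches are given as MIDI note numbers; the names and parameters are illustrative assumptions, not taken from Drobisch or from SoundPrism.

```swift
import Foundation

// A point on Drobisch's helix: the chroma circle lies in the xy-plane,
// the pitch height is carried by the z-axis.
struct HelixPoint {
    let x: Double, y: Double, z: Double
}

// Hypothetical mapping: angle from the pitch class, height from the pitch itself.
func helixPosition(midiPitch: Int, radius: Double = 1.0, heightPerSemitone: Double = 0.1) -> HelixPoint {
    let pitchClass = midiPitch % 12                          // 0 = c, 1 = c#, ..., 11 = b
    let angle = Double(pitchClass) / 12.0 * 2.0 * Double.pi  // position on the chroma circle
    return HelixPoint(x: radius * cos(angle),
                      y: radius * sin(angle),
                      z: Double(midiPitch) * heightPerSemitone) // pitch height dimension
}

// "Dropping the tennis ball": projecting a pitch onto the xy-plane yields its pitch class.
func pitchClass(of midiPitch: Int) -> Int {
    midiPitch % 12
}
```

Two pitches an octave apart land on the same xy-position but at different z-values, which is exactly the separation the model is after.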
The pitch class - pitch height relation is one of the most fundamental pitch relations. This becomes obvious when we look at the work of J.D. Warren, who showed that our brain separates perceived music into the components pitch class (pitch chroma) and pitch height. Both are processed in different brain regions.
This means that if we are developing musical instruments that follow human music cognition (See my previous blog post - “Motivation”), we have to take the pitch class - pitch height relation into account.
Drobisch‘s model does not only work for the chroma circle: the chroma circle (see Figure 1) can be replaced by arbitrary pitch class arrangements. We can therefore replace the chromatic circle by the key related circle of thirds that I introduced in my last blog post. Another thing we have to change is the three-dimensionality: to use Drobisch's model on the iPhone or iPad we have to reduce it to two dimensions. The differences between Drobisch's model and the model behind SoundPrism are therefore the following:
Drobisch uses a circular representation. SoundPrism uses a linear one.
Drobisch uses the z-axis as the pitch height dimension; SoundPrism uses the x-axis.
How the pitch class - pitch height space of SoundPrism is derived is explained in the following video.
Video: The derivation of SoundPrism's Pitch Class - Pitch Height Space
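As a rough illustration of that derivation, here is a small Swift sketch of the resulting two-dimensional layout. It assumes the key related circle of thirds for C major / a minor in the order d, F, a, C, e, G, b and normalized coordinates; it is not the actual SoundPrism code.

```swift
// Assumed ordering of the pitch classes along the key related circle of thirds
// for C major / a minor (an assumption for illustration).
let pitchClassOrder = ["d", "F", "a", "C", "e", "G", "b"]

struct SurfacePosition {
    let x: Double   // pitch height dimension (horizontal)
    let y: Double   // pitch class dimension (vertical)
}

// index: position of the pitch class within the circle-of-thirds order,
// octave: which repetition of the circle along the height axis.
func surfacePosition(index: Int, octave: Int) -> SurfacePosition {
    let fraction = Double(index) / Double(pitchClassOrder.count)
    return SurfacePosition(x: Double(octave) + fraction,  // height grows with octave and position
                           y: fraction)                   // pitch class repeats in every octave
}
```

The exact scaling is of course a design decision; the point is only that the two axes carry the two dimensions separately.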
The particular advantage of the pitch class - pitch height space is that it works for arbitrary pitch class layouts. This makes it possible to introduce new pitch class layouts, tunings, etc.
It is furthermore possible - as shown in the above video - to define so-called „pitch selections“. A pitch selection is an area placed over the pitch space. All pitches that are covered by the pitch selection are played.
In SoundPrism the pitch selection is visualized as the bright rectangle that you create by tapping the screen with your finger.
Moving your finger along the pitch class axis (in other words, vertically) triggers the next relative chord.
Moving the finger along the pitch height dimension (in other words, horizontally) plays the next inversion of the current chord.
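A hypothetical data model for such a pitch selection could look like the following sketch (again an assumption for illustration, not SoundPrism's implementation): a rectangle over the two-dimensional pitch space and a test for which pitches it covers.

```swift
// A pitch selection as a rectangle over the normalized pitch space.
// Every pitch whose position falls inside the rectangle is played.
struct PitchSelection {
    var xRange: ClosedRange<Double>   // extent along the pitch height dimension (horizontal)
    var yRange: ClosedRange<Double>   // extent along the pitch class dimension (vertical)

    func covers(x: Double, y: Double) -> Bool {
        xRange.contains(x) && yRange.contains(y)
    }
}

// Example: a selection around the middle of the surface.
let selection = PitchSelection(xRange: 0.4...0.6, yRange: 0.3...0.5)
print(selection.covers(x: 0.5, y: 0.45))   // true: this pitch would sound
```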
My next blog post
In my next post I will start to show how all of these things can be applied to cadences, as music theory is useless if it doesn't show you how to create better music. :-)
References
Shepard, Roger: „Geometrical approximations to the structure of musical pitch“. In: Psychological Review 89(4), Jul 1982, pp. 305–333
Drobisch, Wilhelm M.: „Über musikalische Tonbestimmung und Temperatur“. In: Abhandlungen der Königlich Sächsischen Gesellschaft der Wissenschaften, Bd. 4. Leipzig: Hirzel, 1855, http://books.google.de/books?id=UlIQAAAAYAAJ
Gatzsche, G.; Mehnert, M.; Gatzsche, D.: „The Harmony Pad - A new creative tool for analyzing, generating and teaching tonal music“. AudioMostly 2008 - A Conference on Interaction with Sound, Piteå, Sweden, 2008, http://bit.ly/9Q7A3w
When I was 10 years old, my father attended our music theory lessons and saw how complicated it was for us to learn which tones make up certain chords, which chords belong to certain keys, and so on. During that time he had the idea for a musical circle which helped us a lot in understanding several musical relationships. In his diploma thesis my brother David later developed a pedagogical concept around that circle. Finally, in 2005, I had the idea to develop a musical instrument based on it - SoundPrism.
The key related circle of thirds
The circle mentioned before is called the „key related circle of thirds“. Figure 1 shows the circle for the keys C-major and a-minor. The black points represent the actual pitch classes. The grey points show how many semitones lie between two pitch classes, e.g. between the pitch classes C and e there are four semitones. The circle as shown in Figure 1 is full of musical semantics: e.g. it shows which tones build which chords, visualizes the well-formedness of a cadence and outlines the musical functions of chords geometrically. Additionally, I discovered a strong relationship between psychologically measured data and the geometric positions of the pitch classes within the circle. I will explain that in more detail in another blog post.
Figure 1: The key related circle of thirds for C-major / a-minor
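For illustration, the following small Swift sketch lists the pitch classes of the circle in the order d, F, a, C, e, G, b (an assumption consistent with Figure 2 below, not the official definition) and prints the semitone distance between neighbouring pitch classes.

```swift
// Assumed ordering of the key related circle of thirds for C major / a minor.
let circleOfThirds = ["d", "F", "a", "C", "e", "G", "b"]
// Semitone value of each pitch class, with C = 0.
let semitoneOfPitchClass: [String: Int] = ["C": 0, "d": 2, "e": 4, "F": 5, "G": 7, "a": 9, "b": 11]

for i in 0..<circleOfThirds.count {
    let from = circleOfThirds[i]
    let to = circleOfThirds[(i + 1) % circleOfThirds.count]   // wrap around the circle
    let distance = ((semitoneOfPitchClass[to]! - semitoneOfPitchClass[from]!) + 12) % 12
    print("\(from) -> \(to): \(distance) semitones")          // alternating 3, 4, 3, 4, ...
}
```

The alternation of three and four semitones is what makes it a circle of thirds: minor and major thirds take turns around the circle.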
SoundPrism and the key related circle of thirds
If you look at SoundPrism you will wonder where the key related circle of thirds is. Originally the interface of SoundPrism was circular, but we realized that a rectangular version is much easier to control, although the circular version is easier to understand. Figure 2 shows the relationship between the SoundPrism interface and the circle. The first and last pitch classes on the SoundPrism surface (Figure 2, left) are d and b. Within the key related circle of thirds (Figure 2, right) these pitch classes are next to each other. It is therefore possible to bend the upper and lower edges of the SoundPrism surface so that a cylinder arises (Figure 2, middle). The order of the pitch classes on that cylinder corresponds exactly to the order of the pitch classes within the key related circle of thirds.
Figure 2: From SoundPrism to the key related circle of thirds
But now we were faced with the problem that some of the most interesting chords of a given key (the dominant seventh chord G-d-F-b, the diminished chord b-d-F, or the Sixte ajoutée d-F-a-b) are split between the two ends of the interface. To solve this we used the “Pac-Man paradigm”.
When you move the pitch selection past the upper end, it reappears at the bottom. This makes it possible to play all the chords within the key related circle of thirds.
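In code, this wrap-around can be as simple as taking the selection's position modulo the height of the surface. A minimal sketch, assuming a pitch class axis normalized to values between 0 and 1:

```swift
// "Pac-Man" wrap-around: a selection pushed past one end reappears at the other.
func wrappedPosition(_ y: Double) -> Double {
    let wrapped = y.truncatingRemainder(dividingBy: 1.0)
    return wrapped < 0 ? wrapped + 1.0 : wrapped
}

print(wrappedPosition(0.9 + 0.2))   // ~0.1: pushed past one end, reappears at the other
print(wrappedPosition(0.1 - 0.2))   // ~0.9: the same in the opposite direction
```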
Navigating the Circle
Up to this point we have only talked about pitch classes. But the pitch class C exists in several octaves: you do not have only one C on a piano but 8 Cs. So we had to find a way to put different octaves of a pitch class on the surface of SoundPrism. I'm going to explain how we accomplished that in my next blog post.
The manager of an open air rock concert calls a specialist to install a light show with lasers, additional spotlights - a whole visual entertainment system to turn the concert into an exceptional audiovisual experience for the audience.
The specialist works on this for months, talks with artists about which features they might like, makes plans and comes up with a concept that everyone involved likes. He implements it together with his employees (he's the owner of a small company specializing in this kind of work) over the course of weeks, tests it, and rearranges some of the stage setup so that it works beautifully.
When it’s time to get paid for his work, the manager of the rock concert tells him:
"Sorry buddy, the electric framework needed for what you did there was already in place before I called you to do this. You're just building on top of that, how dare you charge money for this? You didn't invent the fuses, cables and spotlights, you're just using them and rearranging them a little."
That's exactly what we were told by Apple when we tried to sell a feature via a store inside our app SoundPrism. The feature is based on a framework that has been part of iOS since version 4.2.
These are the words they used:
“The application is using In App purchase to unlock the use of the Apple Camera Connection Kit, which then enables MIDI Support.
It would be appropriate to revise this In App Purchase product to provide functionality other than what the iOS provides; or to remove it.”
This statement shows that the person who reviewed this was thinking exactly like the manager in the story above. I'm sure that person didn't mean us any harm; it's more likely that the reviewer doesn't understand what implementing a framework involves. They (as well as some music blogs) might assume that it's as easy as flicking a switch.
In fact, the wording of this statement shows that they think unlocking the Camera Connection Kit enables MIDI support. Which is absolutely untrue. Plug the Camera Connection Kit into your iPad while it's running any music app and see if the app 'magically' sends out MIDI information. If the application's developers didn't support it in the first place - which means they had to write the code for it - then nothing happens at all.
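To give non-developers an idea of what "writing the code for it" means: even the bare minimum involves creating a MIDI client and a virtual endpoint and packing every event yourself. The following is a heavily simplified sketch using CoreMIDI from Swift; the names are made up for illustration and this is not SoundPrism's actual code (which, in 2011, was written against the C API).

```swift
import CoreMIDI

// Hypothetical, heavily simplified example. Nothing is sent over MIDI
// unless the app explicitly sets all of this up itself.
var client = MIDIClientRef()
MIDIClientCreate("ExampleApp" as CFString, nil, nil, &client)

// A virtual source that other apps and devices can connect to.
var virtualSource = MIDIEndpointRef()
MIDISourceCreate(client, "ExampleApp Out" as CFString, &virtualSource)

// Pack and send a single note-on (middle C, velocity 100).
var packetList = MIDIPacketList()
let packet = MIDIPacketListInit(&packetList)
let noteOn: [UInt8] = [0x90, 60, 100]
MIDIPacketListAdd(&packetList, MemoryLayout<MIDIPacketList>.size, packet, 0, noteOn.count, noteOn)
MIDIReceived(virtualSource, &packetList)
```

Multiply that by device handling, wireless sessions and timing, and it becomes clear why a proper implementation takes days or weeks rather than minutes.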
It seems that this section of the App Review Guidelines forbids selling any implementation that unlocks a framework provided by iOS. The wording of the actual guideline is:
11.8: Apps that charge users to access built-in capabilities provided by iOS, such as the camera or the gyroscope, will be rejected
That sounds like it's limited to hardware features. But with the rejection of our IAP for Core MIDI (which is not based on hardware at all) this has far-reaching implications.
It means that iOS developers cannot - ever - charge via InApp Purchase for a feature that is based on a framework supplied by Apple’s iOS.
That means, for example, that a photo app cannot offer an additional video feature that uses the Core Video framework as a purchase inside that application. An ebook reader app cannot charge via IAP for an audiobook feature because it's based on Core Audio.
Developers of these apps would have to release completely new apps with that feature and offer them separately if they want to make up for development costs - instead of improving their existing ones. Customers of the initial application would have to buy the new one instead of being able to decide whether they need the feature and, if so, buy it separately within the app.
I sincerely hope someone with a connection to the App Review team reads this and manages to persuade them to reconsider this part of the guidelines.
CEO Audanika GmbH
mail: sebastian dot dittmann [at] audanika dot com
We will never be able to sell our work on new features that are based on technology - software or hardware - that Apple has just released, without creating a new application specifically for that reason. Neither will any other iOS developer without breaking Section 11.8 of the App Review Guidelines.
That's a huge difference to creating and selling software outside of the iOS ecosystem. If you're part of Apple's ecosystem you can't charge for updates - neither by having users pay for them directly nor by creating a store inside your application and charging for new features that make use of new iOS capabilities.
Implementing a new technology that was released by Apple (like CoreMIDI, GameCenter etc…) isn't as easy as flicking a switch. To do it nicely, multiple developers usually have to work on it for a week or more (obviously depending on the technology).
Example: it took us a week or so to implement CoreMIDI in SoundPrism. It took us another month to reach a level of quality we were comfortable charging for.
For developers that's pretty bad, but it also isn't great for users/customers.
It either forces smaller development teams to release new apps instead of making their existing ones better, or it forces them to improve their applications without getting paid for it.
From Apple’s perspective this is awesome. They release a new feature which (usually) rocks and their developers can choose between:
Creating a new application to make use of that feature and charge for it.
Spending quite a lot of time (usually a week or more) to implement it in their existing apps and giving it away for free.
Now why would any developer choose option 2? That depends on what their competition is doing. If their competition is implementing that feature then they’re forced to do it as well. If their competition is not implementing that feature then it’s probably a good idea to be the first one who has it.
Option 1 isn't optimal for developers either because it means they cannot stick with a few applications and make them better and better.
One could argue that quality prevails and that even free updates to an application result in higher sales - which is true - but releasing a new application or actually charging for a feature results in higher revenue. In fact, sometimes free updates don’t change revenue at all if they don’t drive enough attention to your application.
The implications of this are that whenever Apple releases a new technology developers will either:
implement it for free. Result: Apple and users of older apps win because developers create better applications without users or Apple having to pay anything. Developers lose.
create new applications containing the new features and abandon older apps. Result: Apple wins. Developers and users of older apps lose because nobody wants to buy a new application if they got used to the old one. Also, the App Store gets 'spammed' with new apps, which is great for Apple's statistics but bad for users trying to discover good apps.
Either way Apple always wins:
By having tens (hundreds?) of thousands of developers get to work literally for free after releasing a new feature.
And by indirectly forcing users to buy new apps - which will be more expensive for them than if developers could have offered the feature as an upgrade.
Please excuse me now, we still have to submit two new applications today.
UPDATE: You might want to read this follow-up post about the implications of this for all iOS developers.
Today our latest version of SoundPrism got rejected by Apple's App Review team after an excruciatingly long waiting period of 20 days since we submitted it to them (you can read about it and the things I've found out about the review process in my excruciatingly long previous post).
I’ve made a video showing what we were trying to do in our latest update and what we were trying to sell as an InApp Purchase since it would take some time to explain it (badly). So here’s what we were trying to do:
"Apps that use IAP to purchase access to built-in capabilities provided by iOS, such as the camera or the gyroscope, will be rejected"
Apparently CoreMIDI is considered to be on the same level of importance as the camera or gyroscope which is probably a correct assumption by Apple.
Let our misery be a warning to you if you're a budding iOS developer of audio apps - CoreMIDI cannot be sold as an IAP. Which is quite a shame if you ask me. Just because the capability is there shouldn't mean one can't make any money off of it via InApp Purchase, because IAP is probably the way to do business as an iOS developer in the future. InApp Sales/Purchases are a really nice way to monetize features of an application without making the customer/user pay for them in advance.
But ok, it’s their store so it’s their rules.
Our solution to this is that we're going to create multiple apps. We're going to leave our current one (SoundPrism) as it is. Then we're adding a lite version which doesn't have certain features and offering it for free. And for our professional users we're going to offer a version of SoundPrism which has the capability to change the scales and - tada - CoreMIDI support built in. That should do the trick, right?
Only there’s a catch.
The email sent to me by the "iTunes Store" reads like this (Statement #2):
"We’ve completed the review of your app, but cannot post this version to the App Store because it did not comply with the App Store Review Guidelines, as detailed below:
11.8: Apps that charge users to access built-in capabilities provided by iOS, such as the camera or the gyroscope, will be rejected”
See the difference? No "IAP" in that version of Section 11.8. So it's not ok to charge for capabilities that are provided by iOS at all? Must be a typo, right?
Here's another statement of the rejection from iTunes Connect (the developer area of Apple's iTunes portal) (Statement #3):
"Mar 9, 2011 02:25 PM. From Apple.
We found that your In App Purchase product provides access to built-in iOS capabilities, which is not in compliance with the App Store Review Guidelines.
The application is using In App purchase to unlock the use of the Apple Camera Connection Kit, which then enables MIDI Support.
It would be appropriate to revise this In App Purchase product to provide functionality other than what the iOS provides; or to remove it.”
So we've received three statements. Two of them contradict each other in an important matter and the third one doesn't really bring any clarification.
The first statement, from the actual guidelines, says that InApp Purchases can't charge for features that are based on iOS capabilities.
The second statement, from the rejection email, says it's not ok to charge for features that are based on iOS capabilities. At all. Which doesn't make any sense, because literally every single app is built on top of some basic capability of iOS.
The third statement, from iTunes Connect, says we're breaking the rules because we're supporting the Camera Connection Kit to enable MIDI support. Which we are not.
MIDI over USB via the Camera Connection Kit is just a byproduct of enabling CoreMIDI. I never use it at all. Connecting wirelessly is the really cool part of our latest release.
Also, it’s simply wrong that the Camera Connection Kit enables CoreMIDI. Enabling CoreMIDI is what enables the Camera Connection Kit as a way to connect.
What's also slightly worrying is that some people in App Review are either working with a different set of guidelines or they're quoting them incorrectly. Neither of which is optimal.
We're now putting CoreMIDI support into a different application which we'll sell separately. It's just not feasible for us to not charge for CoreMIDI at all or to make our 'vanilla' app (SoundPrism) more expensive.
We’re also ignoring the implications of Statement #2 because our common sense tells us that’s ok.
Hopefully Apple is ok with that.
So, to end this on a positive note, here's a video of Roger O'Donnell using a MIDI-enabled beta version to control his Moog Voyagers with his iPhone running SoundPrism.
Hopefully we’ll be able to see more musicians do something like that in the future. Wish us luck.
Not being told what’s going on is always a bad starting point. Knowing that you’re not being told what’s going on is even worse. Trying to find it out on your own is usually complicated, creates tension and most likely leaves a bad aftertaste.
That's exactly what's going on right now with some developers for iOS. I'm talking about Apple not telling them why their apps aren't being reviewed for a long time. And when they are reviewed, the review seems to take a lot longer until approval.
This might not sound like a big deal but in fact it is. It has the potential to severely hamper the iOS ecosystem. And since the iOS ecosystem is pretty much the only economically relevant mobile ecosystem at the moment it’s a big deal for mobile.
So let's talk about the details.
This is how things were: You would develop an application, submit it to Apple, wait for a week at most, and then your application would change status from 'waiting for review' to 'in review'. Then you'd wait another day and your app would be approved and you could sell it. Hardly ever would it take more than a couple of hours to have your app approved after being in review. Sometimes the whole process from submission to approval would take only two days. Great stuff!
This is the situation for some (not yet all) developers: Develop an application, submit it to Apple. Wait for a week to ten days (or more?) for the application to switch status from 'waiting for review' to 'in review'. Then you wait a day. Then another. And another. Then you start writing emails to the App Review Team asking them what's going on, whether they've found a bug, and whether they can give you any insight into the progress of the review.
Apple will send you an automated(?) reply that they need more time. No details.
You wait for another day, and another… it's a week now. You write another email to request an expedited review because your customers have now been waiting for more than 20 days since you announced the submission to Apple. They're writing emails to you asking why Apple is taking so long. Videos from beta testers of your app are appearing on the web because they can't wait anymore and want to show off their creations.
Apple answers to your second mail with another reply that lacks any substance or real information.
Then you’re making calls to friends, trying to find out what’s going on. Turns out the Apple Review team is busy with all the submissions from the Mac App Store since Mac OS software developers are now finding out that App Stores are pretty neat.
Can you imagine how many submissions of regular Mac OS Apps must be happening as I write this? I can’t. More than ten years of Mac OS X development all over the world and every single one of these developers all of a sudden sees a chance to make (more) money with their applications.
So we’ve got a bottleneck here: App Review. Since Apple wants a walled garden for their iOS and Mac App Store ecosystems they will have to stick with reviewing each and every single app.
Even if the review team could handle the flood of Mac and iOS submissions right now without huge delays (which they can't) - there's another problem looming and its name is 'In App Sales'.
Show me the money!
As of this year, revenue from sales within apps has surpassed regular sales of apps. That means more money is made from selling stuff within an application than by selling the app itself. That's why some apps are completely free but charge for important features (Wordlens being an example of that).
We’re going to offer In App Sales in SoundPrism (our own app) as well and since we’ve added this feature I’ve noticed something interesting in our team.
Our developers were all of a sudden coming to me with ideas of how to monetize features. Really good ideas - simple, feasible, easy to implement. Before that we would have had to create a new application for every feature, which is bad for our customers, tricky to handle and a nightmare to support.
But let me emphasize this again: the tech guys were thinking about how to monetize features. And they were having fun with that.
A flood of submissions
Usually thinking about that is the job of the business people. Many development teams on the App Store most likely don’t even have business people in their team.
But when techies start having fun thinking about how to monetize their work, that means there are going to be a lot more submissions for small features.
So we're looking at a future that looks like this: more apps with InApp Stores, and a lot more submissions for these InApp Stores by more developers, because monetizing your ideas has never been easier.
Most of this has to be handled by one company - Apple - because they’ve got the only ecosystem for which developing apps makes sense financially. At least until Android revenue surpasses roughly 20% of iOS revenue (which we might see in 2011).
Apple really has to begin talking with their developers - and the public - about their App Store and their internal review process to make life easier for teams like ours. Their current way of handling this leaves developers in the dark about what's going on and how potential problems like the one described above will be tackled, which makes it incredibly hard to plan releases, marketing measures and tests.
This blog post is part of a series about the musical theory which SoundPrism is based upon. You might also want to read the next part: SoundPrism Pitch Layout I
We are often asked which pitch layout SoundPrism is based on. With the following article I would like to start talking about the idea behind SoundPrism and how it evolved into what it is today. The development of the SoundPrism interface is a process that started many years ago. While I was writing my PhD thesis at the Fraunhofer Institute for Digital Media Technology, I had the opportunity to discover many interesting relationships between musical structure and what we actually feel when we're listening to music.
Today I am able to say: The possibilities we have with instruments like SoundPrism are only the beginning.
Musical Imagination and musical interfaces
Before I start to explain the pitch layout of SoundPrism, let me talk about the basic motivation of the instruments we are developing.
The image below (Figure 1) shows a model of music creation and musical imagination.
Figure 1: Model of musical imagination and music creation
As you can see, the origin of every musical piece is some kind of „musical imagination“. This can be a certain feeling, a certain emotion, an association, a certain musical piece or something entirely different. In Figure 1 musical imagination is illustrated by the head icon at the top. To make a musical imagination come alive you need some kind of musical instrument, which is represented by the blocks „sound synthesis“ and „musical interface“.
Using your hands, for example, you have to „encode“ your musical imagination.
This code is received by the musical interface, e.g. the piano, which in turn triggers the „sound synthesis“. The reproduced sound is fed back to your ear. Your brain then compares the original musical imagination to the music you actually hear. This can lead to one of the following three results:
a) The perceived sound matches the former musical imagination, everything is fine.
b) The perceived sound positively surprises you. In that case your musical imagination is extended and your musical tool box becomes larger.
c) The perceived sound does not match your imagination. In that case you have to improve your encoding until case a) or b) is reached.
Or until you give up.
Based on the model of Figure 1 the mission of Audanika is the following:
We want to create musical interfaces that reduce the coding process: We assume that the better a musical interface corresponds to the musical imagination, the less coding is required. Less coding means faster musical progress, more time for musical ideas, less practicing, more making music. Our dream is that one day anyone will be able to express their own emotions musically.
We want to create musical interfaces that stimulate the musical imagination: A certain musical imagination can be the origin of a musical idea. Vice versa, playing a new musical instrument can extend existing musical imaginations or create new ones. Our instruments shall have interfaces you have never seen before. By using them you are going to encounter completely new musical ideas.
We want to create musical interfaces that motivate people to think about musical logic, to improvise and to compose: Active music creation stimulates the linkage between the left and right brain hemispheres. The reason is that music creation is both an intuitive, creative activity on the one hand and a logical thinking process on the other. If a musical instrument's interface is logical, it will motivate you to think about music. Instead of memorizing patterns you will understand relationships and make better musical decisions.
Ten years ago doing this would have been extremely hard or even impossible. The interfaces of instruments at that time were strongly determined by physical or technical constraints. In most cases it was not possible to change the interface dynamically.
But today, with the help of multitouch-based tools, we are free to design completely new musical interfaces. Tones can be arranged in such a way that their geometry corresponds much more closely to what we actually feel.
In my next article I will start to give some insight into the tone layout of SoundPrism.