New video: Asturias Part 2 (in case you ever need to prove that the iPad can be a musical instrument)
by Sebastian Dittmann
… with multiple fingers … at the same time. It’s like a double rainbow! With MIDI!
And Gabriel has made a video about it (immediately after getting out of bed):
by Gabriel Gatzsche
This blog post is part of a series about the music theory SoundPrism is based on. You might also want to read the next part: SoundPrism Pitch Layout I
We are often asked which pitch layout SoundPrism is based on. With this article I would like to start talking about the idea behind SoundPrism and how it evolved into what it is today. The development of the SoundPrism interface is a process that started many years ago. While writing my PhD thesis at the Fraunhofer Institute for Digital Media Technology, I had the opportunity to discover many interesting relationships between musical structure and what we actually feel when we listen to music.
Today I am able to say: The possibilities we have with instruments like SoundPrism are only the beginning.
Musical imagination and musical interfaces
Before I start to explain the pitch layout of SoundPrism, let me talk about the basic motivation behind the instruments we are developing.
The image below (Figure 1) shows a model of music creation and musical imagination.
Figure 1: Model of musical imagination and music creation
As you can see, the origin of every musical piece is some kind of “musical imagination”. This can be a certain feeling, a certain emotion, an association, another musical piece, or something entirely different. In Figure 1, musical imagination is illustrated by the head icon at the top. To bring a musical imagination to life you need some kind of musical instrument, represented by the blocks “sound synthesis” and “musical interface”.
Using your hands, for example, you have to “encode” your musical imagination.
This code is received by the musical interface, e.g. the piano, which in turn triggers the “sound synthesis”. The reproduced sound is fed back to your ear, and your brain compares the original musical imagination to the music you actually heard. This can lead to one of three results:
a) The perceived sound matches your original musical imagination: everything is fine.
b) The perceived sound surprises you in a positive way: your musical imagination is extended and your musical toolbox grows.
c) The perceived sound does not match your imagination: you have to improve your encoding until case a) or b) is reached.
Or until you give up.
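The loop described above can be sketched in code. This is purely an illustrative toy, not real SoundPrism code: every name is a hypothetical placeholder, and a crude `skill` counter stands in for the process of improving one’s encoding.

```python
# Toy sketch of the feedback loop in Figure 1: encode an imagination,
# listen to the result, and keep improving the encoding until it matches.
# Case b) (the pleasant surprise that extends the imagination) is left
# out for brevity; all names here are hypothetical placeholders.

def encode(imagination, skill, required_skill=3):
    """Stand-in for hands + interface + synthesis + ear:
    with too little skill, the encoding is lossy."""
    if skill >= required_skill:
        return imagination
    return imagination + " (distorted)"

def music_creation_loop(imagination):
    skill = 0
    while True:
        perceived = encode(imagination, skill)
        if perceived == imagination:   # case a): match, everything is fine
            return skill
        skill += 1                     # case c): improve the encoding

print(music_creation_loop("a sad waltz"))  # -> 3 (rounds of practice needed)
```

The smaller the gap between interface and imagination, the fewer iterations of this loop a player needs, which is exactly the point of the next paragraph.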
Based on the model in Figure 1, the mission of Audanika is the following:
- We want to create musical interfaces that reduce the encoding effort: We assume that the better a musical interface corresponds to the musical imagination, the less encoding is required. Less encoding means faster musical progress, more time for musical ideas, less practicing, and more music-making. Our dream is that one day anyone will be able to express their emotions musically.
- We want to create musical interfaces that stimulate the musical imagination: A certain musical imagination can be the origin of a musical idea; conversely, playing a new musical instrument can extend existing musical imaginations or create new ones. Our instruments will have interfaces you have never seen before, and by using them you will encounter completely new musical ideas.
- We want to create musical interfaces that motivate people to think about musical logic, to improvise and to compose: Active music creation engages both brain hemispheres, because making music is an intuitive, creative activity on the one hand and a logical thinking process on the other. If a musical instrument’s interface is logical, it will motivate you to think about music: instead of memorizing patterns, you will understand relationships and make better musical decisions.
Ten years ago, doing this would have been extremely hard or even impossible. The interfaces of instruments at that time were strongly determined by physical and technical constraints, and in most cases it was not possible to change an interface dynamically.
But today, with the help of multitouch-based tools, we are free to design completely new musical interfaces. Tones can be arranged so that their geometry corresponds much more closely to what we actually feel.
In my next article I will start to give some insight into the tone layout of SoundPrism.
This is a preparation for the talks I’m going to hold at TEDx Berlin on November 14th and 15th, 2010. Comments and criticism are welcome, but I reserve the right to ignore them.
So please brace yourself now because I know what I’m going to write here will sound like blasphemy to many people. A friend of mine told me yesterday I can’t say this. I’ve thought about it. I can. Here goes nothing.
Music is being taught the wrong way nowadays.
I have experienced the errors of today’s music education myself, and lots of people I’ve talked to have experienced the same. To demonstrate the extent of the problem, I am going to compare music education to the process of teaching children how to read and write.
We’re taught to read and write with certain goals. After our education we’re supposed to be able to extract meaning from a text. We’re supposed to be able to express our opinion in a letter, an article or a Twitter status message. We’re supposed to be able to articulate ourselves. And we’re supposed to be able to see the beauty of a text after reading it.
Music education nowadays ignores most of these goals. When learning an instrument, the goal is that we’re able to play. Not to compose. We’re not expected to come up with our own musical pieces; we’re expected to play sheet music and maybe, just maybe, understand a few basic harmonic and rhythmic concepts.
But that’s it.
We’re not taught to participate in the community of composers. It’s not expected of us.
We’re learning music without the goal of being able to express ourselves. Reciting a poem is the closest comparison to what we’re able to do after we’ve finished our music education.
That’s the same as being taught to read and write without ever being encouraged to write an essay or even phrase a single opinion.
Playing Debussy well is the same as taking down a dictation flawlessly in nice handwriting. It’s nice, but it’s not your music. You’re just playing stuff that someone else has come up with.
You’re just repeating someone else’s opinion. Worse even, you’re repeating it word by word.
Shakespeare was a great author and his works are great but they’re nothing compared to the importance of the notes people leave each other on the kitchen table, day by day.
A note reading ‘Honey, pick up the kids. I can’t make it today! Love you!’ is more important than any book by Mr. Shakespeare because it solves day-to-day problems. We wouldn’t be able to survive without these messages.
Can we live without music? Do we want to? I don’t think so.
But we accept that most of us will never be able to write a musical note and leave it on the kitchen table that is life. Society accepts that we cannot express ourselves non-verbally with instruments. We’re a global society of illiterates when it comes to music. And we think that’s OK.
Why is that?
Because we think that it’s too hard to compose and express ourselves through music.
Why do we think it is too hard?
Because the language we’ve created to write down music is fragmented, ineffective and inefficient. It’s old, complicated and doesn’t make use of any of today’s technological accomplishments. Also, we focus our education on melody when our brains crave harmony.
We are using cryptic black-and-white numbers and icons printed on dead wood in the days of 3D cinema and broadband mobile internet devices.
It’s a hurtful tradition.
It’s time to end this tradition.
Before the release of the newest and greatest version of SoundPrism ever, we made sure it runs on every generation of iPods and iPhones.
Our team worked for weeks on this to improve sound quality, tweak stuff here, make stuff faster there.
Most of the time was spent on performance tweaks for 1st generation iPods and iPhones.
Now, almost a week after launch, I’ve sifted through the statistics some of our users have been sending us voluntarily.
We’ve included a little switch in our app that users can flip to send us anonymous statistics, so we can find out stuff like this:
For those of you not into spreadsheets and pie charts, here’s the relevant information:
We’ve got three users with a 1st-generation iPhone and two users with a 1st-generation iPod. Funnily enough, I know both iPod users personally. Those five first-generation devices stand against 1,626 devices of newer generations.
These statistics are biased in that they only represent the users who actually care enough about SoundPrism to send us their data. They also lean towards iPads, because SoundPrism has been an iPad app far longer than it has been an iPod app (twelve weeks as an iPad-only app vs. less than one week as a universal app).
But to me they’re still very interesting as they show that we’ve spent weeks of work on optimizing SoundPrism to run on a class of devices which is used by only 0.3% of our customers.
That’s not three percent. That’s zero point three percent.
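For the curious, the 0.3% figure follows directly from the device counts quoted above; a quick sanity check:

```python
# Share of 1st-generation devices among the opted-in users,
# using only the counts quoted in this post.
first_gen = 3 + 2          # three 1st-gen iPhones + two 1st-gen iPods
newer_gen = 1626           # devices of newer generations
total = first_gen + newer_gen

share = 100 * first_gen / total
print(f"{share:.1f}%")     # -> 0.3%
```

So five devices out of 1,631 in total, i.e. roughly three users in a thousand.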
You can also see that right now lots of our users have an iPhone 4, and many more of our users have iPhones than iPods. Early adopters (people who grabbed SoundPrism 1.1 soon after launch) seem to be the ones with the newest devices.
I thought I’d share this because this information will shape a lot of our decisions.
It also shows how much valuable information there is to gather that Apple doesn’t provide.
Ten Things we were wrong about
Whether you decide to pursue a career as a mobile app developer on your own or think about founding a company for that purpose with a whole team, you have to do some planning.
If you’ve read the book ‘Rework’ (which I strongly recommend), you’ll find the phrase ‘Planning is Guessing’ right on the cover.
In my opinion that’s correct, and it has proven true for me over the course of the last year. But not only is planning guessing; you’re also wrong most of the time.
That would be less surprising if I considered myself an idiot and my team weren’t very bright either. But at least for my team I can say that they all have degrees in engineering, one of them will soon be a doctor, and some of us have quite extensive experience in software design, digital goods, marketing and professional music creation (one of us was a fairly successful producer in the past).
We even have an advisor with an MBA from Stanford.
Not the typical bunch of ignorant fools, I’d say.
So here’s a list of ten things that we’ve guessed wrong anyway (I’ve added some insights in brackets to some of the points):
- The time it would take us to create an iPod version after we’ve finished an iPad version of our app. (Money quote: “It already works, you’ll have the finished version in a week.”)
- The number of downloads you receive when you’re the #1 iPad music app all over the world. (You didn’t really think I’d disclose this, did you?)
- The importance of the Chinese market for iPad apps. (Right now you can forget about it.)
- How hard it is to collaborate with other app developers. (Hard. When two companies both have no time to spare, good intentions just don’t cut it.)
- How important it is to have your app featured in newspapers. (If there’s not a clickable download link below the article, you will never know. What do you mean there are no clickable download links in newspapers!?)
- How many downloads a single article from Gizmodo generates. (Hi there!)
- How reliable and fast Google Analytics is. (Get rid of it as soon as possible.)
- How easy it is to create and manage different versions for different countries so you can comply with different legal situations. (It’s not just one click.)
- How vastly important a project management tool like Basecamp is. (@jasonfried: Tweet us one more time! Please? Your stuff rocks! Ok, I admit I’m a fanboy.)
- How much more important Twitter is than Facebook when it comes to a product that runs on shiny new hardware designed in California and made in China. (Love you, Twitter!)
There’s one thing we were absolutely right about, though.
From the beginning we said that we didn’t know nearly enough about the market, but that proper market research would take as much time as simply releasing a product and seeing how it goes, with the difference that we would then know for sure instead of relying on outside sources.
We were spot on with that.