The app, described as a "sensory music player," provides us with four new Massive Attack tracks. Each one is remixed uniquely for every user by an algorithm in the app that collates data from your phone's sensors, producing one-off versions you can then record and share. "Why would we put a premixed album out when there's algorithms that'll mix this for you?" asks Del Naja.
The limitless potential of a project like this seems to fuel Del Naja's creativity. "I keep thinking about what we could do next; for example, if you aggregated the personal data of everyone at a concert who had the app then you might be able to remix the music we're playing in real-time," he imagines. "So you could create a sort of group sensory experience. It changes the way you think about performance, because while one might see apps like this as an escapist experience, it might be more pervasive. I think there's potential for it to affect the way we go out and how we share music with the people around us."
To create the experience, Thomas and Del Naja took studio masters of the new Massive Attack tracks and broke them down into minimalist fragments. In the app, these tracks evolve in accordance with personal and environmental factors drawn from the iPhone and Apple Watch: your movement, camera images, time of day and location, as well as biometric signals such as your heart rate. "I think people understand now that devices can use a lot of our personal data for useless stuff like targeted advertising," says Thomas. "But what we're producing is almost the complete opposite of that. It takes your data and creates a really great experience for you, without storing it or trying to sell you something. It's creation in the moment and a logical avenue for artists to explore."
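Neither Thomas nor Del Naja goes into the app's internals, but the idea of mapping live sensor readings onto mix parameters is easy to sketch. The Swift snippet below is purely illustrative: the SensorSnapshot fields, the MixParameters and the weighting are assumptions for the example, not the app's actual design.

```swift
import Foundation

// Hypothetical snapshot of the signals the article mentions:
// movement, camera images, time of day and heart rate.
struct SensorSnapshot {
    var motionIntensity: Double   // 0...1, e.g. derived from accelerometer magnitude
    var cameraBrightness: Double  // 0...1, average luminance of the camera frame
    var hourOfDay: Int            // 0...23
    var heartRate: Double         // beats per minute, e.g. from an Apple Watch
}

// Mix parameters a playback engine could expose for each fragment.
struct MixParameters {
    var tempoScale: Double    // playback-rate multiplier
    var filterCutoff: Double  // 0...1, normalized low-pass cutoff
    var reverbMix: Double     // 0...1, wet/dry balance
}

// An illustrative mapping: a faster heart rate nudges the tempo up,
// darker surroundings open the reverb, nighttime closes the filter.
func mixParameters(for snapshot: SensorSnapshot) -> MixParameters {
    let restingRate = 60.0
    let tempoScale = min(max(snapshot.heartRate / restingRate, 0.9), 1.1)

    let isNight = snapshot.hourOfDay < 6 || snapshot.hourOfDay >= 22
    let filterCutoff = isNight ? 0.4 : 0.8

    let reverbMix = (1.0 - snapshot.cameraBrightness) * 0.6
                  + snapshot.motionIntensity * 0.2

    return MixParameters(tempoScale: tempoScale,
                         filterCutoff: filterCutoff,
                         reverbMix: min(reverbMix, 1.0))
}

// Example: a calm, dimly lit late-night listen.
let snapshot = SensorSnapshot(motionIntensity: 0.1,
                              cameraBrightness: 0.2,
                              hourOfDay: 23,
                              heartRate: 64)
print(mixParameters(for: snapshot))
```

Crucially, nothing in a mapping like this needs to leave the device, which is the privacy point Thomas is making.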
According to Thomas, it's all about striking a balance between "nonlinear" composition and the formality of conventional music. "I think the mistake that people make when they think about this kind of work is that it's all randomly thrown together, but the way to make this stuff well is to build rules into the system," he says. "We have to make sure it makes sense melodically: so a piece from the chorus will never appear alongside a verse, which means the behavior of the system can change, but it will always change in a musical way."
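Thomas doesn't spell out how those rules are encoded, but a rule-constrained fragment selector captures the idea. The sketch below is an assumption: the Fragment and Section types and the compatibility check are invented for the example, and simply enforce the chorus/verse constraint he describes.

```swift
import Foundation

// Hypothetical song sections and audio fragments.
enum Section { case verse, chorus, bridge }

struct Fragment {
    let name: String
    let section: Section
}

// The rule Thomas describes: a chorus piece never appears alongside a verse.
// Everything else is allowed, so variation stays musical rather than random.
func isCompatible(_ a: Section, _ b: Section) -> Bool {
    switch (a, b) {
    case (.chorus, .verse), (.verse, .chorus):
        return false
    default:
        return true
    }
}

// Pick the next fragment at random, but only from candidates that satisfy
// the compatibility rule against everything currently playing.
func nextFragment(currentlyPlaying: [Fragment], pool: [Fragment]) -> Fragment? {
    let candidates = pool.filter { candidate in
        currentlyPlaying.allSatisfy { isCompatible($0.section, candidate.section) }
    }
    return candidates.randomElement()
}

let playing = [Fragment(name: "verse pad", section: .verse)]
let pool = [
    Fragment(name: "chorus vocal", section: .chorus),
    Fragment(name: "verse bassline", section: .verse),
    Fragment(name: "bridge texture", section: .bridge)
]
// Never returns the chorus vocal while a verse fragment is playing.
print(nextFragment(currentlyPlaying: playing, pool: pool)?.name ?? "nothing")
```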
"We have to make sure it's rewarding for the user and interesting artistically, never working against the musical intentions of the material."
Thomas is cautious about how the musical interaction with the user happens. "It's very curatorial," he tells me. "We have to make sure it's rewarding for the user and interesting artistically, never working against the musical intentions of the material. But in the future we're looking towards AI taking over some aspects of that curation, so it would be possible for a meta-reworking, where multiple remixes are possible."
The app is genuinely impressive. It's also free.