Tag: Interactive music

Otomata – Flash-based Cellular Automata Music Sequencer

Otomata

Wireless Music’s New Social Sound

Oh yeah, this is what I’m talking about:

As one participant naturally sways to the groove, the PDA’s motion sensor detects his motion and shifts the tempo of the song. With the song’s intensity building, another listener subconsciously grips her PDA tighter, introducing echo effects into the mix. The closer that listening partners move to each other, the more prominent their part in the song becomes. Meanwhile, the software applies various “error correction” techniques to prevent an onslaught of arrhythmic noise, unless of course that’s the goal. As they listen to it, the mobile music orchestra transforms the tune into a dubby, spacey version of the familiar Bjork song. […]

An artist, he says, might release a song from an upcoming album specially prepared for the Malleable Music System. Someday, malleable music may even become an art form in its own right, leading to a duet between the artist and the audience.

Interactive art and music projects

I need to get my hands on this:

The Audiovisual Environment Suite (AVES) is a set of five interactive systems which allow people to create and perform abstract animation and synthetic sound in real time. Each environment is an experimental attempt to design an interface which is supple and easy to learn, yet can also yield interesting, infinitely variable and personally expressive performances in both the visual and aural domains. Ideally, these systems permit their interactants to engage in a flow state of pure experience.

The AVES systems are built around the metaphor of an inexhaustible and dynamic audiovisual “substance,” which is freely deposited and controlled by the user’s gestures. Each instrument situates this substance in a context whose free-form structure inherits from the visual language of abstract painting and animation. The use of low-level synthesis techniques permits the sound and image to be tightly linked, commensurately malleable, and deeply plastic.

(via Creative Generalist)

Very nearly what I’ve imagined here.

Here’s something similar which, when I first saw the link, I thought was the same project:

PANSE is an acronym and stands for Public Access Network Sound Engine. It’s a streaming audio program with a built-in TCP server. It’s meant to be an open platform for experimental interactive audio-visual netart and is open to all. So-called “modules” (clients) can be created using Flash, Java, Perl or whatever else you can think of. Messages can be sent to it to control the highly flexible audio that is set up as two 16-step sequencers, a monophonic synthesizer and an effects generator. But it also streams out numerical data about the audio being played. This data can be used to control visual representations. It’s very interesting to see how the design of an interface affects the way people interact with such a project. As with my previous projects, PANSE is multi-user based, so if more than one person is interacting with it at the same time, they will see and hear what the others are doing. This is why I prefer to call them modules rather than clients. It’s like a modular synthesizer where separate units control separate aspects of what’s going on. In PANSE, not all of the interfaces allow control over all parameters. In fact, currently there is only one interface that allows control over all of the different parameters.

(via Anne)
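The description above doesn’t include PANSE’s actual message protocol, so here is a minimal sketch of what a “module” could look like: a script that opens a TCP connection, sends a few made-up control messages, and reads back the numerical data the engine streams out. The host, port, and command strings are all placeholders, not the real server or format.

```python
# Hypothetical sketch of a PANSE-style "module": the real message format
# isn't documented in this post, so the commands below are placeholders.
import socket

HOST = "panse.example.org"   # placeholder address, not the real server
PORT = 9000                  # placeholder port

def send_commands(commands):
    """Open a TCP connection and send a list of control messages."""
    with socket.create_connection((HOST, PORT)) as sock:
        for cmd in commands:
            sock.sendall((cmd + "\n").encode("utf-8"))
        # Read back whatever numeric audio data the engine streams out,
        # which a visual module could use to drive graphics.
        data = sock.recv(1024)
        print("engine data:", data.decode("utf-8", errors="replace"))

if __name__ == "__main__":
    send_commands([
        "seq1 step 3 on",   # toggle a step in the first 16-step sequencer
        "synth note 48",    # set the monophonic synth's pitch
        "fx delay 0.4",     # adjust an effect parameter
    ])
```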

Sounds of mobility

A post about interactive audio projects, by Anne.

I meant to post this earlier, but forgot. She posts about two interesting projects: Sonic Cities and Glitch. Neither of them seems to offer a collaborative environment for creating audio. But, unlike my earlier idea, they create sound in real time. I suppose it wouldn’t be too tough to write an app that automatically mixes audio dropped into a directory via Audblog into a MOD file format, encodes it into an mp3, and makes it available for streaming.
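As a rough sketch of how simple that plumbing could be: the script below (with hypothetical folder names) just watches an “incoming” directory and re-encodes anything new to mp3 with ffmpeg for streaming. Assembling the clips into a MOD file first would need a tracker library and is left out; ffmpeg is assumed to be installed and on the PATH.

```python
# Minimal sketch of the "watch a folder, encode to mp3" half of the idea.
import subprocess
import time
from pathlib import Path

INCOMING = Path("incoming")   # where Audblog-style uploads would land (hypothetical)
OUTGOING = Path("stream")     # directory served by the streaming server (hypothetical)

def encode_new_files(seen):
    """Encode any not-yet-seen wav clips in INCOMING to mp3 in OUTGOING."""
    for clip in INCOMING.glob("*.wav"):
        if clip.name in seen:
            continue
        mp3 = OUTGOING / (clip.stem + ".mp3")
        # Re-encode the dropped clip to mp3 for streaming.
        subprocess.run(
            ["ffmpeg", "-y", "-i", str(clip), "-codec:a", "libmp3lame", str(mp3)],
            check=True,
        )
        seen.add(clip.name)

if __name__ == "__main__":
    INCOMING.mkdir(exist_ok=True)
    OUTGOING.mkdir(exist_ok=True)
    seen = set()
    while True:               # poll the folder every few seconds
        encode_new_files(seen)
        time.sleep(5)
```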

Interactive music: The Last Signal

“Independent Opposition Music Publishing is seeking short entries for an upcoming sound collage compilation based upon what the ending of this world might sound like.”

Via Fields | Weblog via Abe.

Idea-blogging: games as musical interface

I’m gonna do some idea-blogging over the next few days, trying to get some ideas out there for some feedback (or at least so I don’t forget them).

I’ve had this “games as musical interface” idea for a couple of years. There are a number of “generative” and “fractal” music programs out there (check out this listing). Mostly the interfaces consist of typing in numbers, moving sliders around, or dragging something around the screen randomly. These don’t seem like engaging interfaces.

The idea of using games for an interface isn’t new: this guy has a 3D fractal music game. However, I’ve never been able to get it to run on my computer, and now I can’t even find the download on his web site. My idea is to use a series of constantly changing classic game clones – Pac-Man, Space Invaders, Tetris, etc. The positions of different game objects act as the random data for a music and graphics generator, making it easy for almost anyone to create music and visual compositions, even if they’re not good with music or at playing games. It also creates a game in which the goal is not to “win” but to create interesting music. This could also work as a multi-player game, with the data being split between the two players.

One important aspect is that the “voices” should be configurable: output to MIDI, or to a set of samples (a la MOD tracking programs).
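To make the mapping concrete, here is a minimal sketch of the core idea; every name in it is made up for illustration. Each frame, the positions of the game objects become note events: the x coordinate picks a pitch from a scale, the y coordinate sets velocity, and each object type is bound to a configurable voice that could just as well be a MIDI channel or a tracker sample.

```python
# Rough sketch: game-object positions -> note events, with configurable voices.
# All identifiers here are illustrative placeholders.
from dataclasses import dataclass

C_MINOR_PENTATONIC = [0, 3, 5, 7, 10]   # scale degrees keep the output musical

@dataclass
class GameObject:
    kind: str   # e.g. "player", "ghost", "pellet"
    x: int      # grid position, 0..15
    y: int      # grid position, 0..15

# Voice configuration: object type -> (sample or MIDI channel name, base octave)
VOICES = {
    "player": ("lead_sample", 5),
    "ghost":  ("pad_sample", 4),
    "pellet": ("blip_sample", 6),
}

def objects_to_notes(objects):
    """Turn the current frame's objects into (voice, pitch, velocity) events."""
    events = []
    for obj in objects:
        voice, octave = VOICES.get(obj.kind, ("default_sample", 4))
        degree = C_MINOR_PENTATONIC[obj.x % len(C_MINOR_PENTATONIC)]
        pitch = 12 * octave + degree       # x picks the note within the scale
        velocity = 40 + (obj.y * 5) % 88   # y controls loudness
        events.append((voice, pitch, velocity))
    return events

if __name__ == "__main__":
    frame = [GameObject("player", 3, 10), GameObject("ghost", 7, 2)]
    for event in objects_to_notes(frame):
        print(event)
```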

A bit of a head-trip feature I’d like to see in the game: the games constantly morph into each other. One minute you’re playing Tetris, moving a block around, and then suddenly the blocks you’ve stacked start to look like a maze and your block is Pac-Man. Then ghosts show up and eventually the whole game is Pac-Man. You play this for a while, then it starts to turn into Space Invaders, which then turns into Asteroids. The changes are random: Tetris sometimes turns into Asteroids or Space Invaders instead of Pac-Man.

Jeremy Winters doesn’t think Max/MSP is powerful enough to create something like this. I would like to see it done in Flash, but I kind of doubt that’s possible either.

See Also

Audience Participation in Music

More audience participation in music

More audience participation in music

SimpleTEXT is a “mobile phone enabled performance” that’s very similar to my idea for an audblog-based music project and not far off my games-as-musical-interface idea.

SimpleTEXT is a collaborative audio/visual public performance that relies on audience participation through input from their mobile phones. The project focuses on connecting people in shared spaces by attempting to merge distributed devices with creative and collaborative experience. SimpleTEXT focuses on dynamic input from participants as essential to the overall output. The result is a public, shared performance where audience members interact by sending SMS, text, or voice to a central server from their input devices. These messages are then dynamically mixed, cut, parsed, and spliced to influence and change the visual and audio output. These communications are also run through a speech synthesizer and a picture synthesizer. The incoming images and text are dynamically mixed according to specified rule sets such as pixel values, length of text, specified keywords, and inherent meanings.

Via Cool Hunting
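The description above only names the kinds of rules involved (text length, specified keywords, and so on), so the following is a purely hypothetical sketch of that style of mapping: an incoming message’s length sets note density and a few keywords toggle effects. None of it is SimpleTEXT’s actual code.

```python
# Hypothetical rule set in the spirit of SimpleTEXT: map an incoming
# text message to audio parameters. The rules are invented for illustration.
KEYWORD_EFFECTS = {
    "echo": {"delay": 0.5},
    "slow": {"tempo_scale": 0.8},
    "fast": {"tempo_scale": 1.25},
}

def message_to_params(text):
    """Map an SMS message to a dict of audio parameters."""
    params = {"tempo_scale": 1.0, "delay": 0.0}
    # Message length scales the note density (160 chars = one full SMS).
    params["density"] = min(len(text) / 160.0, 1.0)
    # Specified keywords trigger effects.
    for word in text.lower().split():
        params.update(KEYWORD_EFFECTS.get(word, {}))
    return params

if __name__ == "__main__":
    print(message_to_params("make it slow and add echo"))
```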

Audience participation in music

Einsturzende Neubauten had a subscription program on their web site through which subscribers could watch and listen to the band’s studio sessions and then leave comments in a forum. So essentially they were letting their fans have a say in the album before it was completed. This wouldn’t work for a lot of bands, but it makes sense for Neubauten. Pigface should do this as well.

Some things Pigface have done: let audience members call up and leave messages on the office answering machine for use in an album (Feels Like Heaven, Smells Like Shit) and more recently let fans send in tapes and CD-Rs of them saying “fuck [something]” to be collaged on a Pigface record (not sure if that stuff ever got used). Also, they let fans vote online for which songs they wanted to hear on the “best of” album.

I was thinking, someone could set up an audblog and have people upload sounds from their cell phones to be used in collage or glitch projects, or mixed live at a laptop gig. I don’t know how modern programs like Buzz or Fruity Loops work, but it would be pretty simple to use an old tracking program (like Impulse Tracker or Mod Plug Tracker), create some “patterns” in advance, then download samples over a venue’s wifi connection during the show and plug them into the song. A soundtrack of the world in almost real time.
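As a rough sketch of just the live half of that idea (with a placeholder upload URL and made-up slot filenames): poll a web feed for new phone uploads and copy each one over a sample file that the pre-made tracker patterns already reference. Driving the tracker itself is left to the tracker.

```python
# Minimal sketch: during the show, fetch new phone uploads and install them
# into the sample slots the pre-made patterns expect. URL and filenames are
# placeholders, not a real service.
import shutil
import urllib.request
from pathlib import Path

UPLOAD_URL = "http://example.org/audblog/latest.wav"   # placeholder feed
SAMPLE_DIR = Path("samples")                           # folder the tracker loads from
SLOTS = ["slot01.wav", "slot02.wav", "slot03.wav"]     # filenames the patterns reference

def pull_next_sample(slot_index):
    """Download the newest upload and install it in the next sample slot."""
    SAMPLE_DIR.mkdir(exist_ok=True)
    target = SAMPLE_DIR / SLOTS[slot_index % len(SLOTS)]
    with urllib.request.urlopen(UPLOAD_URL) as resp, open(target, "wb") as out:
        shutil.copyfileobj(resp, out)
    return target

if __name__ == "__main__":
    print("installed", pull_next_sample(0))
```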
