
AUDIO beats to manipulate graphics?


19 replies to this topic

#1 markokon

markokon

    Newbie

  • Members
  • 37 posts

Posted 24 December 2006 - 08:30 AM

So I wanted to experiment with using an audio file to animate the scale/rotation/position (possibly Z depth) of layers/precomps within AE, based on the beats. I assume these correspond to the high and low amplitudes?

Also... I know there is VJ software that does this really well, specifically Modul8. But it only runs on Mac and I am a PC user. Does anyone else know of a similar program?

http://www.garagecube.com/modul8/

As for working within AE, I assume there are expressions involved in building up a composition that would do similar things.

Basically I would like help with two things:

1. Does anyone know of other VJ software for PC?
2. How do I animate layers/precomps using the beats in an audio file?

Thanks!

Edited by markokon, 24 December 2006 - 08:31 AM.


#2 Guest_Sao_bento_*

Guest_Sao_bento_*
  • Guests

Posted 24 December 2006 - 08:51 AM

Trapcode Soundkeys is the answer.

#3 skimmas

skimmas

    Newbie

  • Members
  • 41 posts
  • Location:Caldas da Rainha, Portugal

Posted 24 December 2006 - 01:11 PM

VJ software for PC: www.resolume.com/

It has a sound analyser that can interact directly with an animation (e.g. changing the frame number), or with a Flash movie by sending variables.

#4 prupert

prupert

    Newbie

  • Members
  • 12 posts

Posted 24 December 2006 - 04:24 PM

VJ software? Here we go:

Visual Jockey - realtime graphics with audio sync

Pilgrim Pro 3d - realtime graphics with audio sync with integrated 3d design tool and video clip playback

Neonv2 - freeware alternative to Pilgrim - though you need Max to create the 3d models

vvvv - graphical programming language with support for audio sync - great tool, but complicated

pure data - another graphical programming tool - flexible but complicated

Processing - flash based visual tool - need to know programming language

Resolume - as skimmas said - can sync length of video loops to BPM, also can sync flash visuals to audio (though you need to know action script)

Flowmotion - sync video clips and flash to audio

Even Blender can be used, via Pure Data and a Python script.

Personally, I think the best option is Pilgrim - full control of 3d models, plus you can play video clips in realtime with loads of effects. Though if you have Max and want to save money, use Neonv2.

Go here to find links to all of these: vjcentral.com

#5 Boomberry

Boomberry

    MoGraph Regular

  • Members
  • 92 posts

Posted 24 December 2006 - 04:50 PM

Processing - flash based visual tool - need to know programming language


*java* based visual tool

#6 prupert

prupert

    Newbie

  • Members
  • 12 posts

Posted 25 December 2006 - 10:33 AM

*java* based visual tool


Oops, sorry, I'm always getting those two confused - what a doofus.

#7 markokon

markokon

    Newbie

  • Members
  • 37 posts

Posted 26 December 2006 - 06:25 AM

Sorry for the late reply. Needed to take some time off... family and GF... no computer for two days at least once a year... you know. Anyway.

This is so helpful. Thanks so much :) It's like a gift.

I'll try these out and we'll see where they lead me. Also... I think I have SoundKeys, so that should be helpful as well. Thanks!

#8 Monguilhott

Monguilhott

    MoGraph Regular

  • Members
  • 68 posts
  • Location:Brazil
  • Interests:Something funny...

Posted 26 December 2006 - 02:30 PM

Arkaos... just to add one more.

#9 RustyAce

RustyAce

    Mograph Deity

  • Members
  • 873 posts
  • Gender:Male
  • Location:Charlotte, NC

Posted 28 December 2006 - 07:56 PM

No plug-in needed: just convert audio to keyframes, and pick whip whatever property you want the audio to control to one of the channels, and boom, they are in sync. If your audio is not that strong on beats, you can add a multiplier to the property's expression.

It should read something like this when it controls the scale, for example:
thisComp.layer("Audio").scale*5
rusty ace
I am always doing that which I cannot do, in order that I may learn how to do it.
Pablo Picasso
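
For a rough idea of the result on a 1D property such as Opacity, here is a minimal sketch, assuming the default layer name "Audio Amplitude" that Convert Audio to Keyframes creates (the names below are assumptions, not RustyAce's actual setup):

// Sketch only: drive Opacity from the keyframed audio amplitude.
// "Audio Amplitude", "Both Channels" and "Slider" are the default names
// Convert Audio to Keyframes produces; adjust them if yours differ.
amp = thisComp.layer("Audio Amplitude").effect("Both Channels")("Slider");
amp * 5   // the multiplier mentioned above, for audio that is weak on beats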

#10 markokon

markokon

    Newbie

  • Members
  • 37 posts

Posted 28 December 2006 - 08:48 PM

No plug-in needed: just convert audio to keyframes, and pick whip whatever property you want the audio to control to one of the channels, and boom, they are in sync. If your audio is not that strong on beats, you can add a multiplier to the property's expression.

It should read something like this when it controls the scale, for example:
thisComp.layer("Audio").scale*5


That might be a little outside my skill set at this point, but I want to learn it. Would it be too much work for you to explain how to do this in a little more detail, or perhaps outline a few more steps? If so, it's cool. :) Thanks.

#11 RustyAce

RustyAce

    Mograph Deity

  • Members
  • 873 posts
  • Gender:Male
  • Location:Charlotte, NC

Posted 28 December 2006 - 09:52 PM

No problem, this is one of the first expressions I learned.

1. Import your audio and place it in a comp.
2. Animation > Keyframe Assistant > Convert Audio to Keyframes.
3. Expand the new layer down; notice three channels: left, right, both.
4. Now go to the layer you want the audio to control and expand it.
5. Choose the property you want to control, say Scale.
6. Option-click (Alt-click on Windows) the stopwatch; three icons appear.
7. Click and drag the pick whip (looks like a whip) to one of the audio channels.
8. Look beside Scale; there should now be an expression, and it should read something like:
thisComp.layer("leftChannel").scale

Click on the text and add, say, *5 to the end:

thisComp.layer("Audio").scale*5

This links the scale to the audio keyframes times 5; divide can also be used.
rusty ace
I am always doing that which I cannot do, in order that I may learn how to do it.
Pablo Picasso
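
Building on the steps above, here is a minimal sketch of what the finished Scale expression might look like, assuming the same default "Audio Amplitude" layer; the smooth() call and the 100% baseline are optional extras, not part of RustyAce's steps:

// Sketch only: 2D Scale driven by the audio keyframes.
amp = thisComp.layer("Audio Amplitude").effect("Both Channels")("Slider").smooth(0.1, 5);  // smooth() tames jitter
s = 100 + amp * 5;   // start from 100% scale and add the multiplied amplitude
[s, s]               // Scale is a 2D property, so return an array

Applied to a 1D property like Rotation or Opacity, the bare amp * 5 from the earlier sketch is enough.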

#12 the_Monkey

the_Monkey

    simian

  • Members
  • 2,011 posts
  • Gender:Male
  • Location:Brooklyn, NY

Posted 29 December 2006 - 06:58 PM

I still use SoundKeys... even when I'm working in 3D... even with Cinema 4D and with MoGraph... I still find myself going back to SoundKeys, and it's because of the visualization and the falloffs. If you're new to sound-keyed animation, I'll share an old but successful technique with you.

So...

In one second of animation you seldom have more than 30 samples (frames).
In one second of sound you will have nearly 44,100.

How do you decide what part of the sound makes the cut? Most folks try to put too much sound information into their animation, and the result is an epileptic fit of iTunes screensaver-ness. A lot of programs allow you to narrow your sound information by averaging the samples or narrowing the bands, but it's hard to know how well you keyed everything *before* you start animating. One way to find out is to create soundmaps. Run your keys to black and white values and stick them side by side. If the movement of the black<->white value is visually readable, it is more likely the final animation will read well. If it's too similar or too jumpy in b/w, there's a high probability your final animation will be difficult to watch. I still do all this with SoundKeys in AE because it's much easier to smooth the wave, narrow the bands, and compare keys visually. Otherwise you'll be creating a lot of shaky blobs.

I'm hesitant to share this movie because it's so damn old and my cameras sucked so bad back then (you'll have to forgive me; I'd only been animating for a few months), but it's a decent visual example of a side-by-side soundmap and animation. This was back when I used soundmaps for everything and would drive the animations with textures. That allows you to do some cool stuff, like changing an object's animation by simply moving or rotating the flat-mapped texture projection. These days I actually pull the keys into the timeline and run expressions because it's a million times faster, but I still create maps just for visual confirmation.

Take a look at this soundmap. It's a good example of "soft errors" that reveal themselves during the soundmapping process. This was a layout I used after experimenting for a while. If you cover the top half and just look at the bottom... it looks pretty well keyed. The attack and falloff of the triggers are diverse and complement each other, but if you look at the top half (which visualizes displacement rather than value) you can see they don't match all that perfectly with the audio. If I had tried to use these keys with a texture I'd be OK, but if they were used with a sweep NURBS or something of the like, it would appear very loose. After seeing this in the map I went back and adjusted my soundkeys and exported a tighter one for production (which I can't seem to find, sorry).

The thing is that the slightest adjustments in the interpolation of the soundwave will show up in your animation. If you're serious and you want it *really* tight, I would suggest a method like this. Standard EQs are OK, but nothing beats a soundmap, IMO, and they're damn easy to build in AE with SoundKeys.

-m
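
One rough way to build that kind of black-and-white soundmap strip in AE is to drive a small solid's Fill color with the keyed values. A minimal sketch, assuming Sound Keys sits on a layer called "Audio" with Output 1 set to a 0-360 range (the names and the range are assumptions, not the_Monkey's actual setup):

// Expression on the strip solid's Fill > Color, sketch only:
// normalize the keyed output to 0-1 and use it as a grayscale value.
k = thisComp.layer("Audio").effect("Sound Keys")("Output 1");
v = clamp(k / 360, 0, 1);   // map the assumed 0-360 range down to 0-1
[v, v, v, 1]                // grayscale RGBA, so the keys read as light/dark

Parking that strip next to the animation gives the side-by-side readability check described above.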

#13 Guest_Sao_bento_*

Guest_Sao_bento_*
  • Guests

Posted 29 December 2006 - 08:15 PM

Great post the_Monkey.

#14 markokon

markokon

    Newbie

  • Members
  • 37 posts

Posted 05 January 2007 - 10:30 PM

RustyAce, the_Monkey:

Once again... sorry for the late reply. Thank you so much for giving me these tips. It clears up a lot of my questions and concerns. This weekend will be a fun-filled event with lots of experimenting, sampling and... beer... it's the weekend after all, right? :D

If I run into any problems I'll bring them up but otherwise I think the explanations you both provided me with are great.

Thanks!

Edited by markokon, 05 January 2007 - 10:51 PM.


#15 waldteufel

waldteufel

    Newbie

  • Members
  • 4 posts

Posted 06 January 2007 - 12:20 AM

i think my brain just exploded.

all this time, and i never noticed that audio-to-keyframes-thingie lurking there. thanks a bunch, RustyAce and the_Monkey; you guys just saved me from endless hours of blood, sweat and waveforms. :H

#16 Boomberry

Boomberry

    MoGraph Regular

  • Members
  • 92 posts

Posted 07 January 2007 - 03:17 AM

Thanks for sharing, the_Monkey :) I still think getting the MIDI tracks of the drum beats and melodies from Propellerhead Reason (for example) into After Effects could save people who also make music quite a bit of time, but this isn't too bad if the music is relatively simple.

cheers

#17 markokon

markokon

    Newbie

  • Members
  • 37 posts

Posted 07 January 2007 - 06:47 AM

OMG... it's easy to set up. I understand the whole concept of sampling the audio with Sound Keys now, but it's difficult. I spent like 4 hours trying to extract the beat from the audio and had little luck. I think it would be a lot faster if I had all the instruments as separate files, so all I would have to do is sample the whole frequency range.

Second... I have never used expressions before... so I also had little luck with changing the settings.

This is obviously a learning curve for me... but I am getting a few ideas on how this could help me with the project I am working on.

The main thing I was trying to figure out with expressions is how to change the values being linked to, without having to resample the audio. RustyAce, you mentioned something about scaling, but when I tried to fit the code into various areas I kept getting error messages.

This is what the expression looks like once I linked the position of a solid layer to the output settings on a Sound Keys layer:

temp = thisComp.layer("Black Solid 3").effect("Sound Keys")("Output 2");
[0, temp]

The sample ranges from 0 to 360. How can I tell the layer that is sampling the data to just divide the sample in half? Or scale it down to values between 0 and 100? I know I can set this in the output settings, but I wanted to change it on the fly with expressions. Also, how can I apply it to just the x, y, or z position?

#18 Boomberry

Boomberry

    MoGraph Regular

  • Members
  • 92 posts

Posted 07 January 2007 - 07:42 PM

Check your email, Mark. :)

#19 RustyAce

RustyAce

    Mograph Deity

  • Members
  • 873 posts
  • Gender:Male
  • Location:Charlotte, NC

Posted 08 January 2007 - 03:24 PM

OMG... it's easy to set up. I understand the whole concept of sampling the audio with Sound Keys now, but it's difficult. I spent like 4 hours trying to extract the beat from the audio and had little luck. I think it would be a lot faster if I had all the instruments as separate files, so all I would have to do is sample the whole frequency range.

Second... I have never used expressions before... so I also had little luck with changing the settings.

This is obviously a learning curve for me... but I am getting a few ideas on how this could help me with the project I am working on.

The main thing I was trying to figure out with expressions is how to change the values being linked to, without having to resample the audio. RustyAce, you mentioned something about scaling, but when I tried to fit the code into various areas I kept getting error messages.

This is what the expression looks like once I linked the position of a solid layer to the output settings on a Sound Keys layer:

temp = thisComp.layer("Black Solid 3").effect("Sound Keys")("Output 2");
[0, temp]

The sample ranges from 0 to 360. How can I tell the layer that is sampling the data to just divide the sample in half? Or scale it down to values between 0 and 100? I know I can set this in the output settings, but I wanted to change it on the fly with expressions. Also, how can I apply it to just the x, y, or z position?



I think you can just add /2 to the end.
It should read something like:


temp = thisComp.layer("layer sound keys is on").effect("Sound Keys")("Output 2");
[temp, temp]/2

This goes on the layer you want the sound keys to control, pick whipping from the Position of that layer to the Sound Keys layer.

But I'm not sure; I've never used it in combination with Sound Keys.
rusty ace
I am always doing that which I cannot do, in order that I may learn how to do it.
Pablo Picasso
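
For the specific remapping asked about above, here is a sketch that keeps markokon's layer and output names; linear() and value[] are standard expression helpers, and the 0-360 range is the one he set in the output settings:

// Sketch only: remap the 0-360 Sound Keys output to 0-100 and drive just
// the Y position, leaving X at its keyframed/static value.
raw = thisComp.layer("Black Solid 3").effect("Sound Keys")("Output 2");
scaled = linear(raw, 0, 360, 0, 100);   // 0-360 in, 0-100 out (raw/2 would simply halve it)
[value[0], scaled]                      // 2D position: keep X, drive Y
// On a 3D layer: [value[0], scaled, value[2]]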

#20 PhAtlDesigns

PhAtlDesigns

    Newbie

  • Members
  • 3 posts
  • Gender:Male
  • Location:Fort Worth Texas

Posted 07 July 2010 - 01:31 AM

I still use SoundKeys... even when I'm working in 3D... even with Cinema 4D and with MoGraph... I still find myself going back to SoundKeys, and it's because of the visualization and the falloffs. If you're new to sound-keyed animation, I'll share an old but successful technique with you.

So...

In one second of animation you seldom have more than 30 samples (frames).
In one second of sound you will have nearly 44,100.

How do you decide what part of the sound makes the cut? Most folks try to put too much sound information into their animation, and the result is an epileptic fit of iTunes screensaver-ness. A lot of programs allow you to narrow your sound information by averaging the samples or narrowing the bands, but it's hard to know how well you keyed everything *before* you start animating. One way to find out is to create soundmaps. Run your keys to black and white values and stick them side by side. If the movement of the black<->white value is visually readable, it is more likely the final animation will read well. If it's too similar or too jumpy in b/w, there's a high probability your final animation will be difficult to watch. I still do all this with SoundKeys in AE because it's much easier to smooth the wave, narrow the bands, and compare keys visually. Otherwise you'll be creating a lot of shaky blobs.

I'm hesitant to share this movie because it's so damn old and my cameras sucked so bad back then (you'll have to forgive me; I'd only been animating for a few months), but it's a decent visual example of a side-by-side soundmap and animation. This was back when I used soundmaps for everything and would drive the animations with textures. That allows you to do some cool stuff, like changing an object's animation by simply moving or rotating the flat-mapped texture projection. These days I actually pull the keys into the timeline and run expressions because it's a million times faster, but I still create maps just for visual confirmation.

Take a look at this soundmap. It's a good example of "soft errors" that reveal themselves during the soundmapping process. This was a layout I used after experimenting for a while. If you cover the top half and just look at the bottom... it looks pretty well keyed. The attack and falloff of the triggers are diverse and complement each other, but if you look at the top half (which visualizes displacement rather than value) you can see they don't match all that perfectly with the audio. If I had tried to use these keys with a texture I'd be OK, but if they were used with a sweep NURBS or something of the like, it would appear very loose. After seeing this in the map I went back and adjusted my soundkeys and exported a tighter one for production (which I can't seem to find, sorry).

The thing is that the slightest adjustments in the interpolation of the soundwave will show up in your animation. If you're serious and you want it *really* tight, I would suggest a method like this. Standard EQs are OK, but nothing beats a soundmap, IMO, and they're damn easy to build in AE with SoundKeys.

-m



Dude, love your work. Can you do a tutorial on what you're talking about? It's like you're using your own head-speak when going over this stuff, and you're leaving out a lot. I loved your pace on Greyscale Gorilla when you went over joints; maybe you could go over this SoundKeys stuff too. Thanks, cheers.






