PluralEyes for sound sync
  • For a video about a sound product... the sound is pretty bad, lol.

  • As I understand it, they are not responsible for the sound here.
    And it is not really a sound product, it is just math software that operates on sound files. :-)

  • Well yeah, I know that... I'm just sayin'. Still, good info - I was thinking about trying that out on the next GH2 project, because sync is such a pain in the neck...

  • Yes the sound is bad but it's in sync!

  • It takes me about 10 to 20 seconds to draaaaaaaaaaaaaag my video file to the matching spot on the audio timeline. If I remember to clap my hands three times, it takes me five seconds. Am I missing something, or is this like a program to show you which mouse button is the left one?

  • Not really.

    It'll sync hundreds of clips on its own and find the right clip to do so – how fast are you doing that (and how much do you like it)?

    Really great if you are not happy with the camera audio and do a separate high-quality recording. Very helpful with any camera that doesn't know what external TC is - like the GH2.

    We have been using it for about a year now for theater recordings and the like and it has saved us lots of tedious mousing…

  • @DrDave

    For example, you can have 30-40 short interview takes where you did not clap, so as not to look like a fool.

    All this thing does is save your time.

  • Imagine you record some kind of event with multiple cameras. The operators don't need to waste card space by keeping their cams running all the time, but can just shoot takes as usual. (Well, one wide should run the whole time as a backup anyway.) Then you throw it all into PluralEyes and voilà, it's all sorted in time.

  • @nomad A quick tip -- if you don't have a slate, just call a sequence number then use a cheap plastic dog clicker -- available for $2.50 from your local pet store. Makes a nice spike on the timeline for sync.

  • A glass jar and the side edge of a knife or spoon will produce a very sharp spike, sharper than the dog clicker or hands clapping.

  • While it can make working in a video editor easier, it can dramatically mess up the workflow options if you export to an audio editing workstation. Understand your entire end-to-end workflow needs before using anything like this.

  • You don't run onto the stage and use a clapper during a theater recording.

    Professionals use cameras with wireless TC sync, of course ;-)

    I second the advice on testing your whole workflow before even starting a project, though.

  • I think the professional thing to do is to get into the theatre early, let the lights warm up, and then do several things. Film a WB card and optionally a Color Checker card from different angles and with different lighting from the computerized console, and synchronize the cams with a clapper before the audience is let in. You can use a large flash when the audience is seated to provide one more sync point without attracting attention. Then simply let the cams run the extra 15 minutes before the show starts, and reset at intermission if desired. As far as exporting to a DAW, if done properly, this should present no problems with a sample-accurate DAW. Many DAWs have video sync built in.

  • The clapper allows you to precisely calculate the audio offset for each camera, using the formula of roughly one ms of delay for each foot of distance. A camera in a balcony can easily require two full frames of offset, as sound travels slower than light. A camcorder such as the Canon HF G10 will precisely display the distance in MF mode right on the screen, to make the calculation fast and easy. Even in a small room, I adjust the spot mics with a delay of 2-8 ms, depending on mic placement. At 30-50 feet, you will lose visible sync, which is one of several reasons why audio is often out of sync with video. Imagine that you have two cameras connected by timecode. Those cameras will be visually in sync. However, the audio will reach the camcorders' internal microphones at different times. This is not an issue if the separately recorded audio is slaved to a camera that is close to the source, as the offset, say 10 ms, is small. The clapper allows you to quickly fix the delay, and also measure the built-in offset of the camcorder. You can test this easily by filming yourself clapping (or using the glass jar) at a distance of, say, fifty feet with a spot mic. One could take a program that purports to sync clips and add a delay input feature that calculates the offset based on distance, but you would need to know the internal offset for each camcorder (easy enough to measure, but there are lots of camcorders).
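
    If anyone wants to sanity-check those numbers, here is a quick back-of-the-envelope sketch (just my own throwaway helper, not anything from a sync tool) that turns the camera-to-source distance into a delay in milliseconds and in frames, using the rough one-ms-per-foot rule above:

    ```python
    # Back-of-the-envelope sound-delay helper (assumes the rough rule of
    # ~1 ms of delay per foot of distance, i.e. speed of sound ~1100 ft/s).
    def sound_delay(distance_ft, fps=24.0):
        delay_ms = distance_ft * 1.0          # ~1 ms per foot
        frame_ms = 1000.0 / fps               # length of one frame, in ms
        return delay_ms, delay_ms / frame_ms

    # Example: a balcony camera 80 ft from the stage, shooting 24 fps.
    ms, frames = sound_delay(80)
    print(f"{ms:.0f} ms late, about {frames:.1f} frames")   # 80 ms, ~1.9 frames
    ```

    Which is where the "two full frames for a balcony camera" figure comes from.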

  • I've been stuck before trying to sync a guitar soloist on the timeline. BAD experience!

  • If it is a fast, lead guitar I find it helpful to zoom in to say 200 percent and then scrub forward at 1/4 speed. Then you can match the pluck of the guitar pick exactly with the sound. Slow motion emphasizes the contrast between the attack of the pick and the rest of the note.

  • @DrDave: Sure, good advice you are giving there for many situations.

    But we tend to shoot intra settings in the theater because of the extreme lighting, so we can't shoot through the whole performance. We need to stop cameras at certain intervals and we need to change cards. Plus, cameras like the GH2 won't hold sync for 90 minutes, only 10 to 20 minutes of continuous shooting. For us, PluralEyes is great.

  • Try a wedding and reception with various cameras stopping almost randomly as the batteries die. You see no tally light on the GF2.

  • Another thing: I do videos for some pro musicians. I used to hand-sync like DrDave, but one pianist said the vid was out of sync. I looked at it over and over and it looked and sounded synced. She insisted it wasn't, that it was off by a fraction. So I tried PluralEyes and she was happy with the sync. YMMV.

  • @brianluce makes a good point, which is that the performer may have a different idea of what "sync" is. Even with a harp player, where you can see the string released from the finger, some musicians will intuitively feel, rightly or wrongly, that the sync is off. It is important to ask about the sync, since they may not volunteer it. To me, in about 80 percent of the music videos I see, the sync is slightly off. That's a pretty big percentage, a huge percentage. I do have a hard time believing that the clapper lies, however. In my experience, using a jar or clapper, the MOST you can be off is one frame, and of course you often have the peak in the middle of a frame, so then you are looking at frame splitting. In the case of the piano, unless you are videoing the actual hammers, you can't really tell from the key depression--although the sync cannot be more than a frame (1/24 sec) later than when the key hits the bottom. In this way, it is different from harp, guitar, etc., where you can see the string begin to vibrate.

    I would like to see someone take two cams, one in a balcony (or at the end of a hall) and one close up, and see what PluralEyes does. If the software does perform some sort of magic, you will immediately see the result, yes or no. If it simply matches waveforms, then it will be visibly off. If it really works, I would consider using it, and I would not care how it accomplishes this seemingly impossible feat :)

    Splitting frames: if the sync is off by a "fraction", you can cut the audio out and remove one third of one frame. Then drop it back in, and you can tweak it at finer than one-frame resolution. Usually, a third of a frame is as low as you need to go, although you can of course shave samples if you really have spare time.
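
    For what it's worth, if it really is just matching waveforms, the core of it would look something like this rough sketch (my own guess at the general technique, not PluralEyes' actual code) - cross-correlate a camera track against the reference recording and read the offset off the peak, down to a fraction of a frame:

    ```python
    # Rough waveform-matching sketch (a guess at the general idea, NOT
    # PluralEyes' actual code): find the lag of a camera's audio against
    # a reference recording via the cross-correlation peak.
    import numpy as np

    def find_offset(reference, camera, sample_rate=48000, fps=24.0):
        corr = np.correlate(camera, reference, mode="full")
        # Peak index -> lag in samples (positive = camera audio is late).
        lag_samples = int(np.argmax(corr)) - (len(reference) - 1)
        lag_frames = lag_samples / sample_rate * fps
        return lag_samples, lag_frames

    # Toy check: the "camera" track is the reference delayed by 2000 samples,
    # which is exactly one frame at 48 kHz / 24 fps.
    rng = np.random.default_rng(0)
    ref = rng.standard_normal(4800)
    cam = np.concatenate([np.zeros(2000), ref])
    print(find_offset(ref, cam))   # -> (2000, 1.0)
    ```

    Plain matching like this locks onto whatever offset makes the waveforms line up acoustically, which is exactly why a far camera could still end up visibly off.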

  • This is a quote from the PluralEyes website: "The speed of sound is such that it travels about 1 frame for every 30 feet of distance between the cameras. PluralEyes will sync audio tracks without considering their physical distance, and a simple frame shift can bring far cameras into sync." So what they are saying is, in one word, "ballpark". They also do not mention two other things - the camera's internal offset, and the muxing drift in AVI, MTS, MP4 files, etc. However, "ballpark" is not a bad thing, it is a handy thing. I can't see using it for, say, a dozen clips, but if you have piles and piles of clips, let the proggy sort it out.

  • Also remember, if they are not timecode-capable cameras, they are also not likely to be video-reference-resolved cameras. So every camera will have a slightly different sound relationship. It's a different frame alignment between every camera on every take, which will all be slightly out of alignment with any audio recorder on set. So every take, on every camera angle, on every camera, and the audio recorder will all have different audio sync relationships when not using fully resolved cameras. When you get to post, just expect every cut will likely be a little off, and you will not be able to fix all of it in any frame-accurate video workstation. Even with the latest whiz-bang apps.

  • You can reset the sync on every cut so it is perfect, you just have to slip the audio under the fade.

  • I'm thinking of upgrading today. Has anyone used PE3 yet?