Why Premiere Pro could use scripting

I’ve been testing the workflow from Premiere Pro to DaVinci Resolve (as others, more renowned than I, have done). For many reasons I want to avoid sending a flattened file, relying on XML interchange instead, and a few simple but annoying issues make it pretty inconvenient:

  1. We’re using XDCAM EX in an MP4 wrapper and NXCAM (AVCHD) files, which Resolve does not support. Transcoding is necessary, although that’s a subject for another entry.
  2. Time remapping in Resolve is much worse than even in Premiere, not to mention After Effects. All speed changes should be rendered and replaced before exporting XML.
  3. Some effects should be rendered, but transitions should be left untouched.
  4. All Dynamic Link clips should be rendered and replaced.

Doing these things manually takes a whole lot of time and is very prone to mistakes. It’s a perfect example of a case where a simple script would make one’s life so much easier. The script would:

  1. Traverse the timeline, looking for clips with the properties mentioned in points 2-4.
  2. Create a new video layer or a sequence, whichever would be faster.
  3. Copy the clips there one by one and queue an export for each to the desired codec, encoding the timecode and track either in metadata or in the name.
  4. After the export is done, import the renders and replace the old clips with the new ones.

Alternatively, I could have one script to export (steps 1-3), and another to reimport (step 4).
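For illustration, here is what the exporter half might look like if Premiere exposed a scripting API. Everything in this sketch is hypothetical – the `Clip` object, its properties, and the export naming scheme are all invented, since no such API exists (which is the whole point of this post).

```python
from dataclasses import dataclass

# Hypothetical stand-ins for objects a Premiere scripting API might expose.
@dataclass
class Clip:
    name: str
    track: int
    timecode_in: str
    speed: float = 100.0          # time remapping (point 2)
    effects: tuple = ()           # applied effects (point 3)
    dynamic_link: bool = False    # Dynamic Link comp (point 4)

# Effects that should be baked in; transitions are deliberately absent (point 3).
RENDER_EFFECTS = {"Warp Stabilizer"}

def needs_render(clip: Clip) -> bool:
    """True for clips matching points 2-4 of the list above."""
    return (clip.speed != 100.0
            or any(fx in RENDER_EFFECTS for fx in clip.effects)
            or clip.dynamic_link)

def export_jobs(timeline):
    """Queue one export per flagged clip, encoding track and
    timecode in the output file name for later reimport."""
    return [f"{c.track:02d}_{c.timecode_in.replace(':', '')}_{c.name}.mxf"
            for c in timeline if needs_render(c)]

timeline = [
    Clip("interview", 1, "00:01:10:05"),
    Clip("runners", 2, "00:02:03:12", speed=40.0),
    Clip("title", 3, "00:00:00:00", dynamic_link=True),
]
print(export_jobs(timeline))
# -> ['02_00020312_runners.mxf', '03_00000000_title.mxf']
```

The reimport script would then parse the track number and timecode back out of the file names to replace the original clips in place.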

See? It’s relatively simple. The possibilities of scripting are almost infinite. For example, I could also automatically change all the time-remapped clips into Dynamic Linked AE compositions and render them using After Effects’ superior Pixel Motion algorithm – although I would rather Adobe included it in Premiere itself, getting rid of the old and awful frame blending. I could even attempt to change them to their Twixtor equivalents, although I must say that my experience with this effect is pretty crashy.

I looked at the Premiere Pro SDK to see if I could write a plugin that would make this job easier, but as far as I can tell, no such possibility exists. The plugin architecture for Premiere is pretty limited and compartmentalized, and using C++ for this seems like a bit of overkill.

Adobe, please support scripting (JavaScript, Python, or any other obscure language) in Premiere Pro. That way users will be able to create their own tools to work around the program’s inefficiencies, and your job will become much easier. Premiere Pro will prosper and develop much more quickly and effectively. Besides – you don’t want FCPX to overtake you, do you?


Updated my post on vignette in Premiere Pro

Due to the popularity of my brief overview of methods for creating a vignette in Premiere Pro, I have updated it with a new method that I found in the meantime. Unfortunately, Premiere Pro CS6 still does not allow you to make a proper vignette.

However, possibly there will be some interesting developments on this front for Premiere Pro users in the near future. I will keep you posted.


Premiere Pro saves the day

Recently I was doing a small editing job for a friend, and ran into a few interesting problems.

The footage provided was shot partially on a Canon PowerShot, which saves it as an AVCHD MTS stream. My computer is not really up to editing AVCHD, so I decided to transcode the clips into something less CPU-intensive. The final output would be delivered as letterboxed 640×480p25 because of the limitations of the second camera, so the quality loss was of little concern. Having had decent experience with Avid’s DNxHD codecs, I decided to convert it to the 1080p25 36 Mbit/s flavor. And then the problems began.

Even though Premiere Pro imported the file without a problem, Adobe Media Encoder hung right after opening it for transcoding. I decided to move the footage to Avid, thinking it might be a good project to hone my skills on that NLE, but it complained about the Dolby-encoded audio and refused to import the footage. I then tried Sorenson Squeeze, but it also threw an error and crashed. Even the tried-and-true MPEG Streamclip did not help.

I had almost given up when I came up with the idea of using Premiere’s internal renderer to transcode the footage: put it on an XDCAM HD422 timeline, render it (Sequence -> Render Entire Work Area), and then export it with a switch that I almost never use – “Use Previews”. I figured that once the problematic footage was already converted, Media Encoder would handle the reconversion from the previews without problems. I was happily surprised to be proven correct. Because Premiere’s internal renderer coped with the footage without a glitch, it all worked like a charm.

Use Previews switch

“Use Previews” can sometimes save not only rendering time, but also let you encode problematic files.

Afterwards the edit itself was relatively swift. I hit another roadblock when I decided to explore DaVinci Resolve for color grading and exported the project via XML. Resolve fortunately allows custom resolutions, so setting up a 640×480 project was not a problem. I also had to transcode the files again, this time into an MXF container. This was a minor issue and went relatively fast. However, because some media was 480p and some 1080p, and I had done quite a lot of resizing and rescaling of the latter, I wanted to carry this information into Resolve. Unfortunately, Resolve did not want to cooperate. Its handling of resizes was very weird, and every time I clicked on a resized clip to grade it, it crashed. I’m certain that the scaling/panning was responsible, because when I imported the XML without this information, everything worked great. It might have something to do with the fact that I was running it on an old GTX 260, but still, I was not able to use the software for this gig.

In the end I graded the whole piece in Premiere Pro on its timeline. Here’s the whole thing for those of you who are interested:


Blackmagic Design denies rumors – or do they?

Peter Chamberlain from Blackmagic Design has denied any rumors (guess which ones?) that they are working on a cheaper control surface, believing that the segment is well saturated by other manufacturers. This is of course based on the assumption that the lowest segment is the price range that Avid, Tangent and JL Cooper are targeting, i.e. around $1500-$2000. I must admit that the release of the Tangent Element, with the basic control surface at about $1200, is interesting; however, it is still far above what I would consider the real democratization barrier – around $500-$700.

I understand all the limitations of such pricing, including the fact that this kind of surface would be seen by professionals as a toy – which it would indeed be, out of the necessity of using cheap materials. I still believe it can be done if the R&D costs can be covered, and that it would introduce more people to color grading than all the plugins combined.

It might of course just be my wish to have at my disposal something that I currently can’t afford. But I also can’t help noticing certain wording in Peter’s message. Namely:

…we have no plans for a cheaper panel at NAB. (emphasis added)

So… will anyone pick up the challenge? Or is my premise inherently flawed, and the future of color grading lies somewhere else?


Why slow motion seems majestic

There is much to be said about the memorability of slow motion footage in movies. While perhaps the most extreme recent example was the invention of “bullet time” in The Matrix, and we have since seen it taken to further extremes in Inception, overcranking (shooting at a higher frame rate to later play it back at the regular 24 fps) has been a hallmark of cinematography ever since it was invented in 1904 by the Austrian priest and physicist August Musger.

There is little doubt that slow motion footage does make the action seem more pronounced, more memorable, more impressive, and often more majestic. Even though some cinematographers of note have tackled the question of why slow motion has this effect, so far I have not found a convincing explanation in the field.

A possible insight into why slow motion might have this effect comes from research on how people react under extreme stress: in combat, in sports, or in situations where one’s life is threatened. A number of reactions can occur in such situations. Alongside tunnel vision and selective deafness, there is also a perception of events occurring in slow motion. Most likely it is a result of the sudden flood of hormones like epinephrine, and of our brain’s attempt to encode as much of what is happening as possible for future reference. Usually it is accompanied by a feeling of vividness and an awareness of being alive (which is also why such states of mind can become addictive, and life can seem pretty bland afterwards), sometimes referred to as “hyper-reality”.

The important part is that this real-life slow motion effect in our brain is only an illusion, the result of the physiological process of hastened memory creation. It does not grant the subject Neo’s power to dodge bullets; it only increases awareness of occurring events. Reaction time remains what it is, even though the actions taken might be more efficient than they would be in “normal time”.

However, it is highly probable that our brain, when confronted with slow motion footage, takes it as a signal that something memorable is happening, and tries to employ its standard procedure in such cases – remembering as much as it can, because this is an important, potentially life-threatening event. Because there is no hormonal rush, the effect is subdued, but it seems to have an impact nevertheless. How foolish of our brain to think so! And yet we fall for the same trick again and again. And slow motion does work, even when used in excess.

Therefore, next time you see slow motion footage employed to accentuate certain aspects of the action, or use this effect yourself, be aware that it works because it references a state of mind already available to the viewer, mimicking what happens when our brains fiercely try to create a memorable event.

If you are interested in exploring this topic further, I suggest the book “On Combat” by Lt. Col. Dave Grossman, which has a great compilation of the physiological effects that accompany such events.


The anatomy of a promo

This is my latest production. It’s a promotional spot for a non-profit organization that is dedicated to another passion of mine – historical personal combat.

What follows is an overview of the production of this short movie, including how the screenplay changed during production, breakdown of my editing process, and a few techniques that we used in post-production to achieve the final result.

Production

It was a collaborative, voluntary effort, and included cooperation from parties in various cities in Poland. The Warsaw sequences (both office and training) were shot with a Sony EX-1R at 1080i50, with the exception of the slow-motion shots, which were recorded at 720p60. The sequences from Wroclaw and Bielsko-Biala were shot with DSLRs at 1080p25. Therefore the decision was made to finish the project in 720p25, especially since the final distribution would be via YouTube.

The most effort went into filming the Warsaw training, where we even managed to bring a small crane on set. Of the two shots that we filmed with it, only one was partially used in the final cut – the one where all the people are running in the open clearing. We envisioned it as one of the opening shots. As a closing shot, we filmed the goodbyes and people leaving the clearing from the same spot, with the camera moving up and away. It seemed a good idea at the time, one that would nicely close the whole sequence, and perhaps the movie as well.

We had some funny moments when Michal Rytel-Przelomiec (the camera operator and DOP) climbed up a tree to shoot the running people from above; after a few takes he shouted that he could last only one more, because the ants had definitely noticed his presence and begun their assault. What a brave and dedicated guy!

A few days later we were able to shoot the office sequence. The first (and back then still current) version of the screenplay had a cut, after the text message was sent, to what was supposedly a reminiscence of another training session, and finished by coming back to the office, where Maciek (the guy in the office) would pick up a sword and rush at the camera. Due to spatial considerations on set (we were filming in Maciek’s office after hours), we decided to alter the scenario, especially since we had already filmed the training sequences, including the farewell closing shot. So instead of Maciek picking up a sword and attacking the camera, he actually rushes away to training, leaving the office for something dearer to his heart. It was also Michal’s idea to shoot the office space with a 3200K white balance to create a more distant, cold effect, and it worked really well.

Post-production

All the footage (about 2 hours’ worth) was imported into Adobe Premiere CS5, which allowed me to skip transcoding and work with the source files from beginning to end. After Effects CS5 and Dynamic Link were used only for the modest city titles, although perhaps they could also have been used to improve a few retimed shots. Music and effects were mixed in Premiere as well.

The promo was in production for over half a year, mostly because we were waiting for footage from other cities, some of which never materialized, so we decided to finish the project with what we had. The actual cutting was pretty quick, and mostly involved looking for the best sequences to include from the other cities. Some more time was spent coming up with the desired final look for the short movie.

Editing

The general sequence of events was laid out by the screenplay written by Maciek Talaga. At first the clip started immediately with the corporate scene. We were supposed to have some similar stories from other cities, and I was ready to use a dual or even quadruple split screen for parallel action, but since the additional footage never materialized, I decided to pass on this idea. In the end this allowed us to focus more on Maciej Zajac and made him the main hero of our story, which was not planned from the start.

After leaving the office we had to transition to the training, and preferably to another place. Wroclaw had a nice gathering sequence and a completely different atmosphere (students, backpacks, friendship and warmth), which made an excellent contrast to the cool corporate scenes from Warsaw, presenting another kind of people involved in pursuing the hobby.

The order of the following cuts was determined by the fact that we had very little material from Bielsko-Biala, and it all came from the middle of the warm-up. We had excellent opening shots from Warsaw, which were great for setting the mood and adding some mystery. I used them all, and even wanted to transition to push-ups and other exercises; however, once the guys had stopped running, coming back to it in the Bielsko sequence ruined the natural tempo of the event. Therefore, with great regret, I had to shorten the crane shot to the extent that it most likely does not register as a crane shot at all, and transition to Bielsko for the remaining part of the warm-up.

Coming back to Warsaw seemed a little odd, so I decided to cut to Wroclaw to emphasize the diversity, with a short sequence of a few shots of a warm-up with swords. Here I especially like the last two cuts: one that cuts on action with the move of the sword, underlined by the camera move in the next shot, and then the one that moves the action back to Warsaw, where a guy exits the frame with a thrust. I considered using a wipe here, but it looked too cheesy, so I stuck with a straight cut.

As an alternative, I could have come back to Warsaw first and moved the Wroclaw sequence between the warm-up and the sparring, but this would have created an alternating Warsaw-other place-Warsaw cadence, and I wanted to break that rhythm. Therefore I was stuck in Warsaw for the remainder of the movie, even though it had at least two distinct parts left. We had an ample selection of training footage from Wroclaw, but it was shot in a gym, and including it would have ruined the overall mood and the contrast of closed office space vs. open training space, so in the end we decided against it.

Unfortunately we did not have any footage of gearing up, so the transition between the florysh part in Warsaw and the sparring is one of the weakest parts of this movie, and I would love to have had something else to show. I did not come up with anything better than the cut on action, though.

The sparring sequence is mostly cut to music, a selection of the most dynamic and spectacular actions from our shoot (not choreographed in any way), with a few speed manipulations here and there to make sword hits land at the proper moments or to emphasize a few nice actions, including the disarm at the end. There were a few lucky moments during shooting where Michal zoomed in on a successful thrust, and I tried to incorporate them as much as I could to obtain the best dynamics and to convey as much of the atmosphere of competitive freeplay as possible.

The sequence ends on a positive note, with the fighters removing their masks and embracing each other. I tried to avoid cutting in the middle of this shot, but it was too long, and I wanted to keep both the moment where the fencing masks come off and the glint on the blade of the sword at the end (which was not added in post). In the end the jump cut is still noticeable, but it holds up. There is a small problem with the music at the end, because I had to cut it down and extend it a little to hold it through the closing sequence, but it is minor and does not distract too much from the overall story.

Apart from the serious and confrontational aspect of the training, we wanted to stress the companionship, and I believe that both the meeting sequence in Wroclaw and the final removal of the masks and embrace conveyed this message well.

During cutting I realized that, regardless of the added production value of the crane farewell shot, there was no way to include it at the end. It was too long, it lessened the emotional content, and it paled in comparison to the final slow motion shots that I decided to use, including the final close-up of Maciek, which constituted the ellipsis present in the first version of the screenplay. So it had to go, regardless of our sentiment towards it.

The feedback from early viewers was that Maciej Zajac was not easily recognizable to people who did not know him, which made us wish for something more. The idea of beginning with sounds and no picture came from Maciek Talaga, and I only tweaked it a little. We first thought about opening with the shot where Maciej takes off the fencing mask; however, it did not look good at all, and the transition to the office scene was awkward at best. In the end I proposed the closing close-up as the first shot, which in our opinion nicely tied the whole thing together: it introduces Maciek, sets the focus on him as a person, and nicely contrasts the “middle ages dream or movie” with his later work at the office. The excellent brief textual messages authored by Maciek Talaga also added a lot to the whole idea.

Color grading

All color correction was done in Premiere Pro using the standard CC filters and blending modes. I experimented with the look in the midst of editing, trying to come up with something that would best convey the mood. I started with a high-contrast, saturated theme, and quickly moved to a variation of bleach bypass with a slightly warmer, yellowish shift in the midtones. However, it still lacked the necessary punch, so in the end I decided to over-emphasize the red color (an important one for the organization as well) with a slight Pleasantville effect. It gave the movie a slightly unreal, mysterious feeling, and the contrast underlined the seriousness of the effort.

The office sequence did not need much more than the bleach bypass variation, since nothing in it was actually red. An increase in contrast and slight desaturation were mostly enough to bring it to the desired point, thanks to Michal’s idea of shooting it at a lower color temperature. The Warsaw sequence required an additional layer of the “Leave Color” effect, with everything apart from red partially desaturated and a little more push towards yellow in the highlights and midtones, all blended in Color mode over the previous bleach bypass stack. I will do a detailed breakdown of the color correction in a separate entry, although with the introduction of SpeedGrade in Adobe CS6 this technique might become obsolete.

Michal also suggested a clearer separation between the various cities, so I pushed Wroclaw more towards blue, as it involved more open air, and Bielsko more towards yellowish-green, to emphasize its more “wild” aspect. In the end I had the most trouble with the footage from that location, because as shot it was dark and had a bluish tint, so it needed pretty heavy grading, which on H.264 is never pleasant. Overall I’m satisfied with the results, although there are a few places that could benefit from a few more touches.

The blooming highlight on the fade-out of the opening and closing shots was a happy accident, the result of fading out all the corrected layers simultaneously, mixed with the “Lighting Effects” filter that was at first intended only for vignetting (as mentioned in another entry of mine).

I like the overall result. I also enjoyed the production every step of the way, and even though it could still be improved here and there, I am happy. It was an excellent team effort, and I would like to thank all the people who contributed to its final look.


Green screen primer

Having recently had an opportunity to do some green screen work, which at first glance seemed a quick job but later turned out to require some pretty hefty rotoscoping and compositing, I decided to write down another caveat, this time on using a green screen. Please note that the pictures are for illustrative purposes only. For convenience, wherever they are labelled as YCbCr colorspace, I used Photoshop Lab/YUV to create them, which is very similar but not identical to YCbCr. Also, many devices use clever conversion and filtering during chroma subsampling, which reduces aliasing and generally preserves detail better than Photoshop does in its RGB->Lab->RGB conversion, so the loss of detail and the differences might be a little smaller than depicted here, but they are real nevertheless.

Green screen came about mostly because of the way digital camera sensors are built. The most common Bayer pixel pattern in the CMOS sensors used by virtually all single-chip cameras consists of two green photosites for each red and blue one (RGGB). This is a sensible design if you consider that the human eye is most sensitive in the green-yellowish region of the light spectrum. It also means that you automatically get twice as much resolution from the green channel of a typical single-chip camera as from either the red or the blue one. Add to this the fact that the blue photosites usually produce the most noise – the sensor is least sensitive to blue light, so the signal is simply the lowest there – and you might start to get a clue why green screen seems such a good idea for digital acquisition.

RGGB sensor mosaic

Typical CMOS RGGB pixel mosaic. There are twice as many green pixels as red or blue ones.
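The 2:1 green-to-red (or blue) ratio is easy to verify by laying out the mosaic yourself; a small sketch:

```python
import numpy as np

# Build an RGGB Bayer mosaic for a tiny 4x4 sensor:
# each 2x2 cell is [[R, G], [G, B]].
pattern = np.array([["R", "G"],
                    ["G", "B"]])
mosaic = np.tile(pattern, (2, 2))  # tile the cell into a 4x4 sensor

counts = {c: int((mosaic == c).sum()) for c in "RGB"}
print(counts)
# -> {'R': 4, 'G': 8, 'B': 4}: twice as many green photosites.
```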

So far this discussion has not concerned 3-sensor cameras or the newest Canon C300, with its sensor at twice the resolution of the encoded output; the next part, however, does.

The green channel has the largest contribution (over 71% in the Rec. 709 color space specification) to the calculated luma (Y) value, which is most often the only channel encoded at full resolution when the compression scheme called chroma subsampling is used – which is almost a given in most cases. All color information is usually compressed in one way or another. In the 4:2:0 chroma subsampling scheme – common to AVCHD in DSLRs and XDCAM EX – the color channels are encoded at 1/4th of their resolution (half width and half height), and in 4:2:2 at half resolution (full height, half width). These encoding schemes were developed based on the observation that the human eye is less sensitive to loss of detail in color than in brightness, and to loss in the horizontal direction than in the vertical. Regardless of how well they function as delivery codecs (4:2:2 is rather indistinguishable from uncompressed in this regard), they can have a serious impact on compositing, and especially on keying.
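The numbers above can be checked directly against the Rec. 709 luma equation, Y' = 0.2126 R' + 0.7152 G' + 0.0722 B', and against the plane sizes each subsampling scheme actually stores:

```python
# Rec. 709 luma coefficients: green dominates, blue barely registers.
KR, KG, KB = 0.2126, 0.7152, 0.0722
assert abs(KR + KG + KB - 1.0) < 1e-9  # coefficients sum to 1

def chroma_plane(width, height, scheme):
    """Stored resolution of each chroma (Cb/Cr) plane per subsampling scheme."""
    factors = {"4:4:4": (1, 1), "4:2:2": (2, 1), "4:2:0": (2, 2)}
    fx, fy = factors[scheme]
    return width // fx, height // fy

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    print(scheme, chroma_plane(1920, 1080, scheme))
# 4:2:2 keeps full height at half width; 4:2:0 halves both,
# leaving 1/4 of the pixels for each color channel.
```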

Various chroma subsampling methods

Graphical example of how various chroma subsampling methods compress color information

Recording 4:4:4 RGB gives you uncompressed color information and is ideal for any keying work, but it is important to remember that you won’t get more resolution from the camera than its sensor can give you. With the typical RGGB pattern and a sensor resolution not significantly higher than the final delivery, you will still be limited by the debayering algorithm and the lowest number of pixels. It’s excellent if you can avoid introducing the compression and decompression artifacts that will inevitably appear with any sort of chroma subsampling, but it might turn out that there is little to be gained in pursuing a 4:4:4 workflow due to the lack of a proper signal path – as is the case, for example, with any HDMI interface on a DSLR, which outputs an 8-bit 4:2:0 YCbCr signal anyway, or with the many cameras lacking the proper dual-link SDI to output digital 4:4:4 RGB. An analog YCbCr (component) output signal is always at least 4:2:2 compressed.

A good alternative to 4:4:4 is raw output from the camera sensor – provided you remember everything I wrote above about actual sensor resolution. So far there are only two sensible options in this regard: RED R3D and ArriRaw.

There are also not many codecs and acquisition devices that allow you to record 4:4:4 RGB, and most still require fast, large storage arrays, so its application is rather limited to bigger productions with bigger budgets. This is slowly changing thanks to the falling prices of SSD drives, which easily satisfy the write-speed requirements, and to portable recorders like the Convergent Design Gemini, but storage space and archiving of such footage remain a problem, even in the days of LTO-5.

Artifacts introduced by chroma subsampling

Chroma subsampling introduces artifacts that are mostly invisible to the naked eye, but can make proper keying hard or even impossible
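A crude simulation shows what subsampling does to a hard edge between a green screen and a neutral object. This is the naive box-average version; as noted above, real encoders filter more cleverly, so the smearing is usually a bit milder, but the principle holds:

```python
import numpy as np

def subsample_420(chroma):
    """Naive 4:2:0: average each 2x2 chroma block, then replicate it back."""
    h, w = chroma.shape
    small = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return small.repeat(2, axis=0).repeat(2, axis=1)

# A hard vertical edge in a chroma plane: screen-green chroma on the
# left three columns, neutral gray (chroma = 0) on the right.
cb = np.zeros((4, 8))
cb[:, :3] = -0.3  # illustrative green-ish chroma value

rounded = subsample_420(cb)
print(rounded[0])
# -> [-0.3, -0.3, -0.15, -0.15, 0, 0, 0, 0]
# The block straddling the edge is now a mix of both sides - exactly the
# smeared, stair-stepped border that makes a clean key hard to pull.
```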

Readers with more technical aptitude can consult two more detailed descriptions of problems associated with chroma subsampling:

  1. Merging computing with studio video: Converting between R’G’B’ and 4:2:2
  2. Towards Better Chroma Subsampling

The higher sensitivity of the human eye and of cameras to green also means that you don’t need as much light for a green screen as you would for a blue one. The downside, however, is that a green screen produces much more invasive spill; because green is not a complementary color to red, the spill is much more noticeable and annoying than blue spill, and requires much more attention during removal. Plus, spending a whole day in a green screen environment can easily give you a headache as well.

Generally, it is understandable why green screen is the default choice for a digital pipeline. However, as with all rules of thumb, there is more than meets (or irritates) the eye.

When considering keying, remember that it is not enough to get the highest resolution in the channel where your screen is present (assuming it is correctly lit, does not spill into the other channels, and there is not much noise in the footage). Keying algorithms still rely on contrasting values and/or colors in the separate RGB color channels. Those channels – if chroma subsampled – are reconstructed from YCbCr in your compositing software.

Therefore, even assuming little or no spill from the green screen onto the actors, if you have a gray object (say, a shirt) whose value in the green channel is similar to the green screen’s, then that channel is made useless for keying by this very fact. You can’t get any contrast from it. You and your keying algorithm are left to find separation in the remaining channels – first red, then blue (where most of the noise likely resides, and which has a meager 7% contribution to luma) – which automatically reduces your resolution and introduces more noise. In the best case you get a less crisp, slightly unstable edge. In the worst, you have to resort to rotoscoping, defeating the purpose of shooting on a green screen in the first place.

Now consider the same object on a blue screen: when your blue screen has the same luminance as a neutral object, you throw the blue channel away and can most likely use the green and red channels for keying. A much better option, wouldn’t you say?

Difference of blue screen and green screen keying with improper exposition

If the green value of an object on a green screen is similar to the screen itself, keying will be a problem
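The gray-shirt scenario is easy to put in numbers. The RGB values below are purely illustrative, but they show how a keyer would see per-channel separation:

```python
# Illustrative 0-1 RGB values: a lit green screen vs. a mid-gray shirt
# that happens to match the screen's green-channel level.
screen = {"R": 0.15, "G": 0.60, "B": 0.20}
shirt  = {"R": 0.58, "G": 0.60, "B": 0.62}

# Per-channel separation available to a keying algorithm.
contrast = {ch: abs(screen[ch] - shirt[ch]) for ch in "RGB"}
best = max(contrast, key=contrast.get)
print(contrast, "-> key from the", best, "channel")
# The green channel offers zero separation here; the red channel has to
# carry the key, at lower effective resolution and with more noise.
```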

Of course, this caveat holds true only for items with a green-channel level close to that of the screen. If we want to extract shadows, it’s a completely different story – we need contrast in the shadows as well, and to this end a green screen will most likely be more appropriate. But if we don’t, then choosing the color of the screen entails more than simply looking at what color the uniforms or props are, or the basic rule of thumb that “green is better for digital”. You need to look at the exposure as well.

There are a few other ways to overcome this problem. One is to record 4:4:4 with a camera that can deliver a proper signal; then you are limited only by the amount of noise in each channel. Another is to shoot at twice the resolution of the final image (4K against 2K delivery), and then reduce the footage size before keying and compositing. This way the noise is seriously reduced and the resolution of every channel improved. In that case it is advisable to output the intermediates to a 4:4:4 codec (most VFX software will make excellent use of DPX files) to retain the information.
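The noise half of that claim can be checked with a quick simulation: downscaling 2x by averaging each 2x2 pixel block cuts uncorrelated noise roughly in half, since averaging n = 4 samples divides the standard deviation by sqrt(4).

```python
import numpy as np

rng = np.random.default_rng(0)
# A flat mid-gray frame with additive, uncorrelated sensor noise.
noisy = 0.5 + rng.normal(0.0, 0.04, size=(1080, 1920))

# Downscale 2x by binning: average each 2x2 block of pixels.
binned = noisy.reshape(540, 2, 960, 2).mean(axis=(1, 3))

print(round(noisy.std(), 3), round(binned.std(), 3))
# The binned frame's noise std is roughly half the original's.
```

Real downscalers use fancier filters than a box average, but the noise reduction is of the same order.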

Another sometimes useful – and cheap – solution might be to shoot vertically (always progressive, right?), thus gaining some resolution. Remember, however, that in the 4:2:2 and 4:1:1 compression schemes it is the horizontal (and, when shooting vertically, the vertical) resolution that gets squashed – so the gain might not be as high as you hoped, and the loss lands in the dimension that is more critical for perception. Make sure you’re not making your situation worse.

The key in keying is not only knowing what kind of algorithm or plugin to use. It is also knowing what kind of equipment, codec and surface should be used to obtain optimal results, and it all starts – as with most things – even before the set is built. Especially if you’re on a budget.

To sum up:

  • Consult your VFX supervisor, and make sure he’s involved throughout the production process.
  • Use field monitoring to see what the exposure in the green channel looks like, and whether you are getting proper separation.
  • Consider a different camera and/or codec for green/blue screen work.
  • Try to avoid chroma subsampling. If that’s not feasible, try to get the best possible signal from your camera.
  • Consider shooting VFX scenes at twice the final resolution to get the best resolution and the least noise.