
How to Compress Video: CreativePro.com article

If you need to compress video for online streaming from websites such as YouTube or Vimeo, the sheer number of export settings can be bewildering. Where do you start?

Luckily, you only need to pay attention to a few key choices. I talk about those in an article I wrote for CreativePro.com, which you can read at the following link:

How to Compress Video

The article is written from the point of view of exporting from Adobe Premiere Pro or Adobe Media Encoder, but the general approach works in other software.

Learn Adobe Premiere Pro CC for Video Communication: Now available!


If you’re getting ready to take the Adobe Certified Associate (ACA) Exam for Adobe Premiere Pro CC, I recently helped write a study guide for it. Learn Adobe Premiere Pro CC for Video Communication: Adobe Certified Associate Exam Preparation (yeah, it’s a long title) isn’t just a book. Buying the printed or ebook version also gives you access to the Web Edition, with embedded videos by experienced Premiere Pro instructor Joe Dockery. I wrote the text that accompanies Joe’s videos.


New OS X color profiles strengthen Mac digital cinema support

For creative professionals, one of the most interesting things about the Late 2015 release of the 4K and 5K Retina iMacs is that they use the first wide gamut displays Apple has ever made. And the color gamut is not the Adobe RGB gamut usually seen on wide gamut monitors, but P3, a gamut used in digital cinema.

Mac websites have not gone into much detail about this display except to more or less repeat what Apple says in its marketing materials, so I took a closer look in my earlier article, A look at the P3 color gamut of the iMac display (Retina, Late 2015). As I was examining the wide gamut P3 display, I realized that OS X installs several color profiles I hadn’t seen before. What led me to write this article is that almost no one seems to have mentioned these new profiles…and what they have in common.


Discovering a time lapse video in long exposure stills

One of the most important tools for creativity is keeping an open mind. While reviewing the images from a long exposure still photo shoot, I discovered an even more fun project hiding among them.


Hyperlapse, time lapse, and video stabilization: Different problems, different solutions

Instagram Hyperlapse app

Instagram introduced its Hyperlapse app on the iOS App Store not long after Microsoft showed results from its own Hyperlapse research project in August 2014. Online reactions suggest that a lot of people are confused about what Instagram and Microsoft are actually doing. Are these companies copying each other, or is hyperlapse a trend they both want to ride? Is hyperlapse just a fancy repackaging of time lapse, which many apps already do? Or is hyperlapse stabilization just another form of the video image stabilization that’s already been available in video editing applications for years?

The short answer is that time lapse, hyperlapse, and conventional video stabilization are distinct techniques with different challenges. The recent efforts by Instagram and Microsoft specifically address the instability of hyperlapse video. But they aren’t copying each other, because they use contrasting strategies.

Time lapse versus hyperlapse

First, let’s compare time lapse and hyperlapse. In time lapse photography, you record sequential frames at a much lower rate than a normal video or film frame rate; for example, you might record one frame every 5 seconds. After recording, you play back the frames at a normal frame rate, such as 30 frames per second, to produce the effect of compressed time. In the following time lapse, I compressed about 45 real-time minutes into less than one video minute:
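The arithmetic behind a speed-up like that is simple. Here’s a minimal sketch in Python, using the example numbers above (one frame every 5 seconds, played back at 30 frames per second):

```python
capture_interval_s = 5   # seconds between captured frames
playback_fps = 30        # normal playback frame rate

# Each playback second consumes playback_fps frames, and each frame
# covers capture_interval_s of real time, so playback runs
# capture_interval_s * playback_fps times faster than real time.
speedup = capture_interval_s * playback_fps   # 150x

real_minutes = 45
video_seconds = real_minutes * 60 / speedup
print(f"{real_minutes} real minutes -> {video_seconds:.0f} video seconds "
      f"at {speedup}x speed")
# 45 real minutes -> 18 video seconds at 150x speed
```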

In most time lapse photography, the camera stays in one place. The only way the camera can rotate or move a short distance is if it’s on a motion-control rig. (In the time lapse above, the camera was locked down on a tripod; the movement was simulated in software by panning and zooming a 1920 x 1080 pixel HD video frame across a sequence of 5184 x 3456 pixel still frames from a digital SLR camera.)
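A simulated camera move like that boils down to interpolating a 16:9 crop window across the still sequence and scaling each crop to the output frame. Here’s a minimal sketch using Pillow; the start and end framings are made-up example values, not the settings from my shoot:

```python
from PIL import Image

OUT_SIZE = (1920, 1080)  # HD output frame

def lerp(a, b, t):
    return a + (b - a) * t

def crop_box(start, end, t):
    """Interpolate a 16:9 crop window; start/end are (left, top, width)."""
    left = lerp(start[0], end[0], t)
    top = lerp(start[1], end[1], t)
    width = lerp(start[2], end[2], t)
    height = width * 9 / 16
    return (int(left), int(top), int(left + width), int(top + height))

def render(frame_paths, start, end):
    """Crop each 5184 x 3456 still and scale it down to an HD video frame."""
    for i, path in enumerate(frame_paths):
        t = i / max(len(frame_paths) - 1, 1)
        frame = Image.open(path).crop(crop_box(start, end, t)).resize(OUT_SIZE)
        frame.save(f"out_{i:05d}.jpg")

# Example: a slow push-in from the full frame to a tighter framing.
# render(frame_paths, start=(0, 0, 5184), end=(1000, 600, 3200))
```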

In a hyperlapse, the camera can physically change position over a long distance. For example, the camera might be mounted on a car recording a 200-mile road trip, it might be a helmet camera recording as you climb a mountain, or you might hold a camera as it records while you walk down the street. Hyperlapses are often recorded with a first-person point of view, especially as wearable action cameras like the GoPro have become affordable and popular. Many hyperlapse videos are recorded manually using frame-by-frame methods that are labor-intensive, as shown in the video below by DigitalRev:

Because a typical hyperlapse recording makes the camera cover a significant distance, it’s just about impossible to maintain consistent framing as you move the camera again and again. During playback, this results in much more shakiness and instability than you’d see in a traditional time lapse, making it difficult to watch. This inherent instability is the hyperlapse challenge that Instagram and Microsoft have tried to overcome.

Comparing how Instagram and Microsoft approach hyperlapse instability

One answer to the problem of hyperlapse instability comes from Microsoft, whose research project found a better way to analyze first-person hyperlapse footage and remove the instability. Their solution tries to figure out the original 3D scene and camera motion path from the 2D video recorded by the camera, and then it uses that synthesized 3D data to reconstruct each frame so that you see much smoother playback. Here’s the demonstration video from Microsoft Research:

The Instagram solution takes advantage of both iPhone hardware and iOS APIs to acquire additional data while recording video. The Instagram Hyperlapse app takes 3D orientation data from the iPhone gyroscope and camera so that it can immediately apply accurate corrections to each frame as it renders the final video. (Instagram says Android APIs currently don’t provide the needed access to an Android phone’s gyroscope and camera.) Here’s a short demonstration video of the Hyperlapse app by Instagram:
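To make the idea concrete, here’s a heavily simplified sketch of orientation-based stabilization, assuming the camera orientation has already been reduced to a single roll angle per frame. Real devices report full 3D rotations; this illustrates only the general principle, not Instagram’s actual code:

```python
from PIL import Image

def smooth(angles, radius=5):
    """Moving average of the recorded roll angles: the smooth target path."""
    smoothed = []
    for i in range(len(angles)):
        window = angles[max(0, i - radius): i + radius + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed

def stabilize(frame_paths, roll_angles):
    """Counter-rotate each frame toward the smoothed orientation path."""
    target = smooth(roll_angles)
    for i, path in enumerate(frame_paths):
        correction = target[i] - roll_angles[i]  # degrees to rotate back
        Image.open(path).rotate(correction).save(f"stab_{i:05d}.jpg")
```

Because the orientation came from the gyroscope, no motion analysis of the pixels is needed; that’s why this kind of correction is cheap enough to run on a phone while recording.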

Both approaches are useful in different ways. The Instagram approach is potentially more accurate because it records 3D orientation data directly from the camera at the time each frame is recorded. Having actual orientation data can greatly reduce the amount of processing needed; there’s no need to guess the original 3D motion path because the app already recorded that data along with the video. The lower processing load also means it’s much easier to run on a smartphone, where both processing power and battery power are limited. The Microsoft approach is better when the original video was recorded by a camera that couldn’t provide the necessary gyroscope and camera data; but because it doesn’t have original motion data, it needs much more processing power to figure out how the camera moved during the shoot.

The Instagram Hyperlapse app currently has some additional advantages: Instagram paid a lot of attention to user experience, so using the Hyperlapse app is easier, simpler, and faster than creating and stabilizing hyperlapse videos the manual way. And it’s available to millions of people now, while the Microsoft solution is still in the labs and its final ease of use is unknown.

Both Instagram and Microsoft are trying to solve a problem that’s increasingly common now that there’s so much more footage from action cameras like the GoPro, but their approaches are so different that they are clearly not copying each other.

[Update: Microsoft published its own response to questions about the differences between the Instagram and Microsoft stabilization techniques. In it, Microsoft points out another advantage of its technique: the ability to reconstruct missing pixels by sampling them from adjacent frames. This greatly improves the stabilization of video where your hand or head jumps around too much from frame to frame.]

Hyperlapse stabilization versus software video stabilization

Some have asked: Are these hyperlapse solutions the same as the image stabilization you find in video editing software? Mostly not. Video image stabilization in software is usually designed to address high frequency camera movement during real time recording, such as when a clip looks shaky because the camera was handheld.
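At its core, that kind of software stabilization treats the camera path as a signal: keep the low-frequency component (deliberate movement) and subtract the high-frequency component (shake). Here’s a minimal sketch that assumes the per-frame 2D shifts have already been measured, for example with feature tracking:

```python
import numpy as np

def stabilizing_offsets(frame_shifts, radius=15):
    """frame_shifts: (N, 2) array of measured frame-to-frame (dx, dy) shifts.

    Returns the (dx, dy) offset to apply to each frame so the camera
    follows a smoothed path instead of the shaky measured one.
    """
    trajectory = np.cumsum(frame_shifts, axis=0)  # raw camera path
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    smoothed = np.column_stack(
        [np.convolve(trajectory[:, i], kernel, mode="same") for i in range(2)]
    )
    return smoothed - trajectory  # shift each frame by this amount
```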

Advanced video stabilization software can go beyond basic digital stabilization. Some tools, such as the Adobe Warp Stabilizer VFX effect, try to work out the camera’s 3D motion path instead of analyzing just 2D shifts in position. Like Warp Stabilizer, the Microsoft hyperlapse solution does a 3D analysis of 2D footage, but Microsoft does additional processing to adapt and extend the 3D analysis to time scales as long as those in a hyperlapse.

The Microsoft approach can also be considered a form of digital image stabilization, in that each frame is processed after it’s recorded. In contrast, you can think of the Instagram solution as a variation on optical image stabilization, where a camera or lens includes stabilizing hardware such as a gyroscope, so that an image is already stabilized before it’s recorded.

Each solution has a purpose

This overview should make it clear that these different approaches to stabilization aren’t redundant. They all exist because each of them solves a different problem.

Optical, digital, and software-based image stabilization are options for stabilizing footage that’s both recorded and played back in real time. The Instagram and Microsoft methods are ways to stabilize long-duration footage that’s recorded for playback at hyperlapse speed.

Optical stabilization and the Instagram hyperlapse approach use recording hardware that helps produce cleaner source footage. Because the image is stabilized as it’s recorded, there’s less need for additional stabilization processing.

Digital image stabilization, image stabilization in video editing software, and the Microsoft hyperlapse approach are for post-processing footage that was recorded without physical orientation data from the hardware. They require more processing power, but they work with recordings from any camera.

[Update, May 2015: Microsoft has now made its Hyperlapse technology available in desktop and mobile apps. For details, see the Microsoft Hyperlapse web page.]