Instagram introduced its Hyperlapse app on the iOS App Store not long after Microsoft showed results from its own Hyperlapse research project in August 2014. Online reactions suggest that a lot of people are confused about what Instagram and Microsoft are actually doing. Are these companies copying each other, or is hyperlapse a trend they both want to ride? Is hyperlapse just a fancy repackaging of time lapse, which many apps already do? Or is hyperlapse stabilization just another form of the video image stabilization that’s already been available in video editing applications for years?
The short answer is that time lapse, hyperlapse, and conventional video stabilization are distinct techniques with different challenges. The recent efforts by Instagram and Microsoft specifically address the instability of hyperlapse video. But they aren’t copying each other, because they use contrasting strategies.
Time lapse versus hyperlapse
First, let’s compare time lapse and hyperlapse. In time lapse photography, you record sequential frames at a much lower rate than a normal video or film frame rate; you might record one frame every 5 seconds, for example. After recording, you play back the frames at a normal frame rate such as 30 frames per second to produce the effect of compressed time. In the following time lapse, I compressed about 45 real-time minutes into less than one video minute:
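The arithmetic behind that compression is straightforward. Here's a quick Python sketch using the numbers above (one frame every 5 seconds, played back at 30 frames per second):

```python
# Time lapse compression: frames are captured slowly, played back fast.
capture_interval_s = 5   # one frame recorded every 5 seconds
playback_fps = 30        # playback rate in frames per second

# Each second of playback consumes 30 frames, which took
# 30 * 5 = 150 real-world seconds to record.
compression_factor = capture_interval_s * playback_fps
print(compression_factor)  # 150

# So 45 real-time minutes shrink to:
real_minutes = 45
video_seconds = real_minutes * 60 / compression_factor
print(video_seconds)  # 18.0 -- well under one video minute
```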
In most time lapse photography, the camera stays in one place. The only way the camera gets to rotate or move a short distance is if it’s on a motion-control rig. (In the time lapse above, the camera was locked down on a tripod; the movement was simulated in software by panning and zooming a 1920 x 1080 pixel HD video frame across a sequence of 5184 x 3456 pixel still frames from a digital SLR camera.)
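That kind of simulated camera move is conceptually just cropping a moving HD window out of each oversized still. Here's a minimal Python sketch of the idea; the frame dimensions come from the text, but the linear pan and the `pan_crop` helper are my own illustration, not the actual software used:

```python
STILL_W, STILL_H = 5184, 3456  # DSLR still frame (from the text)
OUT_W, OUT_H = 1920, 1080      # HD video frame

def pan_crop(still, t):
    """Crop an HD window from a still image; t in [0, 1] pans left to right."""
    x = int(t * (STILL_W - OUT_W))  # linearly interpolated pan position
    y = (STILL_H - OUT_H) // 2      # keep the window vertically centered
    return [row[x:x + OUT_W] for row in still[y:y + OUT_H]]

# Stand-in image: one bytearray per row of pixels.
still = [bytearray(STILL_W) for _ in range(STILL_H)]
frame = pan_crop(still, 0.5)
print(len(frame), len(frame[0]))  # 1080 1920
```

Rendering one cropped frame per still, with `t` advancing a little each frame, produces the smooth virtual pan.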
In a hyperlapse, the camera can physically change position over a long distance. For example, the camera might be mounted on a car recording a 200-mile road trip, it might be a helmet camera as you climb a mountain, or you might hold a camera as it records while you walk down the street. Hyperlapses are often recorded from a first-person point of view, especially now that wearable action cameras such as the GoPro have become affordable and popular. Many hyperlapse videos are recorded manually using frame-by-frame methods that are labor-intensive, as shown in the video below by DigitalRev:
Because a typical hyperlapse recording makes the camera cover a significant distance, it’s just about impossible to maintain consistent framing as you move the camera again and again. During playback, this results in much more shakiness and instability than you’d see in a traditional time lapse, making it difficult to watch. This inherent instability is the hyperlapse challenge that Instagram and Microsoft have tried to overcome.
Comparing how Instagram and Microsoft approach hyperlapse instability
One answer to the problem of hyperlapse instability comes from Microsoft, which published the results of a research project where they found a better way to analyze first-person hyperlapse footage and remove the instability. To achieve this, their solution tries to figure out the original 3D scene and motion path from the 2D video recorded by the camera, and then it uses that synthesized 3D data to reconstruct each frame so that you see much smoother playback. Here’s the demonstration video from Microsoft Research:
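Microsoft's actual pipeline is far more involved (3D scene reconstruction, path optimization, and image-based rendering), but the core idea of replacing a jittery recovered camera path with a smoother one can be sketched in a few lines. This toy moving-average smoother over a 1D position track is purely illustrative and not from Microsoft's paper:

```python
def smooth_path(positions, window=5):
    """Replace each camera position with the average of its neighbors."""
    half = window // 2
    smoothed = []
    for i in range(len(positions)):
        lo = max(0, i - half)
        hi = min(len(positions), i + half + 1)
        smoothed.append(sum(positions[lo:hi]) / (hi - lo))
    return smoothed

# A jittery recovered camera track (x position per frame, made-up numbers):
recovered = [0.0, 1.2, 0.8, 2.1, 1.9, 3.2, 2.8, 4.1]
smoothed = smooth_path(recovered)
print(smoothed)  # noticeably less jitter between neighboring frames
```

The hard part that this sketch skips is recovering the path in the first place, and then re-rendering each frame as if the camera had actually traveled the smoothed path.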
The Instagram solution takes advantage of both iPhone hardware and iOS APIs to acquire additional data while recording video. The Instagram Hyperlapse app reads 3D orientation data from the iPhone’s gyroscope as the camera records, so it can immediately apply accurate corrections to each frame as it renders the final video. (Instagram says Android APIs currently don’t provide the needed access to an Android phone’s gyroscope and camera.) This is a short demonstration video of the Hyperlapse app by Instagram:
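Conceptually, having a gyroscope sample for each frame lets the app apply an inverse rotation directly instead of estimating motion from pixels. The sketch below uses a single roll angle per frame; real stabilization works with full 3D rotations and rolling-shutter timing, so the names and numbers here are illustrative assumptions:

```python
def stabilize_angle(recorded_roll_deg, target_roll_deg=0.0):
    """Return the correction for one frame: the inverse of the camera
    roll that the gyroscope recorded while that frame was captured."""
    return target_roll_deg - recorded_roll_deg

# Hypothetical gyro roll readings per frame, in degrees:
gyro_rolls = [0.0, 2.5, -1.8, 3.1]
corrections = [stabilize_angle(r) for r in gyro_rolls]
print(corrections)  # [0.0, -2.5, 1.8, -3.1]
```

Because the correction comes straight from sensor data, there's no per-frame image analysis to do, which is why this approach is cheap enough to run live on a phone.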
Both approaches are useful in different ways. The Instagram approach is potentially more accurate because it records 3D orientation data directly from the camera at the moment each frame is recorded. Having actual orientation data can greatly reduce the amount of processing needed; there’s no need to guess the original 3D motion path because the app already recorded that data along with the video. The lower processing load also means the solution is much easier to run on a smartphone, where both processing power and battery power are limited. The Microsoft approach is better when the original video was recorded by a camera that couldn’t provide the necessary gyroscope and camera data, but because it doesn’t have original motion data, it needs much more processing power to figure out how the camera moved during the shoot.
The Instagram Hyperlapse app currently has some additional advantages: Instagram paid a lot of attention to user experience, so using the Hyperlapse app is easier, simpler, and faster than creating and stabilizing hyperlapse videos the manual way. And it’s available to millions of people now, while the Microsoft solution is still in the lab and its final ease of use is unknown.
Both Instagram and Microsoft are trying to solve a problem that’s increasingly common now that there’s so much more footage from action cameras like the GoPro, but their approaches are so different that they are clearly not copying each other.
[Update: Microsoft published their own response to questions about the differences between the Instagram and Hyperlapse stabilization techniques. In it they point out another advantage of the Microsoft technique, which is the ability to reconstruct missing pixels by sampling them from adjacent frames. This greatly helps the stabilization results from video taken when your hand or head jumps around too much from frame to frame.]
Hyperlapse stabilization versus software video stabilization
Some have asked: Are these hyperlapse solutions the same as the image stabilization you find in video editing software? Mostly not. Video image stabilization in software is usually designed to address high-frequency camera movement during real-time recording, like when a clip looks shaky because you held the camera by hand.
Advanced video stabilizing software can go beyond basic software or digital stabilization. Some, such as Adobe Warp Stabilizer VFX, try to work out the camera’s 3D motion path instead of analyzing just 2D shifts in position. Like Warp Stabilizer, the Microsoft hyperlapse solution does a 3D analysis of 2D footage, but Microsoft does additional processing to adapt and extend the 3D analysis for time scales as long as those in a hyperlapse.
The Microsoft approach can also be considered a form of digital image stabilization, in that each frame is processed after it’s recorded. In contrast, you can think of the Instagram solution as a variation on optical image stabilization, where a camera or lens includes stabilizing hardware such as a gyroscope so that an image is already stabilized before it’s recorded.
Each solution has a purpose
This overview should make it clear that these different approaches to stabilization aren’t redundant. They all exist because each of them solves a different problem.
Optical, digital, and software-based image stabilization are options for stabilizing footage that’s both recorded and played back in real time. The Instagram and Microsoft methods are ways to stabilize long-duration footage that’s recorded for a hyperlapse playback speed.
Optical stabilization and the Instagram hyperlapse approach use recording hardware that helps produce cleaner source footage. By stabilizing the image that’s originally recorded, there’s less need for additional stabilization processing.
Digital image stabilization, image stabilization in video editing software, and the Microsoft hyperlapse approach are for post-processing footage that was recorded without physical orientation data from the hardware. They require more processing power, but they work with recordings from any camera.
[Update, May 2015: Microsoft has now made its Hyperlapse technology available in desktop and mobile apps. For details, see the Microsoft Hyperlapse web page.]
It’s been a busy week over at Adobe, with the release of Adobe Creative Suite 5.5 and a free update to Adobe Photoshop CS5 12.0.4. There are lots of places on the web where you can read about specific new features, so here I’ve got a more customer-oriented take on these updates.
Adobe Creative Suite 5.5
Adobe Creative Suite 5.5 is a paid upgrade, and yet it isn’t CS6, so you’ll naturally ask whether you need it. You’ll probably be happiest with the CS5.5 feature set if you want to more easily integrate the latest technologies and formats into your workflow, such as HD video from the newest digital cinema and DSLR cameras; or if you’ve wanted more efficient ways to create, preview, and publish ebooks and other content for tablets and smartphones using Adobe InDesign, Adobe Flash, or Dreamweaver. It’s primarily because of these fast-moving new technologies and delivery media that Adobe felt a .5 release was warranted. If your day-to-day work is not so cutting-edge, you may have less of a need to upgrade.
If you edit video, the upgrade may be well worth it. Adobe CS5.5 Production Premium gets quite a boost, with enhancements like expanded GPU support and dual-system sound in Adobe Premiere Pro, fast 64-bit Adobe Media Encoder with an efficient new UI and customizable presets, a first-ever Mac version of Adobe Audition pro audio software, and the advanced Warp Stabilizer in After Effects for steadying shaky handheld footage. (If it sounds like I’m more familiar with Production Premium here, it’s because I was involved in producing some of the launch content about its new features.)
If you haven’t upgraded to CS5 yet, you do get a pretty long list of new features when you put CS5 and CS5.5 together. You can see handy lists of CS5.5 new features versus CS5, CS4, and CS3 on the Adobe Creative Suite web page (pick a suite, then click Features).
Adobe also announced a move to 24-month major upgrade cycles, with a minor .5 upgrade halfway between. While cynics will say that more frequent upgrades are a way for Adobe to charge customers more often, the increased frequency can be a good thing overall. Shorter cycles actually make it easier to skip upgrades, since you know the next one is just another 12 months down the road; yet if new client requirements or business needs demand new capabilities, you’ll likely get them sooner than with a longer upgrade cycle. It’s like when a train starts running more often: you won’t ride every train, but when you do need one, you won’t have to wait as long.
The version of Photoshop that ships with Creative Suite 5.5 is numbered 12.1, which is the same as 12.0.4 except that it also works with the new subscription licensing that Adobe announced along with Creative Suite 5.5. (Note that there is no Photoshop CS5.5.)
To get the update, start Photoshop CS5 and choose Help > Updates. If you prefer to download the standalone installer or want to read the release notes, go to:
Because Adobe tends to provide Camera Raw plug-in updates only for the current major version of Photoshop, some users have expressed concern about whether a paid upgrade is needed to keep getting free Camera Raw updates. Since the current major version of Photoshop remains Photoshop CS5, your free Camera Raw updates will continue, presumably until CS6.