Rethinking the Pixel: It’s All Relative Now — article

How big is a pixel? It’s widely thought that a pixel is the smallest dot that screen hardware can physically display: One pixel is one pixel. That was safe to assume for over a quarter century because the pixel density of most of our screens was stuck between 72 and 120 pixels per inch (ppi) during that era, even while everything else about our computers got exponentially faster and bigger. But screens would finally make their move, and for designers that would change how a pixel is defined.

Want the whole story? Click the link below to read my article at
Rethinking the Pixel: It’s All Relative Now


Hyperlapse, time lapse, and video stabilization: Different problems, different solutions

Instagram Hyperlapse app

Instagram introduced its Hyperlapse app on the iOS App Store not long after Microsoft showed results from its own Hyperlapse research project in August 2014. Online reactions suggest that a lot of people are confused about what Instagram and Microsoft are actually doing. Are these companies simply copying each other to get on the hyperlapse bandwagon? Is hyperlapse just a fancy repackaging of time lapse, which many apps already do? Or is hyperlapse stabilization just another form of the video image stabilization that’s already been available in video editing applications for years?

The short answer is that time lapse, hyperlapse, and conventional video stabilization are distinct techniques with different challenges. The recent efforts by Instagram and Microsoft specifically address the instability of hyperlapse video. But they aren’t copying each other, because they use contrasting strategies.

Time lapse versus hyperlapse

First, let’s compare time lapse and hyperlapse. In time lapse photography you record sequential frames at a much lower rate than a normal video or film frame rate. For example, you might record one frame every 10 seconds. After recording, you play back the frames at a normal frame rate such as 30 frames per second to produce the effect of compressed time. In the following time lapse, I compressed about 20 minutes into 20 seconds:
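The time-compression arithmetic is simple enough to sketch in a few lines (a minimal illustration; the function names are my own):

```python
# Time-lapse arithmetic: frames captured at one interval, played back
# at a normal frame rate.

def timelapse_speedup(capture_interval_s: float, playback_fps: float) -> float:
    """How many times faster playback runs than real time."""
    return capture_interval_s * playback_fps

def playback_duration_s(real_duration_s: float, capture_interval_s: float,
                        playback_fps: float) -> float:
    """Seconds of video produced from a span of real-time recording."""
    frames = real_duration_s / capture_interval_s
    return frames / playback_fps

# One frame every 10 seconds, played back at 30 fps -> 300x speedup.
print(timelapse_speedup(10, 30))  # 300.0

# To squeeze 20 minutes into 20 seconds (a 60x compression),
# you'd capture one frame every 2 seconds for 30 fps playback.
print(playback_duration_s(20 * 60, 2, 30))  # 20.0
```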

In most time lapse photography, the camera stays in one place. The only way the camera can rotate or move a short distance is if it’s on a motion-control rig. (In the time lapse above, the camera was locked down on a tripod; the movement was simulated in software by panning a 1920 x 1080 pixel video frame across a sequence of 5184 x 3456 pixel still frames.)
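A simulated pan like the one described above boils down to sliding a video-sized crop window across each oversized still. Here’s a minimal sketch (my own simplified version, interpolating only the crop origin):

```python
# Simulated pan: a 1920x1080 crop window slides across 5184x3456 stills
# by linearly interpolating its top-left corner between two positions.

def pan_offsets(start, end, num_frames):
    """Yield (x, y) crop origins interpolated from start to end."""
    (x0, y0), (x1, y1) = start, end
    for i in range(num_frames):
        t = i / (num_frames - 1) if num_frames > 1 else 0.0
        yield (round(x0 + t * (x1 - x0)), round(y0 + t * (y1 - y0)))

# Pan a 1920-wide frame from the left edge to the right edge
# of a 5184-pixel-wide still over 600 output frames.
offsets = list(pan_offsets((0, 1000), (5184 - 1920, 1000), 600))
print(offsets[0], offsets[-1])  # (0, 1000) (3264, 1000)
```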

In a hyperlapse, the camera can change position over a long distance. For example, the camera might be mounted on a car recording a 200-mile road trip, or it might be a helmet camera as you climb a mountain, or you might hold a camera as it records while you walk down the street. Hyperlapses are often recorded with a first-person point of view, especially now that wearable action cameras like the GoPro have become affordable and popular. Many hyperlapse videos are recorded manually using frame-by-frame methods that are labor-intensive, as shown in the video below by DigitalRev:

Because a typical hyperlapse recording makes the camera cover a significant distance, it’s just about impossible to maintain consistent framing as you move the camera again and again. During playback, this results in much more shakiness and instability than you’d see in a traditional time lapse, making it difficult to watch. This inherent instability is the hyperlapse challenge that Instagram and Microsoft have tried to overcome.

Comparing how Instagram and Microsoft approach hyperlapse instability

One answer to the problem of hyperlapse instability comes from Microsoft, which published the results of a research project where they found a better way to analyze first-person hyperlapse footage and remove the instability. To achieve this, their solution tries to figure out the original 3D scene and motion path that was recorded by the camera in 2D, and then it uses that synthesized 3D data to reconstruct each frame so that you see much smoother playback. Here’s the demonstration video from Microsoft Research:

The Instagram solution takes advantage of both iPhone hardware and iOS APIs to acquire additional data while recording video. The Instagram Hyperlapse app takes 3D positioning data from the iPhone gyroscope and camera so that it can immediately apply accurate alterations to each frame as it renders the final video. (Instagram says Android APIs currently don’t provide the needed access to an Android phone’s gyroscope and camera.) This is a short demonstration video of the Hyperlapse app by Instagram:
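The core idea behind sensor-based stabilization can be sketched without any of Instagram’s actual code: record the camera’s orientation for each frame, compute a smoothed version of that path, and counter-rotate each frame by the difference. This is a hedged, much-simplified illustration (yaw only, in degrees; the function names and the moving-average smoother are my own choices, not Instagram’s):

```python
import numpy as np

# Gyroscope-based stabilization, heavily simplified: smooth the recorded
# orientation path, then rotate each frame by (smoothed - recorded).

def smooth_path(yaw_per_frame, window=15):
    """Moving-average smoothing of the recorded per-frame yaw (degrees)."""
    yaw = np.asarray(yaw_per_frame, dtype=float)
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(yaw, pad, mode="edge")  # repeat edge values to keep length
    return np.convolve(padded, kernel, mode="valid")[:len(yaw)]

def corrections(yaw_per_frame, window=15):
    """Per-frame counter-rotation in degrees: smoothed path minus recorded path."""
    yaw = np.asarray(yaw_per_frame, dtype=float)
    return smooth_path(yaw, window) - yaw
```

Because the orientation comes straight from the gyroscope, there’s no need to estimate camera motion from the pixels, which is what keeps the processing load low enough for a phone.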

Both approaches are useful in different ways. The Instagram approach is potentially more accurate because it records 3D orientation data directly from the camera at the time each frame is recorded. Having actual orientation data can greatly reduce the amount of processing needed, because there’s no need to guess the original 3D motion path. The lower processing load also means the technique is much easier to run on a smartphone, where both processing power and battery power are limited. The Microsoft approach is better when the original video was recorded by a camera that couldn’t provide the necessary gyroscope and camera data, but it needs much more processing power.

The Instagram Hyperlapse app currently has some additional advantages: Instagram paid a lot of attention to user experience, so using the Hyperlapse app is easier, simpler, and faster than creating and stabilizing hyperlapse videos the manual way. And it’s available to millions of people now, while the Microsoft solution is still in the lab and its final ease of use is unknown.

Both Instagram and Microsoft are trying to solve a problem that’s increasingly common now that there’s so much more footage from action cameras like the GoPro, but their approaches are so different that they are clearly not copying each other.

Hyperlapse stabilization versus software video stabilization

Some have asked whether these hyperlapse solutions are the same as the image stabilization that’s already common in video editing software. Mostly not. Video image stabilization in software is usually designed to remove high-frequency camera movement from footage recorded in real time, like when a clip looks shaky because the camera was handheld.
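Conventional 2D software stabilization can be sketched in the same spirit as the gyro example, except that the camera motion has to be estimated from the footage itself. This is a rough illustration under my own assumptions (the per-frame translations would come from feature tracking, which isn’t shown):

```python
import numpy as np

# Conventional 2D stabilization, simplified: accumulate estimated
# frame-to-frame translations into a camera path, smooth that path,
# and shift each frame onto the smooth path.

def stabilize_2d(frame_shifts, window=9):
    """frame_shifts: (N, 2) array of estimated per-frame (dx, dy) translations.
    Returns an (N, 2) array of correction offsets to apply when rendering."""
    shifts = np.asarray(frame_shifts, dtype=float)
    path = np.cumsum(shifts, axis=0)  # raw (wobbly) camera path
    kernel = np.ones(window) / window
    pad = window // 2
    smoothed = np.empty_like(path)
    for axis in range(2):
        padded = np.pad(path[:, axis], pad, mode="edge")
        smoothed[:, axis] = np.convolve(padded, kernel, mode="valid")
    return smoothed - path  # offset that moves each frame onto the smooth path
```

The moving-average window sets which frequencies count as “shake”: a small window removes only fast jitter, which is exactly the real-time, handheld-wobble problem this kind of stabilization targets.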

Advanced video stabilizing software can go beyond basic software or digital stabilization. Some, such as Adobe Warp Stabilizer VFX, try to work out the camera’s 3D motion path instead of analyzing just 2D shifts in position. Like Warp Stabilizer, the Microsoft hyperlapse solution does a 3D analysis of 2D footage, but Microsoft does additional processing to adapt and extend the 3D analysis for time scales as long as those in a hyperlapse.

The Microsoft approach can also be considered a form of digital image stabilization in that each frame is processed after it’s recorded. In contrast, you can think of the Instagram solution as a variation on optical image stabilization, where a camera or lens includes stabilizing hardware such as a gyroscope, so that an image is already stabilized before it’s recorded.

Each solution has a purpose

This overview should make it clear that these different approaches to stabilization aren’t redundant. They all exist because each of them solves a different problem.

Optical, digital, and software-based image stabilization are options for stabilizing footage that’s both recorded and played back in real time. The Instagram and Microsoft methods are ways to stabilize long-duration footage that’s recorded for a hyperlapse playback speed.

Optical stabilization and the Instagram hyperlapse approach use recording hardware that helps produce cleaner source footage. By stabilizing the image that’s originally recorded, there’s less need for additional stabilization processing.

Digital image stabilization, image stabilization in video editing software, and the Microsoft hyperlapse approach are for post-processing footage that was recorded without physical orientation data from the hardware. They require more processing power, but they work with recordings from any camera.

Editing Highlights and Shadows in Adobe Lightroom and Camera Raw — article

Adobe Photoshop Lightroom and Adobe Camera Raw have two sets of controls for making tone and contrast adjustments: the Basic panel Tone sliders and the Tone Curve. Because the slider names in these two sets of tools are almost the same, some believe that both sets of sliders do the same thing, while others believe the newer Basic Tone sliders are better and there is no longer a need for the Tone Curve. But neither statement is true: a closer look reveals that each set of controls affects your images in subtly different but important ways.

Want the whole story? Click the link below to read my article at
Editing Highlights and Shadows in Adobe Lightroom and Camera Raw


How to Blur Backgrounds with a Compact Camera — article

Blurring the background of a photo is often used to help draw attention to the subject. It’s not hard to do with an SLR camera because of the size of the digital sensor or film frame, but what if all you have is the little digital camera in your pocket? You can still get it done using traditional techniques for depth-of-field control, but with a small camera you’ll have to work a little harder at it. The good news? Some newer digital compact cameras give you more of the depth-of-field control that used to be available only with larger cameras.
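The sensor-size effect mentioned above can be made concrete with the standard thin-lens depth-of-field approximation. The sketch below is my own illustration; the lens and circle-of-confusion values are typical assumptions for each camera class, not measurements:

```python
# Why sensor size matters for background blur: at the same framing and
# f-number, a small sensor yields a much deeper in-focus zone.

def depth_of_field(focal_mm, f_number, subject_dist_mm, coc_mm):
    """Return (near, far) limits of acceptable focus in mm (far may be inf)."""
    # Hyperfocal distance from the standard approximation.
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = h * subject_dist_mm / (h + (subject_dist_mm - focal_mm))
    if subject_dist_mm >= h:
        return near, float("inf")
    far = h * subject_dist_mm / (h - (subject_dist_mm - focal_mm))
    return near, far

# Similar framing of a subject 2 m away, both at f/2.8:
# full-frame SLR (50mm lens, CoC ~0.030mm) vs. a typical small-sensor
# compact (9mm lens, CoC ~0.005mm) -- assumed, representative values.
slr = depth_of_field(50, 2.8, 2000, 0.030)
compact = depth_of_field(9, 2.8, 2000, 0.005)
print(slr, compact)  # the compact's in-focus zone is several times deeper
```

With these numbers the SLR keeps roughly a quarter meter in focus while the compact keeps well over a meter, which is why the background stays sharper on the small camera and why you have to work harder to blur it.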

Want the whole story? Click the link below to read my article at
How to Blur Backgrounds with a Compact Camera


How to install earlier versions of Creative Cloud applications

You can now install older versions of Adobe software using the Creative Cloud desktop application. For some reason this feature is not easy to find in the Creative Cloud app, so I’ll lead you through the steps.

Start by clicking the icon for the Creative Cloud desktop application to open it. In OS X, that icon is in the menu bar; in Windows it’s a tile on the Start screen or an icon in the Taskbar.

In the Creative Cloud application, click Apps and scroll down to the Find New Apps section. Click the blue Filters & Versions text at the right side of the Find New Apps section heading and choose Previous Version. This adds a drop-down arrow to the Install buttons in the Find New Apps section.

Choosing Previous Version from the Filters & Versions menu in the Creative Cloud desktop application

Find the application you want in the Find New Apps section (Photoshop, in this example), click its Install button, and choose the earlier version you want. The Creative Cloud app will install that version.

Choosing a previous version from the Install button for Photoshop in the Creative Cloud desktop application

Not all versions are available. You can install only the versions that have been designed or adapted to be installed by the Adobe Creative Cloud desktop application. In general that means CS6 or later.

For example, the earliest version of Photoshop you can install is Photoshop CS6, because older versions (such as Photoshop CS5, CS4, etc.) were not adapted to work with Creative Cloud. If you want to install Adobe software that isn’t listed in the Creative Cloud app, you have to use that application’s own installer and have a valid serial number to complete the installation (in other words, you have to install it the traditional way).