Walk into a graphic design or photography studio and you’ll probably see a graphics tablet on the desk. With an app called Astropad you can use an iPad or iPhone as a graphics tablet for a Mac, painting and drawing with your finger or stylus. Astropad even supports pressure sensitivity with a compatible stylus or with 3D Touch on the iPhone 6s and 6s Plus. But how close does it get to using a Wacom graphics tablet?
To see what I think, read Review: Astropad, published on CreativePro.com.
Instagram introduced its Hyperlapse app on the iOS App Store not long after Microsoft showed results from its own Hyperlapse research project in August 2014. Online reactions suggest that a lot of people are confused about what Instagram and Microsoft are actually doing. Are these companies copying each other, or is hyperlapse a trend they both want to ride? Is hyperlapse just a fancy repackaging of time lapse, which many apps already do? Or is hyperlapse stabilization just another form of the video image stabilization that’s already been available in video editing applications for years?
The short answer is that time lapse, hyperlapse, and conventional video stabilization are distinct techniques with different challenges. The recent efforts by Instagram and Microsoft specifically address the instability of hyperlapse video. But they aren’t copying each other, because they use contrasting strategies.
Time lapse versus hyperlapse
First, let’s compare time lapse and hyperlapse. In time lapse photography, you record sequential frames at a much lower rate than a normal video or film frame rate; for example, you might record one frame every 5 seconds. After recording, you play back the frames at a normal frame rate, such as 30 frames per second, to produce the effect of compressed time. In the following time lapse, I compressed about 45 real-time minutes into less than one video minute:
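(If you’re curious about the arithmetic behind that compression, here’s a quick sketch in Python using the numbers from this example; nothing about it is specific to any particular camera or app.)

    # How much does a time lapse compress time?
    recorded_minutes = 45      # real time captured
    seconds_per_frame = 5      # one frame recorded every 5 seconds
    playback_fps = 30          # normal playback frame rate

    frames = recorded_minutes * 60 / seconds_per_frame    # 540 frames
    playback_seconds = frames / playback_fps               # 18 seconds of video
    print(f"{frames:.0f} frames play back in {playback_seconds:.0f} seconds")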
In most time lapse photography, the camera stays in one place. The only way the camera can rotate or move a short distance is if it’s mounted on a motion-control rig. (In the time lapse above, the camera was locked down on a tripod; the movement was simulated in software by panning and zooming a 1920 x 1080 pixel HD video frame across a sequence of 5184 x 3456 pixel still frames from a digital SLR camera.)
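If you want to try that pan-and-zoom trick yourself, the core of it is just cropping a moving HD-sized window out of each high-resolution still. Here’s a minimal sketch in Python using the Pillow imaging library; the file names and crop coordinates are made up for illustration, and it pans only (a zoom would also vary the crop size over time and resize each crop back to 1920 x 1080):

    from PIL import Image

    stills = [f"IMG_{i:04d}.jpg" for i in range(1, 541)]   # hypothetical 5184 x 3456 stills
    start = (0, 800)       # top-left corner of the crop window in the first frame
    end = (3264, 1200)     # top-left corner in the last frame (the simulated pan)

    for i, name in enumerate(stills):
        t = i / (len(stills) - 1)                     # 0.0 to 1.0 across the sequence
        x = int(start[0] + t * (end[0] - start[0]))   # interpolate the window position
        y = int(start[1] + t * (end[1] - start[1]))
        frame = Image.open(name).crop((x, y, x + 1920, y + 1080))
        frame.save(f"frame_{i:04d}.jpg", quality=95)  # assemble these frames into video later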
In a hyperlapse, the camera can physically change position over a long distance. For example, the camera might be mounted on a car recording a 200-mile road trip, or it might be a helmet camera as you climb a mountain, or you might hold a camera as it records while you walk down the street. Hyperlapses are often recorded with a first-person point of view, especially as wearable action cameras like the GoPro have become affordable and popular. Many hyperlapse videos are recorded manually using labor-intensive frame-by-frame methods, as shown in the video below by DigitalRev:
Because a typical hyperlapse recording makes the camera cover a significant distance, it’s just about impossible to maintain consistent framing as you move the camera again and again. During playback, this results in much more shakiness and instability than you’d see in a traditional time lapse, making it difficult to watch. This inherent instability is the hyperlapse challenge that Instagram and Microsoft have tried to overcome.
Comparing how Instagram and Microsoft approach hyperlapse instability
One answer to the problem of hyperlapse instability comes from Microsoft, which published the results of a research project on a better way to analyze first-person hyperlapse footage and remove the instability. Their solution tries to figure out the original 3D scene and camera motion path from the 2D video, and then uses that reconstructed 3D data to re-render each frame so that playback is much smoother. Here’s the demonstration video from Microsoft Research:
The Instagram solution takes advantage of both iPhone hardware and iOS APIs to acquire additional data while recording video. The Instagram Hyperlapse app captures 3D orientation data from the iPhone gyroscope and camera so that it can immediately apply accurate corrections to each frame as it renders the final video. (Instagram says Android APIs currently don’t provide the needed access to an Android phone’s gyroscope and camera.) Here’s a short demonstration video of the Hyperlapse app by Instagram:
Both approaches are useful in different ways. The Instagram approach is potentially more accurate because it records 3D orientation data directly from the camera at the time each frame is recorded. Having actual orientation data can greatly reduce the amount of processing needed; there’s no need to guess the original 3D motion path because the app already recorded that data along with the video. The lower processing load also makes it much easier to run on a smartphone, where both processing power and battery power are limited. The Microsoft approach is better when the original video was recorded by a camera that couldn’t provide the necessary gyroscope and camera data, but because it lacks original motion data, it needs much more processing power to figure out how the camera moved during the shoot.
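To make that contrast concrete, here’s a rough Python sketch of the sensor-based idea, simplified to a single rotation axis. This is not how the Hyperlapse app is actually implemented; it assumes you already have one recorded roll angle per frame (the file names and angles are hypothetical), and a real implementation would handle all three axes plus translation. The point is that each correction comes straight from recorded orientation data, with no motion estimation from the pixels:

    import cv2
    import numpy as np

    # Hypothetical input: one recorded roll angle (degrees) per video frame.
    angles = np.loadtxt("roll_degrees.txt")
    # Smooth the recorded path so intentional motion survives but jitter is removed.
    smoothed = np.convolve(angles, np.ones(31) / 31, mode="same")

    cap = cv2.VideoCapture("hyperlapse_raw.mp4")
    writer = None
    for angle, target in zip(angles, smoothed):
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        # Rotate the frame from its recorded orientation toward the smoothed path.
        # (The sign of the correction depends on how the angles were recorded.)
        M = cv2.getRotationMatrix2D((w / 2, h / 2), target - angle, 1.1)  # slight zoom hides edges
        stabilized = cv2.warpAffine(frame, M, (w, h))
        if writer is None:
            writer = cv2.VideoWriter("hyperlapse_smooth.mp4",
                                     cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
        writer.write(stabilized)

    if writer is not None:
        writer.release()
    cap.release()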
The Instagram Hyperlapse app currently has some additional advantages: Instagram paid a lot of attention to user experience, so using the Hyperlapse app is easier, simpler, and faster than creating and stabilizing hyperlapse videos the manual way. And it’s available to millions of people now, while the Microsoft solution is still in the labs and its final ease of use is unknown.
Both Instagram and Microsoft are trying to solve a problem that’s increasingly common now that there’s so much more footage from action cameras like the GoPro, but their approaches are so different that they are clearly not copying each other.
[Update: Microsoft published its own response to questions about the differences between the Instagram and Microsoft stabilization techniques. In it, Microsoft points out another advantage of its technique: the ability to reconstruct missing pixels by sampling them from adjacent frames. This greatly improves the stabilization results for video taken when your hand or head jumps around too much from frame to frame.]
Hyperlapse stabilization versus software video stabilization
Some have asked: Are these hyperlapse solutions the same as the image stabilization you find in video editing software? Mostly not. Video image stabilization in software is usually designed to address high-frequency camera movement during real-time recording, like when a clip looks shaky because the camera was handheld.
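For comparison, here’s roughly what basic, pixel-based 2D stabilization has to do, sketched in Python with OpenCV: track features between frames to estimate how the camera moved, smooth that motion path, and warp each frame toward the smoothed path. The file names are placeholders, and a real stabilizer adds cropping, edge fill, and rolling-shutter handling:

    import cv2
    import numpy as np

    # Pass 1: estimate frame-to-frame motion from the pixels themselves.
    cap = cv2.VideoCapture("handheld.mp4")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    motion = []                                    # per-frame (dx, dy, rotation)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=30)
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.flatten() == 1
        m, _ = cv2.estimateAffinePartial2D(pts[good], new_pts[good])
        motion.append([m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])])
        prev_gray = gray

    # Pass 2: smooth the accumulated camera path, then warp each frame toward it.
    motion = np.array(motion)
    path = np.cumsum(motion, axis=0)
    kernel = np.ones(31) / 31
    smooth_path = np.column_stack([np.convolve(path[:, i], kernel, mode="same") for i in range(3)])
    correction = motion + (smooth_path - path)

    cap = cv2.VideoCapture("handheld.mp4")
    cap.read()                                     # skip frame 0; motion[i] leads into frame i+1
    for i, (dx, dy, da) in enumerate(correction):
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        M = np.array([[np.cos(da), -np.sin(da), dx],
                      [np.sin(da),  np.cos(da), dy]])
        cv2.imwrite(f"stabilized_{i:04d}.jpg", cv2.warpAffine(frame, M, (w, h)))

Notice that everything has to be inferred from the pixels after the fact, which is exactly the extra work the Instagram app avoids by recording orientation data up front.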
Advanced video stabilizing software can go beyond basic software or digital stabilization. Some, such as Adobe Warp Stabilizer VFX, try to work out the camera’s 3D motion path instead of analyzing just 2D shifts in position. Like Warp Stabilizer, the Microsoft hyperlapse solution does a 3D analysis of 2D footage, but Microsoft does additional processing to adapt and extend the 3D analysis for time scales as long as those in a hyperlapse.
The Microsoft approach can also be considered a form of digital image stabilization, in that each frame is processed after it’s recorded. In contrast, you can think of the Instagram solution as a variation on optical image stabilization, where a camera or lens includes stabilizing hardware such as a gyroscope so that an image is already stabilized before it’s recorded.
Each solution has a purpose
This overview should make it clear that these different approaches to stabilization aren’t redundant. They all exist because each of them solves a different problem.
Optical, digital, and software-based image stabilization are options for stabilizing footage that’s both recorded and played back in real time. The Instagram and Microsoft methods are ways to stabilize long-duration footage that’s recorded for a hyperlapse playback speed.
Optical stabilization and the Instagram hyperlapse approach use recording hardware that helps produce cleaner source footage. Because the image is stabilized as it’s originally recorded, there’s less need for additional stabilization processing.
Digital image stabilization, image stabilization in video editing software, and the Microsoft hyperlapse approach are for post-processing footage that was recorded without physical orientation data from the hardware. They require more processing power, but they work with recordings from any camera.
[Update, May 2015: Microsoft has now made its Hyperlapse technology available in desktop and mobile apps. For details, see the Microsoft Hyperlapse web page.]
Have you ever had trouble reading a PDF file on an iOS device such as an iPad? A PDF file that was emailed to me wouldn’t open on iPhone or iPad, and not even the file name showed up correctly. The file opened normally on my computer, so I knew the PDF file wasn’t completely corrupted. While it’s still a mystery why the PDF file didn’t work on iOS, in the end I did fix the problem. Here’s how.
I didn’t have access to the original document, so I couldn’t export the PDF file again from the source. I had to try to fix it on my end. I started by opening it in Adobe Acrobat X Pro, where I chose File > Save As > PDF to write out a new copy of the file. When that didn’t work, I tried Reduced Size PDF on the same submenu. That didn’t work either, and neither did opening the PDF file in Apple Preview and choosing File > Save As.
At this point I was stumped. Knowing that the file worked fine on a computer, I was still convinced that there had to be a way to fix it.
I next used Acrobat X Pro to save it as PDF/X-1a, a standard for high-end prepress. This time the file failed even to convert, which turned out to be a good thing, because the failure produced an error message. The message suggested that I run the PDF through the Preflight feature using the Convert to sRGB preflight profile. That was a great idea; I should have thought of Preflight sooner, since the purpose of a preflight feature is to catch file problems before they cost time and money further down the line.
In Acrobat X Pro, Preflight is buried in the Print Production panel in the Tools pane on the right side of the Acrobat workspace. I selected Convert to sRGB, and then clicked the Analyze and Fix button.
That worked! The next time I transferred the PDF file to iPad, it was perfectly readable.
In the end, my troubleshooting guess was correct: Find something that can rewrite the PDF file radically enough to change whatever was causing the error, even without knowing the exact problem.
Of course, not everyone has Acrobat Pro, and it isn’t cheap. But if you have access to certain versions of Adobe Creative Suite, Acrobat Pro is included, so you may already own it. Keep Acrobat Pro in mind if you run into problems like this one. [Edit: As of 2013, Adobe Creative Suite has been replaced by Adobe Creative Cloud.]
While Apple Preview on a Mac doesn’t have the variety of production tools available in Acrobat Pro, there is another way to do something similar: Open ColorSync Utility, choose File > Open and open the PDF file, and then choose Create Generic PDFX-3 from the Filter pop-up menu at the bottom of the document window. I did not try that in my case, though, since I had already fixed mine.
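If you’d rather script a brute-force rewrite than work through a GUI, a PDF library can do something similar. Here’s a minimal sketch in Python using pikepdf; it’s a guess at a fix rather than the color conversion that Acrobat’s Preflight performed, and the file names are placeholders:

    import pikepdf

    # Opening and re-saving forces the library to rewrite the entire file structure,
    # which can clear up structural damage even when the exact problem is unknown.
    with pikepdf.open("broken.pdf") as pdf:
        pdf.save("rewritten.pdf")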
What might have caused the problem? I may never know for sure, but based on what fixed it, I’d guess that there was a problem with at least one of the color images in the PDF file. The document looked like it had been created in Word with maps pasted in from Windows screenshots, so maybe there were indexed-color BMP or GIF images in it. While that shouldn’t have been a problem, what we do know is that the Convert to sRGB preflight profile fixed it.
While this solution solved my problem, it may not fix every problem with reading PDF files on iOS devices. If it doesn’t solve your issue, I hope that describing a successful troubleshooting process helps point you in the right direction. Good luck!
Has your iPhone all of a sudden started asking you for your voicemail password? Is it not letting you in even though you’re sure you entered the right voicemail password? Or have you completely forgotten the password?
When this started happening to me, I ran a search on Twitter and found a lot of people complaining about the same thing, which suggests it might have been an AT&T system glitch and not anything we users did wrong on our phones. People have proposed various solutions, everything from calling AT&T customer service to having the phone send you a new temporary password via SMS. But what worked for me was a lot simpler, and I didn’t have to use a different password.
My fix. I used my iPhone to dial my own phone number, which put me directly into my voicemail account. I then followed the voicemail menu to where you can change your password, and get this: it didn’t ask me for the old password before letting me enter a new one. How convenient. I entered the password I wanted, and the next time AT&T voicemail asked for my password, I entered that one and it worked.
When you get to the AT&T voicemail menu, here’s the sequence (or just listen to the menus in case AT&T has rearranged them):
Press 4 for personal options.
Press 2 for administrative options.
Press 1 to manage passwords.
Press 1 to change the password.
When the system asks you to enter the password you want, do it.
When the system lets you know the new password is set, press * to back out of the menus until the system says “Goodbye!”
The next time the Voicemail screen asks you for your password, the one you just set up should work, and all of your saved voicemails should show up again. That’s what happened to me, anyway; if it isn’t working for you, I really don’t know what else to suggest except maybe contacting AT&T.
OK, that was easy. But if there was a kung-fu film called Enter the Password, at this point its hero might say, looking around with suspicion, “…that was too easy.”
Security concern. While it was convenient to be able to change my password without having to know whatever mystery password AT&T was expecting, security-minded readers may see this as a security hole. It means that if your iPhone is in the wrong hands for less than a minute, someone could easily lock you out of your own voicemail by changing your password. That’s just another reason why every smartphone user should turn on the feature that locks the phone after a couple of minutes of inactivity and requires a passcode to get back in. Yes, a phone passcode is a hassle, but there’s just too much personal information on these phones now, and too much access to key parts of your life, to leave a smartphone unsecured.