
Gift Guide: 11 picture perfect gifts for your photographer friends


Photographers are tricky to get gifts for because every one of them has preferences they may already have spent years indulging. But we have blind spots, we photographers. We will spend thousands on lenses but never buy a proper camera bag, or properly back up our shots, or splurge for a gadget that makes certain shots ten times easier. Scroll on for gift recommendations that any photographer can appreciate.

Gnarbox or Western Digital backup drive

Okay, these are definitely expensive, so keep scrolling if you’re on a budget, but they can also totally change how someone shoots. If your photographer/loved one tends to travel or go out into the wilderness when they shoot, a backup solution is a must. These drives act as self-contained rugged backup solutions, letting you offload your SD card at the end of a shoot and preview the contents, no laptop required.

They’ve been around for years, but early ones were pretty janky and “professional” ones cost thousands. The latest generation, typified by the Gnarbox and Western Digital’s devices, strikes a balance and has been pretty well reviewed.

The Gnarbox is the better device (faster, much better interface and tools), but it’s more expensive — the latest version with 256 GB of space onboard (probably the sweet spot in terms of capacity) costs $400. A comparable WD device costs about half that. If you and a couple friends want to throw down together, I’d recommend getting the former, but both do more or less the same thing.


Microfiber wipes

On the other end of the price spectrum, but no less important, are lens and screen wipes. One of the best things I ever did for myself was order a big pack of these things and stash them in every jacket, coin pocket, and bag I own. Now when anyone needs their glasses, lens, phone, laptop screen, or camera LCD cleaned, I’m right there and sometimes even give them the cloth to keep. I’ve been buying these and they’re good, but there are lots more sizes and packs to choose from.


SD cards and hard cases

Most cameras use SD cards these days, and photographers can never have too many of them. Anything larger than 16 GB is useful — just make sure it’s name brand. A nice touch would be to buy an SD card case that holds eight or ten of the things. Too many photographers (myself included) keep their cards in little piles, drawers, pockets and so on. A nice hard case for cards is always welcome — Pelican is the big brand for these, but as long as it isn’t from the bargain bin another brand is fine.


Moment smartphone lens case

The best camera is the one you have with you, and more often than not, even for photographers, that’s a phone. There are lots of stick-on, magnet-on, and so on lens sets but Moment’s solution seems the most practical. You use their cases — mostly tasteful, fortunately — and pick serious lenses to pop into the built-in mount.

The optics are pretty good and the lenses are big but not so big they’ll weigh down a purse or jacket pocket. Be sure to snoop and figure out what model phone your friend is using.



Waxed canvas camera bag (or any good one really)

Every photographer should have a padded, stylish bag for their gear. I’m partial to waxed canvas, and of the ones I recently reviewed I think the ONA Union Street is the best one out there as far as combination camera/day trip bags go. That said, everyone seems to be into these Peak Design ones as well.


Lomo’Instant Automat or Fujifilm SQ6 instant film camera

Everyone shoots digital these days, but if it’s a party or road trip you’re going on and capturing memories is the goal, an instant film camera might be the best bet. I’ve been using an Automat since they raised money on Kickstarter and I’ve loved this thing: the mini film isn’t too expensive, the shooting process is pleasantly analog but not too difficult, and the camera itself is compact and well designed.

If on the other hand you’d like something a little closer to the Polaroids of yore (without spending the cash on a retro one and Impossible film) then the Fujifilm SQ6 is probably your best bet. It’s got autofocus rather than zone focus, meaning it’s dead simple to operate, but it has lots of options if you want to tweak the exposure.


Circular polarizer filter

Our own photo team loves these filters, which pop onto the end of a lens and change the way light comes through it. This one in particular lets the camera see more detail in clouds and otherwise change the way a scene with a top and bottom half looks. Everyone can use one, and even if they already have one, it’s good to have spares. Polaroid is a good brand for these but again, any household name with decent reviews should be all right.

The only issue here is that you need to get the right size. Next time you see your friend’s camera lying around, look at the lens that’s on it. Inside the front of it, right next to the glass, there should be a millimeter measurement — NOT the one on the side of the lens, that’s the focal length. The number on the end of the lens tells you the diameter of filter to get.



Wireless shutter release

If you’re taking a group photo or selfie, you can always do the classic 10 second timer hustle, but if you don’t want to leave anything to chance a wireless remote is clutch. These things basically just hit the shutter button for you, though some have things like mode switches and so on.

Unfortunately, a bit like filters, shutter release devices are often model-specific. The big camera companies have their own, but if you want to be smart about it go for a cross-platform device like the Hama DCCSystem. These can be a bit hard to find so don’t feel bad about getting the camera-specific kind instead.


Blackrapid strap (or any nice custom strap)

Another pick from our video and photo team, Blackrapid’s cross-body straps take a little time to get used to, but make a lot of sense. The camera hangs upside-down, and you grab it with one hand and bring it to shooting position in one movement. When you’re done, it sits out of the way instead of bumping into your chest. And because it attaches to the bottom plate of your camera, the strap stays out of the way from pretty much any angle you want to hold the camera at.

If you feel confident your photographer friend isn’t into this unorthodox style of shooting, don’t worry — a nice “normal” strap is also a great gift. Having a couple to choose from, especially ones that can be swapped out quickly, is always nice in case one is damaged or unsuitable for a certain shoot.


Adobe subscription

Most photographers use Adobe software, usually Lightroom or Photoshop, and unlike back in the day you don’t just buy a copy of these any more — it’s a subscription. Fortunately you can still buy a year of it for someone in what amounts to gift card form. Unfortunately you can’t buy half a year or whatever fits your budget — it’s the $120 yearly photography bundle or nothing.


Print services

Too many digital photos end up sitting on hard drives, only to be skimmed now and then or uploaded to places like Facebook in much-degraded form. But given the chance (and a gift certificate from you), your photographer friend will print giant versions of their favorite shots and be glad they did.

I bought a nice printer a long while back and print my own shots now, so I haven’t used these services. However I trust Wirecutter’s picks, Nations Photo Lab and AdoramaPix. $30-$40 will go a long way.


News Source = techcrunch.com

Flickr revamps under SmugMug with new limits on free accounts, unlimited storage for Pros


Flickr is making some big changes, following its acquisition by SmugMug earlier this year. The company announced this week it’s addressing a series of issues on the site, including spam, customer support, and use of the Yahoo login, for example. But more notably, it’s also revamping its account structure to impose increased limits for free users, while rolling out unlimited storage for Pro subscribers.

Back in 2013, Flickr introduced a full terabyte of free storage for members – a move it hoped would bring more users to its service. But in the years since, consumers have shifted to services like Apple’s iCloud and Google Photos, which are integrated with iPhones and Android smartphones, as a way to back up their photos.

The free storage attracted the wrong kind of user to Flickr, says Andrew Stadlen, VP of Product at Flickr, in an announcement explaining the move.

“In 2013, Yahoo lost sight of what makes Flickr truly special and responded to a changing landscape in online photo sharing by giving every Flickr user a staggering terabyte of free storage. This, and numerous related changes to the Flickr product during that time, had strongly negative consequences,” Stadlen writes.

“First, and most crucially, the free terabyte largely attracted members who were drawn by the free storage, not by engagement with other lovers of photography. This caused a significant tonal shift in our platform, away from the community interaction and exploration of shared interests that makes Flickr the best shared home for photographers in the world,” he adds.

In other words, the company doesn’t want to be an online shoebox any more – it wants to return to being a real photo community.

The other issue Flickr’s new team has with the “free storage” giveaway is that it meant the Yahoo-owned Flickr was beholden to advertisers. Shifting to a subscription model allows Flickr’s new owners, SmugMug, to focus on building features for members, not advertisers.

Stadlen also says that giving away storage devalued Flickr in users’ eyes – they no longer saw it as a product worth paying for.

Flickr now aims to change that by revamping who Flickr is for. It’s reducing free storage to 1,000 photos – a limit it came up with based on observations of how free and Pro members were already using the site. The vast majority of free users have fewer than 1,000 photos uploaded, so won’t be impacted, the company claims.

Members can now upgrade to a Pro plan for $5.99 per month, or save about 30% by opting for annual billing at $50 per year.

The Pro membership includes unlimited photo storage, an ad-free experience, advanced statistics, automatic backup through Auto-Uploader, and discounts from Adobe, Blurb, SmugMug, and Priime.

Alongside the news of the account changes, the company announced product changes to the site itself. Flickr is rolling out support for photo resolutions up to 5K (5120 x 5120) – that’s 26 Megapixels, or up to 6X larger than Flickr’s current 4 Megapixel maximum, notes Don MacAskill, Co-Founder and CEO at SmugMug.

He says Flickr will also offer full support for embedded color profiles across all modern browsers, devices, and displays; and will introduce improvements to the photo ingestion process for faster uploads with fewer errors.

Customer support is getting a revamp, too, with a dozen experts who will respond to members’ issues, an expanded self-help section, and a new Trust & Safety department.

The company says it’s now partnered with Sift to help fight spam, and with cybersecurity firm HackerOne to continually test its defense systems. The latter is particularly helpful given the stain on Flickr’s name from its association with Yahoo, whose data breaches impacted billions of accounts.

The new Flickr will also dump the Yahoo Login, with a new login rolled out to the site in early 2019 that will allow users to sign up with any email address, not just a Yahoo account.

The changes address a number of complaints users – especially pro photographers – had with Flickr during its Yahoo years. The challenge, however, is to win back those disgruntled customers and get them to pay for storage and features.

MacAskill believes SmugMug will be able to do so, by listening and working with Flickr’s community.

“We bought Flickr because it’s the largest photographer-focused community in the world. I’ve been a fan for 14 years. There’s nothing else like it. It’s the best place to explore, discover, and connect with amazing photographers and their beautiful photography,” he says. “Flickr is a priceless Internet treasure for everyone and we’re so excited to be investing in its future. Together, hand-in-hand with the most amazing community on the planet, we can shape the future of photography.”

(Disclosure: Yahoo merged with AOL to become Oath, which also owns TechCrunch. Flickr is no longer owned by Yahoo or Oath, as SmugMug bought it in April 2018.)

News Source = techcrunch.com

The corpse of Kodak coughs up another odd partnership


Kodak isn’t feeling very well. The company, which sold off most of its legacy assets in the last decade, is licensing its name to partners who build products like digital cameras and, most comically, a cryptocurrency. In that deal, Wenn Digital bought the rights to the Kodak name for an estimated $1.5 million, a move that they hoped would immediately lend gravitas to the crypto offering.

Reader, it didn’t. After multiple stories regarding the future of the coin, it still has not hit the ICO stage. Now Kodak is talking about another partnership, this time with a Tennessee-based video and film digitization company.

The new product is essentially a rebranding of LegacyBox, a photo digitization company that has gone through multiple iterations after a raft of bad press.

“The Kodak Digitizing Box is a brand licensed product from AMB Media, the creators of Legacy Box. So yes, we’ve licensed the brand to them for this offering,” said Kodak spokesperson Nicholas Rangel. Not much has changed between Kodak’s offering and LegacyBox. The LegacyBox site is almost identical to the Kodak site and very similar to another AMB Media product, Southtree.

The product itself is a fairly standard photo digitization service, although Southtree does have a number of complaints against it, including a very troubling case of missing mementos. The entry-level product is a box into which you can stuff hundreds of photos and videos and have them digitized for a fee.

Ultimately it’s been interesting to see Kodak sell itself off in this way. Like Polaroid before it, the company is now a shell of its former self, and this is encouraging parasitical partners to cash in on its brand. Given that Kodak is still a household name for many, it’s no wonder a smaller company like AMB wants to hitch itself to that star.

News Source = techcrunch.com

The future of photography is code


What’s in a camera? A lens, a shutter, a light-sensitive surface, and, increasingly, a set of highly sophisticated algorithms. While the physical components are still improving bit by bit, Google, Samsung, and Apple are increasingly investing in (and showcasing) improvements wrought entirely from code. Computational photography is the only real battleground now.

The reason for this shift is pretty simple: cameras can’t get too much better than they are right now, or at least not without some rather extreme shifts in how they work. Here’s how smartphone makers hit the wall on photography, and how they were forced to jump over it.

Not enough buckets

[Image: an image sensor one might find in a digital camera.]

The sensors in our smartphone cameras are truly amazing things. The work that’s been done by the likes of Sony, Omnivision, Samsung and others to design and fabricate tiny yet sensitive and versatile chips is really pretty mind-blowing. For a photographer who’s watched the evolution of digital photography from the early days, the level of quality these microscopic sensors deliver is nothing short of astonishing.

But there’s no Moore’s Law for those sensors. Or rather, just as Moore’s Law is now running into quantum limits at sub-10-nanometer levels, camera sensors hit physical limits much earlier. Think of light hitting the sensor as rain falling on a bunch of buckets: you can place bigger buckets, but there’s room for fewer of them; you can use smaller ones, but they each catch less; you can make them square or stagger them or do all kinds of other tricks, but ultimately there are only so many raindrops, and no amount of bucket-rearranging can change that.

Sensors are getting better, yes, but not only is this pace too slow to keep consumers buying new phones year after year (imagine trying to sell a camera that’s 3 percent better), but phone manufacturers often use the same or similar camera stacks, so the improvements (like the recent switch to backside illumination) are shared amongst them. So no one is getting ahead on sensors alone.

Perhaps they could improve the lens? Not really. Lenses have arrived at a level of sophistication and perfection that is hard to improve on, especially at small scale. To say space is limited inside a smartphone’s camera stack is a major understatement — there’s hardly a square micron to spare. You might be able to improve them slightly as far as how much light passes through and how little distortion there is, but these are old problems that have been mostly optimized.

The only way to gather more light would be to increase the size of the lens, either by having it A: project outwards from the body; B: displace critical components within the body; or C: increase the thickness of the phone. Which of those options does Apple seem likely to find acceptable?

In retrospect it was inevitable that Apple (and Samsung, and Huawei, and others) would have to choose D: none of the above. If you can’t get more light, you just have to do more with the light you’ve got.

Isn’t all photography computational?

The broadest definition of computational photography includes just about any digital imaging at all. Unlike film, even the most basic digital camera requires computation to turn the light hitting the sensor into a usable image. And camera makers differ widely in the way they do this, producing different JPEG processing methods, RAW formats, and color science.

For a long time there wasn’t much of interest on top of this basic layer, partly from a lack of processing power. Sure, there have been filters, and quick in-camera tweaks to improve contrast and color. But ultimately these just amount to automated dial-twiddling.

The first real computational photography features were arguably object identification and tracking for the purposes of autofocus. Face and eye tracking made it easier to capture people in complex lighting or poses, and object tracking made sports and action photography easier as the system adjusted its AF point to a target moving across the frame.

These were early examples of deriving metadata from the image and using it proactively, to improve that image or feed it forward to the next.

In DSLRs, autofocus accuracy and flexibility are marquee features, so this early use case made sense; but outside a few gimmicks, these “serious” cameras generally deployed computation in a fairly vanilla way. Faster image sensors meant faster sensor offloading and burst speeds, some extra cycles dedicated to color and detail preservation, and so on. DSLRs weren’t being used for live video or augmented reality. And until fairly recently, the same was true of smartphone cameras, which were more like point-and-shoots than the all-purpose media tools we know them as today.

The limits of traditional imaging

Despite experimentation here and there and the occasional outlier, smartphone cameras are pretty much the same. They have to fit within a few millimeters of depth, which limits their optics to a few configurations. The size of the sensor is likewise limited — a DSLR might use an APS-C sensor 23 by 15 millimeters across, making an area of 345 mm2; the sensor in the iPhone XS, probably the largest and most advanced on the market right now, is 7 by 5.8mm or so, for a total of 40.6 mm2.

Roughly speaking it’s collecting an order of magnitude less light than a “normal” camera, but is expected to reconstruct a scene with roughly the same fidelity, colors, and such — around the same number of megapixels, too. On its face this is sort of an impossible problem.
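To put rough numbers on that gap, here’s a quick back-of-the-envelope calculation using the sensor dimensions quoted above (the script is purely illustrative):

```python
# Rough light-gathering comparison between an APS-C sensor and the
# iPhone XS sensor, using the dimensions quoted in the article.
aps_c_area = 23 * 15        # mm^2 -> 345
iphone_xs_area = 7 * 5.8    # mm^2 -> ~40.6

ratio = aps_c_area / iphone_xs_area
print(f"APS-C area:   {aps_c_area} mm^2")
print(f"iPhone XS:    {iphone_xs_area:.1f} mm^2")
print(f"The DSLR sensor collects roughly {ratio:.1f}x more light "
      "for the same exposure settings.")
```

That works out to a factor of about 8.5, which is where the “order of magnitude” framing comes from.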

Improvements in the traditional sense help out — optical and electronic stabilization, for instance, make it possible to expose for longer without blurring, collecting more light. But these devices are still being asked to spin straw into gold.

Luckily, as I mentioned, everyone is pretty much in the same boat. Because of the fundamental limitations in play, there’s no way Apple or Samsung can reinvent the camera or come up with some crazy lens structure that puts them leagues ahead of the competition. They’ve all been given the same basic foundation.

All competition therefore comprises what these companies build on top of that foundation.

Image as stream

The key insight in computational photography is that an image coming from a digital camera’s sensor isn’t a snapshot, the way it is generally thought of. In traditional cameras the shutter opens and closes, exposing the light-sensitive medium for a fraction of a second. That’s not what digital cameras do, or at least not what they can do.

A camera’s sensor is constantly bombarded with light; rain is constantly falling on the field of buckets, to return to our metaphor, but when you’re not taking a picture, these buckets are bottomless and no one is checking their contents. But the rain is falling nevertheless.

To capture an image, the camera system picks a point at which to start counting the raindrops, measuring the light that hits the sensor. Then it picks a point to stop. For the purposes of traditional photography, this enables nearly arbitrarily short shutter speeds, which isn’t much use to tiny sensors.

Why not just always be recording? Theoretically you could, but it would drain the battery and produce a lot of heat. Fortunately, in the last few years image processing chips have gotten efficient enough that they can, when the camera app is open, keep a certain duration of that stream — limited resolution captures of the last 60 frames, for instance. Sure, it costs a little battery, but it’s worth it.
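One way to picture that rolling capture is as a fixed-size ring buffer of recent frames: new frames push the oldest ones out, and pressing the shutter simply freezes whatever is already in the buffer. The sketch below is purely conceptual; the Frame type, FrameStream class, and 60-frame figure are stand-ins for illustration, not part of any real camera pipeline.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp_ms: int
    pixels: bytes              # stand-in for a limited-resolution capture

class FrameStream:
    """Keeps only the most recent frames, like a camera app's live buffer."""

    def __init__(self, max_frames: int = 60):
        # deque with maxlen drops the oldest frame automatically
        self.buffer = deque(maxlen=max_frames)

    def push(self, frame: Frame) -> None:
        self.buffer.append(frame)

    def snapshot(self) -> list:
        # "Taking a picture" just means grabbing what was already recorded.
        return list(self.buffer)

stream = FrameStream(max_frames=60)
for t in range(200):                       # simulate a few seconds of preview
    stream.push(Frame(timestamp_ms=t * 16, pixels=b""))
print(len(stream.snapshot()), "frames available at shutter press")   # -> 60
```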

Access to the stream allows the camera to do all kinds of things. It adds context.

Context can mean a lot of things. It can be photographic elements like the lighting and distance to subject. But it can also be motion, objects, intention.

A simple example of context is what is commonly referred to as HDR, or high dynamic range imagery. This technique uses multiple images taken in a row with different exposures to more accurately capture areas of the image that might have been underexposed or overexposed in a single exposure. The context in this case is understanding which areas those are and how to intelligently combine the images together.

This can be accomplished with exposure bracketing, a very old photographic technique, but it can be accomplished instantly and without warning if the image stream is being manipulated to produce multiple exposure ranges all the time. That’s exactly what Google and Apple now do.
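A heavily simplified version of that idea is exposure fusion: weight each pixel of each bracketed frame by how well exposed it is, then average. The NumPy sketch below is a toy illustration of the principle, not how Google’s or Apple’s HDR pipelines actually work.

```python
import numpy as np

def fuse_exposures(frames):
    """Naive HDR-style fusion: weight each pixel by how close it is to mid-gray,
    so blown-out highlights and crushed shadows contribute less to the result."""
    stack = np.stack([f.astype(np.float64) / 255.0 for f in frames])   # (N, H, W)
    weights = 1.0 - np.abs(stack - 0.5) * 2.0     # 1 at mid-gray, 0 at the extremes
    weights = np.clip(weights, 1e-3, None)        # avoid dividing by zero
    fused = (stack * weights).sum(axis=0) / weights.sum(axis=0)
    return (fused * 255.0).astype(np.uint8)

# Three synthetic "exposures" of the same grayscale scene
dark = np.full((4, 4), 20, dtype=np.uint8)
mid = np.full((4, 4), 128, dtype=np.uint8)
bright = np.full((4, 4), 240, dtype=np.uint8)
print(fuse_exposures([dark, mid, bright]))
```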

Something more complex is of course the “portrait mode” and artificial background blur or bokeh that is becoming more and more common. Context here is not simply the distance of a face, but an understanding of what parts of the image constitute a particular physical object, and the exact contours of that object. This can be derived from motion in the stream, from stereo separation in multiple cameras, and from machine learning models that have been trained to identify and delineate human shapes.
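The crude shortcut version of that effect (the Gaussian-blur approach mentioned a few paragraphs below, not Apple’s physically modeled bokeh) amounts to: blur everything, then composite the sharp subject back in along a soft mask. A minimal sketch, assuming a subject mask has already been produced by depth data or a segmentation model:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_portrait_mode(image, subject_mask, blur_sigma=8.0):
    """Blur the background and keep the masked subject sharp.
    image: float array (H, W); subject_mask: 1.0 where the subject is."""
    blurred = gaussian_filter(image, sigma=blur_sigma)
    # Soften the mask edge so the transition isn't a hard cut.
    soft_mask = np.clip(gaussian_filter(subject_mask, sigma=2.0), 0.0, 1.0)
    return soft_mask * image + (1.0 - soft_mask) * blurred

image = np.random.rand(64, 64)      # stand-in for a photo
mask = np.zeros((64, 64))
mask[20:44, 24:40] = 1.0            # pretend this rectangle is the subject
result = fake_portrait_mode(image, mask)
print(result.shape)
```

The interesting engineering is almost entirely in producing that mask, which is exactly the “context” described above.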

These techniques are only possible, first, because the requisite imagery has been captured from the stream in the first place (an advance in image sensor and RAM speed), and second, because companies developed highly efficient algorithms to perform these calculations, trained on enormous datasets and immense amounts of computation time.

What’s important about these techniques, however, is not simply that they can be done, but that one company may do them better than the other. And this quality is entirely a function of the software engineering work and artistic oversight that goes into them.

DxOMark did a comparison of some early artificial bokeh systems; the results, however, were somewhat unsatisfying. It was less a question of which looked better, and more of whether they failed or succeeded in applying the effect. Computational photography is in such early days that it is enough for the feature to simply work to impress people. As with a dog walking on its hind legs, we are amazed that it happens at all.

But Apple has pulled ahead with what some would say is an almost absurdly over-engineered solution to the bokeh problem. It didn’t just learn how to replicate the effect — it used the computing power it has at its disposal to create virtual physical models of the optical phenomenon that produces it. It’s like the difference between animating a bouncing ball and simulating realistic gravity and elastic material physics.

Why go to such lengths? Because Apple knows what is becoming clear to others: that it is absurd to worry about the limits of computational capability at all. There are limits to how well an optical phenomenon can be replicated if you are taking shortcuts like Gaussian blurring. There are no limits to how well it can be replicated if you simulate it at the level of the photon.

Similarly, the idea of combining five, ten, or a hundred images into a single HDR image seems absurd, but the truth is that in photography, more information is almost always better. If the cost of these computational acrobatics is negligible and the results measurable, why shouldn’t our devices be performing these calculations? In a few years they too will seem ordinary.

If the result is a better product, the computational power and engineering ability has been deployed with success; just as Leica or Canon might spend millions to eke fractional performance improvements out of a stable optical system like a $2,000 zoom lens, Apple and others are spending money where they can create value: not in glass, but in silicon.

Double vision

One trend that may appear to conflict with the computational photography narrative I’ve described is the advent of systems comprising multiple cameras.

This technique doesn’t add more light to the sensor — that would be prohibitively complex and expensive optically, and probably wouldn’t work anyway. But if you can free up a little space lengthwise (rather than depthwise, which we found impractical), you can put a whole separate camera right next to the first, one that captures photos extremely similar to those taken by the first.


Now, if all you want to do is re-enact Wayne’s World at an imperceptible scale (camera one, camera two… camera one, camera two…) that’s all you need. But no one actually wants to take two images simultaneously, a fraction of an inch apart.

These two cameras operate either independently (as wide-angle and zoom) or one is used to augment the other, forming a single system with multiple inputs.

The thing is that taking the data from one camera and using it to enhance the data from another is — you guessed it — extremely computationally intensive. It’s like the HDR problem of multiple exposures, except far more complex as the images aren’t taken with the same lens and sensor. It can be optimized, but that doesn’t make it easy.
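To give a feel for why, consider just the first step: aligning a frame from one camera onto the other before any fusion can happen. The sketch below does that with OpenCV feature matching and a homography; real multi-camera pipelines lean on calibrated camera geometry and hardware-assisted processing, so treat this as an illustration of the problem rather than the production method.

```python
import cv2
import numpy as np

def align_secondary_to_primary(primary_gray, secondary_gray):
    """Estimate a homography that maps the secondary camera's frame onto
    the primary camera's frame, using ORB features and RANSAC."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(primary_gray, None)
    kp2, des2 = orb.detectAndCompute(secondary_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = primary_gray.shape
    return cv2.warpPerspective(secondary_gray, H, (w, h))
```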

So although adding a second camera is indeed a way to improve the imaging system by physical means, the possibility only exists because of the state of computational photography. And it is the quality of that computational imagery that results in a better photograph — or doesn’t. The Light camera with its 16 sensors and lenses is an example of an ambitious effort that simply didn’t produce better images, though it was using established computational photography techniques to harvest and winnow an even larger collection of images.

Light and code

The future of photography is computational, not optical. This is a massive shift in paradigm and one that every company that makes or uses cameras is currently grappling with. There will be repercussions in traditional cameras like SLRs (rapidly giving way to mirrorless systems), in phones, in embedded devices, and everywhere that light is captured and turned into images.

Sometimes this means that the cameras we hear about will be much the same as last year’s, as far as megapixel counts, ISO ranges, f-numbers, and so on. That’s okay. With some exceptions these have gotten as good as we can reasonably expect them to be: glass isn’t getting any clearer, and our vision isn’t getting any more acute. The way light moves through our devices and eyeballs isn’t likely to change much.

What those devices do with that light, however, is changing at an incredible rate. This will produce features that sound ridiculous, pseudoscientific babble on stage, and drained batteries. That’s okay, too. Just as we have experimented with other parts of the camera for the last century and brought them to varying levels of perfection, we have moved on to a new, non-physical “part” which nonetheless has a very important effect on the quality and even possibility of the images we take.

News Source = techcrunch.com

See the new iPhone’s ‘focus pixels’ up close


The new iPhones have excellent cameras, to be sure. But it’s always good to verify Apple’s breathless on-stage claims with first-hand reports. We have our own review of the phones and their photography systems, but teardowns provide the invaluable service of letting you see the biggest changes with your own eyes — augmented, of course, by a high-powered microscope.

We’ve already seen iFixit’s solid-as-always disassembly of the phone, but TechInsights gets a lot closer to the device’s components — including the improved camera of the iPhone XS and XS Max.

Although the optics of the new camera are as far as we can tell unchanged since the X, the sensor is a new one and is worth looking closely at.

Microphotography of the sensor die shows that Apple’s claims are borne out and then some. The sensor size has increased from 32.8mm2 to 40.6mm2 — a huge difference despite the small units. Every tiny bit counts at this scale. (For comparison, the Galaxy S9’s sensor is 45mm2, and the soon-to-be-replaced Pixel 2’s is 25mm2.)

The pixels themselves also, as advertised, grew from 1.22 microns (micrometers) across to 1.4 microns — which should help with image quality across the board. But there’s an interesting, subtler development that has continually but quietly changed ever since its introduction: the “focus pixels.”
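Putting those two figures together gives a rough sense of the gain; the arithmetic below just restates the teardown’s numbers.

```python
# Per-pixel light-gathering area scales with the square of the pixel pitch.
old_pitch, new_pitch = 1.22, 1.40      # microns, iPhone X vs. iPhone XS
pixel_gain = (new_pitch / old_pitch) ** 2
sensor_gain = 40.6 / 32.8              # overall die area growth

print(f"Each pixel gathers ~{(pixel_gain - 1) * 100:.0f}% more light")   # ~32%
print(f"The whole sensor is ~{(sensor_gain - 1) * 100:.0f}% larger")     # ~24%
```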

That’s Apple’s brand name for phase detection autofocus (PDAF) points, found in plenty of other devices. The basic idea is that you mask off half a sub-pixel every once in a while (which I guess makes it a sub-sub-pixel), and by observing how light enters these half-covered detectors you can tell whether something is in focus or not.

Of course, you need a bunch of them to sense the image patterns with high fidelity, but you have to strike a balance: losing half a pixel may not sound like much, but if you do it a million times, that’s half a megapixel effectively down the drain. Wondering why all the PDAF points are green? Many camera sensors use an “RGBG” sub-pixel pattern, meaning there are two green sub-pixels for each red and blue one — it’s complicated why. But there are twice as many green sub-pixels, and therefore the green channel is more robust to losing a bit of information.
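To make the layout concrete, here is a tiny synthetic mosaic in the common RGGB arrangement with the color sites counted; the tile and the choice of which site to mask are invented for illustration and say nothing about Apple’s actual layout.

```python
import numpy as np

# A 4x4 tile of an RGGB Bayer mosaic: each cell records only one color.
bayer = np.array([
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
])

counts = {c: int((bayer == c).sum()) for c in "RGB"}
print(counts)   # {'R': 4, 'G': 8, 'B': 4} -- twice as many green sites

# Half-masking one green site for phase detection costs the green channel
# 1/8 of its samples in this tile; masking a red site would cost 1/4.
pdaf_row, pdaf_col = 0, 1   # an arbitrary (hypothetical) green site
channel = bayer[pdaf_row, pdaf_col]
print(f"masking a {channel} site costs that channel 1/{counts[channel]} "
      "of its samples in this tile")
```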

 

Apple introduced PDAF in the iPhone 6, but as you can see in TechInsights’ great diagram, the points are pretty scarce. There’s one for maybe every 64 sub-pixels, and not only that, they’re all masked off in the same orientation: either the left or right half gone.

The 6S and 7 Pluses saw the number double to one PDAF point per 32 sub-pixels. And in the 8 Plus, the number is improved to one per 20 — but there’s another addition: now the phase detection masks are on the tops and bottoms of the sub-pixels as well. As you can imagine, doing phase detection in multiple directions is a more sophisticated proposal, but it could also significantly improve the accuracy of the process. Autofocus systems all have their weaknesses, and this may have addressed one Apple regretted in earlier iterations.

Which brings us to the XS (and Max, of course), in which the PDAF points are now one per 16 sub-pixels, having increased the frequency of the vertical phase detection points so that they’re equal in number to the horizontal ones. Clearly the experiment paid off, and any consequent light loss has been mitigated or accounted for.
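As a rough sense of what those densities add up to, the snippet below applies the per-generation ratios described above to an assumed 12-megapixel sensor (the megapixel figure is an assumption for illustration, and treating sub-pixels as pixels keeps the math deliberately loose).

```python
# Approximate share of sub-pixels given over to phase detection, by generation.
densities = {
    "iPhone 6":        1 / 64,
    "iPhone 6S / 7+":  1 / 32,
    "iPhone 8 Plus":   1 / 20,
    "iPhone XS":       1 / 16,
}

sensor_megapixels = 12   # assumed for illustration
for model, fraction in densities.items():
    affected = sensor_megapixels * fraction
    print(f"{model:15s} ~{fraction:.1%} of sites half-masked "
          f"(~{affected:.2f} MP worth)")
```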

I’m curious how the sub-pixel patterns of Samsung, Huawei, and Google phones compare, and I’m looking into it. But I wanted to highlight this interesting little evolution. It’s an interesting example of the kind of changes that are hard to understand when explained in simple number form — we’ve doubled this, or there are a million more of that — but which make sense when you see them in physical form.

News Source = techcrunch.com
