December 15, 2018
Category archive


Move over notch, the hole-punch smartphone camera is coming


First it was the notch, now the hole-punch has emerged as the latest tech for concealing selfie cameras whilst keeping our smartphones as free of bezel as possible to maximize the screen space.

This week, Samsung and Huawei both unveiled new phones that dispense with the iconic ‘notch’ — pioneered by Apple but popularized by everyone — in favor of positioning the front-facing camera in a small hole in the top left of the screen, a design Samsung brands “Infinity-O.”

Dubbed the hole-punch, the approach debuts on Samsung’s new Galaxy A8s and Huawei’s View 20, which were unveiled hours apart on Tuesday. Huawei was first by just hours, although Samsung has been pretty public with its intention to explore a number of notch alternatives, including the hole-punch — which makes sense given that it has persistently mocked Apple for the feature.

The Samsung Galaxy A8s will debut in China with a hole-punch spot for the camera [Image via Samsung]

Don’t expect to see any hole-punches just yet though.

The Samsung A8s is just for China right now while the View 20 isn’t being fully unveiled until December 26 in China and, for global audiences, January 22 in Paris. We also don’t have a price for either, but they do represent a new trend that could become widely-adopted across phones from other OEMs in 2019.

That’s certainly Samsung’s plan. The Korean firm is rolling the hole-punch out on the A8s, but it plans to expand adoption to other devices and series. The A8s itself is pretty mid-range, but that makes it an ideal candidate to test the potential appeal of a more subtle selfie camera, since Samsung’s market share has fallen in China, where local rivals have pushed it hard. It starts there, but the design could yet be adopted in higher-end devices with global availability.

As for the View 20, Huawei has also been pretty global with its ambitions, except in the U.S., where it hasn’t managed to strike a carrier deal despite reports that it has been close before. The current crisis with its CFO — the daughter of the company’s founder, who was arrested during a trip to Canada — is another stark reminder that Huawei’s business is unlikely to ever get a break in the U.S. market: so expect the View 20 to be a model for Europe and Asia.

Huawei previewed its View 20 with a punch-hole selfie camera lens this week [Image via Huawei]

Samsung hasn’t said much about the hole-punch design, but our sister publication Engadget — which attended the View 20’s early launch event in Hong Kong — said the camera was mounted below the display “like a diamond” to maintain the structure.

“This hole is not a traditional hole,” Huawei told Engadget.

Huawei will no doubt also talk up the fact that its hole is 4.5mm versus an apparent 6mm from Samsung.

Small details aside, one important upcoming trend from these new devices is the birth of the ‘mega’ megapixel smartphone camera.

The View 20 packs a whopping 48-megapixel lens for its rear camera, something we’re going to see a lot more of in 2019. Xiaomi, for one, is preparing a January launch for a device with a 48-megapixel camera, according to a message on Sina Weibo from company co-founder Bin Lin. There’s no word on what camera enclosure that device will have, though.

Xiaomi teased an upcoming smartphone that’ll sport a 48-megapixel camera [Image via Bin Lin/Weibo]

News Source = techcrunch.com

Bright spots in the VR market


Virtual reality is in a public relations slump. Two years ago, the public’s expectations for virtual reality’s potential were at their peak. Many believed (and still continue to believe) that VR would transform the way we connect, interact, and communicate in our personal and professional lives.

Google Trends highlighting search trends related to Virtual Reality over time; the “note” refers to an improvement in Google’s data collection system that occurred in early 2016

It’s easy to understand why this excitement exists once you put on a head mounted display. While there are still a limited number of compelling experiences, after you test some of the early successes in the field, it’s hard not to extrapolate beyond the current state of affairs to a magnificent future where the utility of virtual reality technology is pervasive.

However, many problems still exist. The all-in cost for state of the art headsets is still out of reach for the mass market. Most ‘high-quality’ virtual reality experiences still require users to be tethered to their desktops. The setup experience for mass market users is lathered in friction. When it comes down to it, the holistic VR experience is a non-starter for most people. We are effectively in what Gartner refers to as the “trough of disillusionment.”

Gartner’s hype cycle for “Human-Machine Interface” in 2018 places many related VR related fields (e.g., Mixed Reality, AR, HMDs, etc.) in the “Trough of Disillusionment”

Yet, the virtual reality market has continued its slow march to mass adoption, and there are tangible indicators that suggest we could be nearing an inflection point.

A shift towards sustainable hardware growth

What you do and do not consider a virtual reality display can dramatically impact your view on the state of the VR hardware industry. Head-mounted displays (HMDs) can be categorized in three different ways:

  • Screenless viewers — affordable devices that turn smartphones into a VR experience (e.g., Google Cardboard, Samsung Gear VR, etc.)
  • Standalone HMDs — devices that are not connected to a computer and can independently run content (e.g., Oculus Go, Lenovo Mirage Solo, etc.)
  • Tethered HMDs — devices that are connected to a desktop computer in order to run content (e.g., HTC Vive, Oculus Rift, etc.)

2018 has seen disappointing progress in aggregate headset growth. The overall market is forecasted to ship 8.9M headsets in 2018, up from roughly 8.3M in 2017, according to IDC. On the surface, those numbers hardly describe a market at its inflection point.

However, most of the decline in growth rate can be attributed to two factors. First, screenless viewers have seen a significant decline in shipments as device manufacturers have stopped shipping them alongside smartphones. In the second quarter of 2018, 409K screenless viewers were shipped compared to approximately 1M in the second quarter of 2017. Second, tethered VR headsets have also declined as manufacturers have slowed down the pricing discounts that acted as a steroid to sales growth in 2017.
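Those shipment figures are easier to interpret as year-over-year growth rates. A quick sketch, using only the IDC numbers quoted above:

```python
def yoy_growth(current, prior):
    """Year-over-year change as a percentage."""
    return 100.0 * (current - prior) / prior

# Overall headset shipments (millions): 2018 forecast vs. 2017.
print(round(yoy_growth(8.9, 8.3), 1))    # → 7.2

# Screenless viewer shipments (millions): Q2 2018 vs. Q2 2017.
print(round(yoy_growth(0.409, 1.0), 1))  # → -59.1
```

In other words, the overall market still grew about 7 percent, but the cheapest category shrank by nearly 60 percent, dragging the aggregate figure down.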

Looking at the market for standalone HMDs, however, reveals a more promising figure. Standalone VR headsets grew 417% due to the global availability of the Oculus Go and Xiaomi Mi VR. Over time, these headsets are going to be the driver of the VR market as they offer significant advantages compared to tethered headsets.

The shift from tethered to standalone VR headsets is significant. It represents a paradigm shift within the immersive ecosystem, where developers have a truly mobile platform that is powerful enough to enable compelling user experiences.

IDC forecasts for AR/VR headset market share by form factor, 2018–2022

A premium market segment

There are a few names that come to mind when thinking about products that are available for purchase in the VR market: Samsung, Facebook (Oculus), HTC, and PlayStation. A plethora of new products from these marquee names — and products from new companies entering the market — are opening the category for a new customer segment.

For the past few years, the market effectively had two segments. The first was a “mass market” segment with well-known devices such as the Google Cardboard and the Samsung Gear, which typically sold for under $100 and offered severely constrained experiences to consumers. The second segment was a “pro market” with a few notable devices, such as the HTC Vive, that required absurdly powerful computing rigs to operate, but offered consumers more compelling, immersive experiences.

It’s possible that this new emerging segment will dramatically open up the total addressable VR market. This “premium” market segment offers product alternatives that are somewhat more expensive than the mass market, but are significantly differentiated in the potential experiences that can be offered (and with much less friction than the “pro market”).

The Oculus Go, the Xiaomi Mi VR, and the Lenovo Mirage Solo are the most notable products in this segment. They are the fastest growing devices in this segment, and represent a new wave of products that will continue to roll out. This segment could be the tipping point for when we move from the early adopters to the early majority in the VR product adoption curve.

Oculus, moreover, recently announced that it’ll be shipping a new headset called Quest this spring, which will sell for $399 and will be the most powerful example of a premium device to date. The all-in price range of ~$200–400 places these devices in a segment consumers are already conditioned to pay (think iPads, gaming consoles, etc.), and they offer differentiated experiences primarily attributable to the fact that they are standalone devices.


Agtech startup Imago AI is using computer vision to boost crop yields


Presenting onstage today in the 2018 TC Disrupt Berlin Battlefield is Indian agtech startup Imago AI, which is applying AI to help feed the world’s growing population by increasing crop yields and reducing food waste. As startup missions go, it’s an impressively ambitious one.

The team, which is based out of Gurgaon near New Delhi, is using computer vision and machine learning technology to fully automate the laborious task of measuring crop output and quality — speeding up what can be a very manual and time-consuming process to quantify plant traits, often involving tools like calipers and weighing scales, toward the goal of developing higher-yielding, more disease-resistant crop varieties.

Currently they say it can take seed companies between six and eight years to develop a new seed variety. So anything that increases efficiency stands to be a major boon.

And they claim their technology can reduce the time it takes to measure crop traits by up to 75 percent.

In the case of one pilot, they say a client had previously been taking two days to manually measure the grades of their crops using traditional methods like scales. “Now using this image-based AI system they’re able to do it in just 30 to 40 minutes,” says co-founder Abhishek Goyal.

Using AI-based image processing technology, they can also, crucially, capture more data points than the human eye can (or easily can), because their algorithms can measure and assess finer-grained phenotypic differences than a person might pick up on, or be easily able to quantify, judging by eye alone.

“Some of the phenotypic traits they are not possible to identify manually,” says co-founder Shweta Gupta. “Maybe very tedious or for whatever all these laborious reasons. So now with this AI-enabled [process] we are now able to capture more phenotypic traits.

“So more coverage of phenotypic traits… and with this more coverage we are having more scope to select the next cycle of this seed. So this further improves the seed quality in the longer run.”

The wordy phrase they use to describe what their technology delivers is: “High throughput precision phenotyping.”

Or, put another way, they’re using AI to data-mine the quality parameters of crops.

“These quality parameters are very critical to these seed companies,” says Gupta. “Plant breeding is a very costly and very complex process… in terms of human resource and time these seed companies need to deploy.

“The research [on the kind of rice you are eating now] has been done in the previous seven to eight years. It’s a complete cycle… chain of continuous development to finally come up with a variety which is appropriate to launch in the market.”

But there’s more. The overarching vision is not only that AI will help seed companies make key decisions to select for higher-quality seed that can deliver higher-yielding crops, while also speeding up that (slow) process. Ultimately their hope is that the data generated by applying AI to automate phenotypic measurements of crops will also be able to yield highly valuable predictive insights.

Here, if they can establish a correlation between geotagged phenotypic measurements and the plants’ genotypic data (data which the seed giants they’re targeting would already hold), the AI-enabled data-capture method could also steer farmers toward the best crop variety to use in a particular location and climate condition — purely based on insights triangulated and unlocked from the data they’re capturing.

One current approach in agriculture to selecting the best crop for a particular location/environment can involve using genetic engineering, though that technology has attracted major controversy when applied to foodstuffs.

Imago AI hopes to arrive at a similar outcome via an entirely different technology route, based on data and seed selection. And, well, AI’s uniform eye informing key agriculture decisions.

“Once we are able to establish this sort of relation this is very helpful for these companies and this can further reduce their total seed production time from six to eight years to very less number of years,” says Goyal. “So this sort of correlation we are trying to establish. But for that initially we need to complete very accurate phenotypic data.”

“Once we have enough data we will establish the correlation between phenotypic data and genotypic data and what will happen after establishing this correlation we’ll be able to predict for these companies that, with your genomics data, and with the environmental conditions, and we’ll predict phenotypic data for you,” adds Gupta.

“That will be highly, highly valuable to them because this will help them in reducing their time resources in terms of this breeding and phenotyping process.”

“Maybe then they won’t really have to actually do a field trial,” suggests Goyal. “For some of the traits they don’t really need to do a field trial and then check what is going to be that particular trait if we are able to predict with a very high accuracy if this is the genomics and this is the environment, then this is going to be the phenotype.”

So — in plainer language — the technology could suggest the best seed variety for a particular place and climate, based on a finer-grained understanding of the underlying traits.

In the case of disease-resistant plant strains it could potentially even help reduce the amount of pesticides farmers use, say, if the selected crops are naturally more resilient to disease.

While, on the seed generation front, Gupta suggests their approach could shrink the production time frame — from up to eight years to “maybe three or four.”

“That’s the amount of time-saving we are talking about,” she adds, emphasizing the really big promise of AI-enabled phenotyping is a higher amount of food production in significantly less time.

As well as measuring crop traits, they’re also using computer vision and machine learning algorithms to identify crop diseases and measure with greater precision how extensively a particular plant has been affected.

This is another key data point if your goal is to help select for phenotypic traits associated with better natural resistance to disease, with the founders noting that around 40 percent of the world’s crop load is lost (and so wasted) as a result of disease.

And, again, measuring how diseased a plant is can be a judgement call for the human eye — resulting in data of varying accuracy. So by automating disease capture using AI-based image analysis the recorded data becomes more uniformly consistent, thereby allowing for better quality benchmarking to feed into seed selection decisions, boosting the entire hybrid production cycle.

Sample image processed by Imago AI showing the proportion of a crop affected by disease

In terms of where they are now, the bootstrapped, nearly year-old startup is working off data from a number of trials with seed companies — including a recurring paying client they can name (DuPont Pioneer), and several paid trials with other seed firms they can’t (because they remain under NDA).

Trials have taken place in India and the U.S. so far, they tell TechCrunch.

“We don’t really need to pilot our tech everywhere. And these are global [seed] companies, present in 30, 40 countries,” adds Goyal, arguing their approach naturally scales. “They test our technology at a single country and then it’s very easy to implement it at other locations.”

Their imaging software does not depend on any proprietary camera hardware. Data can be captured with tablets or smartphones, or even from a camera on a drone or via satellite imagery, depending on the application.

Although for measuring crop traits like length they do need some reference point to be associated with the image.

“That can be achieved by either fixing the distance of object from the camera or by placing a reference object in the image. We use both the methods, as per convenience of the user,” they note.
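Imago AI hasn’t published its pipeline, but the pixel-to-real-unit conversion the founders describe amounts to a simple scale factor derived from a reference object of known size. A minimal illustrative sketch, with hypothetical numbers (a 25 mm coin and a grain of rice) standing in for real calibration data:

```python
def pixels_per_mm(ref_pixel_width: float, ref_real_width_mm: float) -> float:
    """Scale factor (pixels per millimetre) derived from a reference
    object of known physical size placed in the frame."""
    return ref_pixel_width / ref_real_width_mm

def measure_mm(pixel_length: float, scale: float) -> float:
    """Convert a length measured in pixels to millimetres."""
    return pixel_length / scale

# Hypothetical calibration: a 25 mm coin spans 100 pixels → 4 px/mm.
scale = pixels_per_mm(100, 25)

# A grain measured at 52 pixels in the same image is then 13 mm long.
print(measure_mm(52, scale))  # → 13.0
```

Fixing the camera-to-object distance works the same way: the scale factor is simply calibrated once per rig instead of per image.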

While some current phenotyping methods are very manual, there are also other image-processing applications in the market targeting the agriculture sector.

But Imago AI’s founders argue these rival software products are only partially automated — “so a lot of manual input is required,” whereas they couch their approach as fully automated, with just one initial manual step of selecting the crop to be quantified by their AI’s eye.

Another advantage they flag up versus other players is that their approach is entirely non-destructive. This means crop samples do not need to be plucked and taken away to be photographed in a lab, for example. Rather, pictures of crops can be snapped in situ in the field, with measurements and assessments still — they claim — accurately extracted by algorithms which intelligently filter out background noise.

“In the pilots that we have done with companies, they compared our results with the manual measuring results and we have achieved more than 99 percent accuracy,” is Goyal’s claim.

While, for quantifying disease spread, he points out it’s just not manually possible to make exact measurements. “In manual measurement, an expert is only able to provide a certain percentage range of disease severity for an image (e.g. 25–40 percent), but using our software they can accurately pinpoint the exact percentage (e.g. 32.23 percent),” he adds.
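This isn’t Imago AI’s actual algorithm, but the idea behind an exact severity figure is straightforward once a model has classified each plant pixel as healthy or diseased: the severity is just the diseased fraction of the segmented plant area. A toy sketch with a hypothetical 4×4 segmentation mask:

```python
def disease_severity(mask):
    """mask: 2D grid where 1 = pixel classified as diseased,
    0 = healthy plant pixel. Returns the percentage of the
    plant area classified as diseased."""
    flat = [p for row in mask for p in row]
    if not flat:
        return 0.0
    return 100.0 * sum(flat) / len(flat)

# Toy mask: 5 of 16 plant pixels flagged as diseased.
mask = [
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]
print(disease_severity(mask))  # → 31.25
```

A human grader bins the same image into a coarse range (say, 25–40 percent); the per-pixel count is what makes figures like “32.23 percent” possible.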

They are also providing additional support for seed researchers — by offering a range of mathematical tools with their software to support analysis of the phenotypic data, with results that can be easily exported as an Excel file.

“Initially we also didn’t have this much knowledge about phenotyping, so we interviewed around 50 researchers from technical universities, from these seed input companies and interacted with farmers — then we understood what exactly is the pain-point and from there these use cases came up,” they add, noting that they used WhatsApp groups to gather intel from local farmers.

While seed companies are the initial target customers, they see applications for their visual approach for optimizing quality assessment in the food industry too — saying they are looking into using computer vision and hyper-spectral imaging data to do things like identify foreign material or adulteration in production line foodstuffs.

“Because in food companies a lot of food is wasted on their production lines,” explains Gupta. “So that is where we see our technology really helps — reducing that sort of wastage.”

“Basically any visual parameter which needs to be measured that can be done through our technology,” adds Goyal.

They plan to explore potential applications in the food industry over the next 12 months, while focusing on building out their trials and implementations with seed giants. Their target is to have 40 to 50 companies using their AI system globally within a year’s time, they add.

While the business is revenue-generating now — and “fully self-enabled” as they put it — they are also looking to take in some strategic investment.

“Right now we are in touch with a few investors,” confirms Goyal. “We are looking for strategic investors who have access to agriculture industry or maybe food industry… but at present haven’t raised any amount.”


Google Fi now officially supports most Android devices and iPhones


Google is making a major move to expand the availability of its Fi wireless service.

It’s been a few years since Google launched Project Fi with the promise of doing things a bit differently than the large carriers. Because it could switch between the cell networks of multiple providers to give you the best signal, the service only ever officially supported a select number of handsets. You could always trick it by activating the service on a supported phone and then moving your SIM card to another (including an iPhone), but that was never supported.

That’s changing today, though. The company is opening up Fi — and renaming it to Google Fi — and officially expanding device support to most popular Android phones, as well as iPhones. Supported Android phones include devices from Samsung, LG, Motorola and OnePlus. iPhone support is currently in beta, and there are a few extra steps to set it up, but the Fi iOS app should now be available in the App Store.

One thing you might not get with many of the now-supported phones is the full Fi experience, with network switching and access to Google’s enhanced network features, including its VPN. For that, you’ll still need a Pixel phone, the Moto G6 or any other device that you can buy directly in the Fi store.

Fi on all phones comes with the usual features, like bill protection, free high-speed international roaming and support for group plans.

To sweeten the deal, Google is also launching a somewhat extraordinary promotion today: If you open a new Fi account — or if you’re an existing user — you can buy any phone in the Fi shop today and get your money back in the form of a travel gift card that you can use for a flight with Delta or Southwest, or lodging with Airbnb and Hotels.com. There’s some fine print, of course (you need to keep your account active for a few months, etc.), but if you were looking at getting Fi anyway, like to travel and want to get a Pixel 3 XL, that’s not a bad deal at all.

The fine print is below:

Travel on Fi with Any Device Purchase Promotion Terms (Google Fi)

Limited time, 24-hour offer applies to any qualifying device purchased from fi.google.com from 11/28/18 12:00 AM PT through 11/28/18 11:59 PM PT, or while supplies last. When you purchase a qualifying device on fi.google.com, you can redeem a travel gift card in the amount you paid for the device, excluding taxes (details below).

To qualify for this promotion, a device must be activated within 15 days of device shipment and remain active for 60 consecutive days within 75 days of device shipment. The device must be activated within the same plan that was used to purchase the device. Activation must be for full service (i.e., activation does not apply to a data-only SIM).

This offer is available for new Google Fi customers as of 11/28/18 12:00 AM PT and existing, active Google Fi customers. If the customer is new to Google Fi, the customer must transfer (port-in) their current personal number over to Google Fi during sign up. The number being transferred must be currently active and have been active with the previous carrier and the customer since 8/28/18 12:00 AM PT.

After the terms have been satisfied, the customer will receive an email from Google Fi (around 75 – 90 days after device activation) with instructions on how to obtain a gift card from Tango subject to Tango’s terms and conditions. The user can redeem gift card amounts with select travel partners: Airbnb, Delta Airlines, Hotels.com, and Southwest Airlines. Gift cards may also be subject to the terms of the travel partners.

If Fi service is paused for more than 7 days or cancelled within 120 days of activation, the value of the gift card will be charged to your Google Payments account to match the purchased price of the device. Limit one per person. This offer is only available for U.S. residents ages 18 and older, and requires Google Payments and Google Fi accounts. Unless otherwise stated, this offer cannot be combined with other offers. Offer and gift card redemption are not transferable, and are not valid for cash or cash equivalent. Void where prohibited.


Popular Chinese selfie app Meitu now includes 3D editing


You’ve probably had the experience of posing awkwardly for a photo while everyone else looks great. Now China’s top photo-editing firm Meitu has a solution that helps you resist the urge to trash that photo.

Meitu’s namesake app, which claims over 100 million monthly active users as of August, recently launched a feature that lets users virtually rotate their faces up, down, to the left, or to the right. There’s already a plethora of editing apps out there that allow people to polish their shots like a pro, but Meitu wants to take retouching to another level.

“Traditional image processing technology can only perform plane stretching in two dimensions, and the image has no depth information and therefore is unable to truly reflect the changes in the posture of real life,” says a spokesperson for the company.

The feature, called “3D Reshape,” takes hints from a static portrait and applies face recognition and reconstruction technologies to generate 3D information of the user’s face. In other words, it simulates how the user’s head tilts or rotates in real life, yielding results that the firm claims are more “natural” and “realistic”.

The process is a bit eerie, but the result looks satisfying. / Credit: Meitu

The feature also works for group photos, so users can choose to fix a particular person’s unflattering pose. The Chinese company isn’t the only photo app maker that’s come up with 3D editing: Google’s Snapseed has a similar feature.

Meitu goes all out to perfect portraits by maintaining an in-house R&D team of 200 staff. For the 3D project, the researchers collected 18 unique facial expressions from 1,200 people who were primarily Chinese and aged between 12 and 60.

Despite dominating its space, Meitu has had to look beyond photo editing for monetization since its early days. For the six months ended June 30, the firm generated 72 percent of its revenues from selling smartphones designed to take outstanding selfies, while internet-related services brought in the rest.

Nonetheless, Meitu has seen its hardware revenues drop as smartphone shipments slow in China and competition heats up. By contrast, internet-based revenues jumped 132 percent year-over-year thanks to growth in advertising and “value-added” services, the latter referring to sales of virtual items on Meitu’s video streaming app Meipai.

Meitu’s trove of users may have other practical uses. In July, the firm shelled out $30 million for an undisclosed stake in Gengmei, a social media platform that connects customers with plastic surgeons who offer them advice. It’s not hard to imagine a future where Meitu links its beauty-seeking users to not only virtual tools but also long-lasting, real-life means.

