Timesdelhi.com

August 18, 2018
Category archive

Hardware

The Automatica automates pour-over coffee in a charming and totally unnecessary way


Most mornings, after sifting through the night’s mail haul and skimming the headlines, I make myself a cup of coffee. I use a simple pour-over cone and paper filters, and (in what is perhaps my most tedious Seattleite affectation), I grind the beans by hand. I like the manual aspect of it all. Which is why this robotic pour-over machine is to me so perverse… and so tempting.

Called the Automatica, this gadget, currently raising funds on Kickstarter but seemingly complete as far as development and testing, is basically a way to do pour-over coffee without holding the kettle yourself.

You fill the kettle and place your mug and cone on the stand in front of it. The water is brought to a boil and the kettle tips automatically. Then the whole mug-and-cone portion spins slowly, distributing the water around the grounds, stopping after 11 ounces has been distributed over the correct duration. You can use whatever cone and mug you want as long as they’re about the right size.

Of course, the whole point of pour-over coffee is that it’s simple: you can do it at home, while on vacation, while hiking, or indeed at a coffee shop with a bare minimum of apparatus. All you need is the coffee beans, the cone, a paper filter — although some cones omit even that — and of course a receptacle for the product. (It’s not the simplest — that’d be Turkish, but that’s coffee for werewolves.)

Why should anyone want to disturb this simplicity? Well, the same reason we have the other 20 methods for making coffee: convenience. And in truth, pour-over is already automated in the form of drip machines. So the obvious next question is, why this dog and pony show of an open-air coffee bot?

Aesthetics! Nothing wrong with that. What goes on in the obscure darkness of a drip machine? No one knows. But this – this you can watch, audit, understand. Even if the machinery is complex, the result is simple: hot water swirls gently through the grounds. And although it’s fundamentally a bit absurd, it is a good-looking machine, with wood and brass accents and a tasteful kettle shape. (I do love a tasteful kettle.)

The creators say the machine is built to last “generations,” a promise which must of course be taken with a grain of salt. Anything with electronics has the potential to short out, to develop a bug, to be troubled by humidity or water leaks. The heating element may fail. The motor might stutter or a hinge catch.

But all that is true of most coffee machines, and unlike those this one appears to be made with care and high-quality materials. The cracking and warping you can expect in thin molded plastic won’t happen to this thing, and if you take care of it, it should last at least several years.

And it better, for the minimum pledge price that gets you a machine: $450. That’s quite a chunk of change. But like audiophiles, coffee people are kind of suckers for a nice piece of equipment.

There is of course the standard crowdfunding caveat emptor; this isn’t a pre-order but a pledge to back this interesting hardware startup, and if it’s anything like the last five or six campaigns I’ve backed, it’ll arrive late after facing unforeseen difficulties with machining, molds, leaks, and so on.

News Source = techcrunch.com

Autonomous retail startup Inokyo’s first store feels like stealing


Inokyo wants to be the indie Amazon Go. It’s just launched its prototype cashierless autonomous retail store. Cameras track what you grab from shelves, and with a single QR scan of its app on your way in and out of the store, you’re charged for what you got.

Inokyo’s first store is now open on Mountain View’s Castro Street selling an array of bougie kombuchas, snacks, protein powders, and bath products. It’s sparse and a bit confusing, but offers a glimpse of what might be a commonplace shopping experience five years from now. You can see for yourself in our demo video below:

“Cashierless stores will have the same level of impact on retail as self-driving cars will have on transportation,” Inokyo co-founder Tony Francis tells me. “This is the future of retail. It’s inevitable that stores will become increasingly autonomous.”

Inokyo (rhymes with Tokyo) is now accepting signups for beta customers who want early access to its Mountain View store. The goal is to collect enough data to dictate the future product array and business model. Inokyo is deciding whether it wants to sell its technology as a service to other retail stores, run its own stores, or work with brands to improve their products’ positioning based on in-store sensor data on customer behavior.

“We knew that building this technology in a lab somewhere wouldn’t yield a successful product,” says Francis. “Our hypothesis here is that whoever ships first, learns in the real world, and iterates the fastest on this technology will be the ones to make these stores ubiquitous.” Inokyo might never grow into a retail giant ready to compete with Amazon and Whole Foods. But its tech could level the playing field, equipping smaller businesses with the tools to keep tech giants from having a monopoly on autonomous shopping experiences.

It’s About What Cashiers Do Instead

“Amazon isn’t as far ahead as we assumed,” Francis remarks. He and his co-founder Rameez Remsudeen took a trip to Seattle to see the Amazon Go store that first traded cashiers for cameras in the US. Still, they realized, “This experience can be magical.” The two had met at Carnegie Mellon through machine learning classes before they went on to apply that knowledge at Instagram and Uber. They decided that if they jumped into autonomous retail soon enough, they could still have a say in shaping its direction.

Next week, Inokyo will graduate from Y Combinator’s accelerator, which provided its initial seed funding. In six weeks during the program, the founders found a retail space on Mountain View’s main drag, studied customer behaviors in traditional stores, built an initial product line, and developed the technology to track what users are taking off the shelves.

Here’s how the Inokyo store works. You download its app and connect a payment method, and you get a QR code that you wave in front of a little sensor as you stroll into the shop. Overhead cameras will scan your body shape and clothing without facial recognition in order to track you as you move around the store. Meanwhile, on-shelf cameras track when products are picked up or put back. Combined, knowing who’s where and what’s grabbed lets it assign the items to your cart. You scan again on your way out, and later you get a receipt detailing the charges.
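The attribution step, matching each shelf pick to whichever tracked shopper is nearest, can be sketched in a few lines. This is a hypothetical reconstruction, not Inokyo's actual code; the `Shopper` class, `assign_pick`, and `handle_putback` are illustrative names under assumed camera-tracking inputs:

```python
from dataclasses import dataclass, field
from math import hypot

@dataclass
class Shopper:
    shopper_id: str              # anonymous track ID from the overhead cameras
    x: float                     # last known floor position (meters)
    y: float
    cart: list = field(default_factory=list)

def assign_pick(shoppers, shelf_x, shelf_y, item):
    """Attribute a shelf-camera pick event to the nearest tracked shopper."""
    nearest = min(shoppers, key=lambda s: hypot(s.x - shelf_x, s.y - shelf_y))
    nearest.cart.append(item)
    return nearest.shopper_id

def handle_putback(shoppers, shopper_id, item):
    """Remove an item from a cart when the shelf camera sees it put back."""
    for s in shoppers:
        if s.shopper_id == shopper_id and item in s.cart:
            s.cart.remove(item)
            return True
    return False
```

With two tracked shoppers, a pick event near one of them lands in that shopper's cart, and a put-back reverses it; the receipt at scan-out would simply total each cart.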

Originally, Inokyo didn’t make you scan on the way out, but it got feedback that customers were scared they were stealing. The scan-out is more about peace of mind than engineering necessity. There is a subversive pleasure to feeling like “well, if Inokyo didn’t catch all the stuff I chose, that’s not my problem.” And if you’re overcharged, there’s an in-app support button for getting a refund.

Inokyo co-founders (from left): Tony Francis and Rameez Remsudeen

Inokyo was accurate in what it charged me despite a few switcharoos I pulled with the products I nabbed. But there were only about three people in the room with me at the time. The real test for these kinds of systems is when a rush of customers floods in and the cameras have to differentiate between multiple similar-looking people. Inokyo will likely need to be over 99 percent accurate to be more of a help than a headache. An autonomous store that constantly over- or undercharges would be more trouble than it’s worth, and patrons would just go to the nearest classic shop.

Just because autonomous retail stores will be cashier-less doesn’t mean they’ll have no staff. To maximize cost-cutting, they could just trust that people won’t loot them. However, Inokyo plans to have someone minding the shop to make sure people scan in the first place and to answer questions about the process. But there’s also an opportunity in reassigning labor from cashiers to concierges who can recommend the best products or find the right fit for the customer. These stores will be judged by the convenience of the holistic experience, not just the tech. At the very least, a single employee might be able to handle restocking, customer support, and store maintenance once freed from cashier duties.

The Amazon Go autonomous retail store in Seattle is equipped with tons of overhead cameras.

While Amazon Go uses cameras in a similar way to Inokyo, it also relies on weight sensors to track items. There are plenty of other companies chasing the cashierless dream. China’s BingoBox has nearly $100 million in funding and has over 300 stores, though they use less sophisticated RFID tags. Fellow Y Combinator startup Standard Cognition has raised $5 million to equip old school stores with autonomous camera-tech. AiFi does the same, but touts that its cameras can detect abnormal behavior that might signal someone is a shoplifter.

The store of the future seems like more and more of a sure thing. The race’s winner will be determined by who builds the most accurate tracking software, easy-to-install hardware, and pleasant overall shopping flow. If this modular technology can cut costs and lines without alienating customers, we could see our local brick-and-mortars adapt quickly. The bigger question than if or even when this future arrives is what it will mean for the millions of workers who make their living running the checkout lane.

News Source = techcrunch.com

VR optics could help old folks keep the world in focus


The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smart glasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. There are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time?

That’s what Padmanaban and colleagues Robert Konrad and Gordon Wetzstein are working on, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way what we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only ten feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

This is an old prototype, but you get the idea.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table 3 feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map.

Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user was looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they change their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.

The whole process of checking the gaze, depth of the selected object, and adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it happens, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.
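The adjustment decision itself reduces to simple optics: an object inside the user's near limit needs roughly 1/d_object − 1/d_limit diopters of extra lens power (distances in meters). Here is a minimal sketch of one pass of that loop; the function names and depth-map format are illustrative assumptions, not the researchers' actual implementation:

```python
def added_power_diopters(object_dist_m, near_limit_m):
    """Extra lens power needed to bring an object inside the user's near
    limit into focus. Distances in meters; result in diopters. An object
    at or beyond the near limit needs no extra power."""
    if object_dist_m >= near_limit_m:
        return 0.0
    return 1.0 / object_dist_m - 1.0 / near_limit_m

def adjust_lenses(gaze_px, depth_map, near_limit_m):
    """One pass of the autofocal loop: look up the scene depth under the
    tracked gaze point and return the lens power to command."""
    x, y = gaze_px
    object_dist = depth_map[y][x]   # meters, from the depth sensor
    return added_power_diopters(object_dist, near_limit_m)
```

For a user who can't focus closer than 0.5 m, a newspaper at 0.25 m would call for about 2 diopters of added power, while gazing across the room leaves the lenses at the user's baseline correction.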

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH. “Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and test for any possible ill effects, glitches, or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.

News Source = techcrunch.com

To fight the scourge of open offices, ROOM sells rooms


Noisy open offices don’t foster collaboration, they kill it, according to a Harvard study that found the less-private floor plan led to a 73 percent drop in face-to-face interaction between employees and a rise in emailing. The problem is plenty of young companies and big corporations have already bought into the open office fad. But a new startup called ROOM is building a prefabricated, self-assembled solution. It’s the Ikea of office phone booths.

The $3,495 ROOM One is a soundproofed, ventilated, powered booth that can be built in new or existing offices to give employees a place to take a video call or get some uninterrupted flow time to focus on work. For comparison, ROOM co-founder Morten Meisner-Jensen says “Most phone booths are $8,000 to $12,000. The cheapest competitor to us is $6,000 — almost twice as much.” Though booths start at $4,500 from TalkBox and $3,995 from Zenbooth, they tack on $1,250 and $1,650 for shipping while ROOM ships for free. They’re all dividing the market of dividing offices.

The idea might seem simple, but the booths could save businesses a ton of money on lost productivity, recruitment, and retention if they keep employees from going crazy amid sales-call cacophony. Less than a year after launch, ROOM has hit a $10 million revenue run rate thanks to 200 clients ranging from startups to Salesforce, Nike, NASA, and JP Morgan. That’s attracted a $2 million seed round from Slow Ventures that adds to angel funding from Flexport CEO Ryan Petersen. “I am really excited about it since it is probably the largest revenue-generating company Slow has seen at the time of our initial seed-stage investment,” says partner Kevin Colleran.

“It’s not called ROOM because we build rooms,” Meisner-Jensen tells me. “It’s called ROOM because we want to make room for people, make room for privacy, and make room for a better work environment.”

Phone Booths, Not Sweatboxes

You might be asking yourself, enterprising reader, why you couldn’t just go to Home Depot, buy some supplies, and build your own in-office phone booth for way less than $3,500. Well, ROOM’s co-founders tried that. The result was…moist.

Meisner-Jensen has design experience from the Danish digital agency Revolt, which he started before co-founding digital book service Mofibo and selling it to Storytel. “In my old job we had to go outside and take the calls, and I’m from Copenhagen so that’s a pretty cold experience half the year.” His co-founder Brian Chen started Y Combinator-backed smart suitcase company Bluesmart, where he was VP of operations. They figured they could attack the office layout issue with hammers and saws. I mean, they do look like superhero alter-egos.

Room co-founders (from left): Brian Chen and Morten Meisner-Jensen

“To combat the issues I myself would personally encounter with open offices, as well as colleagues, we tried to build a private ‘phone booth’ ourselves” says Meisner-Jensen. “We didn’t quite understand the specifics of air ventilation or acoustics at the time, so the booth got quite warm – warm enough that we coined it ‘the sweatbox.’”

With ROOM, they got serious about the product. The 10-square-foot ROOM One booth ships flat and can be assembled in under 30 minutes by two people with a hex wrench. All it needs is an outlet to plug into to power its light and ventilation fan. Each is built from 1,088 recycled plastic bottles for noise cancellation, so you’re not supposed to hear anything from outside. The whole box is 100 percent recyclable, plus it can be torn down and rebuilt if your startup implodes and you’re being evicted from your office.

The ROOM One features a bar-height desk with outlets and a magnetic bulletin board behind it, though you’ll have to provide your own stool of choice. It’s actually designed not to be so comfy that you end up napping inside, which doesn’t seem like it’d be a problem in this somewhat cramped spot. “To solve the problem with noise at scale you want to provide people with space to take a call but not camp out all day,” Meisner-Jensen notes.

Booths by Zenbooth, Cubicall, and TalkBox (from left)

A Place To Get Into Flow

Couldn’t office managers just buy noise-cancelling headphones for everyone? “It feels claustrophobic to me,” he laughs, but then outlines why a new workplace trend requires more than headphones. “People are doing video calls and virtual meetings much, much more. You can’t have all these people walking by you and looking at your screen. [A booth is] also giving you your own space to do your own work, which I don’t think you’d get from a pair of Bose. I think it has to be a physical space.”

But with plenty of companies able to construct physical spaces, it will be a challenge for ROOM to convey the subtleties of build quality that warrant its price. “The biggest risk for ROOM right now are copycats,” Meisner-Jensen admits. “Someone entering our space claiming to do what we’re doing better but cheaper.” Alternatively, ROOM could lock in customers by offering a range of office furniture products. The co-founder hinted at future products, saying ROOM is already receiving demand for bigger multi-person prefab conference rooms and creative room-divider solutions.

The importance of privacy goes beyond improved productivity when workers are alone. If they’re exhausted from overstimulation in a chaotic open office, they’ll have less energy for purposeful collaboration when the time comes. The bustle could also make them reluctant to socialize in off-hours, which could lead them to burn out and change jobs faster. Tech companies in particular are in a constant war for talent, and ROOM Ones could be perceived as a bigger perk than free snacks or a ping-pong table that only makes the office louder.

“I don’t think the solution is to go back to a world of cubicles and corner offices,” Meisner-Jensen concludes. It could take another decade for office architects to correct the overenthusiasm for open offices despite the research suggesting their harm. For now, ROOM’s co-founder is concentrating on “solving the issue of noise at scale” by asking “How do we make the current workspaces work in the best way possible?”

News Source = techcrunch.com

This robot maintains tender, unnerving eye contact


Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot, a creation of Takayuki Todo, is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression.

It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.
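That jitter from noisy face data is exactly the kind of thing a simple exponential smoothing filter tames: blend each new measurement into the running state rather than jumping to it, so one bad frame can't make the head twitch. A minimal sketch, purely illustrative and not Todo's actual control code:

```python
class SmoothedMirror:
    """Exponentially smooths noisy per-frame face angles so a mirroring
    robot head tracks the viewer without vibrating on jittery detections."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # blend factor in (0, 1]; lower = smoother but laggier
        self.state = None    # last smoothed angles, keyed by joint name

    def update(self, measured):
        """measured: dict of joint name -> angle (degrees) from the face tracker.
        Returns the smoothed angles to send to the servos."""
        if self.state is None:
            self.state = dict(measured)          # first frame: adopt as-is
        else:
            for joint, angle in measured.items():
                # move a fraction of the way toward the new measurement
                self.state[joint] += self.alpha * (angle - self.state[joint])
        return self.state
```

With `alpha=0.5`, a sudden eyebrow jump from 0° to 10° reaches the servos as 5°, then 7.5°, and so on, converging on the true pose over a few frames while single-frame detection spikes are halved away.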

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine.

That’s largely due to the delicate, childlike, neutral sculpting of the face and highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one I guess we’ll find out together.

News Source = techcrunch.com
