Timesdelhi.com

July 18, 2018

Apple is rebuilding Maps from the ground up


I’m not sure if you’re aware, but the launch of Apple Maps went poorly. After a rough first impression, an apology from the CEO, several years of patching holes with data partnerships and some glimmers of light with long-awaited transit directions and improvements in business, parking and place data, Apple Maps still isn’t where it needs to be to count as a world-class service.

Maps needs fixing.

Apple, it turns out, is aware of this, so it’s rebuilding the maps part of Maps.

It’s doing this by using first-party data gathered by iPhones with a privacy-first methodology and its own fleet of cars packed with sensors and cameras. The new product will launch in San Francisco and the Bay Area with the next iOS 12 Beta and will cover Northern California by fall.

Every version of iOS will get the updated maps eventually and they will be more responsive to changes in roadways and construction, more visually rich depending on the specific context they’re viewed in and feature more detailed ground cover, foliage, pools, pedestrian pathways and more.

This is nothing less than a full reset of Maps, and it’s been four years in the making: that’s when Apple began developing its new data-gathering systems. Eventually, Apple will no longer rely on third-party data to provide the basis for its maps, which has been one of the product’s major pitfalls from the beginning.

“Since we introduced this six years ago — we won’t rehash all the issues we’ve had when we introduced it — we’ve done a huge investment in getting the map up to par,” says Apple SVP Eddy Cue, who now owns Maps, in an interview last week. “When we launched, a lot of it was all about directions and getting to a certain place. Finding the place and getting directions to that place. We’ve done a huge investment of making millions of changes, adding millions of locations, updating the map and changing the map more frequently. All of those things over the past six years.”

But, Cue says, Apple has room to improve on the quality of Maps, something that most users would agree on, even with recent advancements.

“We wanted to take this to the next level,” says Cue. “We have been working on trying to create what we hope is going to be the best map app in the world, taking it to the next step. That is building all of our own map data from the ground up.”

In addition to Cue, I spoke to Apple VP Patrice Gautier and over a dozen Apple Maps team members at its mapping headquarters in California this week about its efforts to re-build Maps, and to do it in a way that aligned with Apple’s very public stance on user privacy.

If, like me, you’re wondering whether Apple thought of building its own maps from scratch before it launched Maps, the answer is yes. At the time, there was a choice to be made about whether or not it wanted to be in the business of Maps at all. Given that the future of mobile devices was becoming very clear, it knew that mapping would be at the core of nearly every aspect of its devices from photos to directions to location services provided to apps. Decision made, Apple plowed ahead, building a product that relied on a patchwork of data from partners like TomTom, OpenStreetMap and other geo data brokers. The result was underwhelming.

Almost immediately after Apple launched Maps, it realized that it was going to need help and it signed on a bunch of additional data providers to fill the gaps in location, base map, point-of-interest and business data.

It wasn’t enough.

“We decided to do this just over four years ago. We said, ‘Where do we want to take Maps? What are the things that we want to do in Maps?’ We realized that, given what we wanted to do and where we wanted to take it, we needed to do this ourselves,” says Cue.

Because Maps are so core to so many functions, success wasn’t tied to just one function. Maps needed to be great at transit, driving and walking — but also as a utility used by apps for location services and other functions.

Cue says that Apple needed to own all of the data that goes into making a map, and to control it from a quality as well as a privacy perspective.

There’s also the matter of corrections, updates and changes entering a long loop of submission, validation and update when you’re dealing with external partners. The Maps team would have to be able to correct roads, pathways and other features in days or less, not months. Not to mention the potential competitive advantage of building and updating traffic data from hundreds of millions of iPhones, rather than relying on partner data.

Cue points to the proliferation of devices running iOS, now numbering in the millions, as a deciding factor to shift its process.

“We felt like because the shift to devices had happened — building a map today in the way that we were traditionally doing it, the way that it was being done — we could improve things significantly, and improve them in different ways,” he says. “One is more accuracy. Two is being able to update the map faster based on the data and the things that we’re seeing, as opposed to driving again or getting the information where the customer’s proactively telling us. What if we could actually see it before all of those things?”

I query him on the rapidity of Maps updates, and whether this new map philosophy means faster changes for users.

“The truth is that Maps needs to be [updated more], and even are today,” says Cue. “We’ll be doing this even more with our new maps, [with] the ability to change the map real-time and often. We do that every day today. This is expanding us to allow us to do it across everything in the map. Today, there’s certain things that take longer to change.

“For example, a road network is something that takes a much longer time to change currently. In the new map infrastructure, we can change that relatively quickly. If a new road opens up, immediately we can see that and make that change very, very quickly around it. It’s much, much more rapid to do changes in the new map environment.”

So a new effort was created to begin generating Apple’s own base maps, the lowest building block of any really good mapping system. After that, Apple would begin layering on living location data, high-resolution satellite imagery and brand-new, intensely high-resolution image data gathered by its ground cars, until it had what it felt was a ‘best in class’ mapping product.

There is really only one big company on Earth that owns an entire map stack from the ground up: Google.

Apple knew it needed to be the other one. Enter the vans.

Apple vans spotted

Though the overall project started earlier, the first glimpse most folks had of Apple’s renewed efforts to build the best Maps product was the vans that started appearing on the roads in 2015 with ‘Apple Maps’ signs on the side. Capped with sensors and cameras, these vans popped up in various cities and sparked rampant discussion and speculation.

The new Apple Maps will be the first time the data collected by these vans is actually used to construct and inform its maps. This is their coming out party.

Some people have commented that Apple’s rigs look more robust than the simple GPS + Camera arrangements on other mapping vehicles — going so far as to say they look more along the lines of something that could be used in autonomous vehicle training.

Apple isn’t commenting on autonomous vehicles, but there’s a reason the arrays look more advanced: they are.

Earlier this week I took a ride in one of the vans as it ran a sample route to gather the kind of data that would go into building the new maps. Here’s what’s inside.

In addition to a beefed-up GPS rig on the roof, four LiDAR arrays mounted at the corners and eight cameras shooting overlapping high-resolution images, there’s also the standard physical measuring tool attached to a rear wheel that allows for precise tracking of distance and image capture. In the rear, there is a surprising lack of bulky equipment. Instead, it’s a straightforward Mac Pro bolted to the floor, attached to an array of solid-state drives for storage. A single USB cable routes up to the dashboard, where the actual mapping-capture software runs on an iPad.

While mapping, a driver…drives, while an operator takes care of the route, ensuring that a coverage area that has been assigned is fully driven and monitoring image capture. Each drive captures thousands of images as well as a full point cloud (a 3D map of space defined by dots that represent surfaces) and GPS data. I later got to view the raw data presented in 3D and it absolutely looks like the quality of data you would need to begin training autonomous vehicles.

More on why Apple needs this level of data detail later.

When the images and data are captured, they are encrypted on the fly and recorded onto the SSDs. Once full, the SSDs are pulled out, replaced and packed into a case that is delivered to Apple’s data center, where a suite of software scrubs private information like faces and license plates from the images. From the moment of capture to the moment they’re sanitized, the images are encrypted, with one key held in the van and the other in the data center. Technicians and software further down the mapping pipeline never see unsanitized data.

This is just one element of Apple’s focus on the privacy of the data it is utilizing in New Maps.

Probe data and Privacy

Throughout every conversation I have with any member of the team throughout the day, privacy is brought up and emphasized. This is obviously by design, as Apple wants to impress upon me as a journalist that it’s taking this very seriously. But it doesn’t change the fact that privacy is evidently built in from the ground up, and I could not find a false note in any of the technical claims or the conversations I had.

Indeed, from the data-security folks to the people whose job it is to actually make the maps work well, the constant refrain is that Apple does not feel it is being held back in any way by declining to hoover up, store and parse every piece of customer data it can.

The consistent message is that the team feels it can deliver a high quality navigation, location and mapping product without the directly personal data used by other platforms.

“We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”

The segments that he is referring to are sliced out of any given person’s navigation session. Neither the beginning nor the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data or requests sent to Apple, and it augments the ‘ground truth’ data gathered by its own mapping vehicles with this ‘probe data’ sent back from iPhones.

Because only random segments of any person’s drive are ever sent, and that data is completely anonymized, there is no way to tell whether any set of segments came from a single individual’s trip. The local system signs the IDs, and only it knows which device an ID refers to. Apple is working very hard here to not know anything about its users. This kind of privacy can’t be bolted on at the end; it has to be woven in at the ground level.
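The scheme described here (trimming trip endpoints, sampling random segments, tagging each with a throwaway identifier) can be sketched roughly in code. This is an illustrative Python sketch, not Apple’s actual implementation; the segment length, the sampling rate and every function and field name are assumptions.

```python
import secrets

def anonymize_trip(points, segment_len=20, keep_ratio=0.5):
    """Illustrative sketch: split a GPS trace into short segments,
    drop the start and end of the trip, keep only a random subset,
    and tag each kept segment with a fresh rotating identifier so
    no two segments can be linked to the same device or trip."""
    # Never transmit the beginning or the end of the trip.
    interior = points[segment_len:-segment_len]
    segments = [interior[i:i + segment_len]
                for i in range(0, len(interior), segment_len)]
    reports = []
    for seg in segments:
        if len(seg) < segment_len:
            continue  # drop partial tail segments
        if secrets.randbelow(100) >= int(keep_ratio * 100):
            continue  # only a random subset is ever sent
        reports.append({
            "id": secrets.token_hex(8),  # rotating, per-segment identifier
            "points": seg,
        })
    return reports
```

Because each report carries its own single-use identifier and the trip’s endpoints never leave the device, a server receiving these reports can pool segments for traffic purposes without being able to reassemble any one journey.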

Because Apple’s business model does not rely on it serving, say, an ad for a Chevron on your route to you, it doesn’t need to even tie advertising identifiers to users.

Any personalization or Siri requests are all handled on-board by the iOS device’s processor. So if you get a drive notification that tells you it’s time to leave for your commute, that’s learned, remembered and delivered locally, not from Apple’s servers.

That’s not new, but it’s important to note given the new thing to take away here: Apple is flipping on the power of having millions of iPhones passively and actively improving their mapping data in real time.

In short: traffic, real-time road conditions, road systems, new construction and changes in pedestrian walkways are about to get a lot better in Apple Maps.

The secret sauce here is what Apple calls probe data: essentially, little slices of vector data that represent direction and speed, transmitted back to Apple completely anonymized, with no way to tie them to a specific user or even to any given trip. Apple is reaching in and sipping a tiny amount of data from millions of users instead, giving it a holistic, real-time picture without compromising user privacy.
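To see why anonymous slivers are still useful, here is a hedged sketch of how pooled probe reports could become per-road traffic estimates. The report fields (`road_segment`, `speed_kmh`) are invented for illustration and are not Apple’s actual schema.

```python
from collections import defaultdict

def estimate_traffic(probe_reports):
    """Illustrative sketch: pool anonymous speed reports by road
    segment and average them into a live speed estimate. No report
    carries a user or trip identifier, so pooling is all that is
    possible -- and all that traffic estimation needs."""
    speeds = defaultdict(list)
    for report in probe_reports:
        speeds[report["road_segment"]].append(report["speed_kmh"])
    # Average speed per segment; a low average on a fast road implies congestion.
    return {segment: sum(v) / len(v) for segment, v in speeds.items()}
```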

If you’re driving, walking or cycling, your iPhone can already tell. Now, if it knows you’re driving, it can also send relevant traffic and routing data in these anonymous slivers to improve the entire service. This only happens if your Maps app has been active, for instance when you check the map or look for directions. If you’re actively using GPS for walking or driving, the updates are more precise and can help with improvements like charting new pedestrian paths through parks, building out the map’s overall quality.

All of this, of course, is governed by whether you opted into location services and can be toggled off using the maps location toggle in the Privacy section of settings.

Apple says that this will have a near-zero effect on battery life and data usage, because you’re already using the Maps features when any probe data is shared, and the sharing draws only a fraction of the power those activities consume.

From the point cloud on up

But maps cannot live on ground truth and mobile data alone. Apple is also gathering new high-resolution satellite data to combine with its ground-truth data for a solid base map. It’s then layering satellite imagery on top of that to better determine foliage, pathways, sports facilities and building shapes.

After the downstream data has been scrubbed of license plates and faces, it gets run through a bunch of computer-vision programming to pull out addresses, street signs and other points of interest. These are cross-referenced with publicly available data, like addresses held by the city and new construction of neighborhoods or roadways from city planning departments.

But one of the special-sauce bits that Apple is adding to the mix of mapping tools is a full-on point cloud that maps the world around the van in 3D. This gives the team all kinds of opportunities to better understand which items are street signs (a retro-reflective rectangular object about 15 feet off the ground? Probably a street sign), stop signs or speed-limit signs.
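The parenthetical heuristic above can be written down almost literally. The thresholds and cluster fields below are illustrative guesses, not Apple’s classifier; the real pipeline, as the article notes, applies machine learning to far richer point-cloud and image data.

```python
def looks_like_street_sign(cluster):
    """Illustrative rule-of-thumb from the article: a retro-reflective,
    roughly rectangular object mounted well off the ground is probably
    a street sign. `cluster` is a hypothetical summary of a point-cloud
    cluster; all fields and thresholds here are assumptions."""
    FEET_TO_M = 0.3048
    high_enough = cluster["height_m"] > 12 * FEET_TO_M   # mounted above traffic
    reflective = cluster["reflectivity"] > 0.8           # retro-reflective paint
    # Rectangularity proxy: how much of the bounding box the points fill.
    rectangular = cluster["bbox_fill_ratio"] > 0.9
    return high_enough and reflective and rectangular
```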

It seems like it could also enable positioning of navigation arrows in 3D space for AR navigation, but Apple declined to comment on ‘any future plans’ for such things.

Apple also uses semantic segmentation and Deep Lambertian Networks to analyze the point cloud coupled with the image data captured by the car and from high-resolution satellites in sync. This allows 3D identification of objects, signs, lanes of traffic and buildings and separation into categories that can be highlighted for easy discovery.

The coupling of high resolution image data from car and satellite, plus a 3D point cloud results in Apple now being able to produce full orthogonal reconstructions of city streets with textures in place. This is massively higher resolution and easier to see, visually. And it’s synchronized with the ‘panoramic’ images from the car, the satellite view and the raw data. These techniques are used in self driving applications because they provide a really holistic view of what’s going on around the car. But the ortho view can do even more for human viewers of the data by allowing them to ‘see’ through brush or tree cover that would normally obscure roads, buildings and addresses.

This is hugely important when it comes to the next step in Apple’s battle for supremely accurate and useful Maps: human editors.

Apple has had a team of tool builders working specifically on a toolkit that can be used by human editors to vet and parse data, street by street. The editors’ suite includes tools that let them assign specific geometries to Flyover buildings (think Salesforce Tower’s unique ridged dome) so that the buildings are instantly recognizable. It lets editors look at real images of street signs shot by the car right next to 3D reconstructions of the scene and computer-vision detections of the same signs, and instantly confirm whether the detections are accurate.

Another tool corrects addresses, letting an editor quickly move an address to the center of a building, determine whether it’s misplaced and shift it around. It also allows access points to be set, making Apple Maps smarter about the ‘last 50 feet’ of your journey. You’ve made it to the building, but what street is the entrance actually on? And how do you get into the driveway? With a couple of clicks, an editor can make that permanently visible.

“When we take you to a business and that business exists, we think the precision of where we’re taking you to, from being in the right building,” says Cue. “When you look at places like San Francisco or big cities from that standpoint, you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.”

Water, swimming pools (new to Maps entirely), sporting areas and vegetation are now more prominent and fleshed out thanks to new computer vision and satellite imagery applications. So Apple had to build editing tools for those as well.

Many hundreds of editors will be using these tools, in addition to the thousands of employees Apple already has working on maps, but the tools had to be built first, now that Apple is no longer relying on third parties to vet and correct issues.

And the team also had to build computer vision and machine learning tools that allow it to determine whether there are issues to be found at all.

Anonymous probe data from iPhones, visualized, looks like thousands of dots, ebbing and flowing across a web of streets and walkways, like a luminescent web of color. At first, chaos. Then, patterns emerge. A street opens for business, and nearby vessels pump orange blood into the new artery. A flag is triggered and an editor looks to see if a new road needs a name assigned.

A new intersection is added to the web and an editor is flagged to make sure that the left turn lanes connect correctly across the overlapping layers of directional traffic. This has the added benefit of massively improved lane guidance in the new Apple Maps.
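A toy version of that flagging logic, assuming probe reports have been bucketed into map grid cells, might look like the following. The data shapes and the threshold are invented for illustration.

```python
def flag_unmapped_activity(probe_cells, known_road_cells, min_reports=50):
    """Illustrative sketch: probe reports are counted per map grid
    cell; any cell with heavy traffic but no known road is flagged
    for a human editor -- e.g. a newly opened street that needs a
    name and correct lane connections."""
    flags = []
    for cell, report_count in probe_cells.items():
        if report_count >= min_reports and cell not in known_road_cells:
            flags.append(cell)
    return flags
```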

Apple is counting on this combination of human and AI flagging to allow editors to first craft base maps and then also maintain them as the ever changing biomass wreaks havoc on roadways, addresses and the occasional park.

Here there be Helvetica

Apple’s new Maps, like many other digital maps, display vastly differently depending on scale. If you’re zoomed out, you get less detail. If you zoom in, you get more. But Apple has a team of cartographers on staff that work on more cultural, regional and artistic levels to ensure that its Maps are readable, recognizable and useful.

These teams have goals that are at once concrete and a bit out there — in the best traditions of Apple pursuits that intersect the technical with the artistic.

The maps need to be usable, but they also need to fulfill cognitive goals on cultural levels that go beyond what any given user might know they need. For instance, in the US, it is very common to have maps that have a relatively low level of detail even at a medium zoom. In Japan, however, the maps are absolutely packed with details at the same zoom, because that increased information density is what is expected by users.

This is the department of details. They’ve reconstructed replicas of hundreds of actual road signs to make sure that the shield on your navigation screen matches the one you’re seeing on the highway road sign. When it comes to public transport, Apple licensed all of the typefaces that you see on your favorite subway systems, like Helvetica for NYC. And the line numbers are in the exact same order that you’re going to see them on the platform signs.

It’s all about reducing the cognitive load that it takes to translate the physical world you have to navigate through into the digital world represented by Maps.

Bottom line

The new version of Apple Maps will be in preview next week with just the Bay Area of California going live. It will be stitched seamlessly into the ‘current’ version of Maps, but the difference in quality level should be immediately visible based on what I’ve seen so far.

Better road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover including grass and trees represented on the map as well as buildings, building shapes and sizes that are more accurate. A map that feels more like the real world you’re actually traveling through.

Search is also being revamped to make sure that you get more relevant results (on the correct continents) than ever before. Navigation, especially pedestrian guidance, also gets a big boost. Parking areas and building details to get you the last few feet to your destination are included as well.

What you won’t see, for now, is a full visual redesign.

“You’re not going to see huge design changes on the maps,” says Cue. “We don’t want to combine those two things at the same time because it would cause a lot of confusion.”

Apple Maps is getting the long awaited attention it really deserves. By taking ownership of the project fully, Apple is committing itself to actually creating the map that users expected of it from the beginning. It’s been a lingering shadow on iPhones, especially, where alternatives like Google Maps have offered more robust feature sets that are so easy to compare against the native app but impossible to access at the deep system level.

The argument has been made ad nauseam, but it’s worth saying again that if Apple thinks that mapping is important enough to own, it should own it. And that’s what it’s trying to do now.

“We don’t think there’s anybody doing this level of work that we’re doing,” adds Cue. “We haven’t announced this. We haven’t told anybody about this. It’s one of those things that we’ve been able to keep pretty much a secret. Nobody really knows about it. We’re excited to get it out there. Over the next year, we’ll be rolling it out, section by section in the US.”

News Source = techcrunch.com

Implantable 3D-printed organs could be coming sooner than you think


At MBC Biolabs, an incubator for biotech startups in San Francisco’s Dogpatch neighborhood, a team of scientists and interns working for the small startup Prellis Biologics have just taken a big step on the path toward developing viable 3D-printed organs for humans.

The company, which was founded in 2016 by research scientists Melanie Matheu and Noelle Mullin, staked its future (and a small $3 million investment) on a new technology for manufacturing capillaries, the one-cell-thick blood vessels through which oxygen and nutrients move to nourish tissues in the body.

Without functioning capillary structures, it is impossible to make organs, according to Matheu. They’re the most vital piece of the puzzle in the quest to print viable hearts, livers, kidneys and lungs, she said.

“Microvasculature is the fundamental architectural unit that supports advanced multicellular life and it therefore represents a crucial target for bottom-up human tissue engineering and regenerative medicine,” said Jordan Miller, an assistant professor of bioengineering at Rice University and an expert in 3D-printed implantable biomaterial structures, in a statement.

This real-time video shows tiny fluorescent particles – 5 microns in diameter (the same size as a red blood cell) – moving through an array of 105 capillaries printed in parallel, inside a 700 micron diameter tube. Each capillary is 250 microns long.

Now, Prellis has published findings indicating that it can manufacture those capillaries at a size and speed that would deliver 3D-printed organs to the market within the next five years. 

Prellis uses holographic printing technology that creates three-dimensional layers deposited by a light-induced chemical reaction that happens in five milliseconds.

This feature, according to the company, is critical for building tissues like kidneys or lungs. Prellis achieves it by combining a light-sensitive photo-initiator with traditional bioinks, which allows the cellular material to undergo a reaction when blasted with infrared light, catalyzing the polymerization of the bioink.

Prellis didn’t invent holographic printing technology. Several researchers are looking to apply this new approach to 3D printing across a number of industries, but the company is applying the technology to biofabrication in a way that seems promising.

The speed is important because it means that cell death doesn’t occur and the tissue being printed remains viable, while the ability to print within structures means that Prellis’ technology can generate the internal scaffolding to support and sustain the organic material that surrounds it, according to the company.

The video above, courtesy of Prellis Biologics, shows real-time printing of a cell-encapsulation device useful for producing small, human-cell-containing organoids. The structure is designed to be permeable, measures 200 microns in diameter and can contain up to 2,000 cells.

Prellis isn’t the first company to develop three-dimensional organ printing. There have been decades of research into the technology, and companies like BioBots (which made its debut on the TechCrunch stage) are already driving down the cost of printing living tissue.

Now called Allevi, the company formerly known as BioBots has seen its founders part ways and its business strategy shift (it’s now focusing on developing software to make its bioprinters easier to use), according to a report in Inc. Allevi has slashed the cost of bioprinting with devices that sell for less than $10,000, but Prellis contends that the limitations of extrusion printing make that technology too low-resolution and too slow to create capillaries and keep cells alive.

Prellis’ organs will also need to be placed in a bioreactor to sustain them before they’re transplanted into an animal, but the difference is that the company aims to produce complete organs rather than sample tissue or a small cell sample, according to a statement. The bioreactors can simulate the biomechanical pressures that ensure an organ functions properly, Matheu said.

“Vasculature is a key feature of complex tissues and is essential for engineering tissue with therapeutic value,” said Todd Huffman, the chief executive officer of 3Scan, an advanced digital tissue imaging and data analysis company (and a Prellis advisor). “Prellis’ advancement represents a key milestone in the quest to engineer organs.”

Matheu estimates that it will take two-and-a-half years and $15 million to bring implantable organs through their first animal trials. “That will get a test kidney into an animal,” she said.

The goal is to print a quarter-sized kidney that could be transplanted into rats. “We want something that would be able to handle a kidney that we would transplant into a human,” Matheu said.

One frame of a 3D map of animal tissue from 3Scan .

Earlier this year, researchers at the University of Manchester grew functional human kidney tissue from stem cells for the first time (https://newatlas.com/working-kidney-cells-grown-mice/53354/). The scientists implanted small clusters of capillaries that filter waste products from the blood, grown in a Petri dish, into genetically engineered mice. After 12 weeks, the capillaries had grown nephrons — the elements that make up a functional human kidney.

Ultimately, the vision is to extract cells from patients by taking a skin graft or a blood, stem cell or bone marrow harvest, and then use those samples to create the cellular material that will grow organs. “Tissue rejection was the first thing I was thinking about in how I was designing the process and how we could do it,” says Matheu.

While Prellis is spending its time working to perfect a technique for printing kidneys, the company is looking for partners to take its manufacturing technology and work on processes to develop other organs.

“We’ll be doing collaborative work with other groups,” Matheu said. “Our technology will come to market in many other ways prior to the full kidney.”

Last year, the company outlined a go-to-market strategy that included developing lab-grown tissues to produce antibodies for therapeutics and drug development. The company’s first targeted human tissue printed for clinical development were cells called “islets of Langerhans,” which are the units within a pancreas that produce insulin.

“Type 1 diabetics lose insulin-producing islets of Langerhans at a young age. If we can replace these, we can offer diabetes patients a life free of daily insulin shots and glucose monitoring,” said Matheu in a statement at the time.

Matheu sees the technology she and her co-founder developed as much about a fundamental shift in manufacturing biomaterials as a novel process to print kidneys, specifically.

“Imagine if you want to build a tumor for testing… In the lab it would take you five hours to print one… With our system it would take you three and a half seconds,” said Matheu. “That is our baseline optical system… The speed is such a shift in how you can build cells and fundamental structures we are going to be working to license this out.”

Meanwhile, the need for some solution to the shortage in organ donations keeps growing. Matheu said that one in seven adults in the U.S. have some sort of kidney ailment, and she estimates that 90 million people will need a kidney at some point in their lives.

Roughly 330 people die every day from organ failure, and if there were a fast way to manufacture those organs, there would be no reason for those fatalities, says Matheu. Prellis estimates that, given the need for human tissue and organ replacement alternatives, as well as human tissue for drug discovery and toxicology testing, the global tissue-engineering market will reach $94 billion by 2024, up from $23 billion in 2015.

“We need to help people faster,” says Matheu. 

News Source = techcrunch.com

Scaling startups are setting up secondary hubs in these cities


America’s mayors have spent the past nine months tripping over each other to curry favor with Amazon.com in its high-profile search for a second headquarters.

More quietly, however, a similar story has been playing out in startup-land. Many of the most valuable venture-backed companies are venturing outside their high-cost headquarters and setting up secondary hubs in smaller cities.

Where are they going? Nashville is pretty popular. So is Phoenix. Portland and Raleigh are also picking up some of these jobs. A number of companies also offer many remote positions, seeking candidates with coveted skills who don't want to relocate.

Those are some of the findings from a Crunchbase News analysis of the geographic hiring practices of U.S. unicorns. Since most of these companies are based in high-cost locations, like the San Francisco Bay Area, Boston and New York, we were looking to see if there is a pattern of setting up offices in smaller, cheaper cities. (For more on survey technique, see Methodology section below.)

Here is a look at some of the hotspots.

Nashville

One surprise finding was the prominence of Nashville among secondary locations for startup offices.

We found at least four unicorns scaling up Nashville offices, plus another three with growing operations in or around other Tennessee cities. Here are some of the Tennessee-loving startups:

When we referred to Nashville’s popularity with unicorns as surprising, that was largely because the city isn’t known as a major hub for tech startups or venture funding. That said, it has a lot of attributes that make for a practical and desirable location for a secondary office.

Nashville’s attractions include high quality of life ratings, a growing population and economy, mild climate and lots of live music. Home prices and overall cost of living are also still far below Silicon Valley and New York, even though the Nashville real estate market has been on a tear for the past several years. An added perk for workers: Tennessee has no income tax on wages.

Phoenix

Phoenix is another popular pick for startup offices, particularly West Coast companies seeking a lower-cost hub for customer service and other operations that require a large staff.

In the chart below, we look at five unicorns with significant staffing in the desert city:

Affordability, ease of expansion and a large employable population look like big factors in Phoenix’s appeal. Homes and overall cost of living are a lot cheaper than the big coastal cities. And there’s plenty of room to sprawl.

One article about a new office opening also cited low job turnover rates as an attractive Phoenix-area attribute, which is an interesting notion. Startup hubs like San Francisco and New York see a lot of job-hopping, particularly for people with in-demand skill sets. Scaling companies may be looking for people who measure their job tenure in years rather than months.

Those aren’t the only places

Nashville and Phoenix aren’t the only hotspots for unicorns setting up secondary offices. Many other cities are also seeing some scaling startup activity.

Let’s start with North Carolina. The Research Triangle region is known for having a lot of STEM grads, so it makes sense that deep tech companies headquartered elsewhere might still want a local base. One such company is cybersecurity unicorn Tanium, which has a lot of technical job openings in the area. Another is Docker, developer of software containerization technology, which has open positions in Raleigh.

The Orlando metro area stood out mostly due to Robinhood, the zero-fee stock and crypto trading platform that recently hit the $5 billion valuation mark. The Silicon Valley-based company has a significant number of open positions in Lake Mary, an Orlando suburb, including HR and compliance jobs.

Portland, meanwhile, just drew another crypto-loving unicorn, digital currency transaction platform Coinbase. The San Francisco-based company recently opened an office in the Oregon city and is currently in hiring mode.

Anywhere with a screen

But you don’t have to be anywhere in particular to score jobs at many fast-growing startups. A lot of unicorns have a high number of remote positions, including specialized technical roles that may be hard to fill locally.

GitHub, which makes tools developers can use to collaborate remotely on projects, does a particularly good job of practicing what it codes. A notable number of engineering jobs open at the San Francisco-based company are available to remote workers, and other departments also have some openings for telecommuters.

Others with a smattering of remote openings include Silicon Valley-based cybersecurity provider CrowdStrike, enterprise software developer Apttus and Docker.

Not everyone is doing it

Of course, not every unicorn is opening large secondary offices. Many prefer to keep staff closer to home base, seeking to lure employees with chic workplaces and lavish perks. Other companies find that when they do expand, it makes strategic sense to go to another high-cost location.

Still, the secondary hub phenomenon may offer a partial antidote to complaints that a few regions are hogging too much of the venture capital pie. While unicorns still overwhelmingly headquarter in a handful of cities, at least they’re spreading their wings and providing more jobs in other places, too.

Methodology

For this analysis, we were looking at U.S. unicorns with secondary offices in other North American cities. We began with a list of 125 U.S.-based companies and looked at open positions advertised on their websites, focusing on job location.

We excluded job offerings related to representing a local market. For instance, a San Francisco company seeking a sales rep in Chicago to sell to Chicago customers doesn’t count. Instead, we looked for openings for team members handling core operations, including engineering, finances and company-wide customer support. We also excluded secondary offices outside of North America.

Additionally, we were looking principally for companies expanding into lower-cost areas. In many cases, we did see companies strategically adding staff in other high-cost locations, such as New York and Silicon Valley.

A final note pertains to Austin, Texas. We did see several unicorns based elsewhere with job openings in Austin. However, we did not include the city in the sections above because Austin, although a lower-cost location than Silicon Valley, may also be characterized as a large, mature technology and startup hub in its own right.
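The filtering rules described in this Methodology section can be summarized in a short sketch. This is purely illustrative: the field names, city lists and example postings below are hypothetical stand-ins, not Crunchbase's actual data or schema.

```python
# Illustrative sketch of the survey's filtering criteria. All names and
# example data are hypothetical; this is not Crunchbase's actual pipeline.
HIGH_COST_METROS = {"San Francisco Bay Area", "New York", "Silicon Valley"}
CORE_FUNCTIONS = {"engineering", "finance", "customer_support"}

def is_secondary_hub_opening(job, hq_metro):
    """Count a posting only if it reflects core operations in a lower-cost
    North American city outside the company's headquarters metro."""
    if job["metro"] == hq_metro:
        return False  # headquarters roles don't count
    if not job["in_north_america"]:
        return False  # offices outside North America were excluded
    if job["metro"] in HIGH_COST_METROS:
        return False  # expansion into other high-cost hubs was set aside
    if job["function"] == "local_sales":
        return False  # local-market sales reps were excluded
    return job["function"] in CORE_FUNCTIONS

jobs = [
    {"metro": "Nashville", "in_north_america": True, "function": "engineering"},
    {"metro": "Chicago", "in_north_america": True, "function": "local_sales"},
    {"metro": "New York", "in_north_america": True, "function": "finance"},
]
hits = [j for j in jobs if is_secondary_hub_opening(j, "San Francisco Bay Area")]
print(len(hits))  # 1 -> only the Nashville engineering role qualifies
```

Under these rules, a Chicago sales rep serving Chicago customers and a New York finance hire both fall out of the count, leaving only the Nashville engineering opening.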

News Source = techcrunch.com

Shared housing startups are taking off


When young adults leave the parental nest, they often follow a predictable pattern. First, move in with roommates. Then graduate to a single or couple’s pad. After that comes the big purchase of a single-family home. A lawnmower might be next.

Looking at the new home construction industry, one would have good reason to presume those norms were holding steady. About two-thirds of new homes being built in the U.S. this year are single-family dwellings, complete with tidy yards and plentiful parking.

In startup-land, however, presumptions about where housing demand is headed look a bit different. Home sharing is on the rise, along with more temporary lease options, high-touch service and smaller spaces in sought-after urban locations.

Seeking roommates and venture capital

A Crunchbase News analysis of residential-focused real estate startups uncovered a raft of companies with a shared and temporary housing focus that have raised funding in the past year or so.

This isn’t a U.S.-specific phenomenon. Funded shared and short-term housing startups are cropping up across the globe, from China to Europe to Southeast Asia. For this article, however, we’ll focus on U.S. startups. In the chart below, we feature several that have raised recent rounds.

Notice any commonalities? Yes, the startups listed are all based in either New York or the San Francisco Bay Area, two metropolises associated with scarce, pricey housing. But while these two metro areas offer the bulk of startups’ living spaces, they’re also operating in other cities, including Los Angeles, Seattle and Pittsburgh.

From white picket fences to high-rise partitions

The early developers of the U.S. suburban planned communities of the 1950s and 60s weren’t just selling houses. They were selling a vision of the American Dream, complete with quarter-acre lawns, dishwashers and spacious garages.

By the same token, today’s shared housing startups are selling another vision. It’s not just about renting a room; it’s also about being part of a community, making friends and exploring a new city.

One of the slogans for HubHaus is “rent one of our rooms and find your tribe.” Founded less than three years ago, the company now manages about 80 houses in Los Angeles and the San Francisco Bay Area, matching up roommates and planning group events.

Starcity pitches itself as an antidote to loneliness. “Social isolation is a growing epidemic—we solve this problem by bringing people together to create meaningful connections,” the company homepage states.

The San Francisco company also positions its model as a partial solution to housing shortages, since it promotes high-density living. Starcity claims its buildings house three times as many residents as a conventional apartment building.

Costs and benefits

Shared housing startups are generally operating in the most expensive U.S. housing markets, so it’s difficult to categorize their offerings as cheap. That said, the cost is typically lower than a private apartment.

Mostly, the aim seems to be providing something affordable for working professionals willing to accept a smaller private living space in exchange for a choice location, easy move-in and a ready-made social network.

At Starcity, residents pay $2,000 to $2,300 a month, all expenses included, depending on length of stay. At HomeShare, which converts two-bedroom luxury flats to three-bedrooms with partitions, monthly rents start at about $1,000 and go up for larger spaces.

Shared and temporary housing startups also purport to offer some savings through flexible-term leases, typically with minimum stays of one to three months. Plus, they’re typically furnished, with no need to set up Wi-Fi or pay power bills.

Looking ahead

While it’s too soon to pick winners in the latest crop of shared and temporary housing startups, it’s not far-fetched to envision the broad market as one that could eventually attract much larger investment and valuations. After all, Airbnb has ascended to a $30 billion private market value for its marketplace of vacation and short-term rentals. And housing shortages in major cities indicate there’s plenty of demand for non-Airbnb options.

While we’re focusing here on residential-focused startups, it’s also worth noting that the trend toward temporary, flexible, high-service models has already gained a lot of traction for commercial spaces. Highly funded startups in this niche include Industrious, a provider of flexible-term, high-end office spaces, Knotel, a provider of customized workplaces, and Breather, which provides meeting and work rooms on demand. Collectively, those three companies have raised about $300 million to date.

At first glance, it may seem shared housing startups are scaling up at an odd time. The millennial generation (born roughly 1980 to 1994) can no longer be stereotyped as a massive band of young folks new to "adulting." The average millennial is now 28, and the oldest are in their mid-to-late thirties. Many even own lawnmowers.

No worries. Gen Z, the group born after 1995, is another huge generation. So even if millennials age out of shared housing, demographic forecasts indicate there will be plenty of twenty-somethings to rent those partitioned-off rooms.

News Source = techcrunch.com

Instagram opens a San Francisco office


Last year, Facebook was reportedly scouting for San Francisco office space suitable to house some 100 Instagram employees. Today, the company officially confirmed those plans, announcing that it has leased four floors at 181 Fremont in San Francisco. The space will initially house its under-200-person Creation & Communication team, which builds Stories, Direct, Live and more, with plans to expand the San Francisco headcount over time.

TechCrunch had reported last summer that the Fremont location was being considered, among others. At the time, the 70-story tower wasn’t yet open, and no lease had been signed.

Today, Instagram confirms its lobby will be on the 7th floor of the Fremont building and will be connected to the TransBay Transit Center City Park.

Employees started moving in on May 7th, but the transition remains in progress. The company is also still putting the finishing touches on the space, though it shared a few photos (see above and below) of what it looks like today.

Instagram notes it has just under 200 employees in the Fremont office at present, but it expects that number to grow over the course of the year. It also has around 200 in New York, and 400 at its main office in Menlo Park.

Having a space in the city will likely help Instagram's recruiting efforts. The new office may attract candidates who prefer to live in San Francisco, for all its advantages, and who would no longer have to endure the hour-plus shuttle ride to Instagram's Menlo Park headquarters, near the main Facebook campus.

“We have space to grow the team and plan to do so considerably this year,” an Instagram spokesperson said.

The company declined to share details like square footage or the cost of the lease at this time.

However, real estate data firm the CoStar Group told the San Francisco Chronicle that Facebook signed a lease for 432,000 square feet of office space in the tower, which could house around 2,000 employees. So this is clearly an investment in the future.

“This is not a pilot,” the spokesperson acknowledged, referencing the claims that it’s a way for the company to “test” out having San Francisco office space.

“Instagram is officially establishing a presence in San Francisco and growing the team on-the-ground here,” they said.

News Source = techcrunch.com
