Timesdelhi.com

December 10, 2018
Category archive

Developer

Why you need a supercomputer to build a house


When the hell did building a house become so complicated?

Don’t let the folks on HGTV fool you. The process of building a home nowadays is incredibly painful. Just applying for the necessary permits can be a soul-crushing undertaking that’ll have you running around the city, filling out useless forms, and waiting in motionless lines under fluorescent lights at City Hall wondering whether you should have just moved back in with your parents.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: @Arman.Tabatabai@techcrunch.com.

And to actually get approval for those permits, your future home will have to satisfy a set of conditions drawn from a combinatorial tangle of complex and conflicting federal, state and city building codes, separate sets of fire and energy requirements, and quasi-legal construction standards set by various independent agencies.

It wasn’t always this hard – remember when you’d hear people say “my grandparents built this house with their bare hands?” These proliferating rules have been among the main causes of the rapidly rising cost of housing in America and other developed nations. The good news is that a new generation of startups is identifying and simplifying these thickets of rules, and the future of housing may be determined as much by machine learning as woodworking.

When directions become deterrents

Photo by Bill Oxford via Getty Images

Cities once solely created the building codes that dictate the requirements for almost every aspect of a building’s design, and they structured those guidelines based on local terrain, climates and risks. Over time, townships, states, federally-recognized organizations and independent groups that sprouted from the insurance industry further created their own “model” building codes.

The complexity starts here. The federal codes and independent agency standards are optional for states, who have their own codes which are optional for cities, who have their own codes that are often inconsistent with the state’s and are optional for individual townships. Thus, local building codes are these ever-changing and constantly-swelling mutant books made up of whichever aspects of these different codes local governments choose to mix together. For instance, New York City’s building code is made up of five sections, 76 chapters and 35 appendices, alongside a separate set of 67 updates (The 2014 edition is available as a book for $155, and it makes a great gift for someone you never want to talk to again).

In short: what a shit show.

Because of the hyper-localized and overlapping nature of building codes, a home in one location can be subject to a completely different set of requirements than one elsewhere. So it’s really freaking difficult to even understand what you’re allowed to build, the conditions you need to satisfy, and how to best meet those conditions.

There are certain levels of complexity in housing codes that are hard to avoid. The structural integrity of a home is dependent on everything from walls to erosion and wind-flow. There are countless types of material and technology used in buildings, all of which are constantly evolving.

Thus, the thousand-page codebooks from the various federal, state, city, township and independent agencies – all dictating interconnecting, location- and structure-dependent needs – lead to an incredibly expansive decision tree that requires an endless set of simulations to fully understand all the options you have to reach compliance, and their respective cost-effectiveness and efficiency.
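To make that decision tree concrete, here is a deliberately tiny sketch. Every rule, option and dollar figure below is invented for illustration (real codebooks run to thousands of interacting provisions), but it shows the shape of the problem: overlapping requirements turn a handful of design choices into a combinatorial search for the cheapest compliant design.

```python
from itertools import product

# Hypothetical, simplified requirements standing in for overlapping
# codebooks. Real codes span thousands of pages; these are invented.
CODE_RULES = [
    lambda d: d["wall_r_value"] >= 20,    # state energy code
    lambda d: d["fire_rating_hr"] >= 1,   # city fire code
    lambda d: d["wind_load_psf"] >= 30,   # county structural code
]

# Each design variable offers a few options, each with a cost.
OPTIONS = {
    "wall_r_value":   [(15, 8_000), (20, 11_000), (30, 16_000)],
    "fire_rating_hr": [(0, 0), (1, 4_000), (2, 9_000)],
    "wind_load_psf":  [(25, 5_000), (30, 7_500), (40, 12_000)],
}

def cheapest_compliant():
    """Enumerate every combination (the 'decision tree') and keep the
    lowest-cost design that satisfies every applicable rule."""
    best = None
    keys = list(OPTIONS)
    for combo in product(*OPTIONS.values()):
        design = {k: v for k, (v, _) in zip(keys, combo)}
        cost = sum(c for _, c in combo)
        if all(rule(design) for rule in CODE_RULES):
            if best is None or cost < best[1]:
                best = (design, cost)
    return best

design, cost = cheapest_compliant()
```

With three variables and three options each there are only 27 branches; add a few hundred interdependent variables and the brute-force approach collapses, which is exactly the gap the machine-learning platforms discussed below try to fill.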

So homebuilders are often forced to turn to costly consultants or settle on designs that satisfy code but aren’t cost-efficient. And if construction issues cause you to fall short of the outcomes you expected, you could face hefty fines, delays or gigantic cost overruns from redesigns and rebuilds. All these costs flow through the lifecycle of a building, ultimately impacting affordability and access for homeowners and renters.

Startups are helping people crack the code

Photo by Caiaimage/Rafal Rodzoch via Getty Images

Strap on your hard hat – there may be hope for your dream home after all.

The friction, inefficiencies, and pure agony caused by our increasingly convoluted building codes have given rise to a growing set of companies that are helping people make sense of the home-building process by incorporating regulations directly into their software.

Using machine learning, their platforms run advanced scenario analysis across interweaving building codes and interdependent structural variables, allowing users to create compliant designs and make regulation-informed decisions without ever having to encounter the regulations themselves.

For example, the prefab housing startup Cover is helping people figure out what kind of backyard homes they can design and build on their properties based on local zoning and permitting regulations.

Some startups are trying to provide similar services to developers of larger scale buildings as well. Just this past week, I covered the seed round for a startup called Cove.Tool, which analyzes local building energy codes – based on location and project-level characteristics specified by the developer – and spits out the most cost-effective and energy-efficient resource mix that can be built to hit local energy requirements.
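The optimization Cove.Tool performs can be caricatured in a few lines. This is not its actual model or data — the measures and numbers below are invented — but it captures the trade-off: pick the cheapest mix of upgrades whose combined savings clear the local energy requirement.

```python
from itertools import combinations

# Invented numbers: name -> (annual kWh saved, install cost in $).
# A toy version of the trade-off a tool like Cove.Tool automates.
MEASURES = {
    "led_lighting":    (3_000,  2_000),
    "hvac_upgrade":    (9_000, 15_000),
    "roof_insulation": (5_000,  6_000),
    "solar_array":     (12_000, 24_000),
}

def cheapest_mix(required_savings_kwh):
    """Brute-force every subset of measures and return the cheapest
    one that meets the code's required energy savings."""
    names = list(MEASURES)
    best = None
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            saved = sum(MEASURES[m][0] for m in subset)
            cost = sum(MEASURES[m][1] for m in subset)
            if saved >= required_savings_kwh and (best is None or cost < best[1]):
                best = (subset, cost)
    return best
```

For a hypothetical 8,000 kWh requirement, the search would pick lighting plus insulation over the pricier HVAC or solar routes; real tools solve the same kind of problem over far larger, interdependent option spaces.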

And startups aren’t just simplifying the regulatory pains of the housing process through building codes. Envelope is helping developers make sense of our equally tortuous zoning codes, while Cover and companies like Camino are helping steer home and business-owners through arduous and analog permitting processes.

Look, I’m not saying codes are bad. In fact, I think building codes are good and necessary – no one wants to live in a home that might cave in on itself the next time it snows. But I still can’t help but ask myself why the hell does it take AI to figure out how to build a house? Why do we have building codes that take a supercomputer to figure out?

Ultimately, it would probably help to have more standardized building codes that we actually clean up from time to time. More regional standardization would greatly reduce the number of conditional branches that exist. And if there were one set of accepted overarching codes that still set precise requirements for every component of a building, there would be only one path of regulations to follow, greatly reducing the knowledge and analysis needed to build a home efficiently.

But housing’s inherent ties to geography make standardization unlikely. Each region has different land conditions, climates, priorities and political motivations that cause governments to want their own set of rules.

Instead, governments seem to be fine with sidestepping the issues caused by hyper-regional building codes and leaving it up to startups to help people wade through the ridiculousness that paves the home-building process, in the same way Concur aids employees with infuriating corporate expensing policies.

For now, we can count on startups that are unlocking value and making housing more accessible, simpler and cheaper just by making the rules easier to understand. And maybe one day my grandkids can tell their friends how their grandpa built his house with his own supercomputer.

And lastly, some reading while in transit:

News Source = techcrunch.com

The nation-state of the internet


The internet is a community, but can it be a nation-state? It’s a question that I have been pondering on and off this year, what with the rise of digital nomads and the deeply libertarian ethos baked into parts of the blockchain community. It’s clearly on a lot of other people’s minds as well: when we interviewed Matt Howard of Norwest on Equity a few weeks back, he noted (unprompted) that Uber is one of the few companies that could reach “nation-state” status when it IPOs.

Clearly, the internet is home to many, diverse communities of similar-minded people, but how do those communities transmute from disparate bands into a nation-state?

That question led me to Imagined Communities, a book from 1983 and one of the most lauded (and debated) social science works ever published. Certainly it is among the most heavily cited: Google Scholar pegs it at almost 93,000 citations.

Benedict Anderson, a political scientist and historian, ponders a simple question: where does nationalism come from? How do we come to form a common bond with others under symbols like a flag, even though we have never — and will almost never — meet all of our comrades-in-arms? Why does every country consider itself “special,” yet for all intents and purposes they all look identical (heads of state, colors and flags, etc.)? And why was the nation-state invented so late?

Anderson’s answer is his title: people come to form nations when they can imagine their community and the values and people it holds, and thus can demarcate the borders (physical and cognitive) of who is a member of that hypothetical club and who is not.

In order to imagine a community though, there needs to be media that actually links that community together. The printing press is the necessary invention, but Anderson tracks the rise of nation-states to the development of vernacular media — French language as opposed to the Latin of the Catholic Church. Lexicographers researched and published dictionaries and thesauruses, and the printing presses — under pressure from capitalism’s dictates — created rich shelves of books filled with the stories and myths of peoples who just a few decades ago didn’t “exist” in the mind’s eye.

The nation-state itself was developed first in South America in the decline and aftermath of the Spanish and Portuguese empires. Anderson argues for a sociological perspective on where these states originate from. Intense circulation among local elites — the bureaucrats, lawyers, and professionals of these states — and their lack of mobility back to their empires’ capitals created a community of people who realized they had more in common with each other than the people on the other side of the Atlantic.

As other communities globally start to understand their unique place in the world, they import these early models of nation-states through the rich print culture of books and newspapers. We aren’t looking at convergent evolution, but rather clones of one model for organizing the nation implemented across the world.

That’s effectively the heart of the thesis of this petite book, which numbers just over 200 pages of eminently readable if occasionally turgid writing. There are dozens of other epiphanies and thoughts roaming throughout those pages, and so the best way to get the full flavor is just to pick up a used copy and dive in.

For my purposes though, I was curious to see how well Anderson’s thesis could be applied to the nation-state of the internet. Certainly, the concept that the internet is its own sovereign entity has been with us almost since its invention (just take a look at John Perry Barlow’s original manifesto on the independence of cyberspace if you haven’t).

Isn’t the internet nothing but a series of imagined communities? Aren’t subreddits literally the seeds of nation-states? Every time Anderson mentioned the printing press or “print-capitalism,” I couldn’t help but replace the word “press” with WordPress and print-capitalism with advertising or surveillance capitalism. Aren’t we going through exactly the kind of media revolution that drove the first nation-states a few centuries ago?

Perhaps, but it’s an extraordinarily simplistic comparison, one that misses some of the key originators of these nation-states.

Photo by metamorworks via Getty Images

One of the key challenges is that nation-states weren’t a rupture in time, but rather were continuous with existing power structures. On this point, Anderson is quite absolute. In South America, nation-states were borne out of the colonial administrations, and elites — worried about losing their power — used the burgeoning form of the nation-state to protect their interests (Anderson calls this “official nationalism”). Anderson sees this pattern pretty much everywhere, and if not from colonial governments, then from the feudal arrangements of the late Middle Ages.

If you turn the gaze to the internet then, who are the elites? Perhaps Google or Facebook (or Uber), companies with “nation-state” status that are essentially empires unto themselves. Yet, the analogy to me feels stretched.

There is an even greater problem though. In Anderson’s world, language is the critical vehicle by which the nation-state connects its citizens together into one imagined community. It’s hard to imagine France without French, or England without English. The very symbols by which we imagine our community are symbols of that community, and it is that self-referencing that creates a critical feedback loop back to the community and reinforces its differentiation.

That would seem to knock out the lowly subreddit as a potential nation-state, but it does raise the question of one group: coders.

When I write in Python for instance, I connect with a group of people who share that language, who communicate in that language (not entirely mind you), and who share certain values in common by their choice of that language. In fact, software engineers can tie their choices of language so strongly to their identities that it is entirely possible that “Python developer” or “Go programmer” says more about that person than “American” or “Chinese.”

Where this gets interesting is when you carefully connect it to blockchain, which I take to mean a technology that can autonomously distribute “wealth.” Suddenly, you have an imagined community of software engineers, who speak in their own “language” able to create a bureaucracy that serves their interests, and with media that connects them all together (through the internet). The ingredients — at least as Anderson’s recipe would have them — are all there.

I am not going to push too hard in this direction, but one surprise I had with Anderson is how little he discussed the physical agglomeration of people. The imagining of (physical) borders is crucial for a community, and so the development of maps for each nation is a common pattern in their historical developments. But, the map, fundamentally, is a symbol, a reminder that “this place is our place” and not much more.

Indeed, nation-states bleed across physical borders all the time. Americans are used to the concept of worldwide taxation. France seats representatives from its overseas departments in the National Assembly, allowing French citizens across the former empire to vote and elect representatives to the country’s legislature. And anyone who has followed the Huawei CFO arrest in Canada this week should know that “jurisdiction” these days has few physical borders.

The barrier for the internet or its people to become nation-states is not physical then, but cognitive. One needs to not just imagine a community, but imagine it as the prime community. We will see an internet nation-state when we see people prioritizing fealty to one of these digital communities over the loyalty and patriotism to a meatspace country. There are already early acolytes in these communities who act exactly that way. The question is whether the rest of the adherents will join forces and create their own imagined (cyber)space.


Contentful raises $33.5M for its headless CMS platform


Contentful, a Berlin- and San Francisco-based startup that provides content management infrastructure for companies like Spotify, Nike, Lyft and others, today announced that it has raised a $33.5 million Series D funding round led by Sapphire Ventures, with participation from OMERS Ventures and Salesforce Ventures, as well as existing investors General Catalyst, Benchmark, Balderton Capital and Hercules. In total, the company has now raised $78.3 million.

It’s been less than a year since the company raised its Series C round, and as Contentful co-founder and CEO Sascha Konietzke told me, the company didn’t really need to raise right now. “We had just raised our last round about a year ago. We still had plenty of cash in our bank account and we didn’t need to raise as of now,” said Konietzke. “But we saw a lot of economic uncertainty, so we thought it might be a good moment in time to recharge. And at the same time, we already had some interesting conversations ongoing with Sapphire [formerly SAP Ventures] and Salesforce. So we saw the opportunity to add more funding and also start getting into a tight relationship with both of these players.”

The original plan for Contentful was to focus almost exclusively on mobile. As it turns out, though, the company’s customers also wanted to use the service to handle their web-based applications, and these days Contentful happily supports both. “What we’re seeing is that everything is becoming an application,” he told me. “We started with native mobile application, but even the websites nowadays are often an application.”

In its early days, Contentful also focused only on developers. Now, however, that’s changing, and having these connections to large enterprise players like SAP and Salesforce surely isn’t going to hurt the company as it looks to bring on larger enterprise accounts.

Currently, the company’s focus is very much on Europe and North America, which account for about 80% of its customers. For now, Contentful plans to continue to focus on these regions, though it obviously supports customers anywhere in the world.

Contentful only exists as a hosted platform. As of now, the company doesn’t have any plans for offering a self-hosted version, though Konietzke noted that he does occasionally get requests for this.

What the company is planning to do in the near future, though, is to enable more integrations with existing enterprise tools. “Customers are asking for deeper integrations into their enterprise stack,” Konietzke said. “And that’s what we’re beginning to focus on and where we’re building a lot of capabilities around that.” In addition, support for GraphQL and an expanded rich text editing experience are on the way. The company also recently launched a new editing experience.
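The “headless” model Contentful sells is worth spelling out: content lives as structured data behind an API with no layout attached, and each client renders it however it likes. The entry shape and render functions below are invented for illustration and are not Contentful’s actual API schema:

```python
import json

# A content entry roughly as a headless CMS might deliver it over its
# API: pure structured data, no HTML, no layout. Shape is invented.
RAW_ENTRY = json.dumps({
    "contentType": "article",
    "fields": {"title": "Hello", "body": "Content without a head."},
})

def render_web(entry: dict) -> str:
    """A web client turns the same data into HTML..."""
    f = entry["fields"]
    return f"<h1>{f['title']}</h1><p>{f['body']}</p>"

def render_mobile(entry: dict) -> dict:
    """...while a native app consumes it as a view model."""
    f = entry["fields"]
    return {"screen_title": f["title"], "text": f["body"]}

entry = json.loads(RAW_ENTRY)
```

Because the CMS never emits markup, the same entry can feed a website, a mobile app, or anything else — which is why Konietzke can say “everything is becoming an application.”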


Microsoft Edge goes Chromium (and macOS)


The rumors were true: Microsoft Edge is moving to the open-source Chromium platform, the same platform that powers Google’s Chrome browser. And once that is done, Microsoft is bringing Edge to macOS, too. In addition, Microsoft is decoupling Edge from the Windows update process to offer a faster update cadence — and with that, it’ll bring the new Edge to Windows 7 and 8 users, too.

It’ll be a while before any of this happens, though. There’s no code to test today and the first previews are still months away. But at some point in 2019, Microsoft’s EdgeHTML and Chakra will go away and Blink and V8 will take their place. The company expects to release a first developer preview early next year.

Obviously, there is a lot to unpack here. What’s clear, though, is that Microsoft is acknowledging that Chrome and Chromium are the de facto standard today, both for users and for developers.

Over the years, especially after Microsoft left the Internet Explorer brand behind, Edge had, for the most part, become a perfectly usable browser, but Microsoft acknowledges that there were always compatibility issues. While it was investing heavily in fixing those, what we’re hearing from Microsoft is a very pragmatic message: it simply wasn’t worth the investment in engineering resources anymore. What Microsoft had to do, after all, was reverse engineer its way around problems on certain sites.

In part, that’s because Edge never quite gained the market share where developers cared enough to test their code on the platform. And with the web as big as it is, the long tail of incompatible sites remains massive.

Because many web developers work on Macs, where they don’t have access to Edge, testing for it became even more of an afterthought. Hence Microsoft’s efforts to bring Edge to the Mac, 15 years after it abandoned Internet Explorer for Mac. The company doesn’t expect that Edge on Mac will gain any significant market share, but it believes that having it available on every platform will mean that more developers will test their web apps with Edge, too.

Microsoft also admits that it didn’t help that Edge only worked on Windows 10 — and that Edge updates were bound to Windows updates. I was never quite sure why that was the case, but as Microsoft will now happily acknowledge, that meant that millions of users on older Windows versions were left behind, and even those on Windows 10 often didn’t get the latest, most compatible version of Edge because their companies remained a few updates behind.

For better or worse, Chrome has become the default and Microsoft is going with the flow. The company could have opted to open source EdgeHTML and its JavaScript engine. That option was on the table, but in the end, it opted not to. The company says that’s because the current version of Edge has so many hooks into Windows 10 that open sourcing it wouldn’t make much sense if Microsoft wants to take the new Edge to Windows 7 and the Mac. To be fair, this probably would’ve been a fool’s errand anyway, since it’s hard to imagine that an open-source community around Edge would’ve made much of a difference in solving the practical problems.

With this move, Microsoft also plans to increase its involvement in the Chromium community. That means it’ll bring to Chromium some of the work it did to make Edge work really well with touchscreens, for example. But also, as previously reported, the company now publicly notes that it is working with Google and Qualcomm to bring a native implementation of the Chrome browser to Windows 10 on ARM, making it snappier and more battery friendly than the current version that heavily relies on emulation.

Microsoft hopes that if it can make the compatibility issues a thing of the past, users will still gravitate to its browser because of what differentiates it. Maybe that’s its Cortana integration or new integrations with Windows and Office. Or maybe those are new consumer services or, for the enterprise users, specific features that make the lives of IT managers a bit easier.

When the rumors of this change first appeared a few days ago, a number of pundits argued that this isn’t great for the web because it gives even more power over web standards to the Chromium project.

I share some of those concerns, but Microsoft is making a very pragmatic argument for this move and notes that Edge’s small market share didn’t allow it to make a dent in this process anyway. By becoming more active in the Chromium community, it’ll have more of a voice — or so it hopes — and be able to advocate for web standards and bring its own innovations to Chromium.

Your browser is probably the most complex piece of software running on your computer right now. That means switching out engines is anything but trivial. The company isn’t detailing what its development process will look like and how it’ll go about this, but we’re being told that the company is looking at which parts of the Edge experience to keep and then will work with the Chromium community to bring those to the Chromium engine, too.

Microsoft stresses that it isn’t giving up on Edge, by the way. The browser isn’t going anywhere. If you’re a happy Edge user today, chances are this move will make you an even happier Edge user in the long run. If you aren’t, Microsoft hopes you’ll give it a fresh look when the new Chromium-based version launches. It’s on Microsoft now to build a browser that is differentiated enough to get people to give it another shot.



Facebook ends platform policy banning apps that copy its features


Facebook will now freely allow developers to build competitors to its features upon its own platform. Today Facebook announced it will drop Platform Policy section 4.1, which stipulates “Add something unique to the community. Don’t replicate core functionality that Facebook already provides.”

Facebook had previously enforced that policy selectively to hurt competitors that had used its Find Friends or viral distribution features. Apps like Vine, Voxer, MessageMe, Phhhoto and more had been cut off from Facebook’s platform for too closely replicating its video, messaging or GIF creation tools. Find Friends is a vital API that lets users find their Facebook friends within other apps.

The move will significantly reduce the risk of building on the Facebook platform. It could also cast it in a better light in the eyes of regulators. Anyone seeking ways Facebook abuses its dominance will lose a talking point. And by creating a more fair and open platform where developers can build without fear of straying too close to Facebook’s history or road map, it could reinvigorate its developer ecosystem.

A Facebook spokesperson provided this statement to TechCrunch:

We built our developer platform years ago to pave the way for innovation in social apps and services. At that time we made the decision to restrict apps built on top of our platform that replicated our core functionality. These kind of restrictions are common across the tech industry with different platforms having their own variant including YouTube, Twitter, Snap and Apple. We regularly review our policies to ensure they are both protecting people’s data and enabling useful services to be built on our platform for the benefit of the Facebook community. As part of our ongoing review we have decided that we will remove this out of date policy so that our platform remains as open as possible. We think this is the right thing to do as platforms and technology develop and grow.

The change comes after Facebook locked down parts of its platform in April for privacy and security reasons in the wake of the Cambridge Analytica scandal. Diplomatically, Facebook said it didn’t expect the change to impact its standing with regulators but it’s open to answering their questions.

Earlier in April, I wrote a report on how Facebook used Policy 4.1 to attack competitors it saw gaining traction. The article, “Facebook shouldn’t block you from finding friends on competitors,” advocated for Facebook to make its social graph more portable and interoperable, so that users could decamp to competitors if they felt mistreated, pressure that would push Facebook to behave better.

The policy change will apply retroactively. Old apps that lost Find Friends or other functionality will be able to submit their app for review and, once approved, will regain access.

Friend lists still can’t be exported in a truly interoperable way. But at least now Facebook has enacted the spirit of that call to action. Developers won’t be in danger of losing access to that Find Friends Facebook API for treading in its path.

Below is an excerpt from our previous reporting on how Facebook has previously enforced Platform Policy 4.1 that before today’s change was used to hamper competitors:

  • Voxer was one of the hottest messaging apps of 2012, climbing the charts and raising a $30 million round with its walkie-talkie-style functionality. In early January 2013, Facebook copied Voxer by adding voice messaging into Messenger. Two weeks later, Facebook cut off Voxer’s Find Friends access. Voxer CEO Tom Katis told me at the time that Facebook stated his app with tens of millions of users was a “competitive social network” and wasn’t sharing content back to Facebook. Katis told us he thought that was hypocritical. By June, Voxer had pivoted toward business communications, tumbling down the app charts and leaving Facebook Messenger to thrive.
  • MessageMe had a well-built chat app that was growing quickly after launching in 2013, posing a threat to Facebook Messenger. Shortly before reaching 1 million users, Facebook cut off MessageMe‘s Find Friends access. The app ended up selling for a paltry double-digit millions price tag to Yahoo before disintegrating.
  • Phhhoto and its fate show how Facebook’s data protectionism encompasses Instagram. Phhhoto’s app that let you shoot animated GIFs was growing popular. But soon after it hit 1 million users, it got cut off from Instagram’s social graph in April 2015. Six months later, Instagram launched Boomerang, a blatant clone of Phhhoto. Within two years, Phhhoto shut down its app, blaming Facebook and Instagram. “We watched [Instagram CEO Kevin] Systrom and his product team quietly using PHHHOTO almost a year before Boomerang was released. So it wasn’t a surprise at all . . . I’m not sure Instagram has a creative bone in their entire body.”
  • Vine had a real shot at being the future of short-form video. The day the Twitter-owned app launched, though, Facebook shut off Vine’s Find Friends access. Vine let you share back to Facebook, and its six-second loops you shot in the app were a far cry from Facebook’s heavyweight video file uploader. Still, Facebook cut it off, and by late 2016, Twitter announced it was shutting down Vine.

