
Timesdelhi.com

March 19, 2019

It’s time to disrupt nuclear weapons


“Atomic bombs are primarily a means for the ruthless annihilation of cities.”

Those are the words of Leo Szilard, one of the scientists who pushed for the development of nuclear weapons. He wrote them as part of a petition signed by dozens of other scientists who had worked on the Manhattan Project pleading with President Harry Truman not to use the nuclear bomb on Japan.

Mere months after its introduction in 1945, the architects of today’s nuclear world feared the implications of the technology they had created.

Nearly 75 years later it’s time again to ask technologists, innovators, entrepreneurs and academics: will you be party to the ‘ruthless annihilation of cities’? Will you expend your talents in the service of nuclear weapons? Will you use technology to create or to destroy?

Our moment of choice

Humanity is at another turning point.

A new nuclear arms race has begun in earnest, with the US and Russia leading the way, tearing up the promise of lasting peace in favor of a new Cold War. Russia’s latest weapon is built to destroy entire coastlines with a radioactive tsunami. The US is building new nuclear weapons that are ‘more likely to be used’.

Meanwhile, North Korea appears to be building up its nascent nuclear weapons program once again. And India and Pakistan stand on the verge of open nuclear conflict, which climate modeling shows could lead to a global famine killing upwards of 2 billion people.

An Indian student wearing a mask poses with her hands painted with a slogans for peace during a rally to mark Hiroshima Day, in Mumbai on August 6, 2018. (PUNIT PARANJPE/AFP/Getty Images)

How do we stop this march toward oblivion?

The Treaty on the Prohibition of Nuclear Weapons has created an opening — a chance to radically change course with the power of international law and shifting norms. The nuclear ban treaty will become international law once 50 nations have ratified it. We are already at 22.

The financial world is also recognizing the risk, with some of the world’s biggest pension funds divesting from nuclear weapons. But there is something even more powerful than the almighty dollar: human capital.

“It took innovation, technological disruption, and ingenuity to create the nuclear dawn. We will need those same forces in greater measure to bring about a nuclear dusk.”

The nuclear weapons industrial complex relies on the most talented scientists, engineers, physicists and technologists to build this deadly arsenal. As more of that talent moves into the tech sector, defense contractors and the Pentagon are seeking to work with major technology companies and disruptive startups, as well as continuing their work with universities.

Without those talented technologists, there would be no new nuclear arms race. It’s time to divest human capital from nuclear weapons.

A mistake to end humanity?

Just over one year ago Hawaiians took cover and frantically Googled, “What to do during a nuclear attack”. Days later many Japanese mobile phone users also received a false alert for an inbound nuclear missile.

The combination of human error and technological flaws these incidents exposed makes accidental nuclear attacks an inevitability if we don’t move to end nuclear weapons before they end us.

The development of new machine learning technologies, autonomous weapons systems, cyber threats and social media manipulation is already destabilizing the global political order and potentially increasing the risk of a nuclear cataclysm. That is why it’s vital that the technology community collectively commit to using its skills and knowledge to protect us from nuclear eradication by joining the effort for global nuclear abolition.

A mock “killer robot” is pictured in central London on April 23, 2013 during the launching of the Campaign to Stop “Killer Robots,” which calls for the ban of lethal robot weapons that would be able to select and attack targets without any human intervention. The Campaign to Stop Killer Robots calls for a pre-emptive and comprehensive ban on the development, production, and use of fully autonomous weapons. (Photo: CARL COURT/AFP/Getty Images)

We need to stop this foolish nuclear escalation in its tracks. Our commitment must be to a nuclear weapons-free world, achieved by disrupting the trajectory we are currently on. Business as usual will likely end in nuclear war.

It took innovation, technological disruption, and ingenuity to create the nuclear dawn. We will need those same forces in greater measure to bring about a nuclear dusk — the complete disarmament of nuclear-armed states and safeguards against future proliferation.

The belief that we can keep doing what we have done for seven decades for another seven is naive. It relies on a fanciful, misplaced faith in the illogical idea of deterrence. We are told, simultaneously, that nuclear weapons keep the world safe by never being used, and that they bestow power, but only on certain states.

This fallacy has been exposed by this moment in time. Thirty years after the end of the Cold War, nuclear weapons have proliferated. Key treaties have been torn up or are under threat. And even more states are threatening to develop nuclear weapons.

So I am putting out a call to you: join us with this necessary disruption; declare that you will not have a hand in our demise; declare that you will use technology for good.

News Source = techcrunch.com

Jupiter raises $23 million to tell businesses and governments how climate change will destroy them


Whether it’s by flood, fire, or the fury of a storm, climate-related catastrophes are now impacting most cities and towns across the country. As these natural disasters increase in frequency and severity, cities and the businesses that reside in them are mobilizing to understand how best to prepare for the climatological challenges they’re going to face — and increasingly they’re turning to companies like Jupiter Intelligence for information.

From offices in San Mateo, Calif., Boulder, Colo., and New York, Jupiter Intelligence has made a business of selling data from satellite imagery and advanced computer models to cities like New York and Miami, along with the federal government and big insurance and real estate customers.

With its new financing, Jupiter plans to take its show on the global road, and is bringing its services to clients in Rotterdam, London, and Singapore.

It’s a story that has its roots in over two decades of work from founders Rich Sorkin, Eric Wun, Josh Hacker, and Alan Blumberg.

Wun and Sorkin met in 1996, in the early days of the development of mapping and weather prediction technologies, and got their start in the business by co-founding Zeus, a weather prediction technology developer that pitched its services to commodities traders.

“Zeus was way too early from a technology platform perspective,” says Sorkin. “We put Zeus on the shelf eight years ago. Then when we came up with the idea for Jupiter most of the early ideas were already there.”

In the interim, Sorkin served as the president of Kaggle, a company Google acquired back in 2017. By that point, Sorkin had already left to launch Jupiter, which he started in 2016.

While Zeus predicted thirty-day weather for commodities traders, Jupiter is a more powerful toolkit that predicts the possibility of damage from severe weather and climate change for a much broader set of customers, Sorkin says.

Wun and Sorkin were on board immediately, and the next person to join the fledgling team was Hacker, who had run satellite operations for Skybox, another Google acquisition. Following the merger of Skybox with Planet Labs, Hacker took a job at the National Oceanic and Atmospheric Administration within the Department of Commerce (one of the pre-eminent organizations focused on climate change).

The final recruit was Blumberg, who was approached because of his role in developing the Princeton Ocean Model, which is used by over 5,700 research and operational groups in 70 countries, and for his leadership in developing two-hour and four-day flood predictions for the Port Authority of New York and New Jersey.

Storm surge from Hurricane Sandy in New York City

After its launch the company was able to land three big insurance companies, QBE, Mitsui, and Nephila, which all agreed to throw cash into the company’s new $23 million round.

Jupiter’s predictive and analytics technologies have applications far beyond insurance. Airports, ports, power plants, water facilities, hospitals, municipal governments and even the federal government are turning to the company for information, according to Sorkin.

Jupiter raised $1 million in its seed round from DCVC (Data Collective) and then closed on $10 million more from Ignition Partners. The latest $23 million round was led by Energize Ventures, a fund focused on infrastructure and climate-related investments.

SYSTEMIQ, which was co-founded by McKinsey veteran Jeremy Oppenheim, also invested in Jupiter’s Series B. Oppenheim, the architect of McKinsey’s Sustainability and Resource Practice, said in a statement, “For a decade the planet has needed the kind of repeatable, globally consistent, insurance grade analytics Jupiter now delivers.”

Photo courtesy of Shutterstock

The toolkit the company pitches purports to offer new levels of granularity and insight into the kinds of threats climate and weather-related disasters pose to government and private assets.

“We predict probabilistically at the asset level… at the loading dock of a warehouse or a transmission box or a hotel on the beach, we determine the actual expected risk in a form that the insurance industry or the risk manager at an organization can use and integrate into their plans,” says Sorkin. 

The company’s process begins with global climate models and then drills down into a specific region, which serves as the basis for predicting peril events, according to Sorkin.

That output feeds into a statistical model, which translates the predictions into a form that quantifies the uncertainty and is tailored to decision makers, he said.
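As a purely hypothetical illustration of that last translation step (not Jupiter's actual model, and with made-up numbers), a Monte Carlo simulation can turn a hazard probability and a damage distribution into decision-ready figures such as expected annual loss:

```python
# Hypothetical sketch: convert an annual flood probability and a damage
# distribution for one asset into risk numbers with quantified uncertainty.
import random
import statistics

def annual_flood_loss(p_flood, loss_mean, loss_sd, trials=100_000, seed=42):
    """p_flood: annual probability a flood reaches the asset.
    loss_mean/loss_sd: damage (in $) if it does, modelled as a normal."""
    rng = random.Random(seed)
    losses = [
        # each trial: a flood either happens (draw a damage amount) or not
        max(0.0, rng.gauss(loss_mean, loss_sd)) if rng.random() < p_flood else 0.0
        for _ in range(trials)
    ]
    return {
        "expected_annual_loss": statistics.mean(losses),
        "p99_loss": sorted(losses)[int(0.99 * trials)],  # 99th-percentile year
    }

print(annual_flood_loss(p_flood=0.02, loss_mean=500_000, loss_sd=150_000))
```

With a 2 percent annual flood probability and a mean damage of $500,000, the expected annual loss comes out near $10,000, while the 99th-percentile year is far larger, which is exactly the gap an insurer or risk manager needs to see.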

Using APIs from Mapbox, the company can also provide a mapping interface that gives customers visualizations, along with a product, built in collaboration with Oculus, that lets users see in virtual reality what damage could look like inside a building.

“The strategy was to start with one peril in one place in one market so we started with flooding in Carolinas for the real estate,” says Sorkin. “We have expanded into much broader perils and geographies and market segments.”

For all of the time that Sorkin spends modeling out how cities will meet their doom in one form of cataclysm or another, Jupiter’s chief executive is fairly positive about the prospects for society to withstand the climate threat it currently faces.

“Even with all the bad things that could happen, we don’t think the apocalypse is inevitable,” Sorkin says. “The extent of damage is a function of how much people invest in avoiding it over the next decade.”


Fabula AI is using social spread to spot ‘fake news’


UK startup Fabula AI reckons it’s devised a way for artificial intelligence to help user generated content platforms get on top of the disinformation crisis that keeps rocking the world of social media with antisocial scandals.

Even Facebook’s Mark Zuckerberg has sounded a cautious note about AI technology’s capability to meet the complex, contextual, messy and inherently human challenge of correctly understanding every missive a social media user might send, whether well-intentioned or its nasty flip-side.

“It will take many years to fully develop these systems,” the Facebook founder wrote two years ago, in an open letter discussing the scale of the challenge of moderating content on platforms thick with billions of users. “This is technically difficult as it requires building AI that can read and understand news.”

But what if AI doesn’t need to read and understand news in order to detect whether it’s true or false?

Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” in the emergent field of “Geometric Deep Learning”, where the datasets to be studied are so large and complex that traditional machine learning techniques struggle to find purchase on this ‘non-Euclidean’ space.

The startup says its deep learning algorithms are, by contrast, capable of learning patterns on complex, distributed data sets like social networks. So it’s billing its technology as a breakthrough. (It’s written a paper on the approach, which can be downloaded here.)

It is, rather unfortunately, using the populist and now frowned upon badge “fake news” in its PR. But it says it’s intending this fuzzy umbrella to refer to both disinformation and misinformation. Which means maliciously minded and unintentional fakes. Or, to put it another way, a photoshopped fake photo or a genuine image spread in the wrong context.

The approach it’s taking to detecting disinformation relies not on algorithms parsing news content to try to identify malicious nonsense but instead looks at how such stuff spreads on social networks — and also therefore who is spreading it.

There are characteristic patterns to how ‘fake news’ spreads vs the genuine article, says Fabula co-founder and chief scientist, Michael Bronstein.

“We look at the way that the news spreads on the social network. And there is — I would say — a mounting amount of evidence that shows that fake news and real news spread differently,” he tells TechCrunch, pointing to a recent major study by MIT academics which found ‘fake news’ spreads differently vs bona fide content on Twitter.

“The essence of geometric deep learning is it can work with network-structured data. So here we can incorporate heterogenous data such as user characteristics; the social network interactions between users; the spread of the news itself; so many features that otherwise would be impossible to deal with under machine learning techniques,” he continues.
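To picture the network-structured input Bronstein describes, here is a toy Python sketch (purely illustrative, not Fabula's model or data) that represents one story's spread as a cascade of reshare edges and computes a few simple propagation features of the kind a graph-based classifier could consume:

```python
# Toy example: one story's spread as (source, resharer) edges, seed first,
# plus per-user follower counts, reduced to simple propagation features.
from collections import deque

def propagation_features(edges, followers):
    """edges: list of (source_user, resharing_user) pairs, seed edge first.
    followers: dict mapping user -> follower count."""
    children = {}
    users = set()
    for src, dst in edges:
        children.setdefault(src, []).append(dst)
        users.update((src, dst))
    # breadth-first search from the seed user to measure cascade depth
    seed = edges[0][0]
    depth = {seed: 0}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        for v in children.get(u, []):
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return {
        "cascade_size": len(users),              # how many users touched it
        "cascade_depth": max(depth.values()),    # how far it travelled
        "mean_followers": sum(followers.get(u, 0) for u in users) / len(users),
    }

edges = [("seed", "a"), ("a", "b"), ("seed", "c")]
followers = {"seed": 100, "a": 10, "b": 0, "c": 10}
print(propagation_features(edges, followers))  # size 4, depth 2, mean 30.0
```

Fabula's actual approach learns such patterns directly from the graph rather than relying on hand-crafted features like these, but the sketch shows why spread data is naturally non-Euclidean: it lives on a graph, not a grid.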

Bronstein, who is also a professor at Imperial College London, with a chair in machine learning and pattern recognition, likens the phenomenon Fabula’s machine learning classifier has learnt to spot to the way infectious disease spreads through a population.

“This is of course a very simplified model of how a disease spreads on the network. In this case network models relations or interactions between people. So in a sense you can think of news in this way,” he suggests. “There is evidence of polarization, there is evidence of confirmation bias. So, basically, there are what is called echo chambers that are formed in a social network that favor these behaviours.”

“We didn’t really go into — let’s say — the sociological or the psychological factors that probably explain why this happens. But there is some research that shows that fake news is akin to epidemics.”

The tl;dr of the MIT study, which examined a decade’s worth of tweets, was that not only does the truth spread slower but also that human beings themselves are implicated in accelerating disinformation. (So, yes, actual human beings are the problem.) Ergo, it’s not all bots doing all the heavy lifting of amplifying junk online.

The silver lining of what appears to be an unfortunate quirk of human nature is that a penchant for spreading nonsense may ultimately help give the stuff away — making a scalable AI-based tool for detecting ‘BS’ potentially not such a crazy pipe-dream.

Although, to be clear, Fabula’s AI remains in development, having so far been tested internally on Twitter data sub-sets. And the claims it’s making for its prototype model remain to be commercially tested with customers in the wild using the tech across different social platforms.

It’s hoping to get there this year, though, and intends to offer an API for platforms and publishers towards the end of the year. The AI classifier is intended to run in near real-time on a social network or other content platform, identifying BS.

Fabula envisages its own role, as the company behind the tech, as that of an open, decentralised “truth-risk scoring platform” — akin to a credit referencing agency just related to content, not cash.

Scoring comes into it because the AI generates a score for classifying content based on how confident it is that it’s looking at a piece of fake vs true news.

A visualisation of a fake vs real news distribution pattern; users who predominantly share fake news are coloured red and users who don’t share fake news at all are coloured blue — which Fabula says shows the clear separation into distinct groups, and “the immediately recognisable difference in spread pattern of dissemination”.

In its own tests Fabula says its algorithms were able to identify 93 percent of “fake news” within hours of dissemination — which Bronstein claims is “significantly higher” than any other published method for detecting ‘fake news’. (Their accuracy figure uses a standard aggregate measurement of machine learning classification model performance, called ROC AUC.)
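ROC AUC, the aggregate measure mentioned above, is equivalent to the probability that a randomly chosen fake story receives a higher score than a randomly chosen real one. A minimal pure-Python illustration (not Fabula's code; real pipelines would typically use a library implementation such as scikit-learn's `roc_auc_score`):

```python
# ROC AUC as a pairwise-ranking probability: for every (fake, real) pair,
# count a win when the fake item scores higher, half a win on ties.
def roc_auc(labels, scores):
    """labels: 1 for fake, 0 for real; scores: model confidence of fakeness."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([1, 1, 0, 0], [0.9, 0.6, 0.7, 0.2]))  # 0.75: one pair misordered
```

A score of 0.93 therefore means that, 93 percent of the time, the model ranks a fake story above a real one; it is a ranking quality measure, not a raw hit rate.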

The dataset the team used to train their model is a subset of Twitter’s network — comprised of around 250,000 users and containing around 2.5 million “edges” (aka social connections).

For their training dataset, Fabula relied on true/fake labels attached to news stories by third-party fact-checking NGOs, including Snopes and PolitiFact. And, overall, pulling together the dataset was a process of “many months”, according to Bronstein. He also says that around a thousand different stories were used to train the model, adding that the team is confident the approach works on small social networks as well as Facebook-sized mega-nets.

Asked whether he’s sure the model hasn’t simply been trained to identify patterns caused by bot-based junk news spreaders, he says the training dataset included some registered (and thus verified ‘true’) users.

“There is multiple research that shows that bots didn’t play a significant amount [of a role in spreading fake news] because the amount of it was just a few percent. And bots can be quite easily detected,” he also suggests, adding: “Usually it’s based on some connectivity analysis or content analysis. With our methods we can also detect bots easily.”

To further check the model, the team tested its performance over time by training it on historical data and then using a different split of test data.

“While we see some drop in performance it is not dramatic. So the model ages well, basically. Up to something like a year the model can still be applied without any re-training,” he notes, while also saying that, when applied in practice, the model would be continually updated as it keeps digesting (ingesting?) new stories and social media content.

Somewhat terrifyingly, the model could also be used to predict virality, according to Bronstein — raising the dystopian prospect of the API being used for the opposite purpose to that which it’s intended: i.e. maliciously, by fake news purveyors, to further amp up their (anti)social spread.

“Potentially putting it into evil hands it might do harm,” Bronstein concedes. Though he takes a philosophical view on the hyper-powerful double-edged sword of AI technology, arguing such technologies will create an imperative for a rethinking of the news ecosystem by all stakeholders, as well as encouraging emphasis on user education and teaching critical thinking.

Let’s certainly hope so. And, on the educational front, Fabula is hoping its technology can play an important role — by spotlighting network-based cause and effect.

“People now like or retweet or basically spread information without thinking too much or the potential harm or damage they’re doing to everyone,” says Bronstein, pointing again to the infectious diseases analogy. “It’s like not vaccinating yourself or your children. If you think a little bit about what you’re spreading on a social network you might prevent an epidemic.”

So, tl;dr, think before you RT.

Returning to the accuracy rate of Fabula’s model: while ~93 per cent might sound pretty impressive, if it were applied to content on a massive social network like Facebook, which has some 2.3BN+ users uploading what could be trillions of pieces of content daily, even a seven percent failure rate would still make for an awful lot of fakes slipping undetected through the AI’s net.

But Bronstein says the technology does not have to be used as a standalone moderation system. Rather, he suggests it could be used in conjunction with other approaches, such as content analysis, and thus function as another string in a wider ‘BS detector’s’ bow.

It could also, he suggests, further aid human content reviewers — to point them to potentially problematic content more quickly.

Depending on how the technology gets used he says it could do away with the need for independent third party fact-checking organizations altogether because the deep learning system can be adapted to different use cases.

Example use-cases he mentions include an entirely automated filter (i.e. with no human reviewer in the loop); or to power a content credibility ranking system that can down-weight dubious stories or even block them entirely; or for intermediate content screening to flag potential fake news for human attention.

Each of those scenarios would likely entail a different truth-risk confidence score. Though most — if not all — would still require some human back-up. If only to manage overarching ethical and legal considerations related to largely automated decisions. (Europe’s GDPR framework has some requirements on that front, for example.)

Facebook’s grave failures around moderating hate speech in Myanmar, which led to its own platform becoming a megaphone for terrible ethnic violence, were very clearly exacerbated by the fact that it did not have enough reviewers who could understand the many local languages and dialects spoken in the country.

So if Fabula’s language-agnostic, propagation- and user-focused approach proves to be as culturally universal as its makers hope, it might be able to raise flags faster than human reviewers who lack the necessary language skills and local knowledge to intelligently parse context.

“Of course we can incorporate content features but we don’t have to — we don’t want to,” says Bronstein. “The method can be made language independent. So it doesn’t matter whether the news are written in French, in English, in Italian. It is based on the way the news propagates on the network.”

Although he also concedes: “We have not done any geographic, localized studies.”

“Most of the news that we take are from PolitiFact so they somehow regard mainly the American political life but the Twitter users are global. So not all of them, for example, tweet in English. So we don’t yet take into account tweet content itself or their comments in the tweet — we are looking at the propagation features and the user features,” he continues.

“These will be obviously next steps but we hypothesis that it’s less language dependent. It might be somehow geographically varied. But these will be already second order details that might make the model more accurate. But, overall, currently we are not using any location-specific or geographic targeting for the model.

“But it will be an interesting thing to explore. So this is one of the things we’ll be looking into in the future.”

Fabula’s approach being tied to the spread (and the spreaders) of fake news certainly means there’s a raft of associated ethical considerations that any platform making use of its technology would need to be hyper sensitive to.

For instance, if platforms could suddenly identify and label a sub-set of users as ‘junk spreaders’ the next obvious question is how will they treat such people?

Would they penalize them with limits — or even a total block — on their power to socially share on the platform? And would that be ethical or fair given that not every sharer of fake news is maliciously intending to spread lies?

What if it turns out there’s a link between — let’s say — a lack of education and propensity to spread disinformation? As there can be a link between poverty and education… What then? Aren’t your savvy algorithmic content downweights risking exacerbating existing unfair societal divisions?

Bronstein agrees there are major ethical questions ahead when it comes to how a ‘fake news’ classifier gets used.

“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score. So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. Of course such an algorithm would provide great accuracy but at least ethically it might be wrong,” he says when we ask about ethics.

He confirms Fabula is not using any kind of political affiliation information in its model at this point — but it’s all too easy to imagine this sort of classifier being used to surface (and even exploit) such links.

“What is very important in these problems is not only to be right — so it’s great of course that we’re able to quantify fake news with this accuracy of ~90 percent — but it must also be for the right reasons,” he adds.

The London-based startup was founded in April last year, though the academic research underpinning the algorithms has been in train for the past four years, according to Bronstein.

The patent for their method was filed in early 2016 and granted last July.

They’ve been funded by $500,000 in angel funding and about another $500,000 in total from European Research Council grants, plus academic grants from tech giants Amazon, Google and Facebook, awarded via open research competitions.

(Bronstein confirms the three companies have no active involvement in the business. Though doubtless Fabula is hoping to turn them into customers for its API down the line. But he says he can’t discuss any potential discussions it might be having with the platforms about using its tech.)

Focusing on spotting patterns in how content spreads as a detection mechanism does have one major and obvious drawback — in that it only works after the fact of (some) fake content spread. So this approach could never entirely stop disinformation in its tracks.

Though Fabula claims detection is possible within a relatively short time frame — of between two and 20 hours after content has been seeded onto a network.

“What we show is that this spread can be very short,” he says. “We looked at up to 24 hours and we’ve seen that just in a few hours… we can already make an accurate prediction. Basically it increases and slowly saturates. Let’s say after four or five hours we’re already about 90 per cent.”

“We never worked with anything that was lower than hours but we could look,” he continues. “It really depends on the news. Some news does not spread that fast. Even the most groundbreaking news do not spread extremely fast. If you look at the percentage of the spread of the news in the first hours you get maybe just a small fraction. The spreading is usually triggered by some important nodes in the social network. Users with many followers, tweeting or retweeting. So there are some key bottlenecks in the network that make something viral or not.”

A network-based approach to content moderation could also serve to further enhance the power and dominance of already hugely powerful content platforms — by making the networks themselves core to social media regulation, i.e. if pattern-spotting algorithms rely on key network components (such as graph structure) to function.

So you can certainly see why — even above a pressing business need — tech giants are at least interested in backing the academic research. Especially with politicians increasingly calling for online content platforms to be regulated like publishers.

At the same time, there are — what look like — some big potential positives to analyzing spread, rather than content, for content moderation purposes.

As noted above, the approach doesn’t require training the algorithms on different languages and (seemingly) cultural contexts — setting it apart from content-based disinformation detection systems. So if it proves as robust as claimed it should be more scalable.

Though, as Bronstein notes, the team have mostly used U.S. political news for training their initial classifier. So some cultural variations in how people spread and react to nonsense online at least remains a possibility.

A more certain challenge is “interpretability” — aka explaining what underlies the patterns the deep learning technology has identified via the spread of fake news.

While algorithmic accountability is very often a challenge for AI technologies, Bronstein admits it’s “more complicated” for geometric deep learning.

“We can potentially identify some features that are the most characteristic of fake vs true news,” he suggests when asked whether some sort of ‘formula’ of fake news can be traced via the data, noting that while they haven’t yet tried to do this they did observe “some polarization”.

“There are basically two communities in the social network that communicate mainly within the community and rarely across the communities,” he says. “Basically it is less likely that somebody who tweets a fake story will be retweeted by somebody who mostly tweets real stories. There is a manifestation of this polarization. It might be related to these theories of echo chambers and various biases that exist. Again we didn’t dive into trying to explain it from a sociological point of view — but we observed it.”

So while, in recent years, there have been some academic efforts to debunk the notion that social media users are stuck inside filter bubbles bouncing their own opinions back at them, Fabula’s analysis of the landscape of social media opinions suggests they do exist, albeit not encasing every Internet user.

Bronstein says the next step for the startup is to scale its prototype to be able to deal with multiple requests so it can get the API to market in 2019, and start charging publishers for a truth-risk/reliability score for each piece of content they host.

“We’ll probably be providing some restricted access, maybe with some commercial partners, to test the API but eventually we would like to make it useable by multiple people from different businesses,” says Bronstein. “Potentially also private users — journalists or social media platforms or advertisers. Basically we want to be… a clearing house for news.”


Entrepreneur First eyes further Asia growth to build its global network of founders


British startup venture builder Entrepreneur First is eyeing additional expansion in Asia, where its operation is now as large as it is in Europe, as it expands its reach in 2019. But, despite serving a varied mixture of markets, the company said its founders are a fairly unified breed.

The Entrepreneur First program is billed as a “talent investor.” It matches prospective founders and, through an accelerator program, it encourages them to start and build companies which it backs with financing. The organization started out in London in 2011, and today it is also present in Paris and Berlin in Europe and, in Asia, Singapore, Hong Kong and (soon) Bangalore. To date, it says it has graduated over 1,200 founders who have created more than 200 companies, estimated at a cumulative $1.5 billion on paper.

Those six cities cover a spread of unique cultures — both in general life and startup ecosystems — but, despite that, co-founder Matthew Clifford believes there are actually many commonalities among its global founder base.

“It’s really striking to me how little adjustment of the model has been necessary to make it work in each location,” Clifford — who started EF with Alice Bentinck — told TechCrunch in an interview. “The outliers in each country have more in common with each other than with their fellow compatriots… we’re uncovering this global community of outliers.”

Despite the common traits, EF’s Asia expansion has added a new dimension to the program after it announced a tie-in with HAX, one of the world’s best-known hardware-focused accelerator programs, that will see the duo co-invest in hardware startups via a new joint program.

“We saw early that hardware was a much more viable part of the market in Asia than it is traditionally seen in Europe [and] needed a partner to accelerate the talent,” Clifford said.

Already, the first four beneficiaries of that partnership have been announced — AIMS, BOPSIN, Neptune Robotics and SEPPURE — each of which graduated from the first EF cohort in Hong Kong, its fourth in Asia so far. Going forward, Clifford expects that around three to five startups from each batch will move from EF into the joint initiative with HAX. The program covers Asia first but it is slated to expand to EF’s European sites “soon.”

Entrepreneur First held its first investor day in Hong Kong this month

Another impending expansion is EF’s first foray into India via Bangalore which starts this month, and there could be other new launches in 2019.

“We’ll continue to grow by adding sites but we are not in a rush,” Clifford said. “The most important thing is retaining quality of talent. It may be six months until we add another site in Asia but there’s no shortage of places we think it will work.

“We operate a single global fund,” he added. “We’re a talent investor and we believe there are strong network effects in that. The people who back us are really betting on the model… [that it’s] an asset class with great returns.”

While it appears that its global expansion drive is a little more gradual than what was previously envisaged — backer and board member Reid Hoffman told TechCrunch in 2016 that he could imagine it in 50 cities — Clifford said EF isn’t raising more capital presently. That previous investment coupled with management fees is enough fuel in the tank, he said. The organization also operates a follow-on fund but has had one major exit to date, Magic Pony Technology, the AI startup bought by Twitter for a reported $150 million.

Still, with hundreds of companies carrying EF on the cap table, Clifford said he is bullish that his organization can attract an international-minded breed of entrepreneur worldwide — one he believes can succeed regardless of any local constraints placed on it.

“With our global network of capital, we always want capital, not talent, to be the limiting factor. Our goal is to make being ‘an EF company’ more relevant to your identity as a startup regardless of your location,” he told TechCrunch.

News Source = techcrunch.com

pi-top’s latest edtech tool doubles down on maker culture

in Delhi/drone/edtech startup/Education/electronics/Europe/Gadgets/Hardware/India/learn to code/London/pi-top/pi-top 4/Politics/Raspberry Pi/robotics/Startups/STEM/TC/United Kingdom by

London-based edtech startup pi-top has unboxed a new flagship learn-to-code product, demoing the “go anywhere” Pi-powered computer at the Bett Show education fair in London today.

Discussing the product with TechCrunch ahead of launch, co-founder and CEO Jesse Lozano talked up the skills the company hopes students in the target 12-to-17 age range will develop and learn to apply by using sensor-based connected tech, powered by its new pi-top 4, to solve real world problems.

“When you get a pi-top 4 out of the box you’re going to start to learn how to code with it, you’re going to start to learn and understand electronic circuits, you’re going to understand sensors from our sensor library. Or components from our components library,” he told us. “So it’s not: ‘I’m going to learn how to create a robot that rolls around on wheels and doesn’t knock into things’.

“It’s more: ‘I’m going to learn how a motor works. I’m going to learn how a distance sensor works. I’m going to learn how to properly hook up power to these different sensors. I’m going to learn how to apply that knowledge… take those skills and [keep making stuff].”
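The distance-sensor lesson Lozano describes boils down to simple physics: an ultrasonic sensor times a sound pulse’s round trip and converts it to distance. A minimal sketch of that calculation in plain Python, independent of any particular pi-top library or sensor module:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at roughly 20°C

def echo_to_distance(echo_seconds):
    """Convert a round-trip ultrasonic echo time into distance in metres.

    The pulse travels to the obstacle and back, so halve the path length.
    """
    return SPEED_OF_SOUND * echo_seconds / 2

# A 10 ms round trip means the obstacle is about 1.7 m away.
print(round(echo_to_distance(0.010), 3))  # 1.715
```

On real hardware a library such as gpiozero handles the pin timing, but the underlying arithmetic a student learns is exactly this.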

The pi-top 4 is a modular computer that’s designed to be applicable, well, anywhere; up in the air, with the help of a drone attachment; powering a sensing weather balloon; acting as the brains for a rover style wheeled robot; or attached to sensors planted firmly in the ground to monitor local environmental conditions.

The startup was already dabbling in this area, via earlier products — such as a Pi-powered laptop that featured a built-in rail for breadboarding electronics. But the pi-top 4 is a full step outside the usual computing box.

The device has a built-in mini OLED screen for displaying project info, along with an array of ports. It can be connected to and programmed via one of pi-top’s other Pi-powered computers, or any PC, Mac and Chromebook, with the company also saying it easily connects to existing screens, keyboards and mice. Versatility looks to be the name of the game for pi-top 4.

pi-top’s approach to computing and electronics is flexible and interoperable, meaning the pi-top 4 can be extended with standard electronics components — or even with littleBits-style kits’ more manageable bits and bobs.

pi-top is also intending to sell a few accessories of its own (such as the drone add-on, pictured above) to help get kids’ creative project juices flowing — and has launched a range of accessories, cameras, motors and sensors to “allow creators of all ages to start learning by making straight out of the box”.

But Lozano emphasizes its platform play is about reaching out to a wider world, not seeking to lock teachers and kids to buying proprietary hardware. (Which would be all but impossible, in any case, given the Raspberry Pi core.)

“It’s really about giving people that breadth of ability,” says Lozano, discussing the sensor-based skills he wants the product to foster. “As you go through these different projects you’re learning these specific skills but you also start to understand how they would apply to other projects.”

He mentions various maker projects the pi-top can be used to make, like a music synth or wheeled robot, but says the point isn’t making any specific connected thing; it’s encouraging kids to come up with project ideas of their own.

“Once that sort of veil has been pierced in students and in teachers we see some of the best stuff starts to be made. People make things that we had no idea they would integrate it into,” he tells us, pointing by way of example to a solar car project from a group of U.S. schoolkids. “These fifteen year olds are building solar cars and they’re racing them from Texas to California — and they’re using pi-tops to understand how their cars are performing to make better race decisions.”

pi-top’s new device is a modular programmable computer designed for maker projects

“What you’re really learning is the base skills,” he adds, with a gentle sideswipe at the flood of STEM toys now targeting parents’ wallets. “We want to teach you real skills. And we want you to be able to create projects that are real. That it’s not block-based coding. It’s not magnetized, clipped in this into that and all of a sudden you have something. It’s about teaching you how to really make things. And how the world actually works around you.”

The pi-top 4 starts at $199 for a foundation bundle which includes a Raspberry Pi 3B+, 16GB SD card and power pack, along with a selection of sensors and add-on components for starter projects.

Additional educational bundles will also launch down the line, at a higher price, including more add-ons, access to premium software and a full curriculum for educators to support budding makers, according to Lozano.

The startup has certainly come a long way from its founders’ first luridly green 3D printed laptop which caught our eye back in 2015. Today it employs more than 80 people globally, with offices in the UK, US and China, while its creative learning devices are in the hands of “hundreds of thousands” of schoolkids across more than 70 countries at this stage. And Lozano says they’re gunning to pass the million mark this year.

So while the ‘learn to code’ space has erupted into a riot of noise and color over the past half decade, with all sorts of connected playthings now competing for kids’ attention and pestering parents with quasi-educational claims, pi-top has kept its head down and focused firmly on building a serious edtech business with STEM learning at its core. That focus, as Lozano tells it, has saved the company from chasing fickle consumer fads.

“Our relentless focus on real education is something that has differentiated us,” he responds, when asked how pi-top stands out in what’s now a very crowded marketplace. “The consumer market, as we’ve seen with other startups, it can be fickle. And trying to create a hit toy all the time — I’d rather leave that to Mattel… When you’re working with schools it’s not a fickle process.”

Part of that focus includes supporting educators to acquire the necessary skills themselves to be able to teach what’s always a fast-evolving area of study. So schools signing up to pi-top’s subscription product get support materials and guides, to help them create a maker space and understand all the ins and outs of the pi-top platform. It also provides a classroom management backend system that lets teachers track students’ progress.

“If you’re a teacher that has absolutely no experience in computer science or engineering or STEM based learning or making then you’re able to bring on the pi-top platform, learn with it and with your student, and when they’re ready they can create a computer science course — or something of that ilk — in their classroom,” says Lozano.

pi-top wants kids to use tech to tackle real-world problems

“As with all good things it takes time, and you need to build up a bank of experience. One of the things we’ve really focused on is giving teachers that ability to build up that bank of experience, through an after school club, or through a special lesson plan that they might do.

“For us it’s about augmenting that teacher and helping them become a great educator with tools and with resources. There’s some edtech stuff where they want to replace the teacher — they want to make the teacher obsolete. I couldn’t disagree with that viewpoint more.”

“Why aren’t teachers just buying textbooks?” he adds. “It takes 24 months to publish a textbook. So how are you supposed to teach computer science with those technology-based skills with something that’s by design two years out of date?”

Last summer pi-top took in $16M in Series B funding, led by existing investors Hambro Perks and Committed Capital. It’s been using the financing to bring pi-top 4 to market while also investing heavily in its team over the past 18 months — expanding in-house expertise in designing learning products and selling in to the education sector via a number of hires, including the former director of learning at Apple, Dr William Rankin.

The founders’ philosophy is to combine academic expertise in education with “excellence in engineering”. “We want the learning experience to be something we’re 100% confident in,” says Lozano. “You can go into pi-top and immediately start learning with our lesson plans and the kind of framework that we provide.”

“[W]e’ve unabashedly focused on… education. It is the pedagogy,” he adds. “It is the learning outcome that you’re going to get when you use the pi-top. So one of the big changes over the last 18 months is we’ve hired a world class education team. We have over 100 years of pedagogical experience on the team now producing an enormous amount of — we call them learning experience designers.”

He reckons that focus will stand pi-top in good stead as more educators turn their attention to how to arm their pupils with the techie skills of the future.

“There’s loads of competition but now, when schools are looking, they’re [asking]: who’s the team behind the education outcome that you’re selling me?” he suggests. “And you know what, if you don’t have a really strong education team then you’re seeing schools and districts become a lot more picky — because there is so much choice. And again that’s something I’m really excited about. Everybody’s always trying to do a commercial brand partnership deal. That’s just not something that we’ve focused on and I do really think that was a smart choice on our end.”

Lozano is also excited about a video the team has produced to promote the new product — which strikes a hip, urban note as pi-top seeks to inspire the next generation of makers.

“We really enjoy working in the education sector and I really, really enjoy helping teachers and schools deliver inspirational content and learning outcomes to their students,” he adds. “It’s genuinely a great reason to wake up in the morning.”

News Source = techcrunch.com
