Timesdelhi.com

December 12, 2018
Category archive

cybernetics

China’s Infervision is helping 280 hospitals worldwide detect cancers from images

Until recently, humans have relied on the trained eyes of doctors to diagnose diseases from medical images.

Beijing-based Infervision is among a handful of artificial intelligence startups around the world racing to improve medical imaging analysis through deep learning, the same technology that powers face recognition and autonomous driving.

The startup, which has raised $70 million to date from leading investors such as Sequoia Capital China, began by picking out signs of lung cancer, a prevalent cause of death in China. At the Radiological Society of North America’s annual conference in Chicago this week, the three-year-old company announced that it is extending its computer vision prowess to other chest conditions such as cardiac calcification.

“By adding more scenarios under which our AI works, we are able to offer more help to doctors,” Chen Kuan, founder and chief executive officer of Infervision, told TechCrunch. While a doctor can spot dozens of diseases from one single image scan, AI needs to be taught how to identify multiple target objects in one go.
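
The article does not describe Infervision's architecture, but the point about identifying multiple targets in one pass corresponds to what is commonly called multi-label classification: a single network scores every condition independently from one scan. The sketch below is a minimal illustration under that assumption; PyTorch, the ChestFindingsNet class and the condition names are all hypothetical choices, not Infervision's implementation.

```python
# A minimal, hypothetical multi-label sketch (not Infervision's code): one forward
# pass scores every condition independently, so a single scan can be checked for
# several findings at once. Condition names and the model class are illustrative.
import torch
import torch.nn as nn
from torchvision import models

CONDITIONS = ["lung_nodule", "cardiac_calcification", "rib_fracture"]  # illustrative

class ChestFindingsNet(nn.Module):
    def __init__(self, num_conditions):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CNN backbone would do
        backbone.fc = nn.Linear(backbone.fc.in_features, num_conditions)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)  # raw logits, one per condition

model = ChestFindingsNet(len(CONDITIONS))
loss_fn = nn.BCEWithLogitsLoss()  # an independent sigmoid per label, not a single softmax

scan = torch.randn(1, 3, 224, 224)         # stand-in for a preprocessed image
targets = torch.tensor([[1.0, 0.0, 0.0]])  # e.g. a scan labelled with a nodule only
loss = loss_fn(model(scan), targets)       # trains all findings jointly

probs = torch.sigmoid(model(scan))
print({c: round(p.item(), 3) for c, p in zip(CONDITIONS, probs[0])})
```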

But Chen says machines already outstrip humans in other aspects. For one, they are much faster readers. It normally takes a doctor 15 to 20 minutes to scrutinize one image, whereas Infervision’s AI can process the visuals and put together a report in under 30 seconds.

AI also addresses the long-standing issue of misdiagnosis. Chinese clinical newspaper Medical Weekly reported that doctors with less than five years’ experience got their answers right only 44 percent of the time when diagnosing black lung, a disease common among coal miners. A study from Zhejiang University that examined autopsies performed between 1950 and 2009 found that the total clinical misdiagnosis rate averaged 46 percent.

“Doctors work long hours and are constantly under tremendous stress, which can lead to errors,” suggested Chen.

The founder claimed that his company is able to improve the accuracy rate by 20 percent. AI can also fill in for doctors in remote hinterlands where healthcare provision falls short, which is often the case in China.

Winning the first client

A report on bone fractures produced by Infervision’s medical imaging tool

Like any deep learning company, Infervision needs to keep training its algorithms with data from varied sources. As of this week, the startup is working with 280 hospitals, twenty of which are outside China, and is steadily adding a dozen new partners each week. It also claims that 70 percent of China’s top-tier hospitals use its lung-specific AI tool.

But the firm has had a rough start.

Chen, a native of Shenzhen in south China, founded Infervision after dropping out of his doctoral program at the University of Chicago where he studied under Nobel-winning economist James Heckman. For the first six months of his entrepreneurial journey, Chen knocked on the doors of 40 hospitals across China — to no avail.

“Medical AI was still a novelty then. Hospitals are by nature conservative because they have to protect patients, which makes them reluctant to partner with outsiders,” Chen recalled.

Eventually, Sichuan Provincial People’s Hospital gave Infervision a shot. Chen and his two co-founders got hold of a small batch of image data, moved into a tiny apartment next to the hospital, and got the company underway.

“We observed how doctors work, explained to them how AI works, listened to their complaints, and iterated our product,” said Chen. Infervision’s product proved adept, and its reputation soon spread among healthcare professionals.

“Hospitals are risk-averse, but as soon as one of them likes us, it goes out to spread the word and other hospitals will soon find us. The medical industry is very tight-knit,” the founder said.

It also helps that AI has evolved from a fringe invention to a norm in healthcare over the past few years, and hospitals have begun actively seeking help from tech startups.

Infervision has stumbled in its foreign markets as well. In the US, for example, the company can visit doctors only by appointment, which slows down product iteration.

Chen also admitted that many western hospitals did not trust that a Chinese startup could provide state-of-the-art technology. But they welcomed Infervision in once they saw what it could achieve, thanks in part to its trove of data: up to 26,000 images a day.

“Regardless of their technological capability, Chinese startups are blessed with access to mountains of data that no startups elsewhere in the world could match. That’s an immediate advantage,” said Chen.

There’s no lack of rivalry in China’s massive medical industry. Yitu, a pivotal player that also applies its AI to surveillance and fintech, unveiled a cancer detection tool at the Chicago radiological conference this week.

Infervision, which generates revenues by charging fees for its AI solution as a service, says that down the road, it will prioritize product development for conditions that incur higher social costs, such as cerebrovascular and cardiovascular diseases.

News Source = techcrunch.com

Free societies face emerging, existential threats from technology

Silicon Valley is currently, and correctly, under fire for the failure of leading platforms such as Facebook, Google and Twitter to protect against the spread of disinformation, hate speech and efforts to disrupt our elections. I don’t know why these companies behaved as they did.

But whatever the reason – naiveté, excessive focus on near-term profits, or simply a lack of proper attention to mind-numbingly complex problems – it’s clear they have to do a better job of making sure technology makes our world safer, freer and more stable rather than the opposite.

But it’s not just these big companies that need to up their game. As venture capitalists, we need to do more to find, fund and help a new generation of technology companies that build the infrastructure and applications to deal with technology-based threats to stability and security. Yes, Facebook and Twitter must deal with unintended consequences of their massive platforms. But if history is any guide, it will be new companies that come up with the bold new visions and business models to address fundamental, once-in-a-generation challenges.

I don’t use the word fundamental lightly. Just think about all the security failures you now take for granted that once would have been unthinkable. Our PCs and other devices are patched every few hours or days, rather than every few months. We are routinely warned by merchants—sometimes even credit agencies!—to change our passwords because they’ve been hacked. We are relieved, rather than annoyed, when the credit card company calls to verify our recent purchases.

We feel abused when we read how our online identity has been monetized without our knowledge or used to micro-target us with ads by groups seeking to polarize our politics. And there are deeper-seated concerns, like the nagging fear of a terror attack or a lone-wolf gunman when we enter an airport or let our teenage kid go to a concert. Our physical and cyber selves feel threatened on a regular basis. Like it or not, we are too often under attack, as individuals, consumers and as citizens. But like the proverbial frog in a pot, we don’t seem to notice the rising water temperature.

If we stick with the status quo, that water is only going to get hotter. We already know the Russians (and the Iranians, and the North Koreans) are again targeting U.S. voting systems in advance of the midterm elections, and the Russians also have the ability to shut down large parts of our electric grid. It hasn’t happened yet, but will Americans start worrying about congregating in public spaces, whether it is to protest, attend large rallies, or go to concerts? I grew up in Pakistan, where horrific gun and bomb attacks on civilians are more common. I can’t help but fear that the same scourge will come to our shores.

If this sounds like scare-mongering, so be it. There is no getting around the fact that more people have more ways to do large-scale damage than ever before. Thankfully there are technologists and entrepreneurs working diligently to find ways to defend us from such harm.

Our portfolio company Evolv Technology, for example, is using advanced sensors and AI in weapons detection systems that can screen hundreds of people per hour without making them slow down or empty their pockets and purses. Companies like ShieldAI, Convexxum, Echodyne and others are using machine vision and advanced radar and lidar technologies to prevent people from being put in harm’s way by drone-type attacks.

A drone flying and filming over Dubai

Funding such companies can be different from the deals Silicon Valley VCs are used to. In most cases, these firms must collaborate with trusted government actors, intelligence agencies and enforcement organizations, not to mention comply with their regulations. To be successful, they need to share information with other companies, including competitors.

But I’m betting the trouble will be well worth it. History tells us that companies that overcome big obstacles to create new markets often enjoy years of rapid growth, and few competitors.

Most of all, I believe a nervous world is ready to reward companies that make it feel safer. Just as Uber and Airbnb caught the front edge of the sharing economy boom, companies whose mission is aligned with a change in the societal zeitgeist can create huge value.

Investors are already doing their part. DCVC recently invested in Fortem Technologies, and Shasta Ventures in AirSpace, which make Star Wars-ish systems of AI-based drones whose only role is to automatically detect, identify, and slam into drones that wander into unauthorized airspace — say, over a private estate, or a factory.

General Catalyst invested in Mark43, which makes a cloud platform to help police departments and their detectives investigate crimes more quickly and effectively.

While these mission-oriented companies may not provide the fastest or steepest ramp to riches, the best of them will create technology that affects each of us every day, and businesses that will be resilient to economic cycles, fads and fashion. For investors, it’s a twofer of enlightened self-interest, both as investors and as citizens. To paraphrase JFK, we should invest in such companies “not because it is easy, but because it is hard.”

News Source = techcrunch.com

Safe artificial intelligence requires cultural intelligence

Knowledge, to paraphrase British journalist Miles Kington, is knowing a tomato is a fruit; wisdom is knowing there’s a norm against putting it in a fruit salad.

Any kind of artificial intelligence clearly needs to possess great knowledge. But if we are going to deploy AI agents widely in society at large — on our highways, in our nursing homes and schools, in our businesses and governments — we will need machines to be wise as well as smart.

Researchers who focus on a problem known as AI safety or AI alignment define artificial intelligence as machines that can meet or beat human performance at a specific cognitive task. Today’s self-driving cars and facial recognition algorithms fall into this narrow type of AI.

But some researchers are working to develop artificial general intelligence (AGI) — machines that can outperform humans at any cognitive task. We don’t know yet when or even if AGI will be achieved, but it’s clear that the research path is leading to ever more powerful and autonomous AI systems performing more and more tasks in our economies and societies.

Building machines that can perform any cognitive task means figuring out how to build AI that can not only learn about things like the biology of tomatoes but also about our highly variable and changing systems of norms about things like what we do with tomatoes.

Humans live lives populated by a multitude of norms, from how we eat, dress and speak to how we share information, treat one another and pursue our goals.

For AI to be truly powerful, machines will need to comprehend that norms can vary tremendously from group to group, which can make them seem unnecessary, and yet that following them in a given community can be critical.

Tomatoes in fruit salads may seem odd to the Brits for whom Kington was writing, but they are perfectly fine if you are cooking for Koreans or a member of the culinary avant-garde.  And while it may seem minor, serving them the wrong way to a particular guest can cause confusion, disgust, even anger. That’s not a recipe for healthy future relationships.

Norms concern things not only as apparently minor as what foods to combine but also things that communities consider tremendously consequential: who can marry whom, how children are to be treated, who is entitled to hold power, how businesses make and price their goods and services, when and how criticism can be shared publicly.

Successful and safe AI that achieves our goals within the limits of socially accepted norms requires an understanding of not only how our physical systems behave, but also how human normative systems behave. Norms are not just fixed features of the environment, like the biology of a plant. They are dynamic and responsive structures that we make and remake on a daily basis, as we decide whether or when to let someone know that “this” is the way “we” do things around here.

These normative systems are the systems on which we rely to solve the challenge of ensuring that people behave the way we want them to in our communities, workplaces and social environments. Only with confidence about how everyone around us is likely to behave are we all willing to trust and live and invest with one another.

Ensuring that powerful AIs behave the way we want them to will not be so terribly different. Just as we need to raise our children to be competent participants in our systems of norms, we will need to train our machines to be similarly competent. It is not enough to be extremely knowledgeable about the facts of the universe; extreme competence also requires wisdom enough to know that there may be a rule here, in this group but not in that group. And that ignoring that rule may not just annoy the group; it may lead them to fear or reject the machine in their midst.

Ultimately, then, the success of Life 3.0 depends on our ability to understand Life 1.0.  And that is where we may face the greatest challenge in AI research.

News Source = techcrunch.com

Keeping artificial intelligence accountable to humans

As a teenager in Nigeria, I tried to build an artificial intelligence system. I was inspired by the same dream that motivated the pioneers in the field: That we could create an intelligence of pure logic and objectivity that would free humanity from human error and human foibles.

I was working with weak computer systems and intermittent electricity, and needless to say my AI project failed. Eighteen years later—as an engineer researching artificial intelligence, privacy and machine-learning algorithms—I’m seeing that, so far, the premise that AI can free us from subjectivity or bias has also proved disappointing. We are creating intelligence in our own image. And that’s not a compliment.

Researchers have known for a while that purportedly neutral algorithms can mirror or even accentuate racial, gender and other biases lurking in the data they are fed. Internet searches on names that are more often identified as belonging to black people were found to prompt search engines to generate ads for bail bondsmen. Algorithms used for job-searching were more likely to suggest higher-paying jobs to male searchers than to female ones. Algorithms used in criminal justice also displayed bias.

Five years later, expunging algorithmic bias is turning out to be a tough problem. It takes careful work to comb through millions of sub-decisions to figure out why the algorithm reached the conclusion it did. And even when that is possible, it is not always clear which sub-decisions are the culprits.

Yet applications of these powerful technologies are advancing faster than the flaws can be addressed.

Recent research underscores this machine bias, showing that commercial facial-recognition systems excel at identifying light-skinned males, with an error rate of less than 1 percent. But if you’re a dark-skinned female, the chance you’ll be misidentified rises to almost 35 percent.

AI systems are often only as intelligent—and as fair—as the data used to train them. They use the patterns in the data they have been fed and apply them consistently to make future decisions. Consider an AI tasked with sorting the best nurses for a hospital to hire. If the AI has been fed historical data—profiles of excellent nurses who have mostly been female—it will tend to judge female candidates to be better fits. Algorithms need to be carefully designed to account for historical biases.
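
To make that mechanism concrete, here is a small, purely synthetic sketch using scikit-learn: the data, features and hiring scenario are invented for illustration, and the point is only that a model fit to historically skewed labels ends up weighting an attribute that should be irrelevant.

```python
# Purely synthetic sketch (invented data, not any real hiring system) of how
# historical bias leaks into a model: "gender" carries no real signal about skill,
# but because past "excellent" labels skew female, the model learns to prefer
# female candidates even when skill is identical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.binomial(1, 0.5, n)   # 1 = female, 0 = male
skill = rng.normal(0.0, 1.0, n)    # the only feature that should matter
# Historical labels: skill matters, but female candidates were also labelled
# "excellent" more often for reasons unrelated to skill.
labelled_excellent = (skill + 1.5 * gender + rng.normal(0.0, 1.0, n)) > 1.0

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, labelled_excellent)
print("learned weights (skill, gender):", model.coef_[0])

# Two candidates with identical skill get different scores purely because of gender.
candidates = np.array([[0.5, 1.0], [0.5, 0.0]])
print("predicted 'excellent' probability (female, male):",
      model.predict_proba(candidates)[:, 1])
```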

Occasionally, AI systems get food poisoning. The most famous case was Watson, the AI that first defeated humans in 2011 on the television game show “Jeopardy.” Watson’s masters at IBM needed to teach it language, including American slang, so they fed it the contents of the online Urban Dictionary. But after ingesting that colorful linguistic meal, Watson developed a swearing habit. It began to punctuate its responses with four-letter words.

We have to be careful what we feed our algorithms. Belatedly, companies now understand that they can’t train facial-recognition technology by mainly using photos of white men. But better training data alone won’t solve the underlying problem of making algorithms achieve fairness.

Algorithms can already tell you what you might want to read, who you might want to date and where you might find work. When they are able to advise on who gets hired, who receives a loan, or the length of a prison sentence, AI will have to be made more transparent—and more accountable and respectful of society’s values and norms.

Accountability begins with human oversight when AI is making sensitive decisions. In an unusual move, Microsoft president Brad Smith recently called for the U.S. government to consider requiring human oversight of facial-recognition technologies.

The next step is to disclose when humans are subject to decisions made by AI. Top-down government regulation may not be a feasible or desirable fix for algorithmic bias. But processes can be created that would allow people to appeal machine-made decisions—by appealing to humans. The EU’s new General Data Protection Regulation establishes the right for individuals to know and challenge automated decisions.

Today people who have been misidentified—whether in an airport or an employment database—have no recourse. They might have been knowingly photographed for a driver’s license, or covertly filmed by a surveillance camera (which has a higher error rate). They cannot know where their image is stored, whether it has been sold or who can access it. They have no way of knowing whether they have been harmed by erroneous data or unfair decisions.

Minorities are already disadvantaged by such immature technologies, and the burden they bear for the improved security of society at large is both inequitable and uncompensated. Engineers alone will not be able to address this. An AI system is like a very smart child just beginning to understand the complexities of discrimination.

To realize the dream I had as a teenager, of an AI that can free humans from bias instead of reinforcing bias, will require a range of experts and regulators to think more deeply not only about what AI can do, but what it should do—and then teach it how. 

News Source = techcrunch.com

Navigating the risks of artificial intelligence and machine learning in low-income countries

On a recent work trip, I found myself in a swanky-but-still-hip office of a private tech firm. I was drinking a freshly frothed cappuccino, eyeing a mini-fridge stocked with local beer, and standing amidst a group of hoodie-clad software developers typing away diligently at their laptops against a backdrop of Star Wars and xkcd comic wallpaper.

I wasn’t in Silicon Valley: I was in Johannesburg, South Africa, meeting with a firm that is designing machine learning (ML) tools for a local project backed by the U.S. Agency for International Development.

Around the world, tech startups are partnering with NGOs to bring machine learning and artificial intelligence (AI) to bear on problems that the international aid sector has wrestled with for decades. ML is uncovering new ways to increase crop yields for rural farmers. Computer vision lets us leverage aerial imagery to improve crisis relief efforts. Natural language processing helps us gauge community sentiment in poorly connected areas. I’m excited about what might come from all of this. I’m also worried.

AI and ML have huge promise, but they also have limitations. By nature, they learn from and mimic the status quo, whether or not that status quo is fair or just. We’ve seen AI and ML’s potential to hard-wire or amplify discrimination, exclude minorities, or simply be rolled out without appropriate safeguards, so we know we should approach these tools with caution. Otherwise, we risk these technologies harming local communities instead of serving as engines of progress.

Seemingly benign technical design choices can have far-reaching consequences. In model development, tradeoffs are everywhere. Some are obvious and easily quantifiable — like choosing to optimize a model for speed vs. precision. Sometimes it’s less clear. How you segment data or choose an output variable, for example, may affect predictive fairness across different sub-populations. You could end up tuning a model to excel for the majority while failing for a minority group.
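
As a toy illustration of that failure mode, the sketch below uses entirely synthetic data: a single model tuned for overall accuracy looks healthy in aggregate while performing badly for a minority subgroup whose pattern differs from the majority's. The group sizes and patterns are invented, and scikit-learn is an assumed tool choice rather than anything prescribed here.

```python
# Synthetic illustration: aggregate metrics can hide subgroup failure.
# The feature-outcome relationship is reversed for the minority group, so a model
# tuned for overall accuracy excels for the majority while failing the minority.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_major, n_minor = 9000, 1000
x_major = rng.normal(0.0, 1.0, n_major)
y_major = (x_major > 0).astype(int)      # majority pattern
x_minor = rng.normal(0.0, 1.0, n_minor)
y_minor = (x_minor < 0).astype(int)      # reversed pattern for the minority

X = np.concatenate([x_major, x_minor]).reshape(-1, 1)
y = np.concatenate([y_major, y_minor])
group = np.array(["majority"] * n_major + ["minority"] * n_minor)

pred = LogisticRegression().fit(X, y).predict(X)
print("overall accuracy:", round(accuracy_score(y, pred), 3))
for g in ("majority", "minority"):
    mask = group == g
    print(g, "accuracy:", round(accuracy_score(y[mask], pred[mask]), 3))
```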

These issues matter whether you’re working in Silicon Valley or South Africa, but they’re exacerbated in low-income countries. There is often limited local AI expertise to tap into, and the tools’ more troubling aspects can be compounded by histories of ethnic conflict or systemic exclusion. Based on ongoing research and interviews with aid workers and technology firms, we’ve learned five basic things to keep in mind when applying AI and ML in low-income countries:

  1. Ask who’s not at the table. Often, the people who build the technology are culturally or geographically removed from their customers. This can lead to user-experience failures like Alexa misunderstanding a person’s accent. Or worse. Distant designers may be ill-equipped to spot problems with fairness or representation. A good rule of thumb: if everyone involved in your project has a lot in common with you, then you should probably work hard to bring in new, local voices.
  2. Let other people check your work. Not everyone defines fairness the same way, and even really smart people have blind spots. If you share your training data, design to enable external auditing, or plan for online testing, you’ll help advance the field by providing an example of how to do things right. You’ll also share risk more broadly and better manage your own ignorance. In the end, you’ll probably end up building something that works better.
  3. Doubt your data. A lot of AI conversations assume that we’re swimming in data. In places like the U.S., this might be true. In other countries, it isn’t even close. As of 2017, less than a third of Africa’s 1.25 billion people were online. If you want to use online behavior to learn about Africans’ political views or tastes in cinema, your sample will be disproportionately urban, male, and wealthy. Generalize from there and you’re likely to run into trouble (see the representativeness sketch after this list).
  4. Respect context. A model developed for a particular application may fail catastrophically when taken out of its original context. So pay attention to how things change in different use cases or regions. That may just mean retraining a classifier to recognize new types of buildings, or it could mean challenging ingrained assumptions about human behavior.
  5. Automate with care. Keeping humans ‘in the loop’ can slow things down, but their mental models are more nuanced and flexible than your algorithm. Especially when deploying in an unfamiliar environment, it’s safer to take baby steps and make sure things are working the way you thought they would. A poorly-vetted tool can do real harm to real people.
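
As a companion to point 3 above, here is a small, hypothetical check that compares the composition of an online sample against population benchmarks before any modeling; every share in it is made up purely for illustration.

```python
# Hypothetical representativeness check: compare who is in your online sample with
# who is in the population before generalizing. All numbers are invented.
population_share = {"urban": 0.43, "rural": 0.57, "male": 0.50, "female": 0.50}
online_sample_share = {"urban": 0.78, "rural": 0.22, "male": 0.66, "female": 0.34}

for segment, pop in population_share.items():
    sample = online_sample_share[segment]
    ratio = sample / pop
    if ratio > 1.2:
        verdict = "over-represented"
    elif ratio < 0.8:
        verdict = "under-represented"
    else:
        verdict = "roughly representative"
    print(f"{segment:>6}: population {pop:.0%}, sample {sample:.0%} -> {verdict}")
```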

AI and ML are still finding their footing in emerging markets. We have the chance to thoughtfully construct how we build these tools into our work so that fairness, transparency, and a recognition of our own ignorance are part of our process from day one. Otherwise, we may ultimately alienate or harm people who are already at the margins.

The developers I met in South Africa have embraced these concepts. Their work with the non-profit Harambee Youth Employment Accelerator has been structured to balance the perspectives of both the coders and those with deep local expertise in youth unemployment; the software developers are even foregoing time at their hip offices to code alongside Harambee’s team. They’ve prioritized inclusivity and context, and they’re approaching the tools with healthy, methodical skepticism. Harambee clearly recognizes the potential of machine learning to help address youth unemployment in South Africa–and they also recognize how critical it is to ‘get it right’. Here’s hoping that trend catches on with other global startups too.

News Source = techcrunch.com
