
Artificial Intelligence

Want to fool a computer vision system? Just tweak some colors

Research into machine learning and the interesting AI models created as a consequence are popular topics these days. But there’s a sort of shadow world of scientists working to undermine these systems — not to show they’re worthless but to shore up their weaknesses. A new paper demonstrates this by showing how vulnerable image recognition models are to the simplest color manipulations of the pictures they’re meant to identify.

It’s not some deep indictment of computer vision — techniques to “beat” image recognition systems might just as easily be characterized as situations in which they perform particularly poorly. Sometimes this is something surprisingly simple: rotating an image, for example, or adding a crazy sticker. Unless a system has been trained specifically on a given manipulation or has orders to check common variations like that, it’s pretty much just going to fail.

In this case it’s research from the University of Washington led by grad student Hossein Hosseini. Their “adversarial” imagery was similarly simple: switch up the colors.

Probably many of you have tried something similar to this when fiddling around in an image manipulation program: by changing the “hue” and “saturation” values on a picture, you can make someone have green skin, a banana appear blue, and so on. That’s exactly what the researchers did: twiddled the knobs so a dog looked a bit yellow, a deer looked purplish, etc.
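
If you want to try this at home, the sketch below shows one way to do it in Python: convert an image to HSV, rotate the hue and rescale the saturation, and leave the value channel alone. The file name and shift amounts are placeholders, and this isn't the researchers' code, just an illustration of the idea.

```python
import numpy as np
from PIL import Image
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def color_shift(path, hue_shift=0.3, sat_scale=1.5):
    """Shift hue and rescale saturation while leaving value (brightness) alone."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    hsv = rgb_to_hsv(rgb)
    hsv[..., 0] = (hsv[..., 0] + hue_shift) % 1.0             # rotate the hue wheel
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_scale, 0.0, 1.0)  # boost or cut saturation
    # hsv[..., 2], the value channel, is deliberately left untouched
    return Image.fromarray((hsv_to_rgb(hsv) * 255).astype(np.uint8))

color_shift("cat.jpg").save("cat_shifted.jpg")  # "cat.jpg" is a placeholder image
```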

The original images are at left; color-shifted versions and the systems’ best guesses at right.

Critically, however, the “value” of the pixels, meaning how light or dark they are, wasn’t changed, so the images still look like what they are, just in weird colors.

But while a cat looks like a cat to us whether it’s grey or pink, the same can’t really be said for a deep neural network. The accuracy of the model the researchers tested dropped by 90 percent on sets of color-tweaked images that it would normally identify easily. Its best guesses are pretty much random, as you can see in the figure at right; changing the colors totally changes the system’s guess.

The team tested several models and they all broke down on the color-shifted set, so it wasn’t just a consequence of this specific system.

It’s not too hard to fix — in this case, all you really need to do is add some labeled, color-shifted images into the training data so the system is exposed to them beforehand. This addition brought success rates back up to reasonable (if still fairly poor) levels.
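
As a rough illustration of that fix (and not the paper's actual training setup), the augmentation can be as simple as adding a random hue and saturation jitter to the training pipeline, for example with torchvision:

```python
import torchvision.transforms as T
from torchvision import datasets

train_transform = T.Compose([
    T.ColorJitter(hue=0.5, saturation=(0.0, 2.0)),  # random hue rotation and saturation rescale
    T.ToTensor(),
])

# e.g. a CIFAR-10 training set whose images now show up in randomly shifted colors
train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=train_transform)
```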

But the point isn’t that computer vision systems are fundamentally bad at color or something. It’s that there are lots of ways of subtly or not-so-subtly manipulating an image or video that will devastate its accuracy or subvert it.

“Deep networks are very good at learning (or better memorizing) the distribution of training data,” wrote Hosseini in an email to TechCrunch. “They, however, hardly generalize beyond that. So, even if models are trained with augmented data, it’s likely that we can come up with a new type of adversarial images that can fool the model.”

A model trained to catch color variations might still be vulnerable to attention-based adversarial images and vice versa. The way these systems are created and encoded right now simply isn’t robust enough to prevent such attacks. But by cataloguing them and devising improvements that protect against some but not all, we can advance the state of the art.

“I think we need to find a way for the model to learn the concepts, such as being invariant to color or rotation,” Hosseini suggested. “That can save the algorithm a lot of training data and is more similar to how humans learn.”

You can read the full pre-print paper on arXiv (PDF).

News Source = techcrunch.com


Artificial Intelligence

Nvidia’s researchers teach a robot to perform simple tasks by observing a human

Industrial robots are typically all about repeating a well-defined task over and over again. Usually, that means performing those tasks a safe distance away from the fragile humans that programmed them. More and more, however, researchers are now thinking about how robots can work in close proximity to humans and even learn from them. In part, that’s what Nvidia’s new robotics lab in Seattle focuses on, and the company’s research team today presented some of its most recent work on teaching robots by observing humans at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Nvidia’s director of robotics research Dieter Fox.

As Dieter Fox, the senior director of robotics research at Nvidia (and a professor at the University of Washington), told me, the team wants to enable this next generation of robots that can safely work in close proximity to humans. But to do that, those robots need to be able to detect people, track their activities and learn how they can help people. That may be in a small-scale industrial setting or in somebody’s home.

While it’s possible to train an algorithm to successfully play a video game by rote repetition and teaching it to learn from its mistakes, Fox argues that the decision space for training robots that way is far too large to do this efficiently. Instead, a team of Nvidia researchers led by Stan Birchfield and Jonathan Tremblay developed a system that allows them to teach a robot to perform new tasks by simply observing a human.

The tasks in this example are pretty straightforward and involve nothing more than stacking a few colored cubes. But it’s also an important step in this overall journey to enable us to quickly teach a robot new tasks.

The researchers first trained a sequence of neural networks to detect objects, infer the relationship between them and then generate a program to repeat the steps it witnessed the human perform. The researchers say this new system allowed them to train their robot to perform this stacking task with a single demonstration in the real world.

One nifty aspect of this system is that it generates a human-readable description of the steps it’s performing. That way, it’s easier for the researchers to figure out what happened when things go wrong.
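
To make that flow concrete, here's a rough sketch of the detect-infer-generate idea, ending in the kind of readable step list described above. The class and function names are hypothetical stand-ins, not Nvidia's actual code or API.

```python
from dataclasses import dataclass

@dataclass
class Block:
    color: str
    position: tuple  # (x, y, z) in the camera frame

def detect_objects(frame):
    """Hypothetical perception network: returns the Blocks visible in a camera frame."""
    raise NotImplementedError

def infer_relationships(blocks):
    """Hypothetical relation network: returns tuples like ('red', 'on', 'blue')."""
    raise NotImplementedError

def generate_program(relations):
    """Turn inferred relations into human-readable steps the robot can execute."""
    return [f"place the {top} block on the {bottom} block"
            for (top, _, bottom) in relations]

def learn_from_demo(frames):
    final_scene = detect_objects(frames[-1])      # scene after the human finishes stacking
    relations = infer_relationships(final_scene)  # e.g. which block ended up on which
    return generate_program(relations)            # readable plan, handy for debugging
```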

Nvidia’s Stan Birchfield tells me that the team aimed to make training the robot easy for a non-expert — and few things are easier to do than to demonstrate a basic task like stacking blocks. In the example the team presented in Brisbane, a camera watches the scene and the human simply walks up, picks up the blocks and stacks them. Then the robot repeats the task. Sounds easy enough, but it’s a massively difficult task for a robot.

To train the core models, the team mostly used synthetic data from a simulated environment. As both Birchfield and Fox stressed, it’s these simulations that allow for quickly training robots. Training in the real world would take far longer, after all, and can also be far more dangerous. And for most of these tasks, there is no labeled training data available to begin with.

“We think using simulation is a powerful paradigm going forward to train robots to do things that weren’t possible before,” Birchfield noted. Fox echoed this and noted that this need for simulations is one of the reasons why Nvidia thinks that its hardware and software are ideally suited for this kind of research. There is a very strong visual aspect to this training process, after all, and Nvidia’s background in graphics hardware surely helps.

Fox admitted that there’s still a lot of research left to be done here (most of the simulations aren’t photorealistic yet, after all), but that the core foundations for this are now in place.

Going forward, the team plans to expand the range of tasks that the robots can learn and the vocabulary necessary to describe those tasks.

News Source = techcrunch.com


Accel Partners

With at least $1.3 billion invested globally in 2018, VC funding for blockchain blows past 2017 totals

Although bitcoin and blockchain technology may not take up quite as much mental bandwidth for the general public as it did just a few months ago, companies in the space continue to rake in capital from investors.

One of the latest to do so is Circle, which recently announced a $110 million Series E round led by bitcoin mining hardware manufacturer Bitmain. Other participating investors include Tusk Ventures, Pantera Capital, IDG Capital Partners, General Catalyst, Accel Partners, Digital Currency Group, Blockchain Capital and Breyer Capital.

This round vaults Circle into an exclusive club of crypto companies that are valued, in U.S. dollars, at $1 billion or more in their most recent venture capital round. According to Crunchbase data, Circle was valued at $2.9 billion pre-money, up from a $420 million pre-money valuation in its Series D round, which closed in May 2016. Until now, only Coinbase and Robinhood (a mobile-first stock-trading platform that recently made a big push into cryptocurrency trading) were in the crypto-unicorn club that Circle has joined.

But that’s not the only milestone for the world of venture-backed cryptocurrency and blockchain startups.

Back in February, Crunchbase News predicted that the amount of money raised in old-school venture capital rounds by blockchain and blockchain-adjacent startups in 2018 would surpass the amount raised in 2017. Well, it’s only May, and it looks like the prediction panned out.

In the chart below, you’ll find worldwide venture deal and dollar volume for blockchain and blockchain-adjacent companies. We purposely excluded ICOs, including those that had traditional VCs participate, and instead focused on venture deals: angel, seed, convertible notes, Series A, Series B and so on. The data displayed below is based on reported data in Crunchbase, which may be subject to reporting delays, and is, in some cases, incomplete.

A little more than five months into 2018, reported dollar volume invested in VC rounds raised by blockchain companies surpassed 2017’s totals. Not just that, the nearly $1.3 billion in global dollar volume is greater than the reported funding totals for the 18 months between July 1, 2016 and New Year’s Eve in 2017.

And although Circle’s Series E round certainly helped to bump up funding totals year-to-date, there were many other large funding rounds throughout 2018. After all, we had to get to nearly $1.3 billion somehow.

All of this is to say that investor interest in the blockchain space shows no immediate signs of slowing down, even as the prices of bitcoin, ethereum and other cryptocurrencies hover at less than half of their all-time highs. Considering that regulators are still figuring out how to treat most crypto assets, and given the technology’s massive price volatility and dubious real-world utility, it may surprise some that investors at the riskiest end of the risk capital pool invest as much as they do in blockchain.

Notes on methodology

Like in our February analysis, we first created a list of companies in Crunchbase’s bitcoin, ethereum, blockchain, cryptocurrency and virtual currency categories. We added to this list any companies that use those keywords, as well as “digital currency,” “utility token” and “security token,” that weren’t previously included in the above categories. After de-duplicating this list, we merged this set of companies with funding rounds data in Crunchbase.
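
For the curious, a rough reconstruction of that filtering and merging might look like the pandas sketch below. The file names, column names and round labels are hypothetical stand-ins rather than Crunchbase's actual schema.

```python
import pandas as pd

CATEGORIES = ("bitcoin", "ethereum", "blockchain", "cryptocurrency", "virtual currency")
KEYWORDS = ("digital currency", "utility token", "security token")
VENTURE_TYPES = {"angel", "seed", "convertible_note", "series_a", "series_b",
                 "series_c", "series_d", "series_e"}  # ICOs deliberately excluded

companies = pd.read_csv("companies.csv")  # hypothetical export with category/description columns
rounds = pd.read_csv("rounds.csv")        # hypothetical export, one row per funding round

in_category = companies["categories"].fillna("").str.lower().apply(
    lambda cats: any(c in cats for c in CATEGORIES))
has_keyword = companies["description"].fillna("").str.lower().str.contains("|".join(KEYWORDS))

crypto = companies[in_category | has_keyword].drop_duplicates(subset="company_id")
crypto_rounds = rounds.merge(crypto[["company_id"]], on="company_id")
crypto_rounds = crypto_rounds[crypto_rounds["round_type"].isin(VENTURE_TYPES)]

# Dollar volume by year; rounds with no reported amount simply contribute nothing
print(crypto_rounds.groupby(crypto_rounds["announced_on"].str[:4])["raised_usd"].sum())
```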

Please note that for some entries in Crunchbase’s round data, the amount of capital raised isn’t known. And, as previously noted, Crunchbase’s data is subject to reporting delays, especially for seed-stage companies. Accordingly, actual funding totals are likely higher than reported here.

News Source = techcrunch.com


Artificial Intelligence

AI will save us from yanny/laurel, right? Wrong

If you haven’t taken part in the yanny/laurel controversy over the last couple days, allow me to sincerely congratulate you. But your time is up. The viral speech synth clip has met the AI hype train and the result is, like everything in this mortal world, disappointing.

Sonix, a company that produces AI-based speech recognition software, ran the ambiguous sound clip through Google’s, Amazon’s and IBM Watson’s transcription tools and, of course, its own.

Google and Sonix managed to get it on the first try — it’s “laurel,” by the way. Not yanny. Laurel.

But Amazon stumbled, repeatedly producing “year old” as its best guess for what the robotic voice was saying. IBM’s Watson, amazingly, got it only half the time, alternating between hearing “yeah role” and “laurel.” So in a way, it’s the most human of them all.

Top: Amazon; bottom: IBM.
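
If you'd like to run the clip through one of these services yourself, the sketch below shows roughly how to do it with Google's Cloud Speech-to-Text Python client. The file name and audio settings are placeholders, and this is obviously not Sonix's benchmark code.

```python
from google.cloud import speech

client = speech.SpeechClient()

with open("yanny_laurel.wav", "rb") as f:  # placeholder file name
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,  # must match the actual file
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)  # hopefully "laurel", not "year old"
```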

Sonix CEO Jamie Sutherland told me in an email that he can’t really comment on the mixed success of the other models, not having access to them.

“As you can imagine the human voice is complex and there are so many variations of volume, cadence, accent, and frequency,” he wrote. “The reality is that different companies may be optimizing for different use cases, so the results may vary. It is challenging for a speech recognition model to accommodate for everything.”

My guess as an ignorant onlooker is it may have something to do with the frequencies the models have been trained to prioritize. Sounds reasonable enough!

It’s really an absurd endeavor to appeal to a system based on our own hearing and cognition to make an authoritative judgement in a matter on which our hearing and cognition are demonstrably lacking. But it’s still fun.

News Source = techcrunch.com
