Timesdelhi.com
Category archive: law enforcement
June 16, 2019

Amazon faces greater shareholder pressure to limit sale of facial recognition tech to the government

This week could mark a significant setback for Amazon’s facial recognition business if privacy and civil liberties advocates — and some shareholders — get their way.

Months earlier, shareholders tabled a resolution to limit the sale of Amazon’s facial recognition tech, which the company calls Rekognition, to law enforcement and government agencies. It followed accusations of bias and inaccuracies in the technology, which critics say can be used to racially discriminate against minorities. Rekognition, which runs image and video analysis of faces, has been sold to two states so far, and Amazon has pitched it to Immigration and Customs Enforcement. A second resolution would require an independent human and civil rights review of the technology.

Now the ACLU is backing the measures and calling on shareholders to pass the resolutions.

“Amazon has stayed the course,” said Shankar Narayan, director of the Technology and Liberty Project at the ACLU Washington, in a call Friday. “Amazon has heard repeatedly about the dangers to our democracy and vulnerable communities about this technology but they have refused to acknowledge those dangers let alone address them,” he said.

“Amazon has been so non-responsive to these concerns,” said Narayan, “even Amazon’s own shareholders have been forced to resort to putting these proposals addressing those concerns on the ballot.”

It’s the latest move in a concerted effort by dozens of shareholders and investment firms, tech experts and academics, and privacy and rights groups, all of whom have decried the use of the technology.

Critics say Amazon Rekognition has accuracy and bias issues. (Image: TechCrunch)

In a letter to be presented at Amazon’s annual shareholder meeting Wednesday, the ACLU will accuse Amazon of “failing to act responsibly” by refusing to stop the sale of the technology to the government.

“This technology fundamentally alters the balance of power between government and individuals, arming governments with unprecedented power to track, control, and harm people,” said the letter, shared with TechCrunch. “It would enable police to instantaneously and automatically determine the identities and locations of people going about their daily lives, allowing government agencies to routinely track their own residents. Associated software may even display dangerous and likely inaccurate information to police about a person’s emotions or state of mind.”

“As shown by a long history of other surveillance technologies, face surveillance is certain to be disproportionately aimed at immigrants, religious minorities, people of color, activists, and other vulnerable communities,” the letter added.

“Without shareholder action, Amazon may soon become known more for its role in facilitating pervasive government surveillance than for its consumer retail operations,” it read.

Facial recognition has become one of the most hot-button privacy topics in years. Amazon Rekognition, the company’s cloud-based facial recognition system, remains in its infancy, yet it is one of the most prominent and readily available systems on the market. But critics say the technology is flawed. Exactly a year before this week’s shareholder meeting, the ACLU first raised “profound” concerns about Rekognition and its deployment at airports, in public places and by police. Since then, the technology has been shown to struggle to detect people of color. In the ACLU’s tests, the system falsely matched 28 members of Congress against a mugshot database of people who had been arrested.

But there has been pushback — even from government. Several municipalities have rolled out surveillance-curtailing laws and ordinances in the past year. San Francisco last week became the first major U.S. city government to ban the use of facial recognition.

“Amazon leadership has failed to recognize these issues,” said the ACLU’s letter to be presented Wednesday. “This failure will lead to real-life harm.”

The ACLU said shareholders “have the power to protect Amazon from its own failed judgment.”

Amazon has pushed back against the claims by arguing that the technology is accurate — largely by criticizing how the ACLU conducted its tests using Rekognition.

Amazon did not comment when reached prior to publication.

Security lapse exposed a Chinese smart city surveillance system

Smart cities are designed to make life easier for their residents: better traffic management by keeping routes clear, making sure public transport runs on time and having cameras keep a watchful eye from above.

But what happens when that data leaks? One such database was open for weeks for anyone to look inside.

Security researcher John Wethington found a smart city database accessible from a web browser without a password. He passed details of the database to TechCrunch in an effort to get the data secured.

The database was an Elasticsearch database, storing gigabytes of data — including facial recognition scans on hundreds of people over several months. The data was hosted by Chinese tech giant Alibaba. The customer, which Alibaba did not name, tapped into the tech giant’s artificial intelligence-powered cloud platform, known as City Brain.
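The core of the lapse is simple: an Elasticsearch cluster left open to the internet answers ordinary HTTP requests without any credentials. As a rough sketch only (the host and index names below are hypothetical placeholders, since the actual address and schema of this system were never published), a few lines of Python are enough to enumerate and sample such a database:

    # Minimal sketch of reading an unauthenticated Elasticsearch instance.
    # The host and index names are hypothetical placeholders, not details of
    # the actual system described in this article.
    import requests

    HOST = "http://smart-city.example.com:9200"  # hypothetical address

    # List the indices the cluster exposes (names, document counts, sizes).
    print(requests.get(f"{HOST}/_cat/indices?v", timeout=10).text)

    # Pull a handful of documents from a hypothetical index to inspect its fields.
    resp = requests.get(f"{HOST}/person_scan/_search", params={"size": 5}, timeout=10)
    for hit in resp.json().get("hits", {}).get("hits", []):
        print(hit["_source"])

Anything readable this way is readable by anyone else who finds the address, which is why a missing password on an internet-facing database amounts to a public leak.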

“This is a database project created by a customer and hosted on the Alibaba Cloud platform,” said an Alibaba spokesperson. “Customers are always advised to protect their data by setting a secure password.”

“We have already informed the customer about this incident so they can immediately address the issue. As a public cloud provider, we do not have the right to access the content in the customer database,” the spokesperson added. The database was pulled offline shortly after TechCrunch reached out to Alibaba.

But while Alibaba may not have visibility into the system, we did.

The location of the smart city’s many cameras in Beijing (Image: supplied)

While artificial intelligence-powered smart city technology provides insights into how a city is operating, the use of facial recognition and surveillance projects has come under heavy scrutiny from civil liberties advocates. Despite privacy concerns, smart city and surveillance systems are slowly making their way into other cities both in China and abroad, like Kuala Lumpur, and soon the West.

“It’s not difficult to imagine the potential for abuse that would exist if a platform like this were brought to the U.S. with no civilian and governmental regulations or oversight,” said Wethington. “While businesses cannot simply plug in to FBI data sets today it would not be hard for them to access other state or local criminal databases and begin to create their own profiles on customers or adversaries.”

We don’t know the customer of this leaky database, but its contents offered a rare insight into how a smart city system works.

The system monitors residents in and around at least two small housing communities in eastern Beijing, the largest of which is Liangmaqiao, known as the city’s embassy district. It is made up of several data collection points, including cameras designed to collect facial recognition data.

The exposed data contains enough information to pinpoint where people went, when and for how long, allowing anyone with access to the data — including police — to build up a picture of a person’s day-to-day life.

A portion of the database containing facial recognition scans (Image: supplied)

Alibaba provides technologies like City Brain to customers to help them understand the data they collect from various sources, including license plate readers, door access controls, internet-connected devices and facial recognition.

Using City Brain’s data-crunching back-end, the cameras can process various facial details, such as if a person’s eyes or mouth are open, if they’re wearing sunglasses, or a mask — common during periods of heavy smog — and if a person is smiling or even has a beard.

The database also contained a subject’s approximate age as well as an “attractive” score, according to the database fields.
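To make the shape of those records concrete, here is a hypothetical sketch of the per-face fields the database reportedly stored; the field names and values are invented for illustration and are not copied from the exposed system:

    # Hypothetical illustration of the kinds of per-face attributes described
    # above; field names and values are invented, not taken from the database.
    example_scan = {
        "timestamp": "2019-04-28T08:14:02+08:00",
        "camera_id": "cam-042",
        "eyes_open": True,
        "mouth_open": False,
        "wearing_sunglasses": True,
        "wearing_mask": False,      # masks are common during heavy smog
        "smiling": False,
        "has_beard": False,
        "approximate_age": 34,
        "attractive_score": 71,     # the database reportedly scored "attractiveness"
    }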

But the capabilities of the system have a darker side, particularly given the complicated politics of China.

The system also uses its facial recognition systems to detect ethnicities and labels them — such as “汉族” for Han Chinese, the main ethnic group of China — and also “维族” — or Uyghur Muslims, an ethnic minority under persecution by Beijing.

While ethnicity labels can help police identify suspects in an area even when they don’t have a name to match, the same data can easily be used for abuse.

The Chinese government has detained more than a million Uyghurs in internment camps in the past year, according to a United Nations human rights committee. It’s part of a massive crackdown by Beijing on the ethnic minority group. Just this week, details emerged of an app used by police to track Uyghur Muslims.

We also found that the customer’s system pulls in data from the police and uses that information to detect people of interest or criminal suspects, suggesting it may be a government customer.

Facial recognition scans would match against police records in real time (Image: supplied)

Each time a person was detected, the database would trigger a “warning” recording the date, time, location and a corresponding note. Several records seen by TechCrunch included suspects’ names and their national identification card numbers.

“Key personnel alert by the public security bureau: ‘[name] [location]’ – 177 camera detects key individual(s),” one translated record reads, courtesy of TechCrunch’s Rita Liao. (The named security bureau is China’s federal police department, the Ministry of Public Security.)

In other words, the record shows a camera at a certain point detected a person’s face whose information matched a police watchlist.

Many of the records associated with a watchlist flag would include the reason why, such as if a recognized person was a “drug addict” or “released from prison.”

The system is also programmed to alert the customer in the event of building access control issues, smoke alarms and equipment failures — such as when cameras go offline.

The customer’s system also has the capability to monitor for Wi-Fi-enabled devices, such as phones and computers, using sensors built by Chinese networking tech maker Renzixing and placed around the district. The database collects the dates and times of devices that pass through its wireless network radius. Fields in the Wi-Fi-device logging table suggest the system can collect IMEI and IMSI numbers, which are used to uniquely identify a cellular user.
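For illustration, a single row in such a Wi-Fi device log might look like the sketch below; the exact schema was not published, so the field names and example identifiers here are hypothetical:

    # Hypothetical sketch of one Wi-Fi device sighting; the real table's schema
    # was not published, and these identifiers are example-format values only.
    wifi_sighting = {
        "sensor_id": "renzixing-07",              # sensor vendor named above
        "first_seen": "2019-05-02T18:21:45+08:00",
        "last_seen": "2019-05-02T18:40:12+08:00",
        "imei": "490154203237518",                # example-format device identifier
        "imsi": "460001234567890",                # example-format subscriber identifier
    }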

Although the customer’s smart city system was on a small scale with only a few dozen sensors, cameras and data collection points, the amount of data it collected in a short space of time was staggering.

In the past week alone, the database had grown in size — suggesting it’s still actively collecting data.

“The weaponization and abuse of A.I. is a very real threat to the privacy and security of every individual,” said Wethington. “We should carefully look at how this technology is already being abused by other countries and businesses before permitting them to be deployed here.”

It’s hard to know whether facial recognition systems like this are good or bad; there’s no clear line in the sand separating good uses from bad ones. Facial and object recognition systems can spot criminals on the run and detect weapons ahead of mass shootings. But some worry about the repercussions of being watched every day — even jaywalkers don’t get a free pass. The pervasiveness of these systems remains a privacy concern for civil liberties groups.

But as these systems develop and become more powerful and ubiquitous, companies would do well to make sure, first and foremost, that their massive data banks don’t inadvertently leak.


Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.

Why your CSO, not your CMO, should pitch your security startup

Whenever a security startup lands on my desk, I have one question: Who’s the chief security officer (CSO) and when can I get time with them?

Having a chief security officer is as relevant today as a chief marketing officer (CMO) or chief revenue boss. Just as you need to make sure your offering looks good and the money keeps rolling in, you need to show what your security posture looks like.

Even for non-security startups, having someone at the helm of security is just as important — not least because, given the constant security threats all companies face today, they will become a necessary part of interacting with the media. Regardless of whether your company builds gadgets or processes massive amounts of customer data, security has to be front of mind. It’s no good simply saying that you “take your privacy and security seriously.” You have to demonstrate it.

A CSO has several roles and will wear many hats. Depending on the kind of company you have, they will work to bolster your company’s internal processes and policies for keeping not only your corporate data safe but also the data of your customers. They will also be consulted on the security practices of your app, product or service to make sure you’re meeting consumers’ privacy expectations — and not the overbearing, all-embracing industry standard of vacuuming up as much data as there is.

But for the average security startup, a CSO should also act as the point person for all technical matters associated with the company’s product or service. A CSO can be an evangelist: an infosec professional who can speak to their company’s offering — and to reporters, like me.

In my view, no startup of any size — especially a security startup — should be without a CSO.

The reality is that about 95 percent of the world’s wealthiest companies don’t have one. Facebook hasn’t had someone running its security shop since August. It may be a coincidence that the social networking giant has faced breach after exposure after leak after scandal, and it shows — the company is running around headless, with no clear direction.

Alphabet’s Sidewalk Labs is developing visual cues to indicate when its tech is monitoring you

Alphabet’s subsidiary focused on urban tech development, Sidewalk Labs, is now trying to reinvent signage for smart cities. These signs aren’t to direct the flow of traffic, or to point the way to urban landmarks — they’re designed to let citizens know when they’re being monitored.

The proposal is part of a push by the company to acclimate people to the technologies that it’s deploying in cities like New York and Toronto.

Globally, competition for contracts to deploy sensors, data management and predictive technologies in cities can run into the tens of millions, if not billions of dollars, and Sidewalk Labs knows this better than most. Because its projects are among the most ambitious deployments of sensing and networking technologies for smart cities, the company has also faced the most public criticism.

So at least partially in an attempt to blunt attacks from critics, the company is proposing to make its surveillance and monitoring efforts more transparent.

“Digital technology is all around us, but often invisible. Consider: on any one urban excursion (your commute, perhaps), you could encounter CCTVs, traffic cameras, transit card readers, bike lane counters, Wi-Fi access points, occupancy sensors that open doors — potentially all on the same block,” writes Jacqueline Lu, whose title is “assistant director of the public realm” at Sidewalk Labs.

Lu notes that while the technologies can be useful, there’s little transparency around the data these technologies are collecting, who the data is being collected by and what the data is collected for.

Cities like Boston and London already indicate when technology is being used in the urban environment, but Sidewalk Labs convened a group of designers and urban planners to come up with a system for signage that would make the technology being used even more public for citizens going about their day.

Image courtesy of Sidewalk Labs

Back in 2013, the U.S. Federal Trade Commission called for the development of these types of indicators when it issued a call for mobile privacy disclosures. But that seems to have resulted in companies just drafting reams of jargon-filled disclosures that obscured more than they revealed.

At Sidewalk, the goal is transparency, say the authors of the company’s suggested plan.

“We strongly believe that people should know how and why data is being collected and used in the public realm, and we also believe that design and technology can meaningfully facilitate this understanding. For these reasons, we embarked on a collaborative project to imagine what digital transparency in the public realm could be like,” write Lu and her co-authors, Principal Designer Patrick Keenan and Legal Associate Chelsey Colbert.

As an example, Sidewalk showed off potential designs for signage that would alert people to the presence of the company’s Numina technology.

That tech monitors traffic patterns by recording, anonymizing and transmitting data from sensors, using algorithmic software to track movement in an area. The sensors are installed on light poles and transmit data wirelessly.

At the very least, the technology can’t be any worse than the innocuously intended cameras that are monitoring public spaces already (and can be turned into surveillance tools easily).

The hexagonal designs indicate the purpose of the technology, the company deploying it, the reason for its use and whether the tech is collecting sensitive information, and they include a QR code that can be scanned for more information.

The issue with experiments like these in the public sphere is that there’s no easy way to opt out of them. Sidewalk Labs’ Toronto project is both an astounding feat of design and the apotheosis of surveillance capitalism.

Once these decisions are made to cede public space to the private sector, or sacrifice privacy for security (or simply better information about a location for the sake of convenience), they’re somewhat difficult to unwind. As with most of the salient issues with technology today, it’s about unintended consequences.

Information about a technology’s deployment isn’t enough if the relevant parties haven’t thought through the ramifications of that technology’s use.

100 Car2go Mercedes hijacked in Chicago crime spree

Car2go, the free-floating car-sharing service owned by Daimler, temporarily shut down its service in Chicago on Wednesday after dozens of Mercedes-Benz vehicles were stolen using the app.

The Chicago Police Department was alerted by Car2go that some of its vehicles may have been rented by deceptive or fraudulent means through a mobile app, a spokesperson wrote in an emailed statement to TechCrunch.

The news was first reported via tweet by Brad Edwards, a reporter with CBS Chicago. Edwards reported that sources said that many of the vehicles were allegedly used to commit other crimes. CPD did not provide any details about how the vehicles were used and said the investigation was ongoing.

Car2go launched in Chicago last June, the first time in four years that the company added a U.S. city to its ranks. The car-sharing company lets customers rent out vehicles on a short-term basis. Daimler’s diminutive Smart cars were once the lone option for Car2go customers. The company has expanded its offerings in recent years and now offers Mercedes-Benz CLA and GLA models, as well as the two-door Smart Fortwo vehicles.

CPD said 100 vehicles are still unaccounted for. It is believed that 50 vehicles, all of them Mercedes-Benz, remain in the greater Chicago area. Police are questioning more than a dozen persons of interest.

CPD said it’s working with Car2go to determine whether there are any other vehicles whose locations cannot be accounted for. At this time, the recoveries appear to be isolated to the West Side, CPD said.

While the perpetrators appear to have gained access to the vehicles through “fraudulent means,” Car2go emphasized that no personal or confidential member information has been compromised.

TechCrunch received a tip from a user who saw a “temporary pause in service” message when trying to use the app. Car2go confirmed the shutdown and added that it will provide an update as soon as possible.

Car2go is going through a branding and organizational transition. Daimler AG and BMW Group officially agreed to merge their urban mobility services into a single holding company back in March 2018, with a 50 percent stake each. In February, the companies announced plans to unify their services under five categories by creating five joint ventures — Reach Now, Charge Now, Park Now, Free Now and Share Now.
