Timesdelhi.com

November 21, 2018

Judge orders Amazon to turn over Echo recordings in double murder case


A New Hampshire judge has ordered Amazon to turn over two days of Amazon Echo recordings in a double murder case.

Prosecutors believe that recordings from an Amazon Echo in a Farmington home where two women were murdered in January 2017 may yield further clues to their killer. Although police seized the Echo when they secured the crime scene, any recordings are stored on Amazon servers.

The order granting the search warrant, obtained by TechCrunch, said that there is “probable cause to believe” that the Echo picked up “audio recordings capturing the attack” and “any events that preceded or succeeded the attack.”

Amazon is also directed to turn over any “information identifying any cellular devices that were linked to the smart speaker during that time period,” the order said.

Timothy Verrill, a resident of neighboring Dover, New Hampshire, was charged with two counts of first-degree murder. He pleaded not guilty and awaits trial.

When reached, an Amazon spokesperson did not comment — but the company told the Associated Press last week that it won’t release the information “without a valid and binding legal demand properly served on us.”

New Hampshire doesn’t provide electronic access to court records, so it’s not readily known if Amazon has complied with the order, which was signed by Justice Steven Houran on November 5.

A court order signed by New Hampshire Superior Court on November 5 ordering Amazon to turn over Echo recordings. (Image: TechCrunch)

It’s not the first time Amazon has been ordered to turn over recordings that prosecutors believe may help a police investigation.

Three years ago, an Arkansas man was accused of murder. Prosecutors pushed Amazon to turn over data from an Echo found in the house where the body was found. Amazon initially resisted the request on First Amendment grounds — but later conceded and complied.

Police and prosecutors generally don’t expect much evidence from Echo recordings — if any — because Echo speakers are activated with a wake word, usually “Alexa,” the name of the voice assistant. But sometimes fragments of recordings can be inadvertently picked up, which could help piece together events from a crime scene.

But these two cases represent a fraction of the number of requests Amazon receives for Echo data. Although Amazon publishes a biannual transparency report detailing the number of warrants and orders it receives across its entire business, the company doesn’t break out how many of those requests relate to Echo data — and refuses to do so.

In most cases, requests for Echo recordings only become known through court orders.

In fact, when TechCrunch reached out to the major players in the smart home space, only one device maker had a transparency report and most had no future plans to publish one — leaving consumers in the dark on how these companies protect their private information from overly broad demands.

Although the evidence in the Verrill case is compelling, exactly what comes back from Amazon — or the company’s refusal to budge — will be telling.

News Source = techcrunch.com

Children are being “datafied” before we’ve understood the risks, report warns


A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long-term impacts of profiling minors, once these children become adults, are simply not known, she writes.

“Children are being ‘datafied’ – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13 their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After which this data mountain “explodes” as children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data are being collected on kids; where and by whom; and how they might be used in the short and long term — both for the benefit of children but also considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the ‘canary in the coal mine’ for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, down to a floor of 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

While another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if they have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

Which would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content on their platforms at vast scale. Given that few — if any — could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)

Photo: SementsovaLesia/iStock

A recent U.S. study of kids apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: applied in the background, any harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about exploitation of personal data are stepping up across the board. And essentially touch all sectors and segments of society now, even as risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.

News Source = techcrunch.com

Facebook Portal isn’t listening to your calls, but may track data


When the initial buzz of Portal finally dies down, it’s the timing that will be remembered most. There’s never a great time for a company like Facebook to launch a product like Portal, but as far as optics go, the whole of 2018 probably should have been a write-off.

Our followup headline, “Facebook, are you kidding?” seems to sum up the fallout nicely.

But the company soldiered on, intent to launch its in-house hardware product, and insofar as its intentions can be regarded as pure, there are certainly worse motives than the goal of connecting loved ones. That’s a promise video chat technology brings, and Facebook’s technology stack delivers it in a compelling way.

Any praise the company might have received for the product’s execution, however, quickly took a backseat to another PR dustup. Here’s Recode with another fairly straightforward headline. “It turns out that Facebook could in fact use data collected from its Portal in-home video device to target you with ads.”

In a conversation with TechCrunch this week, Facebook exec Andrew “Boz” Bosworth claims it was the result of a misunderstanding on the company’s part.

“I wasn’t in the room with that,” Bosworth says, “but what I’m told was that we thought that the question was about ads being served on Portal. Right now, Facebook ads aren’t being served on Portal. Obviously, if some other service, like YouTube or something else, is using ads, and you’re watching that you’ll have ads on the Portal device. Facebook’s not serving ads on Portal.”

Facebook is working to draw a line here, looking to distinguish the big ask of putting its own microphones and a camera in consumer living rooms from the standard sort of data collection that forms the core of much of the site’s monetization model.

“[T]he thing that’s novel about this device is the camera and the microphone,” he explains. “That’s a place that we’ve gone overboard on the security and privacy to make sure consumers can trust at the electrical level the device is doing only the things that they expect.”

Facebook was clearly working to nip these questions in the bud prior to launch. Unprompted, the company was quick to list the many levels of security and privacy baked into the stack, from encryption to an actual physical piece of plastic the consumer can snap onto the top of the device to serve as a lens cap.

Last night, alongside the announcement of availability, Facebook issued a separate post drilling down on privacy concerns. Portal: Privacy and Ads details three key points:

  • Facebook does not listen to, view or keep the contents of your Portal video calls. This means nothing you say on a Portal video call is accessed by Facebook or used for advertising.
  • Portal video calls are encrypted, so your calls are secure.
  • Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal’s camera doesn’t identify who you are.

Facebook is quick to explain that, in spite of what it deemed a misunderstanding, it hasn’t switched approaches since we spoke ahead of launch. But none of this is to say, of course, that the device won’t be collecting data that can be used to target other ads. That’s what Facebook does.

“I can be quite definitive about the camera and the microphone, and content of audio or content of video and say none of those things are being used to inform ads, full stop,” the executive tells TechCrunch. “I can be very, very confident when I make that statement.”

However, he adds, “Once you get past the camera and the microphones, this device functions a lot like other mobile devices that you have. In fact, it’s powered by Messenger, and in other spaces it’s powered by Facebook. All the same properties that a billion-plus people that are using Messenger are used to are the same as what’s happening on the device.”

As a hypothetical, Bosworth points to the potential for cross-platform ads targeting video calling for those who do it frequently — a classification, one imagines, that would apply to anyone who spends $199 on a video chat device of this nature. “If you were somebody who frequently used video calls,” Bosworth begins, “maybe there would be an ad-targeting cluster, for people who were interested in video calling. You would be a part of that. That’s true if you were using video calling often on your mobile phone or if you were using video calling often on Portal.”

Facebook may have painted itself into a corner with this one, however. Try as it might to draw the distinction between cameras/microphones and the rest of the software stack, there’s little doubt that trust has been eroded after months of talk around major news stories like Cambridge Analytica. Once that notion of trust has been breached, it’s a big lift to ask users to suddenly purchase a piece of standalone hardware they didn’t realize they needed a few months back.

“Certainly, the headwinds that we face in terms of making sure consumers trust the brand are ones that we’re all familiar with and, frankly, up to the challenge for,” says Bosworth. “It’s good to have extra scrutiny. We’ve been through a tremendous transformation inside the company over the last six to eight months to try to focus on those challenges.”

The executive believes, in fact, that the introduction of a device like Portal could actually serve to counteract that distrust, rather than exacerbate it.

“This device is exactly what I think people want from Facebook,” he explains. “It is a device focused on their closest friends and family, and the experiences, and the connections they have with those people. On one hand, I hear you. It’s a headwind. On the other hand, it’s exactly what we need. It is actually the right device that tells a story that I think we want people to hear about, what we care about the most, which is the people getting deeper and more meaningful connections with one another.”

If Portal is ultimately a success, however, it won’t be because the product served to convince people that the company is more focused on meaningful interactions than on ad sales. It will be because our memories are short. These sorts of concerns fade pretty quickly in the face of new products, particularly in a 24-hour news environment when basically everything is bad all the time.

The question then becomes whether Portal can offer enough of a meaningful distinction from other products to compel users to buy in. Certainly the company has helped jumpstart this with what are ultimately reasonably priced products. But even with clever augmented reality features and some well-produced camera tracking, Facebook needs to truly distinguish this device from an Echo Show or Google Home Hub.

Facebook’s early goals for the product are likely fairly modest. In conversations ahead of launch, the company has positioned this as a kind of learning moment. That began when the company seeded early versions of the products into homes as part of a private beta, and continues to some degree now that the device is out in the world. When pressed, the company wouldn’t offer up anything concrete.

“This is the first Facebook-branded hardware,” says Bosworth. “It’s early. I don’t know that we have any specific sales expectations so much as what we have is an expectation to have a market that’s big enough that we can learn, and iterate, and get better.”

This is true, certainly — and among my biggest complaints with the device. Aside from the aforementioned video chat functionality, the Portal doesn’t feel like a particularly fleshed-out device. There’s an extremely limited selection of apps pre-loaded and no app store. Video beyond the shorts offered up through Facebook is a big maybe for the time being.

During my review of the Portal+, I couldn’t shake the feeling that the product would have functioned as well — or even better, perhaps — as an add-on to or joint production with Amazon. However, that partnership is limited only to the inclusion of Alexa on the device. In fact, the company confirms that we can expect additional hardware devices over the next couple of years.

As it stands, Facebook says it’s open to a broad spectrum of possibilities, based on consumer demand. It’s something that could even, potentially, expand to on-device record, a feature that would further blur the lines of what the on-board camera and microphone can and should do.

“Right now, there’s no recording possible on the device,” Bosworth says. “The idea that a camera with microphones, people may want to use it like a camera with microphones to record things. We wanted to start in a position where people felt like they could understand what the device was, and have a lot of confidence and trust, and bring it home. There’s an obvious area where you can expand it. There’s also probably areas that are not obvious to us […] It’s not at all fair to say that this is any kind of a beta period. We only decided to ship it when we felt like we had crossed over into full finished product territory.”

From a privacy perspective, these things always feel like a death by a million cuts. For now, however, the company isn’t recording anything locally and has no definitive plans to do so. Given the sort of year the company has been having with regards to optics around privacy, it’s probably best to keep it that way.

News Source = techcrunch.com

Campaign tool supplied to UK’s governing party by Trump-Pence app dev quietly taken out of service


An app that the UK’s governing party launched last year — for Conservative Party activists to gamify, ‘socialize’ and co-ordinate their campaigning activity — has been quietly pulled from app stores.

Its vanishing was flagged to us earlier today, by Twitter user Sarah Parks, who noticed that, when loaded, the Campaigner app now displays a message informing users the supplier is “no longer supporting clients based in Europe”.

“So we’re taking this opportunity to refresh our campaigning app,” it adds. “We will be back with a new and improved app early next year – well in time for the local elections.”

(Bad luck, then, should there end up being another very snap, Brexit-induced UK General Election in the meantime, as some have suggested may yet come to pass. But I digress… )

The supplier of the Conservative Campaigner app is — or was — a US-based app developer called uCampaign, which had also built branded apps for Trump-Pence 2016; the Republican National Committee; and the UK’s Vote Leave Brexit campaign, to name a few of the political campaigns it has counted as customers.

Here are a few more: the (pro-gun) National Rifle Association and the (anti-abortion) SBA List.

We know the name of the Conservative Campaigner app’s supplier because this summer we raised privacy concerns about the app — which, if you clicked to read its privacy policy earlier this year, served up uCampaign’s boilerplate privacy policy.

The wording of uCampaign’s privacy policy suggested the Conservative Campaigner app could be harvesting users’ mobile phone contacts — if they chose to sync their contacts book with it.

The privacy policy for the app was subsequently changed to point to the Conservative Party’s own privacy policy — with the change of privacy policy taking place just before a tough new EU-wide data protection framework, GDPR, came into force on May 25 this year.

Prior to May 23, the privacy policy of the Conservatives’ digital campaigning app suggests it was harvesting contacts data from users — and potentially sharing non-users’ personal information with entities of uCampaign’s choosing (given, for example, the company’s privacy policy gave itself the right to “share your Personal Information with other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”).

This sort of consentless scraping of large amounts of networked personal data — by sucking up information on users’ friend groups and other personal connections — has of course had a massive spotlight thrown on it this year, as a result of the Facebook Cambridge Analytica data misuse scandal in which the personal data of tens of millions of Facebook users was extracted from the social network via a quiz app that used a (now defunct) Facebook friends API to grab data on non-users who would not have even had the chance to agree to the app’s terms.

Safe to say, this modus operandi wasn’t cool then — and it’s certainly not cool now.

Politicians all over the globe have been shaken awake by the Cambridge Analytica scandal, and are now raising all sorts of concerns about how data and digital tools are being used (and or misused and abused).

The EU parliament recently called for an independent audit of Facebook, for example.

In the UK, a committee that’s been probing the impact of social media-accelerated disinformation on democratic processes published a report this summer calling for a levy on social media to defend democracy. Its lengthy preliminary report also suggested urgent amendments to domestic electoral law to reflect the use of digital technologies for political campaigning.

Though the UK’s Conservative minority government — and the party behind the now on-pause Conservative Campaigner app — apparently disagrees on the need for speed, declining in its response last week to accept most of the committee’s laundry list of recommended changes.

The DCMS committee’s inquiry into political campaigns’ use (and misuse) of personal data continues — now at a transnational level.

An ethical pause?

Shortly after we published our privacy concerns about the Conservative Campaigner app, the UK’s data protection watchdog issued its own lengthy report detailing extensive concerns about how UK political parties were misusing personal data — and calling for an ethical pause on the use of microtargeting for election campaigning purposes.

Which does raise the question of whether the Conservative Campaigner app going AWOL now, until a reboot under a new supplier (presumably) next year, might represent just such an ‘ethical pause’.

The app is, after all, only just over a year old.

We asked the Conservative Party a number of questions about the Campaigner app via email — after a press office spokeswoman declined to discuss the matter on the telephone.

Five hours later it emailed the following brief statement, attributed to a Conservative spokesperson:

We work with a number of different suppliers and all Conservative party campaigning is compliant with the relevant data protection legislation including GDPR.

The spokesperson did not engage with the substance of the vast majority of our concerns — such as those relating to the app’s handling of people’s data and the legal bases for any transfers of UK voter data to the US.

Instead the spokesperson reiterated the in-app notification which claims “the supplier” is no longer supporting clients based in Europe.

They also said the party is currently reviewing its campaigning tools, without providing any further detail.

We’ve included our full list of questions at the bottom of this post.

We’ve also reached out to the ICO to ask if it had any concerns related to how the Conservative Campaigner app was handling people’s data.

Similarly, the former deputy director & head of digital strategy for the Conservative party, Anthony Hind, declined to engage with the same data protection concerns when we raised them with him directly, back in July.

According to his LinkedIn profile he’s since moved on from the Conservatives to head up social media for the Confederation of British Industry.

For this report we also reached out to uCampaign’s founder and CEO, Thomas Peters, to ask for confirmation on the company’s situation vis-a-vis European clients.

At the time of writing Peters had not responded to our emails. We’ll update this story with any uCampaign response.

The company’s website still includes the UK Conservative Party listed as a client — though the language used on the webpage does not make it explicit whether or not the party is a current client…

Another graphic on the same page plots the UK flag on a world map depicting what uCampaign dubs its “global platform”, where it’s marked along with several other European flags — including Ireland, France, Germany and Malta, suggesting uCampaign has — or had — multiple European clients.

Here’s the full list of questions we put to the Conservatives about their campaigner app. To our eye it has answered just one of them:

Can you confirm — on the record — the reasons for the app being pulled?

Does the Conservative Party intend to continue working with uCampaign for the new campaign app that will relaunch next year? Or does the party have a new supplier?

If the latter, where is the new supplier based? In the UK or in the US?

Did the Conservative Party have any concerns at all related to using uCampaign as a supplier? (Given, for example, concerns flagged about its data privacy practices by one of the DCMS committee’s recent reports — following an inquiry investigating digital campaigning.)

If the Conservative Party was aware of data privacy concerns pertaining to uCampaign’s practices can you confirm when the party became aware of such concerns?

Was the party aware that the privacy policy it used for the app prior to May 23, 2018 was uCampaign’s own privacy policy?

This privacy policy stated that the app could harvest data from users’ mobile phone contacts and share that data with unknown third parties of the developer’s choosing — including other political campaigns. Is the Conservative Party comfortable with having its supporters’ data shared with other political campaigns?

What due diligence did the Conservative Party carry out before it selected uCampaign as its app supplier?

After signing up the supplier, did the Conservative Party carry out a privacy impact assessment related to how the app operates?

Please confirm all the data points that the app was collecting from users, and what each of those data points was being used for.

Where was app user data being processed? In the US, where uCampaign is based, or in the UK where potential voters live?

If the US, what was the legal basis for any transfer of data from UK users to the US?

Is the Conservative Party confident its use of the campaigner app did not breach UK data protection law?

Earlier this year the former Cabinet Minister Dominic Grieve suggested that the bosses of tech giants involved in the Cambridge Analytica data misuse scandal should be jailed for their part in abusing online data for political and financial gain. Does the Conservative Party support Grieve’s position on online data abuse?

Has anyone been sacked or sanctioned for their part in procuring uCampaign as the app supplier — and/or overseeing the operation of the Conservative Campaigner app itself?

Will the Conservative Party commit to notifying all individuals whose data was shared with uCampaign without their explicit consent?

Can the Conservative Party confirm how many individuals had their personal data shared with uCampaign?

Has the Information Commissioner’s Office raised any concerns with the Conservative Party about the Campaigner app?

Has the Conservative Party itself reported any concerns about the app/uCampaign to the ICO?

News Source = techcrunch.com

Facial recognition startup Kairos founder continues to fight attempted takeover


There’s some turmoil brewing over at Miami-based facial recognition startup Kairos. Late last month, New World Angels president and Kairos board chairperson Steve O’Hara sent a letter to Kairos founder Brian Brackeen notifying him of his termination as chief executive officer. The letter cited willful misconduct as the cause: specifically, O’Hara said Brackeen misled shareholders and potential investors, misappropriated corporate funds, failed to report to the board of directors and created a divisive atmosphere.

Kairos is trying to tackle the society-wide problem of discrimination in artificial intelligence. While that’s not the company’s explicit mission — it’s to provide authentication tools to businesses — algorithmic bias has long been a topic the company, especially Brackeen, has addressed.

Brackeen’s purported termination was followed by a lawsuit, on behalf of Kairos, against Brackeen, alleging theft and breach of fiduciary duties, among other things. Brackeen, in an open letter sent to shareholders a couple of days ago — and shared with TechCrunch — about the “poorly constructed coup,” denies the allegations and details his side of the story. He hopes that the lawsuit will be dismissed and that he will officially be reinstated as CEO, he told TechCrunch. As it stands today, Melissa Doval, who became CFO of Kairos in July, is acting as interim CEO.

“The Kairos team is amazing and resilient and has blown me away with their commitment to the brand,” Doval told TechCrunch. “I’m humbled by how everybody has just kind of stuck around in light of everything that has transpired.”

The lawsuit, filed on October 10 in Miami-Dade and spearheaded by Kairos COO Mary Wolff, alleges Brackeen “used his position as CEO and founder to further his own agenda of gaining personal notoriety, press, and a reputation in the global technology community” to the detriment of Kairos. The lawsuit describes how Brackeen spent less than 30 percent of his time in the company’s headquarters, “even though the Company was struggling financially.”

Other allegations detail how Brackeen used the company credit card to pay for personal expenses and had the company pay for a car he bought for his then-girlfriend. Kairos alleges Brackeen owes the company at least $60,000.

In his open letter to shareholders, Brackeen writes: “Steve, Melissa and Mary, as cause for my termination and their lawsuit against me, have accused me of stealing 60k from Kairos, comprised of non-work related travel, non-work related expenses, a laptop, and a beach club membership. Let’s talk about this. While I immediately found these accusations absurd, I had to consider that, to people on the outside of ‘startup founder’ life — their claims could appear to be salacious, if not illegal.”

Brackeen goes on to say that all of the listed expenses — ranging from trips, meals and rides to iTunes purchases — were “directly correlated to the business of selling Kairos to customers and investors, and growing Kairos to exit,” he wrote in the open letter. Though he does note that between $3,500 and $4,500 worth of charges may fall into a “grey area.”

“Conversely, I’ve personally invested, donated, or simply didn’t pay myself in order to make payroll for the rest of the team, to the tune of over $325,000,” he wrote. “That’s real money from my accounts.”

Regarding forcing Kairos to pay for his then-girlfriend’s car payments, Brackeen explains:

On my making Kairos ‘liable to make my girlfriend’s car payment’ — in order to offset the cost of Uber rides to and from work, to meetings, the airport, etc., I determined it would be more cost effective to lease a car. Unfortunately, after having completely extended my personal credit to start and keep Kairos operating, it was necessary that the bank note on the car be obtained through her credit. The board approved the $700 per month per diem arrangement, which ended when I stopped driving the vehicle. Like their entire case — it’s not very sensational, when truthfully explained.

The company also claims Brackeen has interfered with the company and its affairs since his termination. Throughout his open letter, Brackeen refers to this as an “attempted termination” because, as advised by his lawyers, he has not been legally terminated. He also explains how, in the days leading up to his ouster, Brackeen was seeking to raise additional funding because in August, “we found ourselves in the position of running low on capital.” While he was presenting to potential investors in Singapore, Brackeen said that’s “when access to my email and documents was cut.”

He added, “I traveled to the other side of the world to work with my team on IP development and meet with the people who would commit to millions in investment — and was fired via voicemail the day after I returned.”

Despite the “termination” and lawsuit, O’Hara told TechCrunch via email that “in the interest of peaceful coexistence, we are open to reaching an agreement to allow Brian to remain part of the family as Founder, but not as CEO and with very limited responsibilities and no line authority.”

O’Hara also noted the company’s financials showed there was $44,000 in cash remaining at the end of September. He added, “Then reconcile it with the fact that Brian raised $6MM in 2018 and ask yourself, how does a company go through that kind of money in under 9 months.”

Within the next twelve days, there will be a shareholder vote to remove the board, as well as a vote to reinstate Brackeen as CEO, he told me. After that, Brackeen said he intends to countersue Doval, O’Hara and Wolff.

In addition to New World Angels, Kairos counts Kapor Capital, Backstage Capital and others as investors. At least one investor, Arlan Hamilton of Backstage Capital, has publicly come out in support of Brackeen.

As previously mentioned, Brackeen has been pretty outspoken about the ethical concerns of facial recognition technologies. In the case of law enforcement, no matter how accurate and unbiased these algorithms are, facial recognition software has no business in law enforcement, Brackeen said at TechCrunch Disrupt in early September. That’s because of the potential for unlawful, excessive surveillance of citizens.

Given the government already has our passport photos and identification photos, “they could put a camera on Main Street and know every single person driving by,” Brackeen said.

And that’s a real possibility. In the last couple of months, Brackeen said Kairos turned down a government request from Homeland Security, seeking facial recognition software for people behind moving cars.

“For us, that’s completely unacceptable,” Brackeen said.

Whether that’s entirely unacceptable for Doval, the interim CEO of Kairos, is not clear. In an interview with TechCrunch, Doval said, “we’re committed to being a responsible and ethical vendor” and that “we’re going to continue to champion the elimination of algorithmic bias in artificial intelligence.” While that’s not a horrific thing to say, it’s much vaguer than saying, “No, we will not ever sell to law enforcement.”

Selling to law enforcement could be lucrative, but that comes with ethical risks and concerns. But if the company is struggling financially, maybe the pros could outweigh the cons.

News Source = techcrunch.com
