
Timesdelhi.com

June 16, 2019
Category archive

encryption

London’s Tube network to switch on wi-fi tracking by default in July


Transport for London will roll out default wi-fi device tracking on the London Underground this summer, following a trial back in 2016.

In a press release announcing the move, TfL writes that “secure, privacy-protected data collection will begin on July 8” — while touting additional services, such as improved alerts about delays and congestion, which it frames as “customer benefits”, as expected to launch “later in the year”.

As well as offering additional alerts-based services to passengers via its own website/apps, TfL says it could incorporate crowding data into its free open-data API — to allow app developers, academics and businesses to expand the utility of the data by baking it into their own products and services.

It’s not all just added utility though; TfL says it will also use the information to enhance its in-station marketing analytics — and, it hopes, top up its revenues — by tracking footfall around ad units and billboards.

Commuters using the UK capital’s publicly funded transport network who do not want their movements being tracked will have to switch off their wi-fi, or else put their phone in airplane mode when using the network.

To deliver data of the required detail, TfL says it undertook detailed digital mapping of all London Underground stations to identify where wi-fi routers are located, so it can understand how commuters move across the network and through stations.

It says it will erect signs at stations informing passengers that using the wi-fi will result in connection data being collected “to better understand journey patterns and improve our services” — and explaining that to opt out they have to switch off their device’s wi-fi.

Attempts in recent years by smartphone OSes to use MAC address randomization to try to defeat persistent device tracking have been shown to be vulnerable to reverse engineering via flaws in wi-fi set-up protocols. So, er, switch off to be sure.

We covered TfL’s wi-fi tracking beta back in 2017, when we reported that despite claiming the harvested wi-fi data was “de-personalised”, and claiming individuals using the Tube network could not be identified, TfL nonetheless declined to release the “anonymized” data-set after a Freedom of Information request — saying there remains a risk of individuals being re-identified.

As has been shown many times before, reversing ‘anonymization’ of personal data can be frighteningly easy.

It’s not immediately clear from the press release or TfL’s website exactly how it will be encrypting the location data gathered from devices that authenticate to use the free wi-fi at the circa 260 wi-fi enabled London Underground stations.

Its explainer about the data collection does not go into any real detail about the encryption and security being used. (We’ve asked for more technical details.)

“If the device has been signed up for free Wi-Fi on the London Underground network, the device will disclose its genuine MAC address. This is known as an authenticated device,” TfL writes generally of how the tracking will work.

“We process authenticated device MAC address connections (along with the date and time the device authenticated with the Wi-Fi network and the location of each router the device connected to). This helps us to better understand how customers move through and between stations — we look at how long it took for a device to travel between stations, the routes the device took and waiting times at busy periods.”

“We do not collect any other data generated by your device. This includes web browsing data and data from website cookies,” it adds, saying also that “individual customer data will never be shared and customers will not be personally identified from the data collected by TfL”.

In a section entitled “keeping information secure” TfL further writes: “Each MAC address is automatically depersonalised (pseudonymised) and encrypted to prevent the identification of the original MAC address and associated device. The data is stored in a restricted area of a secure location and it will not be linked to any other data at a device level.  At no time does TfL store a device’s original MAC address.”

Privacy and security concerns were raised about the location tracking around the time of the 2016 trial — such as why TfL had used a monthly salt key to encrypt the data rather than daily salts, which would have decreased the risk of data being re-identifiable should it leak out.
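The difference a salt schedule makes is easy to see in miniature. Below is a minimal Python sketch (not TfL’s actual pipeline; the function and variable names are our own) of one-way pseudonymisation of a MAC address, showing why rotating the salt daily makes records from different days unlinkable:

```python
import hashlib
import secrets

def pseudonymise(mac: str, salt: bytes) -> str:
    """One-way pseudonym: SHA-256 over salt + MAC. Without the salt,
    brute-forcing the MAC address space is the only way back."""
    return hashlib.sha256(salt + mac.encode()).hexdigest()

mac = "a4:5e:60:c2:19:7f"

# A fresh salt per day means the same device yields unlinkable
# pseudonyms across days, limiting the damage if one day's data leaks.
salt_monday = secrets.token_bytes(16)
salt_tuesday = secrets.token_bytes(16)

monday_id = pseudonymise(mac, salt_monday)
tuesday_id = pseudonymise(mac, salt_tuesday)

assert monday_id != tuesday_id                       # days are unlinkable
assert monday_id == pseudonymise(mac, salt_monday)   # stable within a day
```

With a monthly salt, by contrast, every record in a month shares one stable pseudonym per device, so a leaked data-set exposes a month of movements per identifier.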

Such concerns persist — and security experts are now calling for full technical details to be released, given TfL is going full steam ahead with a rollout.

 

A report in Wired suggests TfL has switched from hashing to a system of tokenisation – “fully replacing the MAC address with an identifier that cannot be tied back to any personal information”, which TfL billed as a “more sophisticated mechanism” than it had used before. We’ll update as and when we get more from TfL.
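For contrast with salted hashing, here is a toy illustration (entirely hypothetical, not TfL’s implementation) of the tokenisation approach: the identifier is random rather than derived from the MAC address, so there is no function to reverse:

```python
import secrets

class Tokeniser:
    """Assigns each MAC a random token; the MAC itself is not stored
    beyond this in-memory lookup. Unlike a hash, the token carries no
    mathematical link to the MAC, so it cannot be brute-forced back."""
    def __init__(self):
        self._tokens = {}  # held only while tokenising, never persisted

    def tokenise(self, mac: str) -> str:
        if mac not in self._tokens:
            self._tokens[mac] = secrets.token_hex(16)
        return self._tokens[mac]

t = Tokeniser()
token = t.tokenise("a4:5e:60:c2:19:7f")
assert token == t.tokenise("a4:5e:60:c2:19:7f")   # stable: journeys still link up
assert token != t.tokenise("00:11:22:33:44:55")   # distinct devices stay distinct
```

The security of such a scheme then rests entirely on how (and for how long) the MAC-to-token lookup is held, which is exactly the kind of detail TfL has yet to publish.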

Another question over the deployment at the time of the trial was what legal basis it would use for pervasively collecting people’s location data — since the system requires an active opt-out by commuters, a consent-based legal basis would not be appropriate.

In a section on the legal basis for processing the Wi-Fi connection data, TfL writes now that its ‘legal ground’ is two-fold:

  • Our statutory and public functions
  • to undertake activities to promote and encourage safe, integrated, efficient and economic transport facilities and services, and to deliver the Mayor’s Transport Strategy

So, presumably, you can file ‘increasing revenue around adverts in stations by being able to track nearby footfall’ under ‘helping to deliver (read: fund) the mayor’s transport strategy’.

(Or as TfL puts it: “[T]he data will also allow TfL to better understand customer flows throughout stations, highlighting the effectiveness and accountability of its advertising estate based on actual customer volumes. Being able to reliably demonstrate this should improve commercial revenue, which can then be reinvested back into the transport network.”)

On data retention it specifies that it will hold “depersonalised Wi-Fi connection data” for two years — after which it will aggregate the data and retain those non-individual insights (presumably indefinitely, or per its standard data retention policies).

“The exact parameters of the aggregation are still to be confirmed, but will result in the individual Wi-Fi connection data being removed. Instead, we will retain counts of activities grouped into specific time periods and locations,” it writes on that.

It further notes that aggregated data “developed by combining depersonalised data from many devices” may also be shared with other TfL departments and external bodies. So that processed data could certainly travel.

Of the “individual depersonalised device Wi-Fi connection data”, TfL claims it is accessible only to “a controlled group of TfL employees” — without specifying how large this group of staff is; and what sort of controls and processes will be in place to prevent the risk of A) data being hacked and/or leaking out or B) data being re-identified by a staff member.

A TfL employee with intimate knowledge of a partner’s daily travel routine might, for example, have access to enough information via the system to be able to reverse the depersonalization.

Without more technical details we just don’t know. Though TfL says it worked with the UK’s data protection watchdog in designing the data collection with privacy front of mind.

“We take the privacy of our customers very seriously. A range of policies, processes and technical measures are in place to control and safeguard access to, and use of, Wi-Fi connection data. Anyone with access to this data must complete TfL’s privacy and data protection training every year,” it also notes elsewhere.

Despite holding individual level location data for two years, TfL is also claiming that it will not respond to requests from individuals to delete or rectify any personal location data it holds, i.e. if people seek to exercise their information rights under EU law.

“We use a one-way pseudonymisation process to depersonalise the data immediately after it is collected. This means we will not be able to single out a specific person’s device, or identify you and the data generated by your device,” it claims.

“This means that we are unable to respond to any requests to access the Wi-Fi data generated by your device, or for data to be deleted, rectified or restricted from further processing.”

Again, the distinctions it is making there are raising some eyebrows.

What’s amply clear is that the volume of data generated by a full rollout of wi-fi tracking across the lion’s share of the London Underground will be staggering.

More than 509 million “depersonalised” pieces of data were collected from 5.6 million mobile devices during the four-week 2016 trial alone — comprising some 42 million journeys. And that was a very brief trial which covered a much smaller subset of the network.

As big data giants go, TfL is clearly gunning to be right up there.

Microsoft makes a push for service mesh interoperability


Service meshes. They are the hot new thing in the cloud native computing world. At KubeCon, the twice-yearly festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo – to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.
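As a flavour of what such a common API looks like, here is a sketch of an SMI traffic-splitting resource, based on the early v1alpha1 spec (the field names, weights and service names shown here are illustrative and may differ in later revisions of the specification):

```yaml
# A hypothetical SMI TrafficSplit: applications address one root
# service, and whichever mesh is installed (Istio, Linkerd, ...)
# splits traffic across versioned backends per the declared weights.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  service: checkout        # the root service clients talk to
  backends:
  - service: checkout-v1
    weight: 900m           # ~90% of traffic
  - service: checkout-v2
    weight: 100m           # ~10% canary
```

Because the resource only declares intent, swapping the underlying mesh should not require changing it — which is the interoperability point Monroy is making.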

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

 

‘Unhackable’ encrypted flash drive eyeDisk is, as it happens, hackable


In security, nothing is “unhackable.” When it’s claimed, security researchers see nothing more than a challenge.

Enter the latest findings from Pen Test Partners, a U.K.-based cybersecurity firm. Their latest project was ripping apart the “unhackable” eyeDisk, an allegedly secure USB flash drive that uses iris recognition to unlock and decrypt the device.

In its Kickstarter campaign last year, eyeDisk raised more than $21,000; it began shipping devices in March.

There’s just one problem: it’s anything but “unhackable.”

Pen Test Partners researcher David Lodge found the device’s backup password — to access data in the event of device failure or a sudden eye-gouging accident — could be easily obtained using a software tool able to sniff USB device traffic.

The secret password — “SecretPass” — can be seen in plaintext (Image: Pen Test Partners)

“That string in red, that’s the password I set on the device. In the clear. Across an easy to sniff bus,” he said in a blog post detailing his findings.

Worse, he said, the device’s real password can be picked up even when the wrong password has been entered. Lodge explained this as the device revealing its password first, then validating it against whatever password the user submitted before the unlock password is sent.
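We don’t have eyeDisk’s firmware, but the pattern Lodge describes can be sketched in a few lines of Python. In this toy simulation (all names invented), the device writes its stored password to the bus before comparing, so a sniffer captures it even when the unlock attempt fails:

```python
# Toy simulation of the flaw Pen Test Partners describes: the device
# puts its real password on the bus *before* validating the attempt.
bus_capture = []  # what a USB traffic sniffer would observe

class FlawedDrive:
    def __init__(self, real_password: str):
        self._real = real_password

    def unlock(self, attempt: str) -> bool:
        # BUG: the stored password crosses the bus in the clear,
        # regardless of whether the attempt is correct.
        bus_capture.append(self._real)
        bus_capture.append(attempt)
        return attempt == self._real

drive = FlawedDrive("SecretPass")
assert drive.unlock("wrong-guess") is False
# Even the failed attempt leaked the real password to the sniffer:
assert "SecretPass" in bus_capture
```

The fix is equally simple in principle: validation should happen on the device, with only the user’s attempt (ideally over an authenticated, encrypted channel) ever crossing the bus.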

Lodge said anyone using one of these devices should use additional encryption on the device.

The researcher disclosed the flaw to eyeDisk, which promised a fix, but has yet to release it; eyeDisk did not return a request for comment.

When it comes to elections, Facebook moves slow, may still break things


This week, Facebook invited a small group of journalists — which didn’t include TechCrunch — to look at the “war room” it has set up in Dublin, Ireland, to help monitor its products for election-related content that violates its policies. (“Time and space constraints” limited the numbers, a spokesperson told us when we asked why we weren’t invited.)

Facebook announced it would be setting up this Dublin hub — which will bring together data scientists, researchers, legal and community team members, and others in the organization to tackle issues like fake news, hate speech and voter suppression — back in January. The company has said it has nearly 40 teams working on elections across its family of apps, without breaking out the number of staff it has dedicated to countering political disinformation. 

We have been told that there would be “no news items” during the closed tour — which, despite that, is “under embargo” until Sunday — beyond what Facebook and its executives discussed last Friday in a press conference about its European election preparations.

The tour looks to be a direct copy-paste of the one Facebook held to show off its US election “war room” last year, which it did invite us on. (In that case it was forced to claim it had not disbanded the room soon after heavily PR’ing its existence — saying the monitoring hub would be used again for future elections.)

We understand — via a non-Facebook source — that several broadcast journalists were among the invites to its Dublin “war room”. So expect to see a few gauzy inside views at the end of the weekend, as Facebook’s PR machine spins up a gear ahead of the vote to elect the next European Parliament later this month.

It’s clearly hoping shots of serious-looking Facebook employees crowded around banks of monitors will play well on camera and help influence public opinion that it’s delivering an even social media playing field for the EU parliament election. The European Commission is also keeping a close watch on how platforms handle political disinformation before a key vote.

But with the pan-EU elections set to start May 23, and a general election already held in Spain last month, we believe the lack of new developments to secure EU elections is very much to the company’s discredit.

The EU parliament elections are now a mere three weeks away, and there are a lot of unresolved questions and issues Facebook has yet to address. Yet we’re told the attending journalists were once again not allowed to put any questions to the fresh-faced Facebook employees staffing the “war room”.

Ahead of the looming batch of Sunday evening ‘war room tour’ news reports, which Facebook will be hoping contain its “five pillars of countering disinformation” talking points, we’ve compiled a run down of some key concerns and complications flowing from the company’s still highly centralized oversight of political campaigning on its platform — even as it seeks to gloss over how much dubious stuff keeps falling through the cracks.

Worthwhile counterpoints to another highly managed Facebook “election security” PR tour.

No overview of political ads in most EU markets

Since political disinformation created an existential nightmare for Facebook’s ad business with the revelations of Kremlin-backed propaganda targeting the 2016 US presidential election, the company has vowed to deliver transparency — via the launch of a searchable political ad archive for ads running across its products.

The Facebook Ad Library now shines a narrow beam of light into the murky world of political advertising. Before this, each Facebook user could only see the propaganda targeted specifically at them. Now, such ads stick around in its searchable repository for seven years. This is a major step up on total obscurity. (Obscurity that Facebook isn’t wholly keen to lift the lid on, we should add: its political data releases to researchers so far haven’t gone back before 2017.)

However, in its current form, in the vast majority of markets, the Ad Library makes the user do all the leg work — running searches manually to try to understand and quantify how Facebook’s platform is being used to spread political messages intended to influence voters.

Facebook does also offer an Ad Library Report — a downloadable weekly summary of ads viewed and highest spending advertisers. But it only offers this in four countries globally right now: the US, India, Israel and the UK.

It has said it intends to ship an update to the reports in mid-May. But it’s not clear whether that will make them available in every EU country. (Mid-May would also be pretty late for elections that start May 23.)

So while the UK report makes clear that the new ‘Brexit Party’ is now a leading spender ahead of the EU election, what about the other 27 members of the bloc? Don’t they deserve an overview too?

A spokesperson we talked to about this week’s closed briefing said Facebook had no updates on expanding Ad Library Reports to more countries, in Europe or otherwise.

So, as it stands, the vast majority of EU citizens are missing out on meaningful reports that could help them understand which political advertisers are trying to reach them and how much they’re spending.

Which brings us to…

Facebook’s Ad Archive API is far too limited

In another positive step Facebook has launched an API for the ad archive that developers and researchers can use to query the data. However, as we reported earlier this week, many respected researchers have voiced disappointment with what it’s offering so far — saying the rate-limited API is not nearly open or accessible enough to get a complete picture of all ads running on its platform.

Following this criticism, Facebook’s director of product, Rob Leathern, tweeted a response, saying the API would improve. “With a new undertaking, we’re committed to feedback & want to improve in a privacy-safe way,” he wrote.

The question is when will researchers have a fit-for-purpose tool to understand how political propaganda is flowing over Facebook’s platform? Apparently not in time for the EU elections, either: We asked about this on Thursday and were pointed to Leathern’s tweets as the only update.

This issue is compounded by Facebook also restricting the ability of political transparency campaigners — such as the UK group WhoTargetsMe and US investigative journalism site ProPublica — to monitor ads via browser plug-ins, as the Guardian reported in January.

The net effect is that Facebook is making life hard for civil society groups and public interest researchers to study the flow of political messaging on its platform to try to quantify democratic impacts, and offering only a highly managed level of access to ad data that falls far short of the “political ads transparency” Facebook’s PR has been loudly trumpeting since 2017.

Ad loopholes remain ripe for exploiting

Facebook’s Ad Library includes data on political ads that were active on its platform but subsequently got pulled (made “inactive” in its parlance) because they broke its disclosure rules.

There are multiple examples of inactive ads for the Spanish far right party Vox visible in Facebook’s Ad Library that were pulled for running without the required disclaimer label, for example.

“After the ad started running, we determined that the ad was related to politics and issues of national importance and required the label. The ad was taken down,” runs the standard explainer Facebook offers if you click on the little ‘i’ next to an observation that “this ad ran without a disclaimer”.

What is not at all clear is how quickly Facebook acted to remove rule-breaking political ads.

It is possible to click on each individual ad to get some additional details. Here Facebook provides a per ad breakdown of impressions; genders, ages, and regional locations of the people who saw the ad; and how much was spent on it.

But all those clicks don’t scale. So it’s not possible to get an overview of how effectively Facebook is handling political ad rule breakers. Unless, well, you literally go in clicking and counting on each and every ad…
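Automating that counting is, in principle, what the Ad Library API is for. As a rough sketch, here’s how one might tally the minimum impressions of no-longer-active ads per advertiser from an API-shaped response (the sample payload below is invented for illustration, and real responses are paginated, rate-limited and require an access token against the `ads_archive` endpoint):

```python
import json

# A made-up sample mimicking the shape of an Ad Library API page.
sample_page = json.loads("""
{"data": [
  {"page_name": "Party A", "ad_delivery_stop_time": "2019-04-20",
   "impressions": {"lower_bound": "10000", "upper_bound": "50000"}},
  {"page_name": "Party A", "ad_delivery_stop_time": null,
   "impressions": {"lower_bound": "1000", "upper_bound": "5000"}},
  {"page_name": "Party B", "ad_delivery_stop_time": "2019-04-22",
   "impressions": {"lower_bound": "5000", "upper_bound": "10000"}}
]}
""")

def min_impressions_of_inactive_ads(page: dict) -> dict:
    """Per advertiser, sum the lower-bound impressions of ads that are
    no longer delivering (a crude proxy for ads that were pulled)."""
    totals = {}
    for ad in page["data"]:
        if ad["ad_delivery_stop_time"] is not None:
            name = ad["page_name"]
            totals[name] = totals.get(name, 0) + int(ad["impressions"]["lower_bound"])
    return totals

assert min_impressions_of_inactive_ads(sample_page) == {"Party A": 10000, "Party B": 5000}
```

Even this crude aggregation is more overview than the Ad Library’s own interface offers in most EU markets — which is exactly the researchers’ complaint.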

There is then also the wider question of whether a political advertiser that is found to be systematically breaking Facebook rules should be allowed to keep running ads on its platform.

Because if Facebook does allow that to happen there’s a pretty obvious (and massive) workaround for its disclosure rules: Bad faith political advertisers could simply keep submitting fresh ads after the last batch got taken down.

We were, for instance, able to find inactive Vox ads taken down for lacking a disclaimer that had still been able to rack up thousands — and even tens of thousands — of impressions in the time they were still active.

Facebook needs to be much clearer about how it handles systematic rule breakers.

Definition of political issue ads is still opaque

Facebook currently requires that all political advertisers in the EU go through its authorization process in the country where ads are being delivered if they relate to the European Parliamentary elections, as a step to try and prevent foreign interference.

This means it asks political advertisers to submit documents and runs technical checks to confirm their identity and location. Though it noted, on last week’s call, that it cannot guarantee this ID system cannot be circumvented. (As it was last year when UK journalists were able to successfully place ads paid for by ‘Cambridge Analytica’.)

One other big potential workaround is the question of what is a political ad? And what is an issue ad?

Facebook says these types of ads on Facebook and Instagram in the EU “must now be clearly labeled, including a paid-for-by disclosure from the advertiser at the top of the ad” — so users can see who is paying for the ads and, if there’s a business or organization behind it, their contact details, plus some disclosure about who, if anyone, saw the ads.

But the big question is how is Facebook defining political and issue ads across Europe?

While political ads might seem fairly easy to categorize (assuming they’re attached to registered political parties and candidates), issue ads are a whole lot more subjective.

Currently Facebook defines issue ads as those relating to “any national legislative issue of public importance in any place where the ad is being run.” It says it worked with EU barometer, YouGov and other third parties to develop an initial list of key issues — examples for Europe include immigration, civil and social rights, political values, security and foreign policy, the economy and environmental politics — that it will “refine… over time.”

Again specifics on when and how that will be refined are not clear. Yet ads that Facebook does not deem political/issue ads will slip right under its radar. They won’t be included in the Ad Library; they won’t be searchable; but they will be able to influence Facebook users under the perfect cover of its commercial ad platform — as before.

So if any maliciously minded propaganda slips through Facebook’s net, because the company decides it’s a non-political issue, it will once again leave no auditable trace.

In recent years the company has also had a habit of announcing major takedowns of what it badges “fake accounts” ahead of major votes. But again voters have to take it on trust that Facebook is getting those judgement calls right.

Facebook continues to bar pan-EU campaigns

On the flip side of weeding out non-transparent political propaganda and/or political disinformation, Facebook is currently blocking the free flow of legal pan-EU political campaigning on its platform.

This issue first came to light several weeks ago, when it emerged that European officials had written to Nick Clegg (Facebook’s vice president of global affairs) to point out that its current rules — i.e. that require those campaigning via Facebook ads to have a registered office in the country where the ad is running — run counter to the pan-European nature of this particular election.

It means EU institutions are in the strange position of not being able to run Facebook ads for their own pan-EU election everywhere across the region. “This runs counter to the nature of EU institutions. By definition, our constituency is multinational and our target audience are in all EU countries and beyond,” the EU’s most senior civil servants pointed out in a letter to the company last month.

This issue impacts not just EU institutions and organizations advocating for particular policies and candidates across EU borders, but even NGOs wanting to run vanilla “get out the vote” campaigns Europe-wide — leading a number of them to accuse Facebook of breaching their electoral rights and freedoms.

Facebook claimed last week that the ball is effectively in the regulators’ court on this issue — saying it’s open to making the changes but has to get their agreement to do so. A spokesperson confirmed to us that there is no update to that situation, either.

Of course the company may be trying to err on the side of caution, to prevent bad actors being able to interfere with the vote across Europe. But at what cost to democratic freedoms?

What about fake news spreading on WhatsApp?

Facebook’s ‘election security’ initiatives have focused on political and/or politically charged ads running across its products. But there’s no shortage of political disinformation flowing unchecked across its platforms as user uploaded ‘content’.

On the Facebook-owned messaging app WhatsApp, which is hugely popular in some European markets, the presence of end-to-end encryption further complicates this issue by providing a cloak for the spread of political propaganda that’s not being regulated by Facebook.

In a recent study of political messages spread via WhatsApp ahead of last month’s general election in Spain, the campaign group Avaaz dubbed it “social media’s dark web” — claiming the app had been “flooded with lies and hate”.

“Posts range from fake news about Prime Minister Pedro Sánchez signing a secret deal for Catalan independence to conspiracy theories about migrants receiving big cash payouts, propaganda against gay people and an endless flood of hateful, sexist, racist memes and outright lies,” it wrote.

Avaaz compiled this snapshot of politically charged messages and memes being shared on Spanish WhatsApp by co-opting 5,833 local members to forward election-related content that they deemed false, misleading or hateful.

It says it received a total of 2,461 submissions — which is of course just a tiny, tiny fraction of the stuff being shared in WhatsApp groups and chats. Which makes this app the elephant in Facebook’s election ‘war room’.

What exactly is a war room anyway?

Facebook has said its Dublin Elections Operation Center — to give it its official title — is “focused on the EU elections”, while also suggesting it will plug into a network of global teams “to better coordinate in real time across regions and with our headquarters in California [and] accelerate our rapid response times to fight bad actors and bad content”.

But we’re concerned Facebook is sending out mixed — and potentially misleading — messages about how its election-focused resources are being allocated.

Our (non-Facebook) source told us the 40-odd staffers in the Dublin hub during the press tour were simultaneously looking at the Indian elections. If that’s the case, it does not sound entirely “focused” on either the EU or India’s elections. 

Facebook’s eponymous platform has 2.375 billion monthly active users globally, with some 384 million MAUs in Europe, more than in the US (243M MAUs). Europe is Facebook’s second-biggest market in terms of revenue, after the US: last quarter it pulled in $3.65BN in sales (versus $7.3BN for the US) out of $15BN overall.

Apart from any moral or legal pressure Facebook might face to run a more responsible platform when it comes to supporting democratic processes, these numbers underscore the business imperative it has to sort this out in Europe.

Having a “war room” may sound like a start, but unfortunately Facebook is presenting it as an end in itself. And its foot-dragging on all of the bigger issues that need tackling, in effect, means the war will continue to drag on.

Why your CSO, not your CMO, should pitch your security startup


Whenever a security startup lands on my desk, I have one question: Who’s the chief security officer (CSO) and when can I get time with them?

Having a chief security officer is as relevant today as a chief marketing officer (CMO) or chief revenue boss. Just as you need to make sure your offering looks good and the money keeps rolling in, you need to show what your security posture looks like.

Even for non-security startups, having someone at the helm is just as important, not least because, given the constant security threats all companies face today, they will become a necessary part of interacting with the media. Regardless of whether your company builds gadgets or processes massive amounts of customer data, security has to be front of mind. It’s no good simply saying that you “take your privacy and security seriously.” You have to demonstrate it.

A CSO has several roles and will wear many hats. Depending on the kind of company you have, they will work to bolster your company’s internal processes and policies for keeping not only your corporate data safe but also the data of your customers. They will also be consulted on the security practices of your app, product or service, to make sure you’re complying with consumers’ privacy expectations rather than the industry’s overbearing, all-embracing habit of vacuuming up as much data as there is.

But for the average security startup, a CSO should also act as the point person for all technical matters associated with the company’s product or service: an evangelist, an infosec professional who can speak to their company’s offering — and to reporters, like me.

In my view, no startup of any size — especially a security startup — should be without a CSO.

The reality is that about 95 percent of the world’s wealthiest companies don’t have one. Facebook hasn’t had someone running the security shop since August. It may be a coincidence that the social networking giant has faced breach after exposure after leak after scandal, but it shows: the company is running around headless, with no clear direction.
