Timesdelhi.com

June 16, 2019

How to stop robocalls spamming your phone


No matter what your politics, beliefs, or even your sports team, we can all agree on one thing: robocalls are the scourge of modern times.

These unsolicited auto-dialed spam calls bug you dozens of times a week — sometimes more — demanding you “pay the IRS” or pretending to be “Apple technical support.” Even the now-infamous Chinese embassy scam, recently warned about by the FBI, has gained notoriety. These robocallers spoof their phone numbers to peddle scams and tricks — but the calls are real. Some 26 billion calls in 2018 were robocalls — up by close to half on the previous year. And yet there’s little the government agency in charge — the Federal Communications Commission — can do to deter robocallers, even though the practice is illegal. The FCC has fined robocallers more than $200 million in recent years but collected just $6,790, because the agency lacks the authority to enforce the fines.

So, tough luck — it’s up to you to battle the robocallers — but it doesn’t have to be a losing battle. These are the best solutions to help keep the spammers at bay.

YOUR CARRIER IS YOUR FIRST CALL

Any winds of change will come from the big four cell giants: AT&T, Sprint, T-Mobile, and Verizon (which owns TechCrunch).

Spoofing happens because the carriers don’t verify that a phone number is real before a call crosses their networks. While the networks are figuring out how to fix the problem — more on that later — each carrier has an offering to help prevent spam calls.

Here’s what they offer:

AT&T‘s Call Protect app, which requires AT&T postpaid service, provides fraud warnings and spam call screening and blocking. Call Protect is free for iOS and Android. AT&T also offers Call Protect Plus for $3.99 a month, which adds enhanced caller ID services and reverse number lookups.

Sprint lets customers block or restrict calls through its Premium Caller ID service. It costs $2.99 per month and can be added to your Sprint account. You can then download the app for iOS. A Sprint spokesperson told TechCrunch that Android users should have an app preinstalled on their devices.

T-Mobile doesn’t offer an app, but provides call screening to alert customers to potentially scammy or robocalled incoming calls.

T-Mobile already lets you know when an incoming call is fishy by displaying “scam likely” as the caller ID. Better yet, you can ask T-Mobile to block those calls before your phone even rings using Scam Block. Customers can get it for free by dialing #662# from their device.

Verizon‘s Call Filter is an app that works on both iOS and Android — though most Android devices sold through the carrier already have the app preinstalled. The free version detects and filters spam calls, while the $2.99-a-month version adds a few extra features, like a proprietary “risk meter” that tells you more about the caller.

There are a few caveats you should consider:

  • These apps and services won’t be a death blow to spam calls, but they’re meant to help more than they hurt. Your mileage may vary.
  • Many of the premium app features — such as call blocking — are already options on your mobile device. (You can read more about that later.) You may not need to pay even more money on top of your already expensive cellular bill if you don’t need those features.
  • You may get false positives. These apps and services won’t affect your ability to make outbound or emergency calls, but there’s a risk that by using a screening app or service you may miss important phone calls.

WHAT YOU CAN DO

You don’t have to just rely on your carrier. There’s a lot you can do to help yourself.

There are some semi-obvious things, like signing up for free with the National Do Not Call Registry — but robocallers are not marketers and do not follow the same rules. You should forget about changing your phone number — it won’t help. Within days of setting up my work phone — nobody had my number — it was barraged with spam calls. The robocallers aren’t dialing you from a preexisting list; they’re dialing phones at random using computer-generated numbers. Often the spammers will reel off a list of numbers based on your own area code to make the caller look more local and convincing. Sometimes the spoofing is done so badly that there are extra digits in the phone numbers.

Another option for the most annoying of robocalls is to use a third-party app, one that screens and manages your calls on your device.

There are, however, privacy tradeoffs with third-party apps. First, you’re giving information about who calls you — and sometimes who you call — to another company that isn’t your cell carrier. That additional exposure puts your data at risk — we’ve all seen cases of cell data leaking. But the small monthly cost of these apps is worth it if it means they don’t make money off your data, such as by serving you ads. Some apps will ask for access to your phone contacts — be extremely mindful of this.

The three apps we’ve selected balance privacy, cost and features.

  • Nomorobo has a constantly updated database of more than 800,000 phone numbers, which lets the app proactively block spammy incoming calls while still allowing legal robocalls through, like school closures and emergency alerts. Unlike other apps, it doesn’t ask for access to your contacts, and it can also protect against spam texts. It’s $1.99 per month and comes with a 14-day free trial. Available for iOS and Android.
  • Hiya is an ad-free spam and robocall blocker that powers Samsung’s Smart Call service. Hiya pulls in caller profile information to tell you who’s calling. The app doesn’t automatically ask for access to your contacts but it’s an option for some of the enhanced features, though its privacy policy says it may upload them to its servers. The app has a premium feature set at $2.99 per month after a seven-day trial. Available for iOS and Android.
  • RoboKiller is another spam call blocker with a twist: it has the option to answer spam calls with prerecorded audio that aims to waste the bot’s time. Better yet, you can listen back to the recording for your own peace of mind. The app has more than 1.1 million numbers in its database. The app was awarded $25,000 by the Federal Trade Commission following a contest at the security conference Def Con in 2015. RoboKiller’s full feature set is available on iOS and is slowly rolling out to Android users. The app starts at $0.99 per month. Available for iOS and Android.

You may find one app better than another. It’s worth experimenting with each app one at a time, which you can do with their free trials.

WHAT YOUR PHONE CAN DO FOR YOU

There are some more drastic but necessary options at your disposal.

Both iOS and Android devices can block callers. On one hand this helps against repeat offenders; on the other it’s a constant game of Whac-a-Mole. Your phone’s built-in blocking feature prevents audio calls, video calls and text messages from coming through — but you have to block each number as it comes in.

How to block spam calls on an iPhone (left) and filter spam calls on Android (right).

Some Android versions are different, but on most you can go to Settings > Caller ID & Spam and switch the feature on. Be aware that incoming and outgoing call data will be sent to Google. You can also block individual numbers by going to Phone > Recents and tapping on each spam number to Block and Report call as spam, which helps improve Google’s spam-busting efforts.

iPhones don’t come with a built-in spam filter, but you can block calls nonetheless. Go to Phone > Recents and tap the information button next to a call record. Press Block this Caller and that number will not be able to contact you again.

You can also use each device’s Do Not Disturb feature — a more drastic technique that stops calls and notifications from bugging you when you’re busy. On both iOS and Android, the feature blocks calls by default unless you whitelist each number.

How to enable Do Not Disturb on an iPhone (left) and Android (right).

In Android, swipe down from the notifications area and hit the Do Not Disturb icon, a bubble with a line through it. To change its settings, long tap on the button. From here, go to Exceptions > Calls. If you want to only allow calls from your contacts, select From contacts only or From starred contacts only for a more granular list. Your phone will only ring if a contact in your phone book calls you.

It’s almost the same in iOS. You can swipe up from your notifications area and hit the Do Not Disturb icon, shaped as a moon. To configure your notifications, go to Settings > Do Not Disturb and scroll down to Phone. From here you can set it so you only Allow Calls From your contacts or your favorites.

WHAT THE REGULATORS CAN DO

Robocalls aren’t going away unless they’re stamped out at the source. That requires an industry-wide effort — and the U.S. just isn’t quite there yet.

You might be surprised to learn that robocalls aren’t nearly as frequent or as common in Europe. In the U.K., the carriers and the communications regulator Ofcom worked together in recent years to pool their technical and data-sharing resources to find ways to prevent misuse on the cell networks.

Collectively, more than a billion calls have been stopped in the past year. Vodafone, one of the largest networks in Europe, said it alone prevents around two million automated calls from reaching customers each day.

“In the U.K., the problem has been reduced by every major operator implementing techniques to reject nuisance calls,” said Vodafone’s Laura Hind in an email to TechCrunch. “These are generally based on evidence from customer complaints and network heuristics.”

Though collaboration and sharing spam numbers are important, technology is vital to crushing the spammers. Because most calls nowadays rely in some way on voice-over-IP, it’s easier than ever to prevent spoofed calls. Ofcom, with help from privacy regulator the Information Commissioner’s Office, plans to introduce technical measures this year to bring caller authentication into effect and weed out spoofed spam calls.

The reality is that there are solutions to fix the robocall and spammer problem. The downside is that it’s up to the cell carriers to act.

Federal regulators are as sick of the problem as everyone else, and are ramping up the pressure on the big four to take the situation more seriously. Earlier this year, FCC Chairman Ajit Pai threatened “regulatory intervention” if carriers don’t roll out a system that properly identifies real callers.

One authentication system, known as Secure Telephone Identity Revisited and Signature-based Handling of Asserted Information Using Tokens — or STIR/SHAKEN — would make call spoofing nearly impossible. The system relies on every phone number having a unique digital signature which, when checked against the cell networks, proves the caller is real. The carrier then approves the call and patches it through to the recipient. This happens near-instantly.
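To make the sign-and-verify flow concrete, here is a toy sketch in Python. Real STIR/SHAKEN uses certificate-backed ES256 signatures over a standardized token (PASSporT); the shared-secret HMAC, key, and phone numbers below are stand-ins purely for illustration.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical key material; real carriers use per-carrier certificates.
CARRIER_KEY = b"originating-carrier-secret"

def sign_call(orig_number: str, dest_number: str) -> str:
    """Originating carrier attests to the caller's number and signs it."""
    claims = json.dumps({
        "orig": orig_number,
        "dest": dest_number,
        "iat": int(time.time()),  # issued-at timestamp limits replay
    }, sort_keys=True).encode()
    sig = hmac.new(CARRIER_KEY, claims, hashlib.sha256).digest()
    return base64.b64encode(claims).decode() + "." + base64.b64encode(sig).decode()

def verify_call(token: str) -> bool:
    """Terminating carrier recomputes the signature before ringing the phone."""
    claims_b64, sig_b64 = token.split(".")
    claims = base64.b64decode(claims_b64)
    expected = hmac.new(CARRIER_KEY, claims, hashlib.sha256).digest()
    return hmac.compare_digest(expected, base64.b64decode(sig_b64))

token = sign_call("+12025550123", "+14155550199")
assert verify_call(token)  # legitimate call passes

# A spoofer who swaps in a different originating number breaks the signature.
claims_b64, sig_b64 = token.split(".")
forged = base64.b64encode(
    base64.b64decode(claims_b64).replace(b"12025550123", b"12025550999")
).decode()
assert not verify_call(forged + "." + sig_b64)  # spoofed call fails
```

The point of the design is the last check: a call whose claimed number doesn’t match the signed attestation — which is exactly what spoofing does — is rejected before your phone ever rings.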

The carriers have so far promised to implement the protocol, though the system isn’t expected to go into effect across the board for months — if not another year. So far only AT&T and Comcast have tested the protocol — with success. But there is still a way to go.

Until then, don’t let the spammers win.


Taxing your privacy


Data collection through mobile tracking is big business, and the potential for companies helping governments monetize this data is huge. For consumers, protecting yourself against the who, what and where of data flow is just the beginning. The question now is: How do you ensure your data isn’t costing you money in the form of new taxes, fees and bills — particularly when the entity that stands to benefit from this data, the government, is also tasked with protecting it?

The advances in personal data collection are a source of growing concern for privacy advocates. But whereas most fears tend to focus on what type of data is being collected, who’s watching, and to whom your data is being sold, the potential for this same data to be monetized via auditing and compliance fees is even more problematic.

The fact is, you no longer need massive infrastructure to track and tax businesses and consumers. State governments and municipalities have taken notice.

The result is a potential multi-billion dollar per-year business that, with mobile tracking technology, will only grow exponentially year over year.

Yet, while the revenue upside for companies helping smart cities (and states) with taxing and tolling is significant, it is also rife with contradictions and complications that could, ultimately, pose serious problems to those companies’ underlying business models and for the investors that bet heavily on them.


The most common argument when privacy advocates bring up concerns around mobile data collection is that consumers almost always have the control to opt out. When governments utilize this data, however, that option is not always available. And the direct result is the monetization of a consumer’s privacy in the form of taxes and tolls. In an era where states like California and others are stepping up as self-proclaimed defenders of citizen privacy and consent, this puts everyone involved in an awkward position — to say the least.

The marriage of smart cities and next-gen location tracking apps is becoming more commonplace.  AI, always-on data flows, sensor networks and connected devices are all being employed by governments in the name of sustainable and equitable cities as well as new revenue.

New York, LA and Seattle are all implementing (or considering implementing) congestion pricing that would ultimately rely on harvesting personal data in some form or another. Oregon, which passed the first gas tax in 1919, began its OReGO program two years ago, using data on miles driven to levy fees on drivers to address infrastructure issues with its roads and highways.


As more state and local governments look to emulate these kinds of policies, the revenue opportunity for companies and investors harvesting this data is obvious. Populus, a data platform (and a portfolio company) that helps cities manage mobility, captures data from fleets like Uber and Lyft to help cities set policy and collect fees.

Similarly, ClearRoad is a “road pricing transaction processor” that leverages data from vehicles to help governments determine road usage for new revenue streams. Safegraph, on the other hand, collects millions of trackers from smartphones every day via apps, APIs and other delivery methods, often leaving the business of disclosure up to third parties. Data like this has begun to make its way into smart city applications, which could impact industries as varied as real estate and the gig economy.

“There are lots of companies that are using location technology, 3D scanning, sensor tracking and more. So, there are lots of opportunities to improve the effectiveness of services and for governments to find new revenue streams,” says Paul Salama, COO of ClearRoad. “If you trust the computer to regulate, as opposed to the written code, then you can allow for a lot more dynamic types of regulation, and that extends beyond vehicles to noise pollution, particulate emissions, temporary signage, etc.”

While most of these platforms and technologies endeavor to do some public good by creating the baseline for good policy and sustainable cities they also raise concerns about individual privacy and the potential for discrimination.  And there is an inherent contradiction for states ostensibly tasked with curbing the excesses of data collection then turning around and utilizing that same data to line the state’s coffers, sometimes without consent or consumer choice.


“People care about their privacy and there are aspects that need to be hashed out,” says Salama. “But we’re talking about a lot of unknowns on that data governance side. There’s definitely going to be some sort of reckoning at some point but it’s still so early on.”

As policy makers and people become more aware of mobile phone tracking and the largely unregulated data collection associated with it, the question facing companies in this space is how to extract all this societally beneficial data while balancing that against some pretty significant privacy concerns.

“There will be options,” says Salama.  “An example is Utah which, starting next year, will offer electric cars the option to pay a flat fee (for avoiding gas taxes) or pay-by-the-mile.  The pay-by-the-mile option is GPS enabled but it also has additional services, so you pay by your actual usage.”

Ultimately, for governments, regulation plus transparency seems the likeliest way forward.


In most instances, the path to the consumer or taxpayer is either through their shared-economy vehicle (car, scooter, bike, etc.) or through their mobile device. While taxing fleets is indirect and provides some measure of political cover for the governments generating revenue off of them, there is no such cover for directly taxing citizens via data gathered through mobile apps.

The best-case scenario for governments to short-circuit these inherent contradictions is to actually offer choice, in the form of an opt-in for some value exchange or preferred billing method, such as Utah’s opt-in alternative for paying for road use versus the gas tax. It may not satisfy all privacy concerns, particularly when it is the government sifting through your data, but it at least offers a measure of choice and a tangible value.

If data collection and sharing were still mainly the purview of B2B businesses and global enterprises, perhaps the rising outcry over the methods and usage of data collection would remain relatively muted. But as data usage seeps into more aspects of everyday life and is adopted by smart cities and governments across the nation, questions around privacy will invariably get more heated, particularly when citizen-consumers start feeling the pinch in their wallets.

As awareness rises and inherent contradictions are laid bare, regulation will surely follow and those businesses not prepared may face fundamental threats to their business models that ultimately threaten their bottom line.

This early GDPR adtech strike puts the spotlight on consent


What does consent as a valid legal basis for processing personal data look like under Europe’s updated privacy rules? It may sound like an abstract concern, but for online services that rely on doing things with user data in order to monetize free-to-access content, this is a key question now that the region’s General Data Protection Regulation is firmly in place.

The GDPR is actually clear about consent. But if you haven’t bothered to read the text of the regulation, and have instead just looked at some of the self-styled consent management platforms (CMPs) floating around the web since May 25, you’d probably have trouble guessing it.

Confusing and/or incomplete consent flows aren’t yet extinct, sadly. But it’s fair to say those that don’t offer full opt-in choice are on borrowed time.

Because if your service or app relies on obtaining consent to process EU users’ personal data — as many free at the point-of-use, ad-supported apps do — then the GDPR states consent must be freely given, specific, informed and unambiguous.

That means you can’t bundle multiple uses for personal data under a single opt-in.

Nor can you obfuscate consent behind opaque wording that doesn’t actually specify the thing you’re going to do with the data.

You also have to offer users the choice not to consent. So you cannot pre-tick all the consent boxes that you really wish your users would freely choose — because you have to actually let them do that.

It’s not rocket science but the pushback from certain quarters of the adtech industry has been as awfully predictable as it’s horribly frustrating.

This has not gone unnoticed by consumers either. Europe’s Internet users have been filing consent-based complaints thick and fast this year. And a lot of what is being claimed as ‘GDPR compliant’ right now likely is not.

So, some six months in, we’re essentially in a holding pattern waiting for the regulatory hammers to come down.

But if you look closely there are some early enforcement actions that show some consent fog is starting to shift.

Yes, we’re still waiting on the outcomes of major consent-related complaints against tech giants. (And stockpile popcorn to watch that space for sure.)

But late last month French data protection watchdog, the CNIL, announced the closure of a formal warning it issued this summer against drive-to-store adtech firm, Fidzup — saying it was satisfied it was now GDPR compliant.

Such a regulatory stamp of approval is obviously rare this early in the new legal regime.

So while Fidzup is no adtech giant its experience still makes an interesting case study — showing how the consent line was being crossed; how, working with CNIL, it was able to fix that; and what being on the right side of the law means for a (relatively) small-scale adtech business that relies on consent to enable a location-based mobile marketing business.

From zero to GDPR hero?

Fidzup’s service works like this: It installs kit inside (or on) partner retailers’ physical stores to detect the presence of specific users’ smartphones. At the same time it provides an SDK to mobile developers to track app users’ locations, collecting and sharing the advertising ID and wi-fi ID of users’ smartphones (which, along with location, are judged personal data under the GDPR).

Those two elements — detectors in physical stores; and a personal data-gathering SDK in mobile apps — come together to power Fidzup’s retail-focused, location-based ad service which pushes ads to mobile users when they’re near a partner store. The system also enables it to track ad-to-store conversions for its retail partners.

The problem Fidzup had, back in July, was that after an audit of its business the CNIL deemed it did not have proper consent to process users’ geolocation data to target them with ads.

Fidzup says it had thought its business was GDPR compliant because it took the view that app publishers were the data processors gathering consent on its behalf; the CNIL warning was a wake up call that this interpretation was incorrect — and that it was responsible for the data processing and so also for collecting consents.

The regulator found that when a smartphone user installed an app containing Fidzup’s SDK they were not informed that their location and mobile device ID data would be used for ad targeting, nor the partners Fidzup was sharing their data with.

The CNIL also said users should have been clearly informed before any data was collected — so they could choose whether to consent — rather than, as was the case, being given the information after the fact of the processing, via general app conditions or in-store posters.

It also found users had no choice to download the apps without also getting Fidzup’s SDK, with use of such an app automatically resulting in data transmission to partners.

Fidzup had also only been asking users to consent to the processing of their geolocation data for the specific app they had downloaded — not for the targeted ad purposes with retail partners that are the substance of the firm’s business.

So there was a string of issues. And when Fidzup was hit with the warning the stakes were high, even with no monetary penalty attached. Because unless it could fix the core consent problem, the 2014-founded startup might have faced going out of business. Or having to change its line of business entirely.

Instead it decided to try and fix the consent problem by building a GDPR-compliant CMP — spending around five months liaising with the regulator, and finally getting a green light late last month.

A core piece of the challenge, as co-founder and CEO Olivier Magnan-Saurin tells it, was how to handle multiple partners in this CMP because its business entails passing data along the chain of partners — each new use and partner requiring opt-in consent.

“The first challenge was to design a window and a banner for multiple data buyers,” he tells TechCrunch. “So that’s what we did. The challenge was to have something okay for the CNIL and GDPR in terms of wording, UX etc. And, at the same time, some things that the publisher will allow to and will accept to implement in his source code to display to his users because he doesn’t want to scare them or to lose too much.

“Because they get money from the data that we buy from them. So they wanted to get the maximum money that they can, because it’s very difficult for them to live without the data revenue. So the challenge was to reconcile the need from the CNIL and the GDPR and from the publishers to get something acceptable for everyone.”

As a quick related aside, it’s worth noting that Fidzup does not work with the thousands of partners that an ad exchange or demand-side platform most likely would.

Magnan-Saurin tells us its CMP lists 460 partners. So while that’s still a lengthy list to have to put in front of consumers — it’s not, for example, the 32,000 partners of another French adtech firm, Vectaury, which has also recently been on the receiving end of an invalid consent ruling from the CNIL.

In turn, that suggests the ‘Fidzup fix’, if we can call it that, only scales so far; adtech firms that are routinely passing millions of people’s data around thousands of partners look to have much more existential problems under GDPR — as we’ve reported previously re: the Vectaury decision.

No consent without choice

Returning to Fidzup, its fix essentially boils down to actually offering people a choice over each and every data processing purpose, unless it’s strictly necessary for delivering the core app service the consumer was intending to use.

Which also means giving app users the ability to opt out of ads entirely — without being penalized by losing the ability to use the app’s features.

In short, you can’t bundle consent. So Fidzup’s CMP unbundles all the data purposes and partners to offer users the option to consent or not.

“You can unselect or select each purpose,” says Magnan-Saurin of the now compliant CMP. “And if you want only to send data for, I don’t know, personalized ads but you don’t want to send the data to analyze if you go to a store or not, you can. You can unselect or select each consent. You can also see all the buyers who buy the data. So you can say okay I’m okay to send the data to every buyer but I can also select only a few or none of them.”

“What the CNIL ask is very complicated to read, I think, for the final user,” he continues. “Yes it’s very precise and you can choose everything etc. But it’s very complete and you have to spend some time to read everything. So we were [hoping] for something much shorter… but now okay we have something between the initial asking for the CNIL — which was like a big book — and our consent collection before the warning which was too short with not the right information. But still it’s quite long to read.”

Fidzup’s CNIL-approved, GDPR-compliant consent management platform

“Of course, as a user, I can refuse everything. Say no, I don’t want my data to be collected, I don’t want to send my data. And I have to be able, as a user, to use the app in the same way as if I accept or refuse the data collection,” he adds.

He says the CNIL was very clear on the latter point — telling it they could not require collection of geolocation data for ad targeting for usage of the app.

“You have to provide the same service to the user if he accepts or not to share his data,” he emphasizes. “So now the app and the geolocation features [of the app] works also if you refuse to send the data to advertisers.”
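The unbundled consent model described above — a separate, default-off opt-in for every processing purpose and every data buyer — can be sketched as a simple data structure. The purpose and buyer names here are illustrative, not Fidzup’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Per-user consent state: one flag per purpose, one per data buyer."""
    purposes: dict = field(default_factory=dict)  # purpose name -> opted in?
    buyers: dict = field(default_factory=dict)    # buyer name -> opted in?

    def allows(self, purpose: str, buyer: str) -> bool:
        # Absence of an entry means no consent: the GDPR requires an
        # affirmative opt-in, so every default is "off", never pre-ticked.
        return self.purposes.get(purpose, False) and self.buyers.get(buyer, False)

user = ConsentRecord()
user.purposes["personalized_ads"] = True       # user ticked this box
user.purposes["store_visit_analysis"] = False  # user explicitly declined
user.buyers["retailer_a"] = True

assert user.allows("personalized_ads", "retailer_a")
assert not user.allows("store_visit_analysis", "retailer_a")
assert not user.allows("personalized_ads", "retailer_b")  # never asked -> off
```

Data flows only where both the purpose and the recipient were individually consented to — which is why bundled, all-or-nothing consent boxes fail the test.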

This is especially interesting in light of the ‘forced consent’ complaints filed against tech giants Facebook and Google earlier this year.

These complaints argue the companies should (but currently do not) offer an opt-out of targeted advertising, because behavioural ads are not strictly necessary for their core services (i.e. social networking, messaging, a smartphone platform etc).

Indeed, data gathering for such non-core service purposes should require an affirmative opt-in under GDPR. (An additional GDPR complaint against Android has also since attacked how consent is gathered, arguing it’s manipulative and deceptive.)

Asked whether, based on his experience working with the CNIL to achieve GDPR compliance, it seems fair that a small adtech firm like Fidzup has had to offer an opt-out when a tech giant like Facebook seemingly doesn’t, Magnan-Saurin tells TechCrunch: “I’m not a lawyer but based on what the CNIL asked us to be in compliance with the GDPR law I’m not sure that what I see on Facebook as a user is 100% GDPR compliant.”

“It’s better than one year ago but [I’m still not sure],” he adds. “Again it’s only my feeling as a user, based on the experience I have with the French CNIL and the GDPR law.”

Facebook of course maintains its approach is 100% GDPR compliant.

Even as data privacy experts aren’t so sure.

One thing is clear: If the tech giant was forced to offer an opt out for data processing for ads it would clearly take a big chunk out of its business — as a sub-set of users would undoubtedly say no to Zuckerberg’s “ads”. (And if European Facebook users got an ads opt out you can bet Americans would very soon and very loudly demand the same, so…)

Bridging the privacy gap

In Fidzup’s case, complying with GDPR has had a major impact on its business because offering a genuine choice means it’s not always able to obtain consent. Magnan-Saurin says there is essentially now a limit on the number of device users advertisers can reach because not everyone opts in for ads.

Although, since it’s been using the new CMP, he says a majority are still opting in (or, at least, this is the case so far) — showing one consent chart report with a ~70:30 opt-in rate, for example.

He expresses the change like this: “No one in the world can say okay I have 100% of the smartphones in my data base because the consent collection is more complete. No one in the world, even Facebook or Google, could say okay, 100% of the smartphones are okay to collect from them geolocation data. That’s a huge change.”

“Before that there was a race to the higher reach. The biggest number of smartphones in your database,” he continues. “Today that’s not the point.”

Now he says the point for adtech businesses with EU users is figuring out how to extrapolate from the percentage of user data they can (legally) collect to the 100% they can’t.

And that’s what Fidzup has been working on this year, developing machine learning algorithms to try to bridge the data gap so it can still offer its retail partners accurate predictions for tracking ad to store conversions.

“We have algorithms based on the few thousand stores that we equip, based on the few hundred mobile advertising campaigns that we have run, and we can understand for a store in London in… sports, fashion, for example, how many visits we can expect from the campaign based on what we can measure with the right consent,” he says. “That’s the first and main change in our market; the quantity of data that we can get in our database.”

“Now the challenge is to be as accurate as we can be without having 100% of real data — with the consent, and the real picture,” he adds. “The accuracy is less… but not that much. We have a very, very high standard of quality on that… So now we can assure the retailers that with our machine learning system they have nearly the same quality as they had before.

“Of course it’s not exactly the same… but it’s very close.”
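Fidzup doesn’t disclose how its algorithms work, but the core idea it describes — extrapolating total store visits from the consented subset — can be sketched in a few lines. This is a hypothetical illustration only, assuming opted-in users behave like everyone else; the function name and numbers are invented, not Fidzup’s actual model.

```python
def estimate_total_visits(measured_visits, opt_in_rate):
    """Extrapolate total campaign-driven store visits from the
    consented subset, assuming opted-in users are representative
    of the full population of device owners.
    """
    if not 0 < opt_in_rate <= 1:
        raise ValueError("opt_in_rate must be in (0, 1]")
    return measured_visits / opt_in_rate

# With the ~70:30 opt-in split mentioned above, 420 visits measured
# from consenting users implies roughly 600 visits overall.
print(round(estimate_total_visits(420, 0.7)))  # → 600
```

In practice a model like Fidzup’s would condition on store category, location and campaign type rather than apply a single flat ratio, which is where the machine learning comes in.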

Having a CMP that’s had regulatory ‘sign-off’, as it were, is something Fidzup is now hoping to turn into a new line of business.

“The second change is more like an opportunity,” he suggests. “All the work that we have done with CNIL and our publishers we have transferred it to a new product, a CMP, and we offer today to all the publishers who ask to use our consent management platform. So for us it’s a new product — we didn’t have it before. And today we are the only — to my knowledge — the only company and the only CMP validated by the CNIL and GDPR compliant so that’s useful for all the publishers in the world.”

It’s not currently charging publishers to use the CMP but will be seeing whether it can turn it into a paid product early next year.

How then, after months of compliance work, does Fidzup feel about GDPR? Does Magnan-Saurin believe the regulation is making life harder for startups than for tech giants — as certain lobby groups have claimed, arguing the law risks entrenching the dominance of better resourced incumbents? Or does he see opportunities?

In Magnan-Saurin’s view, six months into GDPR, European startups are at an R&D disadvantage versus tech giants, because U.S. companies like Facebook and Google are not (yet) subject to a similarly comprehensive privacy regulation at home — so it’s easier for them to bag up user data for whatever purpose they like.

Though it’s also true that U.S. lawmakers are now paying earnest attention to the privacy policy area at a federal level. (And Google’s CEO faced a number of tough questions from Congress on that front just this week.)

“The fact is Facebook-Google they own like 90% of the revenue in mobile advertising in the world. And they are American. So basically they can do all their research and development on, for example, American users without any GDPR regulation,” he says. “And then apply a pattern of GDPR compliance and apply the new product, the new algorithm, everywhere in the world.

“As a European startup I can’t do that. Because I’m a European. So once I begin the research and development I have to be GDPR compliant so it’s going to be longer for Fidzup to develop the same thing as an American… But now we can see that GDPR might be beginning a ‘world thing’ — and maybe Facebook and Google will apply the GDPR compliance everywhere in the world. Could be. But it’s their own choice. Which means, for the example of the R&D, they could do their own research without applying the law because for now U.S. doesn’t care about the GDPR law, so you’re not outlawed if you do R&D without applying GDPR in the U.S. That’s the main difference.”

He suggests some European startups might relocate R&D efforts outside the region to try to work around the legal complexity around privacy.

“If the law is meant to bring the big players to better compliance with privacy I think — yes, maybe it goes in this way. But the first to suffer is the European companies, and it becomes an asset for the U.S. and maybe the Chinese… companies because they can be quicker in their innovation cycles,” he suggests. “That’s a fact. So what could happen is maybe investors will not invest that much money in Europe than in U.S. or in China on the marketing, advertising data subject topics. Maybe even the French companies will put all the R&D in the U.S. and destroy some jobs in Europe because it’s too complicated to do research on that topics. Could be impacts. We don’t know yet.”

The fact that GDPR enforcement has — perhaps inevitably — started small, with so far only a handful of warnings against relative data minnows rather than any swift action against the industry-dominating adtech giants, is being felt as yet another inequality at the startup coalface.

“What’s sure is that the CNIL started to send warnings not to Google or Facebook but to startups. That’s what I can see,” he says. “Because maybe it’s easier to see I’m working on GDPR and everything but the fact is the law is not as complicated for Facebook and Google as it is for the small and European companies.”

MetaCert’s Cryptonite can catch phishing links in your email

in api/apple inc/Artificial Intelligence/Bank of America/ceo/chat room/cryptocurrencies/cybersecurity startup/Delhi/digital audio/e-commerce/Economy/email/ethereum/Federal Bureau of Investigation/Finance/gmail/India/internet culture/iPhone/iTunes/itunes ping/Mac Mail/mobile device/mobile devices/money/Outlook/Paul Walsh/PayPal/Politics/Samsung/slack/social media/TC/Wells Fargo by

MetaCert, founded by Paul Walsh, originally began as a way to watch chat rooms for fake Ethereum scams. Walsh, who was an early experimenter in cryptocurrencies, grew frustrated when he saw hackers dumping fake links into chat rooms, resulting in users regularly losing cash to scammers.

Now Walsh has expanded his software to email. The new product displays little green or red shields next to links, confirming whether a link is what it appears to be: a fake link appears red, while a real PayPal link, say, appears green. The plugin, called Cryptonite, works with Apple’s Mail app on the iPhone.

“The system utilizes the MetaCert Protocol infrastructure/registry,” said Walsh. “It contains 10 billion classified URLs. This is at the core of all of MetaCert’s products and services. It’s a single API that’s used to protect over 1 million crypto people on Telegram via a security bot and it’s the same API that powers the integration that turned off phishing for the crypto world in 2017. Even when links are shortened? MetaCert unfurls them until it finds the real destination site, and then checks the Protocol to see if it’s verified, unknown or classified as phishing. It does all this in less than 300ms.”
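The flow Walsh describes — unfurl a shortened link to its real destination, then look it up in the registry for a verified/unknown/phishing verdict — can be sketched roughly as follows. This is a hypothetical illustration: the function names and in-memory tables are invented stand-ins, not MetaCert’s actual API, which resolves redirects over the network against a registry of billions of classified URLs.

```python
# Stand-in registry of classified URLs (MetaCert's real one is a
# remote service, not a local dict).
REGISTRY = {
    "https://www.paypal.com": "verified",
    "https://paypa1-login.example": "phishing",
}

# Stand-in for following HTTP redirects from link shorteners.
SHORTENER_REDIRECTS = {
    "https://sho.rt/abc": "https://paypa1-login.example",
}

def unfurl(url, max_hops=10):
    """Follow known shortener redirects until the final destination."""
    for _ in range(max_hops):
        if url not in SHORTENER_REDIRECTS:
            return url
        url = SHORTENER_REDIRECTS[url]
    return url

def classify(url):
    """Return 'verified', 'phishing' or 'unknown' for a link."""
    return REGISTRY.get(unfurl(url), "unknown")

print(classify("https://sho.rt/abc"))       # → phishing
print(classify("https://www.paypal.com"))   # → verified
```

The key design point from the quote is that classification happens only after unfurling, so a shortened phishing link can’t hide behind the shortener’s own (harmless) domain.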

Walsh is also working on a system to scan for fake news in the wild, using technology similar to his anti-phishing solution. The company is currently raising and is working on a utility token.

Walsh sees his first customers as enterprise and expects IT shops to implement the software to show employees which links are allowed, i.e. company or partner links, and which ones are bad.

“It’s likely we will approach this top down and bottom up, which is unusual for enterprise security solutions. But ours is an enterprise service that anyone can install on their phone in less than a minute,” he said. “SMEs isn’t typically a target market for email security companies but we believe we can address this massive market with a solution that’s not scary to setup and expensive to support. More research is required though, to see if our hypothesis is right.”

“With MetaCert’s security, training is reduced to a single sentence: ‘if it doesn’t have a green shield, assume it’s not safe,’” said Walsh.

Apple ditches the headphone jack on latest iPad Pro models

in AirPods/Apple/Apple earbuds/Apple Fall Event 2018/apple inc/computing/Delhi/Gadgets/Google/India/iOS/iPad/iPhone/lightning/mobile device/Politics/tablet computers/Technology by

The headphone jack is missing from the latest versions of the iPad Pro. It’s gone. Dead. Worse yet, the headphones that come with the iPhone will not work either. Apple ditched Lightning for USB-C.

The latest iPad Pro models follow the trend lines set by the iPhone. Just like on the iPhone, the Home button is missing, and the headphone jack is gone. It’s a concession some users might find surprising. On the iPhone there’s obviously less real estate to integrate a large port, but that is, in theory, less of an issue in a larger device like a tablet. Still, it makes sense: Apple tends to maximize margins by ensuring different products use a similar set of hardware, and since the iPhone hasn’t had a headphone port since 2016, it’s about time the trend hit Apple’s other mobile devices.

Headphone users are not the only ones left out. The iPad has long been a great device as a stationary audio controller. Now, instead of simply connecting the tablet to a stereo with a 3.5mm cable, a dongle is required. Want to use headphones? Apple would obviously prefer that owners use $159 AirPods, though there are a handful of USB-C headphones, including a pair from Google.
