
Facebook's Friday Night Fright and Monday Meltdown: Why Smart Companies (a.k.a. Social SafeGuard Customers) Were Already Managing Social Media Risk

Written by Otavio Freire | Mar 29, 2018 1:32:20 PM
As of this writing, Facebook Inc. shares are on pace to post their largest percentage decline in nearly five years, wiping out $50B in market cap, a value equal to roughly twice Twitter's entire market cap; Twitter's shares also declined 9%. Facebook is under fire from regulators after it became clear that a third-party group bought Facebook user data from an academic researcher. There has been much speculation about whether the 50 million affected users knew their data had been accessed, and amidst the headlines we are all left wondering, yet again, whether we truly understand the digital world we live in. On Friday night, March 16, Facebook made an unexpected announcement that it had suspended Cambridge Analytica. As world markets dig into the announcement, the news has prompted regulatory scrutiny from legislators in the US and the UK. US Senator Ron Wyden issued a specific request for information from Mark Zuckerberg, upping the political ante surrounding the issue.

For the last several years, Social SafeGuard customers have not had to worry about privacy leaks like this one. The companies in our network have been protected all along: their digital presence is protected, and attacks against their social assets (Facebook or others) have been thwarted. So how did Cambridge Analytica get Facebook data on some 50 million people? And what can Facebook do to reverse the path the social web is on? There are two main pillars of this news story with broad implications for the future of online networks: privacy and security. Until companies recognize the need for purpose-built privacy and security tools like Social SafeGuard, they will be left wondering whether the built-in privacy and security features of social networks are enough to protect their assets. Although this news (and the underlying need to protect data) has been brewing for over a year, smart companies have not stood still. By now it's taken for granted that a company will pay for cyber solutions to prevent email-based attacks; why would it not do the same for communications over social networks?

How did they get the data?

Cambridge Analytica allegedly acquired Facebook user data from Cambridge researchers. The researchers collected the data using Facebook's Graph API, which allows third parties to create an app that people can install on Facebook, granting access to the social graph and user data. Facebook has been careful to point out that this incident was not a data breach, because users of the researcher's app gave it permission to access their data upon installation. Via the Graph API, apps can request access to someone's friend list, data associated with their account, and data associated with their friends' accounts. Oftentimes, when asked to review permissions, people simply don't pay attention to what's requested, or are unable to understand the privacy implications of individual permissions. Unsurprisingly, the design of the installation dialog matters, as highlighted in the paper "Third-Party Apps on Facebook: Privacy and the Illusion of Control." (1) The Graph API and Facebook Login provide added functionality that people enjoy and find useful; however, they also introduce yet another thing to think about when it comes to privacy and security.
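To make the mechanics concrete, here is a minimal sketch in Python of the kind of Graph API call a third-party app makes once a person has installed it and approved the permission dialog. The token value and field list are placeholders for illustration; exactly which fields an app can read depends on the permissions the person granted.

```python
import requests

# Placeholder: a token the app receives after the person installs it
# and approves the permission dialog.
ACCESS_TOKEN = "user-access-token"

# Ask the Graph API for fields the person consented to share. Each
# field maps to a permission the install dialog requested (e.g.,
# "likes" requires the user_likes permission).
resp = requests.get(
    "https://graph.facebook.com/v2.12/me",
    params={
        "fields": "id,name,likes,friends",
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()
profile = resp.json()
print(profile["name"])
print(len(profile.get("friends", {}).get("data", [])), "friends returned")
```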

Let's take a step back and think about the timing. The story began in 2015, when a Russian-American researcher at the University of Cambridge, Dr. Aleksandr Kogan, a.k.a. Aleksandr Spectre (probably not the best idea to choose a James Bond title as an alias), created an app called "thisisyourdigitallife" that used Facebook's aforementioned Graph API. Some 270,000 people installed the app and thus opted in to sharing personal profile data with Kogan. The app also asked permission to "access [my] friends' information," including: family members, relationship status, current cities, likes, music, TV, movies, books, quotes, education, work, websites, groups, photos, and videos. Considering that the average Facebook user has roughly 200 friends, and some have many more than that, the app's developers had access to a lot of data from a lot of people! Users of the app were told at the time that it was for academic research purposes.
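For a rough sense of the friend-level access described above, here is a sketch written against the pre-2015 v1.0 style of the Graph API, the era in which an app could hold extended "friends_*" permissions. Treat the endpoint, version, and field names as illustrative recollections, not a verified reproduction of the "thisisyourdigitallife" app.

```python
import requests

ACCESS_TOKEN = "user-access-token"  # placeholder

# Under the old v1.0-era Graph API, an app granted extended
# "friends_*" permissions could read profile fields belonging to
# the installing user's *friends*, not just the user themselves.
resp = requests.get(
    "https://graph.facebook.com/v1.0/me/friends",
    params={
        "fields": "id,relationship_status,location,likes,education,work",
        "access_token": ACCESS_TOKEN,
    },
)
friends = resp.json().get("data", [])

# One install exposes ~200 friend profiles; 270,000 installs is how
# a small research app reached data on tens of millions of accounts.
print(len(friends), "friend records returned for one installing user")
```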

Remember, as Facebook says, this was not a data breach, because at the time users could grant an app permission to collect their friends' data too. Access to user data was not made possible by a hack or a security flaw: the app used the Graph API as it was designed to be used. The Cambridge researchers did, however, break the Graph API terms of use when they passed the user data along to another party. It was (and is) against Facebook's terms of use to pass user data along. According to Recode, Facebook's CISO, Alex Stamos, tweeted on Saturday (before later deleting the tweet): "Kogan did not break into any systems, bypass any technical controls, or use a flaw in our software to gather more data than allowed. He did, however, misuse that data after he gathered it, but that does not retroactively make it a 'breach.'" Facebook is not the only company that allows third parties to build on its platform: Apple, Google (Android), Twitter, and LinkedIn all offer something similar. All adhere to a modest security standard and rely on audits or post hoc detection of misuse.

Imagine a similar approach in the rest of the cybersecurity world: allow the breach, and if the company is bothered by it, sanction the developer after the fact, once the damage is done. A better approach is immediately available for those in the know: Social SafeGuard. Just look at the growth in the number of enterprise customers we protect.

Potential implications for Facebook

If Facebook is found in violation of its 2011 FTC consent decree, the statutory fine could reach US$2 trillion. In 2016 the penalty for unfair and deceptive acts and practices (UDAPs) increased from $16,000 to $40,000 per violation; 50 million people were affected, and each instance of sharing private information counts as a separate violation. A fine that large would be roughly 4x Facebook's market cap. It's unlikely that Facebook will actually face this fine, but it goes to show that there are already laws in place to deal with incidents like this. Facebook's market cap (and its business model) is driven by its ability to control the identity of its users; it is ultimately higher than Twitter's and LinkedIn's because Facebook controls more data than any other firm. A recent disclosure listed over 100 spookily precise ad-targeting categories, several sourced offline, that are used to augment Facebook's own data. The owners of those sources share cookies and data, and in turn Facebook matches them to an identity, the holy grail of online marketing, but that data never leaves the platform.
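Here is the back-of-the-envelope arithmetic behind that US$2 trillion figure, with Facebook's market cap approximated at US$500B as of March 2018 (an assumption for illustration):

```python
# Theoretical statutory exposure under the 2011 FTC consent decree.
per_violation = 40_000        # UDAP penalty per violation since 2016
affected_users = 50_000_000   # users whose data was shared

max_fine = per_violation * affected_users
print(f"maximum statutory fine: ${max_fine:,}")  # $2,000,000,000,000

# Facebook's market cap, approximate as of March 2018.
market_cap = 500_000_000_000
print(f"fine vs. market cap: {max_fine / market_cap:.0f}x")  # 4x
```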

Facebook recognizes the value of user data and derives more value from helping third parties match cookies and other user data to identify a person, certainly more value than would be generated from outright selling the data. The downside is that this concentration of data creates an attractive target for hackers and others with ill intent. The Facebook platform holds far more data than a retailer like the Target Corporation (itself a well-known hacking victim), which held only a narrow band of data. Facebook's business model depends on finding ways to monetize people's data. This means that if regulators expect a social network to begin with a commitment to privacy, they are barking up the wrong tree. The business model makes it difficult to convince outsiders that privacy comes first, especially since shareholders fully expect the company to leverage Facebook's unique vantage point (while of course maintaining security).

Facebook's reaction to a year of scandal has vacillated among silence, half-measures, and backtracking. That behavior sends the wrong message to users: that information they post on Facebook, and view as private, will be used in ways they did not expect; that it is neither private nor secure; and that the users themselves are not safe.

What's next?

Unless more meaningful measures are taken, social networks will quickly become an echo chamber of vitriol perpetuated by political agendas. It is becoming more and more likely that a big-G government regulator will step in to regulate Facebook and other companies that rely on user data, just as we saw with the pharmaceutical industry. That experience has not been a pleasant one: consider Corporate Integrity Agreements and outside regulators overseeing corporate functions such as marketing and operations, both of which come after paying billions of dollars in fines. If Facebook acts fast, it can head off the worst of these outcomes by:

  • Delegating more to a network of providers, such as Social SafeGuard, that enforce privacy and security on its behalf, creating an opportunity for users who need extra privacy and security support to easily find what they need. This is a similar approach to other parts of the tech industry, such as network gear, where a security industry emerged around the network and outside firms compete to be the best at securing it. We have a head start, but we would welcome competition.

  • For social networks, there could be a universal profile (blockchain?) and a universal set of security and privacy standards that providers such as Social SafeGuard enforce. For example, a person could provide a single profile, and deviations from it could be deemed automatically private. Alternatively, a consent-based framework similar to the EU's prior privacy directive could be put in place, so that when the data of the other 50 million people was polled, consent would automatically be requested from each of them, blocking requests for those who did not provide authorization (see the sketch following this list). In the future, consent may need to be much more explicit and easier to use, for example represented through pictorial icons, so that users can more readily digest the potential uses of their data for profiling, tracking, and third parties (read this paper to learn more about effective privacy notices).
  • Creating a self-regulatory body in which oversight is led by the industry. This is something the financial services industry already does with FINRA, for example. The industry as a whole can form and empower a watchdog to restore confidence, with the help of independent providers such as Social SafeGuard. It would serve as an outside body, with a different set of motivations, acting as a counter-weight to existing incentives and internal pressures.
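To make the consent-based idea above concrete, here is a minimal sketch of consent-gated friend-data access. Every name in it (ConsentRegistry, fetch_friend_data, and so on) is hypothetical, invented for illustration; this is not an existing Facebook or Social SafeGuard API.

```python
# Hypothetical sketch of consent-gated friend-data access.
# All names are illustrative; no real API is implied.
from dataclasses import dataclass, field


@dataclass
class ConsentRegistry:
    """Tracks which users have explicitly authorized third-party access."""
    granted: set = field(default_factory=set)

    def has_consented(self, user_id):
        # A real system would notify the user and await an answer;
        # here we simply check the registry.
        return user_id in self.granted


def fetch_friend_data(installer_friends, registry):
    """Return data only for friends who individually opted in."""
    results = {}
    for friend_id in installer_friends:
        if registry.has_consented(friend_id):
            results[friend_id] = "<profile data for %s>" % friend_id
        # Friends who never consented are excluded, rather than
        # swept up by the installing user's single click.
    return results


registry = ConsentRegistry(granted={"alice", "carol"})
print(fetch_friend_data(["alice", "bob", "carol"], registry))
# -> data for alice and carol only; bob's data stays private
```

The design choice worth noting is that the default flips: absent an explicit grant, a friend's data stays private, the opposite of how friend permissions worked in the era described above.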


Citations

(1) Na Wang, Heng Xu, and Jens Grossklags. "Third-Party Apps on Facebook: Privacy and the Illusion of Control." Proceedings of the ACM Symposium on Computer-Human Interaction for Management of Information Technology (CHIMIT), Boston, MA, 2011.