For the last several years, Social SafeGuard customers have not had to worry about leaks of their private data. The companies in our network have been protected all along: their digital presence is secured, and attacks against their social assets (Facebook or otherwise) have been thwarted. So how did Cambridge Analytica get Facebook data on some 50 million people? And what can Facebook do to reverse the path the social web is on? There are two main pillars of this news story with broad implications for the future of online networks: privacy and security. Until companies recognize the need for purpose-built privacy and security tools like Social SafeGuard, they will be left wondering whether the built-in privacy and security features of social networks are enough to protect their assets. Although this news (and the underlying need to protect data) has been brewing for over a year, smart companies have not stood still. It is taken for granted that a company will pay for cyber solutions to prevent email-based attacks; why would it not do the same for communications over social networks?
How did they get the data?
Cambridge Analytica allegedly acquired Facebook user data from Cambridge researchers, who collected it using Facebook's Graph API. The Graph API allows third parties to create an app that people can install on Facebook, granting access to the social graph and user data. Facebook has been careful to point out that this incident was not a data breach, because users of the researchers' app gave it permission to access their data upon installation. Via the Graph API, apps can request access to someone's friend list, data associated with their account, and data associated with their friends' accounts. Oftentimes, when asked to review permissions, people simply don't pay attention to what's requested or are unable to understand the privacy implications of individual permissions. As expected, the design of the installation dialog matters, as highlighted in the paper "Third-party apps on Facebook: Privacy and the Illusion of Control." (1) The Graph API and Facebook Login provide added functionality that people enjoy and find useful; however, they also introduce yet another thing to think about when it comes to privacy and security.
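To make the permission model concrete, here is a minimal sketch of how a third-party app might assemble a Graph API profile request once a user has approved its permissions. The API version, field names, and token below are illustrative assumptions, not taken from Kogan's actual app:

```python
import urllib.parse

# Illustrative API version from the pre-2015 era; not a current endpoint.
GRAPH_API_BASE = "https://graph.facebook.com/v2.0"

def build_profile_request(user_id, fields, access_token):
    """Build a Graph API URL requesting the given profile fields.

    Each field maps back to a permission the user granted the app at
    install time; the API returns only what was approved.
    """
    query = urllib.parse.urlencode({
        "fields": ",".join(fields),
        "access_token": access_token,
    })
    return f"{GRAPH_API_BASE}/{user_id}?{query}"

# Under the pre-2015 API, an app could request the installer's own data
# plus fields that exposed friends' data as well.
url = build_profile_request(
    "me",
    ["likes", "friends{likes,location,relationship_status}"],
    "APP_ACCESS_TOKEN",  # placeholder; a real token comes from Facebook Login
)
print(url)
```

The point of the sketch is how low the bar is: one granted dialog, one HTTP GET, and the app holds the data.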
Let's take a step back and think about the timing. The story began in 2015, when Dr. Aleksandr Kogan, a Russian-American researcher at the University of Cambridge, UK, who also went by Aleksandr Spectre (probably not the best idea to choose a James Bond villain name as an alias), created an app called "thisisyourdigitallife" that used Facebook's aforementioned Graph API. Some 270,000 people installed the app and thus opted in to sharing personal profile data with Kogan. The app also asked permission to "access [my] friends' information," including: family members, relationship status, current cities, likes, music, TV, movies, books, quotes, education, work, websites, groups, photos, and videos. Given that the average Facebook user has roughly 200 friends, and some have many more, the app developers had access to a lot of data from a lot of people. Users of the app were told at the time that it was for academic and research purposes.
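The scale of exposure follows from simple multiplication. A back-of-the-envelope sketch using the article's own figures (the 200-friend average is rough, and overlap between friend networks is ignored):

```python
installers = 270_000   # people who installed "thisisyourdigitallife"
avg_friends = 200      # article's rough average friend count

# Upper bound: each installer plus their friends. Real friend networks
# overlap heavily, which is why the reported figure is "some 50 million"
# rather than the raw product.
exposed_profiles = installers * (1 + avg_friends)
print(f"{exposed_profiles:,}")  # 54,270,000
```

Even after discounting for overlapping friend lists, a quarter-million installs plausibly exposed tens of millions of profiles.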
Remember, as Facebook says, this was not a data breach, because at the time users could grant an app permission to collect their friends' data too. Access to user data was not made possible by a hack or a security flaw: the app used the Graph API as it was designed to be used. The Cambridge researchers did, however, break the Graph API terms of use when they passed the user data along to another party. It was (and is) against Facebook's terms of use to pass user data along. According to Re/code, Facebook's CISO, Alex Stamos, tweeted Saturday (before later deleting the tweet): "Kogan did not break into any systems, bypass any technical controls, or use a flaw in our software to gather more data than allowed. He did, however, misuse that data after he gathered it, but that does not retroactively make it a 'breach.'" Facebook is not the only company that allows third parties to build on its platform: Apple, Google (Android), Twitter, and LinkedIn all offer something similar. All adhere to a modest security standard and rely on audits or post hoc detection of misuse.
Imagine a similar approach in the rest of the cybersecurity world: allow the breach, and if the company is bothered by it, sanction the developer after the fact, once the damage is done. A better approach is immediately available for those in the know, Social SafeGuard, as the growth in protection across our enterprise customers shows.
Potential implications for Facebook
If Facebook is found in violation of its 2011 FTC consent decree, the statutory fine could reach US$2 trillion. In 2016 the penalty for unfair and deceptive acts and practices (UDAPs) increased from $16,000 to $40,000 per violation; with 50 million people affected, and each instance of improperly shared private information counting as a violation, the arithmetic reaches $2 trillion quickly. A fine that large would be roughly 4x Facebook's market cap. It's unlikely that Facebook will actually face this fine, but it goes to show that there are already laws on the books to deal with incidents like this. Facebook's market cap (and its business model) is driven by its ability to control the identity of its users. Its market cap is ultimately higher than that of Twitter or LinkedIn because it controls more data than any other firm. A recent disclosure revealed over 100 spookily precise ad-targeting categories, several sourced offline, that are used to augment Facebook's data. The owners of those offline sources share cookies and data, and in turn Facebook matches them to an identity: the holy grail of online marketing. But that data never leaves the platform.
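The US$2 trillion figure is simply the per-violation penalty multiplied across everyone affected; a quick check of the arithmetic:

```python
affected_users = 50_000_000   # people whose data was shared
fine_per_violation = 40_000   # post-2016 UDAP penalty, up from $16,000

# Each improperly shared profile counts as one violation.
max_statutory_fine = affected_users * fine_per_violation
print(f"${max_statutory_fine:,}")  # $2,000,000,000,000

# At 4x market cap, this implies a market cap of roughly $500 billion.
implied_market_cap = max_statutory_fine // 4
print(f"${implied_market_cap:,}")  # $500,000,000,000
```

The fine is a theoretical ceiling, not a prediction, but it shows how quickly per-violation penalties compound at social-network scale.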
Facebook recognizes the value of user data and derives more value from helping third parties match cookies and other signals to identify a person than it would from outright selling the data. The downside is that this concentration of data creates an attractive target for hackers and others with ill intent. The Facebook platform holds far more data than a single retailer such as Target (itself the victim of a major breach), which would only hold a narrow band of data. Facebook's business model depends on finding ways to monetize people's data. This means that if regulators expect a social network to begin with a commitment to privacy, they are barking up the wrong tree. The business model makes it difficult to convince outsiders that privacy comes first, especially since shareholders fully expect the company to leverage Facebook's unique vantage point (while, of course, maintaining security).
Over the last year, Facebook's reaction to scandal has vacillated among silence, half-measures, and backtracking. This behavior sends the wrong message to users: that the information they post on Facebook, which they view as private, will be used in ways they did not expect; that it is neither private nor secure; and that the users themselves are not safe.
What's next?
Unless more meaningful measures are taken, social networks will quickly become echo chambers of vitriol perpetuated by political agendas, and it is becoming more and more likely that a big-G Government regulator will step in to regulate Facebook and other companies that rely on user data, just as happened with the pharmaceutical industry. That experience has not been a pleasant one: consider Corporate Integrity Agreements and outside regulators overseeing corporate functions such as marketing and operations, both of which come only after paying billions of dollars in fines. If Facebook acts fast, it may be able to head off the worst of these outcomes.
Citations
(1) Third-Party apps on Facebook: Privacy and the Illusion of Control, Na Wang, Heng Xu, and Jens Grossklags. Proceedings of the ACM Symposium on Computer-Human Interaction for Management of Information Technology (CHIMIT), Boston, MA. 2011.