Donald Trump’s algorithm
An interview with Peregrine W. Brown of Cambridge Analytica, who explains how technology can shape democratic rights and public opinion, and reveals how algorithms helped Trump win.
- Wednesday, 21 March 2018
Cambridge Analytica exploits data analytics to “change the commercial and political behaviour of its audience”. This is the claim made by the Anglo-American group that has been operating for the last 25 years out of its offices in London, New York, Washington DC, São Paulo, Mexico City and Malaysia. It has worked on 100 political campaigns on five continents and most recently for Donald Trump, Ted Cruz and John Bolton.
Cambridge Analytica’s methodology differs from those used by other data companies because of a particular component called Psychographics. How credible is this instrument when it comes to strategic marketing or securing an election victory?
«To my mind, psychographics and the application of psychology is just another step in the process of understanding the underlying motivations of certain behaviours and attitudes, i.e., how people receive and process information, and how it drives behaviour and opinions. We are building on approaches that already exist within commercial marketing. Many firms, whether they are working with big data or more traditional forms of market research, are already involved in what is usually described as behavioural segmentation. Behavioural analysis attempts to predict people’s actions based on the frequency and recency of their past actions. Attitudinal analysis, by contrast, refers to people’s opinions about a particular product, candidate or idea. As to the potential efficacy of this approach, I would point to a body of academic research that supports the idea that opinions are shaped by personality and values. A practical example is the framework called OCEAN, a five-factor model of personality which classifies people’s holistic personalities across a number of different traits. This is a cutting-edge application, and we are very interested to see how it can be employed in different contexts».
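The kind of trait-based segmentation described above can be sketched in a few lines of code. This is an illustrative toy example only, not Cambridge Analytica’s actual system: the trait scores, the idea of grouping people by their strongest OCEAN trait, and the per-trait message framings are all assumptions introduced here for the sake of the sketch.

```python
# Toy sketch of OCEAN-style psychographic segmentation (illustrative only):
# group an audience by each person's dominant trait, then pair each segment
# with a differently framed version of the same message.
from dataclasses import dataclass

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

# Hypothetical message framings, one per dominant trait.
FRAMINGS = {
    "openness": "emphasise novelty and change",
    "conscientiousness": "emphasise order, planning and responsibility",
    "extraversion": "emphasise social proof and energy",
    "agreeableness": "emphasise community and family",
    "neuroticism": "emphasise safety and risk reduction",
}

@dataclass
class Profile:
    name: str
    scores: dict  # trait -> score in [0, 1], e.g. from a survey or a model

def dominant_trait(profile: Profile) -> str:
    """Return the OCEAN trait with the highest score for this profile."""
    return max(TRAITS, key=lambda t: profile.scores[t])

def segment(profiles):
    """Group profile names by their dominant trait."""
    groups = {t: [] for t in TRAITS}
    for p in profiles:
        groups[dominant_trait(p)].append(p.name)
    return groups

# Invented example data.
audience = [
    Profile("A", {"openness": 0.9, "conscientiousness": 0.4,
                  "extraversion": 0.5, "agreeableness": 0.6,
                  "neuroticism": 0.2}),
    Profile("B", {"openness": 0.3, "conscientiousness": 0.8,
                  "extraversion": 0.4, "agreeableness": 0.5,
                  "neuroticism": 0.3}),
]

for trait, names in segment(audience).items():
    if names:
        print(trait, names, "->", FRAMINGS[trait])
```

In practice the trait scores themselves would be the hard part, inferred from survey responses or behavioural data; the segmentation step shown here is deliberately the simplest possible rule.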
In the words of Rick Tyler, Ted Cruz’s campaign spokesman, predictive data models were used during the campaign but failed to deliver the expected strategic advantage. Why did they not work in Cruz’s case? And how did the approach used during the victorious Trump campaign differ from the one utilised by Cruz?
«I am aware of Mr Tyler’s comments, but to my mind it is not that our particular approach did not work. Victories in Iowa and Oklahoma indicated that the Senator’s message did resonate with portions of the US population. Our role as facilitators was to present his message in a way that would resonate with people, and more specifically with people who would tend to agree with it. Ultimately, even with Cruz’s victories, it was not his time. In the case of Trump’s campaign, we were able to look at the US population through a much broader lens. Trump had a series of issue positions that resonated with a larger portion of the population than those favoured by Cruz. As a result, Trump was able to secure the nomination and win the presidency. The difference in approach ultimately relates to the size of the audience, and we were able to achieve a lot for Trump’s campaign through the application of the methodology, particularly the transfer of research into data analytics and then targeted outreach on that basis».
Following the UK’s referendum, the ICO (Information Commissioner’s Office) opened an inquiry prompted by allegations that democratic rights are being threatened because data companies are using personal data without people’s knowledge to target them with psychologically tailored political adverts. If behavioural programming can unconsciously encourage people to vote for a particular candidate or to abstain from voting, should we interpret it as the manipulation of social interaction or as a way of strengthening democracy?
«I would like to point to two facts that are important to consider in this argument. Firstly, it is very rare (and in most Western countries illegal) to actively discourage someone from voting. The second fact, which is more controversial but worth acknowledging, is that personal data being used without the permission of those to whom it belongs is very rare, particularly in Europe. The key point is that data usage often takes place with people’s tacit awareness. That raises questions about the degree to which we, as consumers of media and also as providers of information, are actively engaged in managing how much of our data we give up and how we consume media that has been targeted at us by using that data. It should be a positive thing that you get ads for products that you are likely to be interested in, or that politicians are communicating with you about issues that matter to you. I think that, as we move along the path to understanding how those dynamics work, it can be something that strengthens democracy and consumers’ rights. With reference to the ICO’s investigation, as I understand it, it is really about the ICO acknowledging that modalities of human communication are advancing. They are considering how to apply existing laws and how to make sure that everyone is in compliance with them. I would also like to point out that Cambridge Analytica did not work on the EU referendum, though we had spoken to the parties involved and are watching the developments with interest».
Further technological development implies the need to implement reforms in the field of Electoral Law. How does digital technology change politics?
«As in any number of regulated fields, technological innovation should be spurring a response from regulatory authorities, and there should be interplay between the two to make sure that the rules fit. It is important that regulatory authorities understand how regulations apply to modern technologies. Whether or not electoral reform is required depends on the ability of current regulations to govern communication that takes place in digital arenas. Consider the individuals in the USA who were blocked by Trump on Twitter and are pursuing legal action on the basis that their First Amendment right to free speech has been violated. Cases like this illustrate that it is not always new regulations that are required; often it is just about understanding how new technologies fit within existing structures».
The advent of big data raised serious concerns regarding the erosion of privacy. Is it possible to talk about a post-privacy future, and what are the implications of the emerging context?
«To my mind, the emergence of what some call the “data era” means that we are entering a time in which more information is being turned into data and digitised so that it can be analysed and used in ways that were not previously possible. With reference to the countries of the EU, it would be premature to talk about a post-privacy era in the regulatory sense; if anything, the regulations taking effect next year tighten the conditions under which people hand over their personal data. But I would take this back to a point I made at the Eastwest Forum about EU citizens taking an active role in how their data is handed over to companies or state authorities and how it is analysed and used to communicate with them specifically. If a post-privacy era is seen as one that puts the preservation of personal data at risk, I do not feel that we are moving along that path. But if by post-privacy we mean the assumption that someone will hold all of our data, yet as active consumers and wilful participants we can share that data in return for services or advertisements likely to interest us, then I think it can be something very positive as long as the regulatory environment is solid».