Peculiarities and risks of big data from a social perspective
Big data as a disruptive phenomenon for 21st century society
The rise of big data —the collection of large amounts of data and the spread of advanced techniques for its use— has had a strong social impact that we are not always aware of.
Like any disruptive phenomenon, we can understand it as an irreversible change that can surely be expected to have a transformative impact. From a social perspective, is there reason for concern?
One of the great changes brought about by the spread of smart algorithms in the so-called digital wave is their great capacity for replicability and scalability. Just as with all innovations, we have to be alert to the ethical dilemmas associated with the widespread use of big data.
One well-known example is the programming of algorithms for self-driving vehicles. Regardless of the programmers’ desires, these algorithms will have to make decisions, either actively or implicitly, according to moral criteria.
Programmers have to decide, in the event of an accident, what sort of behavioural pattern they want the vehicle to follow. Do they want to minimise the physical damage, the number of victims, the danger to the driver’s life, or the harm to the brand’s reputation? How are we going to measure this damage?
In recent years, we have gathered a great deal of evidence about the social impact of big data in cases far less extreme than that of self-driving cars.
The first notable issue is inequality in access to data. If we analyse the market dynamics generated by the major digital platforms, we can see that we are entering a new competition model where different players —public, private, third-sector, etc.— have different forms and levels of access to data.
What sorts of markets are we creating in a world where —if we follow in the footsteps of the United States— Facebook and Google are poised to account for two thirds of digital advertising and 99% of growth in this market?
Who can access the information that we generate online and who ends up benefiting from it? Where does this leave users? What legal framework is in place to protect our data privacy, ownership and portability?
The General Data Protection Regulation (GDPR) —an EU data-privacy law that came into effect in May 2018— addresses some of these elements, although we know that technological progress will always outpace the ability of the law to respond to new challenges as they arise.
How should we incorporate social expectations and ethical dilemmas into the analysis and processing of large amounts of data?
Nowadays, using cookies or an internet user’s IP address, an online retailer like Amazon can estimate its customers’ purchasing power and define personalised pricing strategies that overwhelm the existing legal framework and, in practice, radically redefine the terms of competition.
Is the customer still the king —the ultimate purpose of all business activity— when a company can use customers’ data to personalise prices in ways that disadvantage the customers themselves? How will this phenomenon increase and multiply as big data becomes ever more widespread?
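To make the mechanism concrete, here is a minimal sketch of how browsing signals might be converted into a personalised price. Every signal, weight and threshold below is invented for illustration; it describes no real retailer’s model.

```python
# A minimal sketch of signal-based personalised pricing. All signals,
# weights and thresholds are invented for illustration.

BASE_PRICE = 100.0

def estimate_purchasing_power(signals: dict) -> float:
    """Crude score in [0, 1] built from hypothetical browsing signals."""
    score = 0.0
    if signals.get("device") == "high-end":  # device type as an income proxy
        score += 0.4
    if signals.get("affluent_postcode"):     # inferred from the IP address
        score += 0.4
    # Repeat visits to the same product page signal purchase intent.
    score += min(signals.get("visits_to_product", 0), 5) * 0.04
    return min(score, 1.0)

def personalised_price(signals: dict) -> float:
    """Mark the base price up by as much as 30% for high-scoring users."""
    return round(BASE_PRICE * (1 + 0.3 * estimate_purchasing_power(signals)), 2)

# Two users, same product, different prices.
casual = personalised_price({"device": "budget", "visits_to_product": 1})
keen = personalised_price({"device": "high-end", "affluent_postcode": True,
                           "visits_to_product": 5})
```

The point is not the arithmetic but the asymmetry: the customer never sees the score, while the retailer prices against it.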
Another pattern of change —important, yet imperceptible to many— is the degree of confidence we place in data and our capacity to reverse automated, large-scale decision-making processes.
The Flash Crash of 6 May 2010 —a sudden intraday drop of roughly nine per cent in US stock prices— was caused by automated trading systems. Phenomena like these are going to become increasingly frequent. As an executive asked me recently in class, is the solution to these problems really to turn off our computers?
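The dynamic is easy to caricature in code. The following toy simulation, with all numbers invented for illustration, shows how a chain of stop-loss rules can turn a one-point shock into a much larger fall, each automated sale triggering the next:

```python
# Toy model of an automated-selling cascade in the spirit of the Flash
# Crash. Each "algorithm" is a stop-loss threshold: it sells when the
# price falls below it, and its sale pushes the price down further.
# All numbers are invented for illustration.

def cascade(price: float, thresholds: list, impact: float) -> float:
    """Return the final price once every triggered stop-loss has fired."""
    pending = sorted(thresholds, reverse=True)  # highest stops trigger first
    while pending and price < pending[0]:
        pending.pop(0)   # this algorithm sells...
        price -= impact  # ...and its sale depresses the price further
    return price

# A one-point shock takes the price from 100 to 99, tripping the 99.5
# stop; that sale trips the next stop, and so on down the chain.
final = cascade(price=99.0, thresholds=[99.5, 98.7, 98.0, 97.5], impact=1.0)
```

No single rule is unreasonable on its own; it is their automated interaction, at machine speed, that produces the crash.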
Chris Anderson’s famous 2008 article in Wired, «The end of theory: The data deluge makes the scientific method obsolete», paints a bleak picture from a humanistic standpoint. Is it really true that we don’t need to know how the world works because data alone can give us an in-depth understanding of economic and social phenomena? What sorts of executives and citizens are we creating by fostering absolute faith in the ability of data to address the world’s biggest challenges?
Another element of analysis that has accompanied the rise of widespread big-data use is the question of how programmers’ underlying assumptions influence the world.
Predictive policing systems are becoming a reality. These systems allow law enforcement organisations to predict the chances of a crime occurring in a particular place —a possibility foreshadowed by the 2002 film Minority Report. The algorithm can improve the efficiency of police resources and reduce crime in certain urban areas.
Less well understood is the extent of the stigmatisation and racism in automated practices that, in practice, discriminate against people of colour. Are we aware of the social dynamics that we have unleashed with this sort of automated decision-making?
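A toy simulation, with invented figures, of the feedback loop critics describe: two districts with identical underlying crime, where patrols chase recorded incidents and recording rises wherever patrols go, so a small initial gap in the data grows on its own.

```python
# Toy model of a predictive-policing feedback loop. Districts A and B
# have *identical* underlying crime, but A starts with slightly more
# recorded incidents. Patrols follow the records, and patrols record
# far more of what happens in front of them, so the initial gap in the
# data widens on its own. All figures are invented for illustration.

records = {"A": 52, "B": 48}        # historical recorded incidents (biased start)
TRUE_CRIME = {"A": 100, "B": 100}   # actual incidents per period: identical

for _ in range(5):
    target = max(records, key=records.get)  # patrol the "high-crime" district
    for district in records:
        # Patrolled districts get 90% of incidents recorded; the other
        # district relies on citizen reports, which capture far fewer.
        detection_rate = 0.9 if district == target else 0.3
        records[district] += TRUE_CRIME[district] * detection_rate
```

After five periods, the recorded gap has grown from four incidents to over three hundred even though actual crime never differed: the system manufactures the very pattern it claims to detect.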
One possible solution to this sort of problem is algorithmic transparency. But how can we get there if most of the value of an application or digital platform is derived from the predictive capacity of its algorithms, which are intellectual property?
Finally, we need to recognise the clear trend towards hyperspecialisation that the digital revolution, left to its own devices, encourages.
At the same time, this very trend allows us to anticipate a growing demand for multidisciplinary professionals capable of drawing on their experience as an added value to transcend the silos and subsegments of scientific knowledge, repositioning organisations at the heart of the social debate and opening them up to dialogue with the world.
To close, let us quote none other than Steve Jobs: «It is in Apple’s DNA that technology alone is not enough —it’s technology married with liberal arts, married with the humanities.»
Whether or not that is strictly true, a combination of the scientific and humanistic approaches is the only way to build a better world and create responsible organisations that are more sustainable over time.
David Murillo
David Murillo has worked in the financial, public and non-profit sectors, where he has gained experience in commercial banking, as a local development agent and as an NGO manager in the field of community mental health. He has been a visiting professor at various universities, including the Copenhagen Business School (Denmark), the Frankfurt School of Finance and Management (Germany), Universidad del Pacífico (Peru) and Sogang Business School (Korea). He has advised the Catalan Government, the Spanish Ministry of Industry, the UNDP, the ILO, the IDB and other institutions on CSR-related issues. His academic work focuses on business ethics, geopolitics and globalisation studies.