Would ethical advocacy of data be possible?

Signal post, January 2021.

In today’s digital world, and especially in digital communications, we are surrounded by social media platforms such as Facebook, Twitter, YouTube and Instagram. These platforms rely on artificial intelligence (AI) algorithms that label our profiles and actions and tend to create a “bubble” around our online communication by selecting for our information feeds the kind of material the algorithm deems interesting or suitable for us. Some commentators have also argued that social media companies and their platforms have played an integral part in creating so-called surveillance or data capitalism, which increasingly shapes our way of life, often in non-transparent ways.

The reason why so many people worry about the proliferation of these seemingly harmless online bubbles is that, in addition to providing users with ever more tailored content, the algorithms behind them can also divide people and even entire societies: passively, by excluding certain viewpoints, and actively, by filling our information feeds with selective content that can manipulate our opinions and influence our actions. Some people may be trapped in these bubbles without even realising it, while others cannot or do not want to “burst” their bubbles because they are too addicted or too ideologically attached to them.

A new Finnish book entitled “What happened to us?”, written by digital journalist Jussi Pullinen, describes the story of digital platforms. Originally, social media platforms emphasised freedom of speech and freedom of information. Despite these aspirations, however, the pragmatic business logic behind the platforms tends to create information bubbles around users, which in practice limit users’ ability to access diverse information from a variety of sources. The idea behind this business logic is to serve each individual user content tailored to their perceived interests, in order to keep them more engaged, or, as some would say, “hooked” on the platform.
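
To make this logic concrete, the hypothetical Python sketch below shows how a feed that simply ranks candidate posts by a user’s past engagement with each topic ends up amplifying whatever the user already consumes. The names (Post, rank_feed, the example topics) are illustrative assumptions for this post, not the code of any real platform.

```python
from collections import Counter


class Post:
    """A single piece of content with a topic label (illustrative only)."""

    def __init__(self, post_id: str, topic: str):
        self.post_id = post_id
        self.topic = topic


def rank_feed(candidates, interaction_history, feed_size=10):
    """Rank candidate posts by how often the user has engaged with each topic.

    Topics the user already interacts with float to the top, while
    rarely-engaged topics quietly drop out of the feed -- the basic
    dynamic behind an information "bubble".
    """
    topic_counts = Counter(post.topic for post in interaction_history)
    ranked = sorted(
        candidates,
        key=lambda post: topic_counts.get(post.topic, 0),
        reverse=True,
    )
    return ranked[:feed_size]


# A user who has mostly clicked on one topic gets a feed dominated by it,
# even though the candidate pool is diverse.
history = [Post(f"h{i}", "politics_a") for i in range(8)] + [Post("h8", "science")]
candidates = [
    Post("c1", "politics_a"),
    Post("c2", "politics_b"),
    Post("c3", "science"),
    Post("c4", "culture"),
]
for post in rank_feed(candidates, history, feed_size=2):
    print(post.post_id, post.topic)  # -> c1 politics_a, c3 science
```

Real recommender systems are of course far more sophisticated than this toy ranking, but the underlying incentive, optimising for engagement, pushes in the same direction.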

Recent events at the US Capitol led Twitter to suspend the account of former President Trump because of his irresponsible actions on the platform. Twitter’s own management acknowledged that having to suspend the account reflected a failure to promote healthy conversation. As a result, some commentators raised the issue of Twitter’s growing role in guiding not only online conversation but also the course of real-world events. The power of online platforms can be seen, for example, in how social media influenced the Brexit vote and, more recently, in the rise of Nokia’s share price, which has been linked to collective action originating on the social media platform Reddit. Of course, the media’s influence in shaping public discussion and debate has always been a powerful force in society, but that should not blind us to the fact that social media and other digital online platforms now wield a new kind of fast-paced and disruptive power in societies. Indeed, the immense potential of these platforms has created new kinds of challenges for digital governance globally.

Undoubtedly, a private company has the right to suspend unwanted accounts and to choose what kind of material can be uploaded to or hosted on its platform. However, as citizens we must also ask who should have the right to control and moderate such media, and on what principles. Should it be a single person, or the company’s management team? Should there be a public “jury” that decides what is allowed and what is not? Ethically speaking, we are currently in a grey area: we urgently need further societal discussion and possibly a new “social contract” that describes the rights and responsibilities related to the use of online platforms.

Another way to view social media platforms is through the value of the data that is created and exploited within and through them. This data is often provided free of charge by users in exchange for the right to use the platforms’ services. The data users provide can thus be seen as a voluntary “fee” paid for the services the platform offers. However, it is questionable whether most users are actually aware that they are engaging in this kind of exchange involving data and money, and whether the exchange is fair at all in terms of its benefits.

The danger in accepting the current business logic of social media companies lies in the power and influence created by the massive amounts of data generated on these platforms. Indeed, it is often said that “data is the new oil of the digital economy”, and access to unprecedented amounts of data can likewise create unprecedented concentrations of wealth and power. It is therefore important to deliberate on the ethical issues around data and its governance globally, in order to avoid further exacerbating global imbalances in wealth and influence. Instead, the focus of governance should be on how the new possibilities created by social media, online platforms and data can be used to create wellbeing for everybody.

How, then, would an ethical advocacy of data be possible? The first step would be to ensure that citizens better understand the business logic of the data-driven economy, as well as their role and rights as users of online platforms and as providers of data. To achieve this, social media companies would have to adopt a more transparent approach. The second step would be to prevent the misuse of data through better regulation that concentrates on how data is used, not only on content. As citizens and consumers, we can demand a more ethical approach to data governance from our governments and tech companies. Indeed, the creation of a digital world that strives for the common good is a joint challenge for all of us.

The writers: Nina Wessberg, Nadezhda Gotcheva and Santtu Lehtinen (VTT)

Picture: VTT