Introducing the Issue
Every new technology has the potential for unintended consequences that impact our communities and institutions. Whilst technology has improved people’s lives in a myriad of ways, it is also responsible for negative impacts on individuals, communities, society and the environment. Companies ought to consider their role by determining the wider implications of their technology and earning consumers’ confidence that they have done so.
This quote, in reference to Facebook, captures the essence of the discipline of responsible technology and why it is so important: “The product had not been built to put societal good first,” explained Jeff Horwitz, the reporter at The Wall Street Journal who broke the story around Frances Haugen’s leaked documents. “They built a product and didn’t really know what they were doing, and kept turning knobs to maximise engagement. And now we can look under the hood and see some pretty ugly stuff.”
Exploring the Issue
It can be difficult to develop one definition of responsible technology, as technology itself is complex, broad, vague and constantly changing. However, a simplified definition of responsible technology requires businesses to understand the complexities of their technology, consider its social impact and unintended consequences, and accept a level of responsibility and accountability for those consequences.
To create or support responsible technology, companies ought to:
- Look beyond the individual user and take into account technology’s potential impact and consequences on society as a whole
- Share how value is created in a transparent and understandable way
- Ensure best practice in technology that accounts for real, messy humans
As the world becomes more reliant on technology, businesses must adapt their own practices regarding technology to protect their customers. Too many internet services are currently insecure and/or unreliable, as well as poorly architected, designed and maintained. Insecure systems and technologies don’t just affect their users — they affect others too. For example, hacked webcams could form a botnet, or children’s data could be leaked from Internet-connected toys. Firms need to ensure their technology is safe and secure with good practices surrounding its use.
There are also concerns regarding the increase in artificial intelligence (AI) and algorithms, which can be used in all kinds of activity to inform and to make decisions, including by the public sector, companies and educational institutions. With the increased amount of user data collected today, responsible companies should aim to ensure algorithms and AI are accountable and impartial, whilst acknowledging the limitations of what they provide and making them easier for users to understand.
Businesses should also aim to provide safe and responsible technology to all of their customers, not just those who are capable of spending large amounts of money to protect their information. Unfortunately, internet technologies often operate within old systems that disenfranchise already vulnerable and marginalised people.
Overcoming these challenges requires firms to commit to developing and using responsible technology. This means innovation should consider people and the planet. Responsible technologies recognise and respect everyone’s dignity and rights, give people confidence and trust in their use, and should never knowingly create or deepen existing inequalities.
Algorithms – A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. A well-known example is Facebook, where algorithms generate each user’s newsfeed.
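To make the definition concrete, here is a minimal, purely illustrative sketch of a feed-ranking algorithm in Python. The post fields and the engagement weights are hypothetical inventions for this example; they are not how any real platform ranks content, but they show why tuning such “knobs” to maximise engagement is an editorial choice with consequences.

```python
# Illustrative only: a toy "newsfeed" ranking algorithm.
# The post fields and weights below are hypothetical, not any platform's real formula.

def rank_feed(posts, like_weight=1.0, comment_weight=2.0):
    """Order posts by a simple engagement score: likes + 2 x comments."""
    def score(post):
        return like_weight * post["likes"] + comment_weight * post["comments"]
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "a", "likes": 10, "comments": 1},   # score 12
    {"id": "b", "likes": 2,  "comments": 8},   # score 18
    {"id": "c", "likes": 5,  "comments": 0},   # score 5
]
ranked = rank_feed(posts)
print([p["id"] for p in ranked])  # → ['b', 'a', 'c']
```

Changing `comment_weight` reorders every user’s feed; the algorithm itself is just a rule, but the choice of weights embodies a value judgement.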
Artificial Intelligence (AI) – AI is ‘the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment’. The most critical difference between AI and general-purpose software is in the phrase “take action”. AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate. (See the article Musk is wrong on jobs, unless he’s right for an overview of the evolution of AI as well as definitions of technical terms and jargon.)
Big Data – Extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations, especially those relating to human behaviour and interactions.
Dominant Network Platforms – A small number of platforms are hugely dominant in their areas, like Google, Facebook, Apple, and Amazon. They wield incredible power in the markets they choose to be active in (including in buying potential competing companies, recruitment, and in R&D). Due to the Internet’s network effects, big platforms may be inevitable — but their governance and accountability are far from ideal today.
Encryption – ‘Encryption’ is the process of taking an unencrypted message (plaintext), applying a mathematical function to it (encryption algorithm with a key) and producing an encrypted message (ciphertext).
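The plaintext-to-ciphertext round trip in that definition can be sketched in a few lines of Python. This toy XOR cipher is for illustration only: XOR with a repeating key is trivially breakable, and real systems should use vetted algorithms (such as AES) from an audited cryptographic library.

```python
# Illustrative only: a toy cipher showing plaintext -> ciphertext -> plaintext.
# XOR with a repeating key is NOT secure; never use this to protect real data.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the repeating `key`.

    Applying the same function twice with the same key recovers the input,
    so one function serves as both the encryption and decryption step.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"responsible technology"
key = b"secret"                      # hypothetical key for the example
ciphertext = xor_cipher(plaintext, key)

assert ciphertext != plaintext                      # the message is scrambled
assert xor_cipher(ciphertext, key) == plaintext     # decryption recovers it
```

The point of the example is the shape of the process, not the cipher: an unencrypted message, a mathematical function parameterised by a key, and an encrypted output that only the key can reverse.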
Internet of Things (IoT) – The interconnection via the Internet of computing devices embedded in everyday objects, enabling them to send and receive data.
Machine Learning – A type of artificial intelligence (AI) that allows software applications to become more accurate in predicting outcomes without being explicitly programmed. Machine learning algorithms are often categorised as being supervised or unsupervised.
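A minimal supervised-learning example makes the phrase “without being explicitly programmed” concrete. The sketch below is a 1-nearest-neighbour classifier in plain Python; the “risk” labels and feature values are invented for illustration. Nothing in the code states a rule for what counts as high risk: the behaviour comes entirely from the labelled examples it is given, which is also why biased training data produces biased predictions.

```python
# Illustrative only: a minimal supervised learner (1-nearest-neighbour).
# It "learns" from labelled examples rather than from hand-written rules.

def predict(train, point):
    """Label a new point with the label of its closest training example."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(train, key=lambda example: dist(example[0], point))
    return nearest[1]

# Hypothetical labelled data: (features, label)
train = [
    ((1.0, 1.0), "low risk"),
    ((8.0, 9.0), "high risk"),
]

print(predict(train, (2.0, 1.5)))  # → low risk
print(predict(train, (7.5, 8.0)))  # → high risk
```

Swap in different training labels and the same code produces different decisions, which is the accountability problem the glossary entries above are pointing at.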
Personal Data – Data specific to an individual or group. Data is the lowest level of abstraction, from which information and knowledge are derived: data is collected and analysed to create information, and knowledge is built up through extensive experience working with information on a subject.
Privacy by Design – ‘Privacy by Design’ is an approach to systems engineering which takes privacy into account throughout the whole engineering process.
Responsible Technology – Creating and using technology responsibly requires considering the social impact and unintended consequences those technologies produce.
Links & Further Resources
Society is becoming increasingly aware and questioning of AI and so, rightly, trust and confidence will need to be earned by demonstrating that AI is being developed and used with people’s best interests in mind.
Responsible Technology Institute
Use this link to stay up to date with recent news regarding responsible technology. The University of Oxford and the Responsible Technology Institute also publish the Journal of Responsible Technology which can be accessed here.
MIT Technology Review Insight
Some estimate that the COVID-19 pandemic has condensed 10 years of digital transformation into a single year. While technology has helped people and businesses manage through these historic times, the acceleration of AI and machine learning creates social challenges for businesses to navigate.