A.I. & data responsibility

Is your business adopting artificial intelligence (AI) responsibly?

Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment”. The most critical difference between AI and general-purpose software is in the phrase “take action”. AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.

According to PwC: “the fastest-growing category of AI is machine learning, the ability of software to improve its own activity, based on interaction with the world at large. The spectrum of AI can be divided into three: Assisted Intelligence, widely available today, improves what people and organisations are already doing. Augmented Intelligence, emerging today, enables people to do things they couldn’t otherwise do. Autonomous Intelligence, being developed for the future, establishes machines that act on their own.”

AI could contribute trillions of dollars to the global economy within a decade. However, new and advancing technologies inevitably raise new controversies and responsibility issues, and AI will undoubtedly have a significant social impact that will need to be carefully managed.

AI is likely to become increasingly powerful and useful, and thus more and more important. Yet experts caution that without ensuring that AI is deployed robustly, safely and as part of a well-managed system, businesses will create enormous risks for themselves. To counter this, businesses will need a sound grasp of what the AI agents they deploy do, on what data sets, and with what likely or possible outcomes.

Whilst AI technologies offer huge potential, they also open businesses up to new risks, including the possibility of falling victim to abuses of power by unscrupulous suppliers. If AI technologies or services are outsourced, the provenance, ownership and control of the third parties involved should be known.

AI’s impact on employment levels is likely to be profound as the number of jobs which can only be done by people diminishes. Estimates of the share of jobs at risk of automation are 47% in the USA, 35% in the UK and 77% in China, with an average of 57% across the OECD. Foxconn, a key manufacturing partner for Apple, Google and Amazon, is the world's tenth-largest employer and has already replaced 60,000 workers with robots. McDonald's former chief executive Ed Rensi told Fox Business that if the minimum wage in the US rose, the fast-food chain would consider robots, according to CLSA's Seagrim. "It's cheaper to buy a $35,000 robotic arm than it is to hire an employee who is inefficient making $15 an hour bagging French fries," he told the broadcaster.

The World Economic Forum has warned that the rise of robots will lead to a net loss of over 5 million jobs in 15 major developed and emerging economies by 2020. Increasing automation then leads to the challenge of building societies that cope with this transition, and building resilience amongst workers. The social impact of artificial intelligence may become one of increasing inequality, with the 1% benefiting enormously, to the detriment of the 99%, according to Professor Stephen Hawking. Business can play a role in building economic resilience and new industries amongst those replaced by machines.

The use and collection of data by tech companies – by Google, Amazon, Facebook, Apple etc. – raises significant issues of privacy and consent, given that it is becoming increasingly unfeasible that one can opt out and still interact fully in society and the economy. The question of ownership of data is also raised. Whilst there may be a working assumption that it belongs to the big companies that generate it, this fails to tackle privacy concerns. In Trento, Italy, hundreds of families are living with a ‘New Deal’ on Data – they get notification and control of data generated about them. It’s securely shared in an auditable way. These people actually share a lot more than those who do not get this control over their data, potentially because, once they’re in control, they recognise the value of sharing.

However, in terms of working life and data, PwC notes the possibility that companies will increasingly monitor employee data to track efficiency. Contracts may require employees to hand over more and more data on their health, performance and possibly private lives in return for job security. PwC notes that, on average, only three out of ten participants in its global survey would be happy with this.

Artificial intelligence and robotics prompt new ethical considerations. For instance, should a driverless car swerve to avoid an accident if the only alternative is crashing into pedestrians? Further, machine learning can lead to systems developing social prejudices: if artificial intelligence is used to filter job applicants, the system may replicate prejudices built into it by its creators or present in the historical data it learns from.
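
To make the second point concrete, here is a deliberately simplified sketch in plain Python (all groups, figures and function names are invented for illustration, not drawn from any real system): a model "trained" on historical hiring decisions that were biased against one group reproduces that bias for new candidates, without any explicit rule mentioning the group.

```python
# Toy illustration (hypothetical data): a model fitted to biased
# historical hiring decisions will replicate that bias.
from collections import defaultdict

# Historical decisions: (qualified, group, hired). Equally qualified
# group B candidates were hired less often than group A candidates.
history = [
    (True,  "A", True),  (True,  "A", True),  (True,  "A", True),
    (True,  "B", False), (True,  "B", False), (True,  "B", True),
    (False, "A", False), (False, "B", False),
]

# "Train" a majority-vote model per (qualified, group) combination.
votes = defaultdict(list)
for qualified, group, hired in history:
    votes[(qualified, group)].append(hired)

def predict(qualified, group):
    """Shortlist a candidate if most similar past candidates were hired."""
    outcomes = votes[(qualified, group)]
    return sum(outcomes) > len(outcomes) / 2

# Equally qualified candidates now get different outcomes by group.
print(predict(True, "A"))  # True: qualified group A candidate shortlisted
print(predict(True, "B"))  # False: qualified group B candidate rejected
```

Nothing in the learned rule names the group directly; the discrimination is inherited entirely from the training data, which is why oversight of training data matters as much as oversight of the algorithm itself.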

These ethical considerations do not mean that artificial intelligence and robotics should not be used, but rather that there is a philosophical and ethical dimension to these technologies that businesses should be aware of and fully engaged in.

Big data

Extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations, especially relating to human behaviour and interactions.

Artificial Intelligence

AI is ‘the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment’. The most critical difference between AI and general purpose software is in the phrase “take action”. AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate. The fastest-growing category of AI is machine learning, the ability of software to improve its own activity, based on interaction with the world at large.

AI can be divided into three broad types:

  • Assisted Intelligence (also known as “narrow AI”), widely available today, improves what people and organisations are already doing
  • Augmented Intelligence, emerging today, enables people to do things they couldn’t otherwise do (this is similar to Artificial General Intelligence (AGI): the level of intelligence a machine would need to attain to successfully perform any intellectual task that a human being can)
  • Autonomous Intelligence (also known as Artificial Superintelligence (ASI)), being developed for the future, establishes machines that act on their own.

Automation

The use or introduction of automatic equipment, meaning that activities previously done by people can be done by machines.

Machine Learning

A type of artificial intelligence (AI) that allows software applications to become more accurate in predicting outcomes without being explicitly programmed. Machine learning algorithms are often categorised as being supervised or unsupervised.

Algorithm

A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.
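
A simple concrete example (illustrative only, not part of this standard) is Euclid's greatest-common-divisor procedure, an algorithm in exactly this sense: a fixed set of rules a computer can follow to a guaranteed answer.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```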

Chain-of-ownership

Whilst companies may buy the use of AI technologies, this does not necessarily mean that they are the ultimate owners of the algorithm, the data that is input into it or the insights it generates. The chain-of-ownership refers to who the creators of the AI are, and who the ultimate owners and controllers of it, its inputs and its outputs are.

Answering YES

All Businesses MUST

Describe how they currently use AI and outline any future plans

Confirm that responsibility for AI is assumed at board level

Confirm that the board understands the governance and strategic imperatives of AI and that its responsible adoption is essential

Describe the types of data their AI tools interrogate for all stakeholder groups

Describe their policies and practices for safeguarding that data and for managing and acting on any insights gleaned

Describe how they ensure that any third parties they entrust to supply AI products and services are owned and controlled by legitimate and trustworthy individuals and/or organisations

All Businesses MAY

State any philosophies and beliefs they hold relating to data or AI

Explain any HR policies relating to automation and artificial technologies

Describe any partnerships they have with other organisations to use the data they collect/store

Explain how they support research into AI and whether they are part of any collaborative efforts to further public understanding and responsible development of AI

Answering NO

All Businesses MUST

Explain why they do not meet the requirements to answer YES to the question, listing the business reasons, any mitigating circumstances or any other reasons that apply

All Businesses MAY

Describe any efforts to use data and AI responsibly that do exist, even though all the requirements to answer YES to this question are not met

Mention any future intentions regarding this issue

Answering NOT APPLICABLE

All Businesses MUST

Confirm that they do not use AI in any part of their business operations

All Businesses MAY

Mention any future intentions regarding this issue

DON'T KNOW is not a permissible answer to this question

Version 1

To receive a score of 'Excellent'

Responsible use of data and artificial intelligence is a key aspect of the business. The company carefully considers and acts on issues relating to artificial intelligence and data, and works to build a resilient business, workforce and economy. Innovation on data and AI is integral to what it does.

Examples of policies and practices which may support an EXCELLENT statement (not all must be observed, enough should be evidenced to give comfort that the statement is the best of the four for the business being scored):

  1. The board understands issues relating to AI and takes active responsibility
  2. The board ensures chains of ownership of AI technologies mean that it controls its data, programs and insight
  3. The business ensures that the organisations it sources its algorithms from are reputable and transparent
  4. Employees are made aware of risks to their jobs posed by automation, and are prepared for this.
  5. Support is offered for employees to adjust to automation, for instance careers advice or retraining opportunities.
  6. The company is open about the data it collects on customers
  7. The company allows customers to opt out of data collection and still use its services
  8. Potential ethical issues are carefully considered in the development of AI
  9. Company has carefully defined what its sensitive data is and controls who has access to it
  10. Governance processes provide accountability of data owners
  11. Data that is no longer needed, or that cannot be fully protected is eliminated
  12. The company is willing to share data (in an anonymous form) to further social causes and improve public policy making and service delivery
  13. The company actively seeks out socially and environmentally beneficial data partnerships
  14. The company protects privacy of those on whom it holds data
  15. The company considers the impact on all employees and society before implementing AI technology
  16. The company engages employees in discussions on AI replacing current jobs and actively takes on board their views
  17. The company carefully considers potential ethical issues with AI technologies, and acts to resolve them
  18. HR policy takes into account how humans and machines can work together
  19. Employees are not required to share personal data
  20. Company data is used to promote other social goods – such as health, public policy or the environment
  21. Customers have easy access to any data held on them
  22. Employees have easy access to any data held on them
  23. Company is transparent about how AI is used
  24. Employs dedicated data stewards
  25. Any shared data is fully and properly anonymised
  26. Publicly commits to data responsibility
  27. Influences others on data responsibility and AI
  28. Board level responsibility is taken on issues relating to AI and data

To receive a score of 'Good'

The company takes notice of risks concerning AI and use of data, both to it and to society in general and aims to resolve them effectively. It is not innovative, but goes beyond legal requirements, and has a high level of oversight of the AI it uses.

Examples of policies and practices which may support a GOOD statement (not all must be observed, enough should be evidenced to give comfort that the statement is the best of the four for the business being scored):

  1. The board broadly understands AI in the context of its business, and takes some responsibility
  2. The company has a large degree of oversight of the chains of ownership of its algorithms, but does not have full control.
  3. The company makes efforts to ensure the suppliers of its AI are reputable
  4. Employees are aware of risks to their jobs from automation
  5. The company will share data for social and environmental benefit if directly approached, but does not seek out partnerships
  6. Employees have an opt out option when it comes to sharing certain personal data
  7. Customers can access data held on them, but it is not necessarily easily accessible
  8. Employees are engaged in discussions on AI replacing current jobs
  9. Ethical issues that may arise due to AI are considered
  10. Data is protected to an industry standard level, and is eliminated if unneeded
  11. There is some board buy-in on issues relating to data and AI, but no clear chain of responsibility
  12. The company protects data privacy well - it is less likely to have data breaches than the average in the sector
  13. The company is open about the sorts of data it collects

To receive a score of 'Okay'

The company complies with basic standards.

Examples of policies and practices which may support an OKAY statement (not all must be observed, enough should be evidenced to give comfort that the statement is the best of the four for the business being scored):

  1. At senior levels there is understanding of AI
  2. Some efforts are made to have oversight of chains of ownership
  3. The company’s stability, reputation and strategy are not compromised by risky relationships with AI suppliers
  4. Company complies with the Data Protection Act or other relevant legislation
  5. Company complies with industry standards on data security
  6. Whilst care is taken to avoid mistakes relating to AI, the broader social impact is not considered
  7. The company is willing to inform employees about risks to their jobs, but only if they actively seek out the information
  8. Complies with freedom of information requests

To receive a score of 'Poor'

The company takes an irresponsible approach to data and AI, ignoring risks and failing to respect privacy.

Examples of policies and practices which may support a POOR statement (not all must be observed, enough should be evidenced to give comfort that the statement is the best of the four for the business being scored):

  1. Chains of ownership of AI put the company’s stability, reputation and strategy at risk
  2. There is no clear position of responsibility for data or AI within the company
  3. The board and other senior figures within the company ignore issues relating to AI
  4. The company has no oversight or control of the data inputted to the algorithms used and their impacts and insights
  5. Company has experienced data breaches
  6. Company does not heed warnings on data security, or rectify problems
  7. Company fails to prepare workers for AI job losses
  8. Sells on data illegally, or fails to fully protect anonymity
  9. Employees and customers are forced to give unnecessary data to work for the organisation/use its products
  10. Data is collected and stored on customers without their consent
  11. Stakeholders have no access to their personal data
  12. The company prioritises short term profitability over workers’ welfare and safe use of AI
  13. Ethical issues are considered irrelevant at all levels
  14. Job losses due to automation are abrupt
  15. Workers are not knowledgeable about the risk automation poses to their jobs