AI: Beyond benefits – our ethical responsibilities in its utilisation
Everyone in business is talking about AI, and nearly everyone has an opinion. Business owners want to bring it on board, realise its benefits and harness its power – no organisation wants to be left behind when there are tangible, measurable improvements to be gained. Employees are divided, with some confident that AI will add value to their work, streamline operations and eliminate repetitive processes, while others fear for their jobs.
Irrespective of the debate and discussion around its use, however, AI is here to stay, underpinning an explosion of new, powerful technologies with the potential to change the world. And with great power comes great responsibility: organisations adopting AI must stay alert to the possible negatives, scanning for biases and helping to put legislative safeguards into effect to protect consumers and users. Read more about integrating AI into the workplace here.
What is AI?
To better understand the likely positives and negatives of adopting AI, we need to take a closer look at what it does. AI uses intelligent agents – systems that can reason, learn and act autonomously, much as we do ourselves. And, like human beings, the more information these agents are fed about a specific task, the better they become at carrying it out. Types of artificial intelligence include natural language processing (NLP), computer vision, robotics, machine learning and deep learning, in which artificial neural networks learn from data.
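To make "the more data, the better the performance" concrete, here is a minimal sketch in Python using scikit-learn on synthetic data – a toy example for intuition only, not any of the systems discussed in this article:

```python
# Illustrative only: a classifier improves as it sees more examples.
# Assumes scikit-learn is installed; the data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 500, len(X_train)):
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    print(f"{n:>5} training examples -> test accuracy {model.score(X_test, y_test):.2f}")
```

The same feedback loop – more task-relevant data, better task performance – underlies everything from chatbots to computer vision.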
AI is automating many tasks once done by humans, improving efficiency and speed and, in some cases, making faster and more consistent decisions than we can. Businesses have seized on the opportunities it affords for streamlining operations and cutting costs. AI is a disruptor too: its capabilities have the potential to displace many jobs. But it also creates new jobs, and it can assist humans by removing repetitive tasks, freeing up time for the more enjoyable, creative aspects of our roles.

AI is naturally the subject of much current debate, and while many unknowns remain, there is an undercurrent of urgency around exploring and identifying its potential effects on humanity, both positive and negative. This inevitably leads to questions about where our own responsibilities lie in ensuring that the AI technology we are developing and using does not inadvertently discriminate against or harm others.
The dangers of AI
Much has been said about the potential dangers of AI, with some of those most closely involved in creating and developing the technology delivering stark warnings about its possible future uses – among them Geoffrey Hinton, often called the 'Godfather of AI', and Elon Musk, a signatory of an open letter describing advanced AI as posing 'profound risks to society and humanity'. When we consider that almost anything within the scope of human imagination could be automated, and made efficiently deadly, by harnessing the power of AI, their concerns feel real and immediate.
AI could be used to the detriment of humanity just as easily as for our benefit, powering more persuasive disinformation campaigns, deadlier biological weapons and more efficient systems of social surveillance and control. With careful adoption, however – building regulation in as AI is rolled out, rather than letting it play catch-up – businesses and individuals can anticipate and avoid the most likely negative outcomes. Three impacts that most clearly demand regulatory attention are environmental costs, misinformation and bias.
Environmental impact
Digital technologies have long been hailed as saviours of the environment, but as their use goes mainstream this is no longer strictly true. The burden on hardware and resources is increasing in line with the uptake of digital products and services, and the energy needed to train and run AI models is staggering: the emissions from training some large language models have been compared to those of the aviation industry. Data centres can require hundreds of thousands of gallons of water a day for cooling, which has led to initiatives to site them next to swimming pools, where the waste heat keeps the water warm – or, in Finland, to use it to heat hundreds of homes. Worldwide ICT industry emissions are expected to reach 14% of global emissions by 2040, with communications networks and data centres the heaviest contributors. To counterbalance this, many more heat-reuse schemes like those above could be designed in as IT infrastructure expands, getting the most out of the energy used to run it, and incentives rewarding organisations with the foresight to do so would accelerate adoption.
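To give a rough sense of scale, here is a hedged back-of-envelope calculation; every figure in it is an illustrative assumption, not a measurement of any real model:

```python
# Back-of-envelope training-emissions estimate. Every figure below is an
# illustrative assumption, not a measurement of any particular model.
ENERGY_KWH = 1_300_000        # assumed energy to train one large model (~1.3 GWh)
KG_CO2_PER_KWH = 0.4          # assumed average grid carbon intensity
KG_CO2_PER_FLIGHT = 1_000     # assumed per-passenger long-haul flight footprint

emissions_kg = ENERGY_KWH * KG_CO2_PER_KWH
print(f"~{emissions_kg / 1_000:,.0f} tonnes CO2e from one training run, "
      f"roughly {emissions_kg / KG_CO2_PER_FLIGHT:,.0f} passenger flights")
```

Even with generous assumptions, a single training run lands in the hundreds of tonnes of CO2e – and that is before the ongoing cost of serving the model to millions of users.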
Misinformation
AI language models make disinformation campaigns much easier, reducing the cost and effort required to create and deliver content. AI also has a record of producing inaccurate content: in December 2022, Stack Overflow, a question-and-answer site for developers, issued a temporary ban on posting ChatGPT-generated content because the coding answers it produced were too often incorrect, making such posts a violation of the site's community standards.
Disinformation campaigns could also be used to influence the outcomes of democratic elections, with the barriers to creating plausible content disappearing and social media bots posing as real voters becoming ever easier to produce and harder to detect. But the biggest, and arguably the most immediate, concern raised around the uptake of AI technologies by business and industry has been bias.
Bias in AI
It is questionable whether such a thing as neutral data really exists. AI-powered machines, which learn from the data their human creators feed them, inevitably replicate – and can even amplify – any biases contained within that data. In some cases, AI automates the very biases it was meant to eliminate. Amazon's experimental recruitment system, for example, was trained on data about previously successful candidates for technical roles. Because most of those candidates had been men, the system famously learned to favour male candidates, penalising CVs that suggested the applicant was a woman.
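The mechanism is easy to reproduce. The sketch below uses synthetic data – it is not Amazon's actual system – to show a model trained on historically skewed hiring decisions absorbing that skew into its own weights:

```python
# Sketch of the mechanism, not Amazon's system: a model trained on
# historically skewed hiring decisions learns the skew. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)              # 0 = female, 1 = male
skill = rng.normal(size=n)                  # the attribute that *should* decide
# Past decisions rewarded being male as well as being skilled:
hired = skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 1.0

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)
print(f"learned weight on gender: {model.coef_[0][0]:.2f}")  # large -> bias absorbed
print(f"learned weight on skill:  {model.coef_[0][1]:.2f}")
```

Note that simply deleting the gender column is rarely enough in practice, since other features (hobbies, schools, word choices on a CV) can act as proxies for it.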
Other notorious examples of AI bias include COMPAS, an American criminal risk-assessment programme found to wrongly flag black defendants as likely reoffenders at roughly twice the rate of their white counterparts, and Microsoft's chatbot Tay, which needed only 24 hours of interaction with its more Machiavellian users to start posting discriminatory tweets. The biases exhibited in these examples, and the damage they could do to human lives and livelihoods, are clear proof that there is still much work to do before AI can be trusted to make suitably nuanced judgements about individuals.
How to avoid bias in AI
Avoiding bias in AI is both a critical technical challenge and an ethical one. Bias can be introduced into AI systems unintentionally through biased training data, biased algorithms or biased decision-making processes. To minimise it, organisations can ensure that training data is representative, preprocess and clean data to identify and remove biases, and remain mindful of how data is collected in the first place. Regularly auditing and testing AI systems for bias is another safeguard, and there are established tools and methods to assist with this, such as disparate impact analysis (a minimal version is sketched below). Algorithmic fairness techniques can also be applied during model development with the specific aim of reducing bias, and human review and oversight will always play a vital part in any AI decision-making process.
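As a taste of what such an audit involves, here is a minimal sketch of a disparate impact analysis using the well-known 'four-fifths rule'; the groups, outcomes and threshold shown are hypothetical:

```python
# Minimal disparate impact check using the "four-fifths rule": a group whose
# selection rate falls below 80% of the best-treated group's rate is flagged.
# The data, group names and threshold here are hypothetical.
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += ok
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 30 + [("B", False)] * 70
for group, ratio in impact_ratios(outcomes).items():
    print(f"group {group}: impact ratio {ratio:.2f}"
          f"{'  <- below 0.8, investigate' if ratio < 0.8 else ''}")
```

A flagged ratio is a prompt for investigation rather than proof of discrimination, which is exactly why human review belongs in the loop.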
How is AI regulated?
This brings us back to the subject of this article and an important element of a much wider debate around AI – what are our responsibilities as individuals and organisations enjoying the benefits provided by AI technologies, and how can we ensure that our employment of these technologies and the data they generate and use does not harm other individuals and organisations?
The regulation of AI is a complex and rapidly evolving landscape. It varies significantly from country to country and encompasses everything from data privacy and ethics to safety and liability. Many countries have data privacy and protection laws that apply to AI systems: in the UK, for example, the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 set strict requirements for the use and processing of personal data, including data used by AI systems. Compliance with these regulations is crucial when developing and deploying AI applications. AI systems are also subject to safety and security regulations in some industries, such as autonomous vehicles and healthcare, and in some jurisdictions AI systems used in HR are now regulated to prevent discrimination against protected groups.
Conclusion
While the outlook is positive, determining liability for AI-related incidents remains complex, and more safeguards are needed to protect the public. Legislation to clarify the liability of AI developers and users for accidents or errors caused by AI systems is still in progress. Human and consumer rights groups such as the UK's Big Brother Watch continually identify ways in which AI, and the data it captures, negatively affects or discriminates against people, highlighting areas for improvement and in many cases launching high-profile campaigns against companies, government departments and other bodies to achieve procedural and legislative change. We cannot be far away from the establishment of government and industry bodies and standards to ensure the responsible development and use of AI technologies, setting much-needed requirements and best practices.
In future, as a fundamental element of thoughtful branding and any forward-thinking CSR strategy, organisations and brands at the forefront of the AI expansion will increasingly be expected to lead the identification and mitigation of actual and potential harms caused by AI technologies as they become apparent, ensuring they both contribute to and stay safely within emergent regulatory frameworks.
Maintaining ethical AI practices requires a holistic approach that encompasses both technical and organisational aspects. It should be seen as an ongoing commitment and integral to your organisational culture and operations. By prioritising ethics in AI, businesses can build trust, foster innovation and make a positive contribution to society.
Request the visual PDF of the AI: Beyond benefits – our ethical responsibilities in its utilisation article here.
Start your journey with us today