Guidelines for Prudent Management of Artificial Intelligence
In the rapidly evolving world of artificial intelligence (AI), the need for ethical guidelines and regulations has become increasingly apparent. Recent advancements in AI, such as ChatGPT, generative AI, and large language models, have raised concerns about their potential misuse and the spread of misinformation.
The Writers Guild of America recently went on strike, demanding increased wages and strict limits on the use of AI in writing. This action underscores the growing unease about AI's role in various industries, particularly creative ones.
To address these concerns, organisations should develop a code of ethics outlining their commitment to ethical behaviour. This includes a pledge to provide accurate information and refrain from creating or distributing misinformation. A philosophy of 'do no harm' should guide AI usage, prioritising honest, accurate information over deceit or theft.
Businesses also need to develop a system of AI governance best practices to minimise risks and support the responsible use of AI for the betterment of humankind. This system should include regular ethics reports by a designated data steward, ensuring accountability and promoting compliance.
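To make the idea of regular ethics reporting concrete, the sketch below models a minimal report a data steward might file each review period. All class and field names here are illustrative assumptions, not part of any standard; a real governance programme would define its own schema and workflow.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch: a minimal ethics-report record filed by a
# designated data steward on a regular cadence. Field names are
# illustrative only.
@dataclass
class EthicsReport:
    steward: str
    period_end: date
    models_reviewed: list = field(default_factory=list)
    incidents: list = field(default_factory=list)

    def is_compliant(self) -> bool:
        # Toy rule: compliant when every logged incident is resolved.
        return all(i.get("resolved", False) for i in self.incidents)

report = EthicsReport(
    steward="J. Doe",
    period_end=date(2024, 3, 31),
    models_reviewed=["support-chatbot-v2"],
    incidents=[{"issue": "biased ranking", "resolved": True}],
)
print(report.is_compliant())  # True: the only incident is resolved
```

Structuring reports as records like this makes them easy to archive and audit, which supports the accountability goal described above.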
AI can also facilitate criminal behaviour and the spread of misinformation, as demonstrated by its ability to create 'deep fakes' of individuals, including fake images and voice recordings. Developing algorithms that separate accurate information from misinformation can help prevent AI from being used for criminal acts and for distributing false content.
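As a toy illustration of the kind of signal such algorithms might use, the sketch below scores text by how often its sentences cite a source. This keyword heuristic is an assumption for demonstration only; real misinformation detectors rely on trained classifiers and provenance signals, not word lists.

```python
# Toy sketch, not a production detector: flag text whose claims lack
# attribution. Cue list and threshold are illustrative assumptions.
CREDIBILITY_CUES = ("according to", "study", "source:", "reported by")

def credibility_score(text: str) -> float:
    """Fraction of sentences containing at least one attribution cue."""
    sentences = [s for s in text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    cued = sum(1 for s in sentences
               if any(cue in s.lower() for cue in CREDIBILITY_CUES))
    return cued / len(sentences)

def flag_as_suspect(text: str, threshold: float = 0.5) -> bool:
    # Text with mostly unattributed claims is flagged for review.
    return credibility_score(text) < threshold
```

In practice a score like this would only be one input to a human review process, never a verdict on its own.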
Regular audits can help identify potential legal issues and promote compliance in AI governance. All staff and management should be educated about the organisation's code of ethics and long-term goals through data governance courses.
The United States and China have made the strongest efforts so far in developing AI regulations and recommendations. The US leads in AI research, while China invests heavily in advancing its own AI systems. Europe, particularly the EU, has responded primarily through regulation, such as the AI Act adopted in 2024, but is widely considered to lag behind the US and China in AI development.
The U.S. Senate has held nine sessions with tech CEOs to discuss AI concerns, the first taking place on September 13, 2023. President Biden issued an executive order on AI on October 30, 2023. China implemented the Interim Administrative Measures for Generative Artificial Intelligence Services on August 15, 2023, requiring businesses offering generative AI services to complete a security assessment and file their algorithms with regulators.
Best practices for AI governance within a business include identifying AI-generated materials, for example by using watermarks to distinguish AI-generated art. AI can be combined with Data Governance to support data privacy and security laws, reducing the risk of stolen and exploited data.
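One simple way to label AI-generated material is to attach a provenance record to each piece of content, loosely inspired by content-credential schemes such as C2PA. The sketch below is a minimal assumption-laden illustration: the field names are invented for this example and do not follow any real standard.

```python
import hashlib

# Hypothetical sketch of labelling AI-generated material with a
# provenance record. Field names are assumptions, not a real standard.
def make_provenance_record(content: bytes, generator: str) -> dict:
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    # The label is only trustworthy if the hash still matches the content.
    return record["sha256"] == hashlib.sha256(content).hexdigest()

art = b"...image bytes..."
record = make_provenance_record(art, generator="example-diffusion-model")
print(verify_provenance(art, record))  # True; tampering would break it
```

Tying the label to a content hash means any alteration of the material invalidates the record, which is what makes such labels useful for governance.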
However, unintentional biases and prejudices in AI algorithms can affect hiring practices and customer service. Tools such as Excel's What-If analysis can help test for biases and promote equity and fairness. Businesses should also avoid using AI to manipulate customers into making purchases.
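A basic bias test for hiring outcomes can be sketched as follows, using the well-known "four-fifths rule" heuristic: a group's selection rate should be at least 80% of the highest group's rate. The group names and data below are invented for illustration, and this check is only a first screen, not a full fairness audit.

```python
# Toy fairness check, assuming binary (0/1) hiring outcomes per group.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group name -> list of 0/1 hiring decisions."""
    return {g: sum(v) / len(v) for g, v in outcomes.items() if v}

def violates_four_fifths(outcomes: dict) -> bool:
    # Flag if any group's rate falls below 80% of the top group's rate.
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return any(r < 0.8 * top for r in rates.values())

outcomes = {"group_a": [1, 1, 1, 0], "group_b": [1, 0, 0, 0]}
print(violates_four_fifths(outcomes))  # group_b's 0.25 rate trails 0.75
```

Running such checks routinely on hiring and customer-service pipelines gives the equity goals above a measurable footing.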
In conclusion, as AI continues to permeate our lives, it is crucial that we establish ethical guidelines and regulations to ensure its responsible use. By prioritising honesty, accountability, and fairness, we can harness the power of AI for the betterment of humankind.