While the integration of AI has given finance a much-needed boost by streamlining processes, enhancing efficiency, and enabling tailored services, its use carries several ethical implications. These concerns have been amplified by the financial industry's fair share of detractors, who often pose a valid question: In an industry where profit takes center stage, how can we ensure that a technology as powerful and boundless as AI is used responsibly?
In this regard, the financial sector has already seen its share of scandals, from biased practices at AI-driven lenders to sophisticated HFT algorithms that investment firms have used to manipulate markets. It is time for those at the helm of financial institutions to follow the lead of other industries, especially tech. We must gather all stakeholders, from governments to banks, fintechs, and fund managers, and work out how AI can be leveraged for the greater good. It is a technology with the potential to redefine the world, but only if we know how to steward it.
Here are three steps that can assist with this process.
Practical Ethical Implications
An ongoing challenge that has taken center stage in AI-related discussions is algorithmic bias. At the end of the day, we must remember that AI systems are only as good as the data they’re given. And if our data is filled with biases, then, by feeding AI with it, we are perpetuating those biases.
In finance, algorithmic bias is most visible in lending. Some AI-powered lending platforms have faced accusations of discrimination, offering different interest rates and credit limits based on factors such as gender or ethnicity. And even though proving such discrimination can be challenging, in the United States the Federal Reserve has acknowledged that AI systems could amplify existing biases in financial institutions.
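The kind of lending disparity described above can be surfaced with a simple fairness check. The sketch below, a minimal illustration using made-up approval data (the group labels, the records, and the 0.8 "rule of thumb" threshold are all assumptions for illustration, not from any real lender), computes approval rates per group and a disparate-impact ratio:

```python
# Minimal fairness check: approval rates per group and a disparate-impact ratio.
# All data and group labels are hypothetical, for illustration only.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs; returns approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest approval rate; < 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # 0.25 / 0.75 ≈ 0.33, well below 0.8
```

A check like this only detects a symptom; it says nothing about why the rates differ, which is why independent review of the underlying model and data still matters.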
Ethical concerns also arise when AI is used in investment decision-making, most notably when it is turned toward market manipulation. For example, AI can be used to spread false information on social media platforms. This happened recently when an AI-generated image of smoke rising near the Pentagon, outside Washington, D.C., triggered public panic and a brief market sell-off.
Markets could also be manipulated through high-frequency trading, which accounts for roughly 70% of all orders in US markets, 40% in Europe, and 10% in Asia. People assisted by AI could develop HFT algorithms that place fake orders (a practice known as spoofing) to move the price of a particular stock. Left unregulated, these actions will further erode the already diminished trust in financial markets and could have far-reaching consequences.
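One crude surveillance heuristic against fake orders is to watch each trader's cancel-to-order ratio, since spoofers place many orders they never intend to fill. The sketch below is a deliberately simplified, hypothetical version of that idea (the event data and the 0.9 threshold are assumptions, not a real exchange rule):

```python
# Simplified spoofing heuristic: flag traders whose cancel ratio is unusually high.
# The threshold and the event data are illustrative assumptions only.
from collections import Counter

def cancel_ratios(events):
    """events: list of (trader, action) where action is 'order' or 'cancel'."""
    orders, cancels = Counter(), Counter()
    for trader, action in events:
        if action == "order":
            orders[trader] += 1
        elif action == "cancel":
            cancels[trader] += 1
    return {t: cancels[t] / orders[t] for t in orders}

def flag_suspicious(events, threshold=0.9):
    """Return traders who cancel at least `threshold` of their orders."""
    return [t for t, r in cancel_ratios(events).items() if r >= threshold]

# T1 cancels every order it places; T2 lets all of its orders stand.
events = [("T1", "order"), ("T1", "cancel")] * 10 + [("T2", "order")] * 10
print(flag_suspicious(events))  # ['T1']
```

Real market surveillance combines many such signals with timing and price-impact analysis; a single ratio would generate far too many false positives on its own.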
Regulation In The European Union: The AI Act
In response to scenarios like these, the European Union (EU) has taken a proactive stance by introducing the AI Act, widely regarded as the world's first comprehensive law on artificial intelligence.
The act categorizes AI systems by the level of risk they pose and imposes regulations accordingly. In doing so, the European Union is taking active steps to ensure that AI systems deployed within its jurisdiction are used in a safe, transparent, and non-discriminatory manner. Human oversight is key to accomplishing this: it ensures that we are the ones using and supervising the technology, rather than delegating everything to automated systems that lack human judgment.
The UK Approach
Unlike the European Union, the UK has not yet introduced AI-specific legislation. Instead, it has opted to apply existing laws to AI through a context-specific, principles-based regulatory framework that focuses on intended outcomes rather than on the underlying technology. The framework is guided by principles such as safety, security, transparency, fairness, and governance.
Future Challenges - What’s Next?
As we can see, the public sector's approach to regulating AI varies across nations. As AI adoption in the financial industry keeps rising, it is important that all stakeholders, including financial institutions, technology companies, and regulators, pave the way for a responsible AI implementation that harnesses its benefits while mitigating its risks.
For example, in the United States, the leaders of Big Tech companies–including Mark Zuckerberg, Elon Musk, and Satya Nadella–have already had meetings with civil society leaders and government regulators to address the potential pitfalls of AI, acknowledging that, if left on “autopilot,” a term used by Microsoft’s CEO Nadella, it could have devastating effects.
In fact, finance can do even better. The US gatherings, for example, have been criticized for their lack of inclusivity toward smaller startups. In the world of finance, we can ensure that everyone has a seat at the table, from the emerging companies developing groundbreaking technology to the legacy banks.
Here’s how we get there.
- Establish a governing body: By establishing a multinational organization similar to the United Nations, the financial industry can create a unified, global framework for regulating ethics in AI. This body would not only set guidelines but also function as a hub that fosters collaboration and knowledge-sharing across institutions from different spheres, involving both the public and private sectors.
- Correct data biases: Develop and implement techniques to correct biases during data preparation, including incorporating synthetic data into datasets to ensure adequate diversity and representation. This is a critical issue that calls for a proactive approach: as we have seen, some financial institutions implemented AI blindly, reacted too late, and are now facing the consequences.
- Conduct regular audits: Audit algorithmic systems on an ongoing basis so that any biases are promptly identified and corrected. These audits should be supervised by independent experts to ensure the utmost transparency.
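The data-bias step above can be sketched in its simplest form: oversampling under-represented groups until the dataset is balanced. This is only a naive stand-in for real synthetic-data generation, and the `group` field, records, and balancing-by-duplication approach are all illustrative assumptions:

```python
# Naive rebalancing by oversampling: duplicate records from under-represented
# groups until every group matches the largest one. Real synthetic-data
# pipelines generate new, privacy-safe records instead of duplicating rows.
import random
from collections import Counter, defaultdict

def rebalance(records, group_key, seed=0):
    """Return a new list where every group has as many records as the largest."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    by_group = defaultdict(list)
    for rec in records:
        by_group[rec[group_key]].append(rec)
    target = max(len(recs) for recs in by_group.values())
    balanced = []
    for recs in by_group.values():
        balanced.extend(recs)
        # pad with random duplicates up to the target size
        balanced.extend(rng.choice(recs) for _ in range(target - len(recs)))
    return balanced

data = [{"group": "A", "income": 50}] * 8 + [{"group": "B", "income": 40}] * 2
balanced = rebalance(data, "group")
counts = Counter(rec["group"] for rec in balanced)
print(counts)  # both groups now have 8 records
```

Duplication inflates the apparent evidence for the minority group, which is exactly why the audits described above need to examine how a balanced dataset was produced, not just that it is balanced.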
By implementing these steps, stakeholders in the financial industry can maximize the chances of a successful integration of AI. The technology offers immense potential, but if we fail to consider its ethical implications, it can also cause considerable damage. Responsible AI implementation will be the result of a collective, harmonious effort in which all stakeholders collaborate, embrace solid ethical principles, and commit to transparency.
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.