Artificial Intelligence, Legal Profession, and Other Industries
Digital technology innovations bring constant change at an ever-faster pace. Today’s drivers of digital transformation are artificial intelligence (AI) solutions. From disrupting business models to reinventing communication to finding new solutions in key industries such as healthcare, transportation, education, manufacturing, and agriculture, AI is at the heart of innovation. The legal industry is no exception. At Microsoft’s legal department, we have developed and deployed a number of AI-based solutions to make our work more efficient and reduce workloads. One example is an internal legal bot, which helps our sales teams navigate certain standard legal situations and answers organizational legal questions. The bot has helped our attorneys and paralegals shift thousands of hours toward higher-impact work.
In parallel to our technological development work, Microsoft is also spending considerable time thinking about the consequences that these technologies could have from a societal, legal, and regulatory perspective. Addressing these consequences to the benefit of society is a key task for governments, private companies, academia, and civil society alike.
“Ultimately the question is not only what computers can do. It’s what computers should do”, is how Brad Smith, Microsoft’s President and Chief Legal Officer, and Harry Shum, Microsoft’s Executive Vice President, Artificial Intelligence and Research, frame the issue in The Future Computed: Artificial Intelligence and its role in society.
Ethical Principles for Developing and Deploying AI
At Microsoft, we believe that, when designed with people at the center, AI can extend your capabilities, free you up for more creative and strategic endeavors, and help you or your organization achieve more. Therefore, the development and deployment of AI must be principles-based and guided by an ethical framework. In The Future Computed, we outline six principles that we believe should guide work around AI: the two foundational principles of transparency and accountability, which underpin the principles of fairness, reliability & safety, privacy & security, and inclusiveness.
We define these principles as follows:
- Transparency: We know trust in technology requires clear information about AI systems so people can judge for themselves whether the systems are working the way they are supposed to, and so they can make informed decisions about using them and about their potential impact.
- Accountability: We also know that for trust to be possible, the people and companies who create AI must be accountable for how their systems operate and the impact they have, both as a question of ethics and values, and a function of regulation and law.
- Fairness: We believe AI should treat everyone with dignity and respect—and that AI solutions should be designed to protect against bias. But there’s a danger today that AI will perpetuate the assumptions and biases that so many societies are working to overcome. To guard against this, we’ll need to develop analytic techniques that identify bias in results. And we must work with individuals and organizations from across society to ensure that the work we do reflects shared values.
- Reliability & Safety: We need to make sure that these new systems are safe and reliable—particularly when the decisions they make or the actions they take impact people.
- Privacy & Security: Because data is the fuel that powers the development of AI models, we must protect the privacy and security of people’s personal information.
- Inclusiveness: And to ensure that everyone benefits, AI solutions should be built using inclusive design practices that reflect the full range of experiences of all who might use them, so they are accessible to people of every age, skill level, and ability or disability, in every country and community.
Jack Pineda Dale
Legal Director Baltics, Slovenia, and Serbia
Microsoft Corporate, Legal, and External Affairs