Exclusive Trusted Magazine Q&A with Laura Miller, Director of Ethics and AI | Digital Humanitarian | Policy Manager and Compliance Director
How would you describe your career path in a few words?
My career path began with a focus on creating works that shed light on social justice issues through the mediums of fine art and photography. However, I soon realized that what I truly loved about art was not just the technical aspects of it, but the power of messages and ideas to inspire change. This realization led me to pursue an education in philosophy and ethics, where I became more aware of the need for global and innovative solutions to address the most pressing moral issues.
I recognized the potential of technology and artificial intelligence to help address these challenges, and now I apply my training in ethics to cutting-edge AI innovations, ensuring that these new technologies are used ethically and sustainably for the betterment of society.
Overall, my career path shows the importance of following one's passions and being open to new opportunities. It highlights the power of art and communication, philosophy and ethics, and technology to make a positive impact on the world, and it emphasizes the need for responsible and ethical use of technology to address our most pressing societal issues.
What are the highlights of the key digital innovation trends for 2022? Can you give us some major examples?
2022 was an incredible year that came in like a lamb and went out like a lion. Few people knew at the beginning of the year that ChatGPT would launch at its end and take the world by storm. Companies scrambled to keep up with ChatGPT, organizations were eager to incorporate it into their operations, and there was even speculation about the creation of AGI. However, negative consequences also emerged as bias, racism, and harm were detected in ChatGPT’s outputs and prompts. Additionally, it became clear that everyday users of AI were essentially the trainers of the technology and the providers of the data it requires, and that their privacy was not as secure as they may have believed. But focusing only on ChatGPT overlooks many of the other wonderful innovations that occurred in 2022. Here are just a few:
Predicting natural disasters.
Identifying disease, allowing for earlier intervention.
Improving water quality by identifying harmful nutrients.
Improving resource allocation in crime-fighting through investigative assistance.
Enabling persons with disabilities to create art.
Predicting which diseases are likely to become pandemics.
Intervening in the designer drug market and its activity.
Supporting conservation with information on at-risk ecosystems.
Enabling access to education in developing nations.
Improving communication for people with disabilities.
Detecting threats of various kinds.
Supporting DEI interventions when trained to seek out harmful language.
Preventing cybercrime by identifying illegal behavior and patterns.
Understanding human language in context (NLP).
Aiding communication through translation across 100 languages (Google LLM).
Based on your experiences, what are the impactful trends in digital innovation that are becoming more important in the context of 2023?
Education has become increasingly vital in light of the emergence of ChatGPT. Learning about the limits of the technology, developing a vocabulary of trustworthiness and responsibility, and correctly deploying ChatGPT will be crucial. Missteps in these areas could be costly for users, businesses, organizations, and higher education institutions.
Legislation and regulation surrounding AI have become a pressing issue. In 2022, 17 states in the US passed AI legislation, and the Blueprint for an AI Bill of Rights was released in October of that year. The European Union is also set to pass the EU AI Act in 2023, although ChatGPT and other similar technologies may slow its progress. Given the global nature of technology, it's important for businesses and organizations to focus not only on their home country's regulations but also on global regulations.
Ethics is a crucial consideration in AI development and deployment, with many arguing that legislation alone cannot address all the harms that AI can produce. However, the sheer number of ethical frameworks can make it challenging for organizations to determine which framework best aligns with their values. Some individuals may see ethics as subjective and believe that an organization can determine what is ethical based on whatever it wants.
In your opinion, how can they create high value for organizations?
Organizations, especially those developing AI technologies, must educate their employees about the technology's nuances, best practices, and potential ethical concerns. This education will help to ensure that the organization can develop and deploy AI systems that are compliant with the law and align with ethical standards. It will also help the organization to identify and address any potential ethical or legal issues before they become problematic. Informed decisions begin with education.
Legislation helps to ensure that the use of AI is legal and ethical, and it protects organizations from potential legal consequences and reputational damage. Additionally, complying with AI-related laws and regulations can help to build trust with customers and stakeholders by demonstrating that the organization is committed to ethical and responsible use of technology. Legislation is not enough to guarantee ethical innovation, but it’s an important first step that cannot be ignored.
Ethics is essential because the bare minimum is not what is expected as AI advances into our world. Customers, users, employees, and governments all have a high stake in ensuring that technology avoids harm and promotes the good, and all are willing to hold organizations accountable for their missteps or misdeeds. Ethics should not be allowed to become an anything-goes solution; instead, it should be intentional, with metrics, evaluation, and oversight, and directed toward the unique challenges of a particular technology or deployment.