Women In AI: Lee Tiedrich, AI expert at the Global Partnership on AI


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

As an AI expert at the Organization for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI), an international initiative to promote responsible AI use, Tiedrich develops approaches for AI that evaluate and manage risk while aligning law, policy and practices with science. She’s served on Duke University’s faculty and advised a number of companies, and was a longtime partner at the law firm Covington & Burling LLP.

Tiedrich, a tech transactions and intellectual property attorney, also served on the Biden Campaign Policy Committee and is registered to practice before the United States Patent and Trademark Office (USPTO).

Briefly, how did you get your start in AI? What attracted you to the field?

I’ve been working at the intersection of technology, law and policy for decades, starting with cellular, then internet and e-commerce, through today. I’m drawn to helping organizations optimize emerging technology benefits and mitigate risks in a rapidly changing and complex legal environment. I’ve been working on AI matters for years, long before it dominated headlines, beginning when I was a partner at Covington & Burling LLP. In 2018, as commercial AI use and legal challenges grew, I became co-chair of Covington’s global and multidisciplinary Artificial Intelligence Initiative and concentrated more of my practice on AI, including AI governance, compliance, transactions and government affairs.

What work are you most proud of (in the AI field)?

Unlocking AI’s benefits and mitigating risks requires global and multidisciplinary solutions. I’m proud of my extensive work that unites different disciplines, geographies and cultures to help solve these pressing challenges. This work began while at Covington, working on AI governance and other matters with client lawyers, engineers and business teams. More recently, as a member of both the OECD AI and GPAI global expert groups, I’ve been working on an array of high-stakes multidisciplinary AI matters, including AI governance, responsible AI data and model sharing, and how to address climate, intellectual property and privacy matters in an AI-driven world. I co-lead both the GPAI Intellectual Property Committee and the Responsible AI Strategy for the Environment (RAISE) Committee. My multidisciplinary work also extends to Duke, where I designed and teach a course that brings together graduate students from different programs to work on real-world responsible tech matters with the OECD, corporations and others. It’s very gratifying to help prepare the next generation of AI leaders to address multidisciplinary AI challenges.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

I have navigated male-dominated fields for much of my life, starting as a Duke undergraduate where I was among the few women electrical engineering students. I was also the 22nd woman elected to the Covington partnership, and my practice focused on technology.

Navigating male-dominated industries starts with doing great innovative work and confidently making it known. This increases demand for your work and typically leads to more opportunities. Women should also focus on building good relationships within the AI ecosystem. This helps cultivate important mentors and sponsors as well as clients and customers. I also encourage women to use their network to proactively pursue opportunities to expand their knowledge, profile, and experience, which can include participating in industry associations and other activities.

Finally, I urge women to invest in themselves. There are many resources and networks that can help women navigate and advance in AI and other industries. Women should set goals, and identify and capitalize on resources that can help them achieve those goals.

What advice would you give to women seeking to enter the AI field?

There are so many opportunities in the AI field, including for engineers, data scientists, lawyers, economists and business and government affairs experts. I encourage women to find an aspect of the AI field that they’re passionate about and pursue it. People often excel more when they work on matters they care about.

Women also should invest in developing and promoting their expertise. This can include joining professional associations, attending networking events, writing an article, public speaking or pursuing continuing legal education. Given the wide range of novel and challenging issues AI presents, there are many opportunities for young professionals to quickly become an expert. Women should proactively pursue these opportunities, and building expertise and a good professional network can help.

What are some of the most pressing issues facing AI as it evolves?

AI holds so much promise to advance global prosperity, security and social good, including helping address climate change and achieve the UN Sustainable Development Goals. However, if not developed or used properly, AI can present safety and other risks, including to individuals and the environment. Society faces the grand challenge of developing frameworks that unlock AI’s benefits and mitigate the risks. This requires multidisciplinary collaboration, as laws and policies need to factor in relevant technologies as well as market and societal realities. Since technology transcends borders, international harmonization is important, too. Standards and other tools can help advance international harmonization, particularly as legal frameworks vary across jurisdictions.

What are some issues AI users should be aware of?

I recently called for a global AI learning campaign in a piece I published with the OECD. It explains the urgent need for users to become aware of the benefits and risks of AI applications they intend to use. This knowledge will empower them to make better decisions about whether and how to use AI applications, including how to mitigate risks.

Additionally, AI users should know that AI has become increasingly regulated and subject to litigation. Government AI enforcement is expanding too, and AI users may be liable for harms caused by AI systems made available by their third-party vendors. To reduce potential liability and other risks, AI users should establish proactive AI governance and compliance programs to manage their AI deployments. They also should conduct due diligence on third-party AI systems before agreeing to use them.

What is the best way to responsibly build AI?

Responsibly building and deploying AI requires many important steps. It begins with publicly embracing and upholding good responsible AI values to serve as the North Star, such as those embodied by the OECD AI Principles. Given AI’s complexities, it’s also essential to develop and implement an AI governance framework that applies throughout the AI system lifecycle and fosters multidisciplinary collaboration among technical, legal, business, sustainability and other experts. The governance framework should factor in the NIST AI Risk Management Framework and other important guidance, in addition to ensuring compliance with applicable laws. Because the AI legal and technology landscape changes rapidly, the governance framework should enable the organization to nimbly respond to new developments.

How can investors better push for responsible AI?

Investors typically have many ways to push responsible AI within their portfolio companies. To begin, they should embrace responsible AI as an investment priority. In addition to being the right thing to do, it’s good for business. Market demand is rising for responsible AI, which should increase portfolio company profitability. Furthermore, in our increasingly regulated and litigious AI world, responsible AI practices should reduce litigation risks and potential reputational harms caused by poorly designed AI.

Investors also can push responsible AI by exercising oversight through their corporate board appointments. Increasingly, corporate boards are expanding oversight of technology matters. They also should consider structuring investments to include other oversight mechanisms.

Additionally, even if not addressed in the investment agreements, investors can introduce portfolio companies to potential responsible AI hires or consultants and encourage and support their engagement in the ever-expanding responsible AI ecosystem.
