The Global AI and Law Network (GAIL) brings together legal, compliance and AI professionals from around the world to explore how artificial intelligence is reshaping governance, ethics and the future of work.
GAIL is the live events arm of Global Legal AI. Through panel discussions, expert spotlight sessions and community webinars, the network creates a space for real-time dialogue on the practical challenges and opportunities of AI. These sessions connect practitioners, share real-world experiences and highlight frameworks that support responsible and effective AI use.
GAIL's work encourages open collaboration across disciplines. It brings together professionals who are not only interpreting the legal implications of AI but also shaping how it is governed in practice. The purpose is to keep the conversation on AI governance inclusive, solutions-focused and grounded in integrity.
AI governance is no longer a technical or compliance-only issue. As AI systems increasingly influence strategic decisions, operational processes, and risk exposure, boards and executive teams are now directly accountable for their impact on organisations and stakeholders.
This session explores how AI governance is being approached in practice at board and executive level. It looks at where accountability for AI systems truly sits, how responsibility is being exercised across management and legal functions, and why policies alone are no longer enough. Drawing on real-world experience, the discussion focuses on what effective AI governance looks like when it is embedded into decision-making, risk oversight, and organisational culture, rather than treated as a standalone compliance exercise.
AI decision-making is no longer a theoretical or future concern. As AI tools are increasingly used in courts, arbitration, and legal processes, questions of liability, evidence, and accountability are already being tested when outcomes are challenged.
This session examines what happens when AI-assisted decisions come under legal scrutiny. It focuses on how responsibility is assessed when human decision-makers rely on AI tools, what courts and regulators are beginning to expect in terms of explainability and defensibility, and why detecting AI use is less important than justifying outcomes. Drawing on real-world disputes, the discussion highlights how legal teams should prepare for AI-related challenges in 2026.
Please check your availability and read the Note to Speakers information before submitting your registration.
By submitting this form, you consent to be contacted about future events, collaborations, and community opportunities.

Get weekly insights, playbooks, and event invites to help you lead responsibly in the age of AI.