William Fry in-house event highlights importance of AI literacy
AI literacy is “no longer optional”, William Fry has stressed to in-house counsel from Ireland’s leading companies at its annual end-of-year Counsel Connect seminar.
The law firm’s event addressed the significant challenges posed by the regulatory obligations under the EU AI Act, and in particular the AI literacy requirements, which take effect in February 2025.
The seminar, titled “In the AI of the Beholder: Complying with Subjective AI Literacy Requirements under the AI Act”, was led by Rachel Hayes and Dr Barry Scannell, technology partners at William Fry.
The session provided a roadmap for businesses to achieve compliance with the Act’s AI literacy requirements, which mandate that organisations ensure their staff possess the skills and knowledge required to make informed and responsible use of AI systems.
Discussions covered the practical and legal implications of these obligations, offering strategies to manage risks related to data protection, intellectual property, and AI-generated inaccuracies.
Under the AI Act, AI literacy means the skills, knowledge and understanding that enable staff to make informed use of AI systems and to recognise both the opportunities and the risks, including possible harms, that AI can present.
Dr Scannell said: “AI literacy is no longer optional. From February 2025, organisations must demonstrate a sufficient level of AI understanding within their teams, not just for compliance but to ensure ethical and effective use of this transformative technology. Failure to address this could result in significant legal and reputational consequences.”
The seminar detailed practical steps for businesses, including establishing training programmes, implementing AI governance groups, and ensuring robust review mechanisms for AI-generated outputs. Participants were urged to adopt a risk-based approach to AI governance, balancing innovation with accountability.
The event also explored the risks posed by generative AI systems, highlighting examples such as the well-publicised case of a New York lawyer who faced professional sanctions for submitting fictitious AI-generated case law.
Key risks discussed included data protection concerns, potential copyright infringements during text and data mining, and the dangers of over-reliance on AI-generated outputs, which may undermine critical legal reasoning.
Ms Hayes said: “Generative AI outputs are only as reliable as the inputs and oversight to which they are subjected. Organisations must instil a culture of diligence when working with AI systems, particularly if outputs are used in high-stakes environments like legal and regulatory contexts.”
Dr Scannell added: “AI literacy is not just a regulatory checkbox — it’s a cornerstone of sustainable AI adoption.
“As the EU AI Act reshapes the regulatory landscape, organisations that embrace these requirements as an opportunity for growth will position themselves as leaders in the AI-driven economy.”