Denise Kirwan: Ban imposed on generative AI by Australian child protection services
Denise Kirwan, partner at CKT, reviews a recent Australian decision relating to the use of ChatGPT in child protection.
The child protection agency in Victoria has been ordered to ban staff from using generative AI tools after a worker was discovered to have entered the personal information of an at-risk child into ChatGPT. The worker admitted to using the service to draft a protection application report.
In Australia, a protection application report is used to inform decisions on whether a child requires protection. The worker's use of ChatGPT to draft a report containing such sensitive information was reported by the Department of Families, Fairness and Housing to the Office of the Victorian Information Commissioner (OVIC).
Finding
Following investigation, OVIC concluded that the report contained numerous indications that ChatGPT had been used throughout the drafting process.
On close analysis, the use of AI became apparent from the style of the language in the report, which did not correspond with the standard language used under Australian child protection guidelines.
This highlights the significant risk posed by using such services where strict guidelines govern the structure and language of sensitive reports concerning the welfare of children. The finding also shows the high risk of non-compliance that can arise when AI tools such as ChatGPT are used to draft specialised documents.
Perhaps the most concerning result of the use of ChatGPT in this instance was its interpretation of the sensitive personal information contained in the report.
The protection application report was drafted for the purposes of a case concerning the welfare of a child, whose parents had been charged with sexual offences unrelated to the child.
In the report, a child’s doll, which had been reported to child protection as having been used for sexual purposes by the child’s father, was referred to as a mitigating factor.
The OVIC investigation found that this conclusion was reached because the AI programme interpreted the reference to the doll as highlighting the parent’s efforts to provide the child with “age-appropriate toys” in support of the child’s development.
This alarming misinterpretation of a significant element of the report highlights the need for trained child protection workers to assess the risks posed to a child’s welfare in each individual case on the basis of their own expert knowledge, rather than relying on AI to do so.
The findings in this report demonstrate that generative AI tools can struggle to contextualise and interpret the information they are given in an appropriate manner. This is of particular importance in the context of child protection, as AI-reliant interpretations such as the one outlined above could have detrimental consequences for children at risk of harm and could impair the proper functioning of child protection agencies.
Conclusion
The worker found to have used ChatGPT is no longer employed by the child protection agency in Victoria, and OVIC stated that the AI-induced errors in the report did not change the decision-making of the child protection agency or the court in this instance.
Nevertheless, the serious errors introduced into the report by the use of AI software, including the misinterpretation of a vital element of it, highlight the dangers posed by using such services in this context and the potential harm they may cause to children in need of protection.
The finding also emphasises the serious consequences that child protection workers may face in similar situations.
- Denise Kirwan is lead partner in the child care department at Comyn Kelleher Tobin LLP.