Navigating AI Policy in practice

John Held, 16 June 2025

When John Held sat down to write a policy for AI use in architectural practice, he quickly found himself navigating a complex web of issues — from copyright infringement, legal liability, data security and confidentiality to impacts on the environment, creativity and professional development.

“‘My feeling is, son,’ he said thoughtfully, ‘that we have made Multivac the wrong smartness’…

… ‘The trouble is, it’s half-smart, like an idiot. It’s smart enough to go wrong in very complicated ways, but not smart enough to help us find out what’s wrong – and that’s the wrong smartness’.” — Isaac Asimov, “Point of View” (1975), collected in The Complete Robot

I started with the noble intention of writing a policy for our office on the use of Artificial Intelligence in Architectural Practice, and ended up reading science fiction, with many detours on the way. I was assured by some that we all have to use it or we will lag behind. It would eliminate drudgery, make us more efficient, and even help with design tasks. It could generate lifelike renders of buildings in the style of any starchitect.

I found it creeping into almost every piece of software I used, often with no way to disable it. It gave me helpful summaries of documents and summarised my Google search results, so I needed only one click rather than working through different website links. It summarised my news feed and my social media content.

Writing policies for the Office Manual has several objectives. A policy sets expectations, provides guidance for staff, outlines competencies and procedures, and aims to improve the quality of the services we provide. As such, it must have an ethical as well as a procedural structure, ensure compliance with laws and codes, and at the same time allow for the creativity which is an essential part of our profession. Here’s where the complications started.

THE ETHICS OF AI

If our practice has an ethical underpinning, we need to interrogate the origin of Large Language Models (LLMs) used in AI – particularly the vast amount of copyrighted content scraped from the internet. The software firms have argued that they cannot afford to pay the copyright owners for their work and that it is “fair use” – a debatable argument. Architects are well known for their defence of their own copyright – are they happy to use the creative work of others?

Many architects have signed up to Architects Declare, and sustainability is a key competency and one of the highest priorities of the profession. There is little discussion, however, about the massive consumption of electricity and water required to build server capacity for generative AI. This keeps fossil fuel companies in business and absorbs new renewable capacity, without satisfying any basic human need. It also suggests a massive diversion of the finance needed for climate and ecological action towards the AI industry.

Ketan Joshi illustrates the wastefulness of much AI activity by estimating the energy consumed when asking ChatGPT to multiply two five-digit numbers (something LLMs are notoriously bad at). As well as giving the wrong answer, it consumed an estimated 2.5 million times the energy it would have taken to get the right answer on a pocket calculator.

SECURITY, CONFIDENTIALITY AND LEGAL LIABILITY

Quality systems will always have policies and procedures to safeguard clients’ physical and intellectual property, and most client agreements will include clauses on privacy and confidentiality, and a clear attribution of ownership of copyright and moral rights. Indiscriminate use of LLMs could easily breach those requirements if your content is then used for their training. When using AI, can you assure your client those conditions are not breached, and that your own intellectual property and moral rights are not compromised?

Australia’s privacy regulator recently fired a warning shot at businesses ignoring the privacy risks that come with using artificial intelligence, releasing its advice on how to use the technology safely.

The legal and reputational implications of using incorrect information generated by AI are concerning: lawyers have quoted cases that don’t exist, and scientific papers have cited incorrect references, because LLMs “hallucinate” – confidently generating false information. The proliferation of AI-generated text raises concerns that such inaccuracies will only get worse.

MAKING STUFF UP

Would you hire someone with a penchant for making stuff up? LLMs have been compared to the drunk intern: eager to please, and useful – until they aren’t. The sincere tone of ChatGPT – perhaps too sincere for Australians with a finely tuned bullshit detector – hides the fact that when it doesn’t know the answer, it will still want to please and will make one up. Amanda Guinzburg’s Substack post Diabolus ex Machina is a good example: ChatGPT lies to her directly in a friendly dialogue with a chilling overtone.

THE NEXT GENERATION

Every architect remembers the long and laborious process of professional learning starting at university and continuing in the studio – making mistakes, doing repetitive work, sitting for registration, and gradually building expertise. It’s something that never stops. Schools and universities are struggling with students who don’t write their own essays, design their own work or understand the pain of the creative process. If the architecture profession is to survive, it won’t be because architects know which prompts to use: it will be because they understand people, and space, and creatively solving the many problems facing the built environment.

That understanding is possibly best summed up by Nick Cave, discussing songwriting in The Red Hand Files:

“ChatGPT rejects any notions of creative struggle, that our endeavours animate and nurture our lives giving them depth and meaning. It rejects that there is a collective, essential and unconscious human spirit underpinning our existence, connecting us all through our mutual striving.

ChatGPT is fast-tracking the commodification of the human spirit by mechanising the imagination. It renders our participation in the act of creation as valueless and unnecessary”.

If we want the next generation of architects to be truly creative, we must give them the opportunity to learn, experience and create. We must also be able to trust their judgement on how to responsibly use these new technologies.

WHAT SHOULD OUR AI POLICIES LOOK LIKE?

When confronted with the first iterations of AI, I really hoped it could remove some of the drudgery of architectural practice – minute-taking, checking specifications against schedules, code compliance and the like. In general those hopes have not been realised, although we still get a laugh from its attempts at minute writing. Perhaps this will change in the future; in the meantime we should structure our policies to ensure AI is used safely and wisely.

Asimov’s robots all obeyed his Three Laws of Robotics, designed to prevent robots from harming humans. Even so, robots were permitted only in space, not on Earth, because its citizens were afraid of giving them free rein. Our society, by contrast, seems to have few reservations about adopting new technologies without carefully considering the consequences.

Rather than adopting AI “because everyone’s doing it”, our policies should perhaps reflect those laws of robotics in setting broad principles for its use:

Ethics

  • Is what we are doing ethical?
  • Are we considering copyright breaches, sustainability and energy use?
  • Are we complying with our obligations to our clients for privacy and confidentiality?

Knowledge & Truth

  • Can we be assured that what AI is giving us is true?
  • Are we sure there are no legal issues?
  • Is it contributing to the knowledge of our group?
  • Do all team members understand their responsibilities when using AI?

Creativity

  • Does it bypass creative solutions to complex problems?

Professional Development

  • Does it contribute to the skills and knowledge of the next generation of architects?

I’ve left all of these points as questions, because they really do need further thought and expansion. I’d like to hear what others have done to guide the sensible use of AI in architectural practice.

John Held is the Immediate Past National President of the ACA and a Director of Russell & Yelland Architects.

Photo: Igor Omilaev, Unsplash