By the late 2010s, most large organisations had achieved impressive levels of automation. Enterprise systems, RPA bots, and integration platforms handled data movement and repetitive tasks with precision. Yet something fundamental remained missing: the ability for computers to understand language, context, and intent.
Accountants and analysts still spent enormous time summarising reports, drafting memos, and explaining results in plain language. The gap between data processing and communication persisted. It was this very gap that Generative AI set out to close.
The emergence of models like OpenAI’s ChatGPT and Anthropic’s Claude represented not just another step in automation but an evolutionary leap — machines that could read, write, and reason in natural language. For the first time, professionals could converse with technology as though they were talking to a colleague.

The story of ChatGPT begins with the broader development of Large Language Models (LLMs) by a research organisation called OpenAI, founded in 2015 by a group including Elon Musk and Sam Altman. Their goal was to build artificial general intelligence (AGI) that would benefit humanity.
OpenAI’s early breakthroughs came from a series of models named GPT, short for “Generative Pre-trained Transformer.” The transformer architecture, introduced by Google researchers in 2017, revolutionised how computers processed language. Unlike earlier recurrent models, which read text one word at a time and lost track of context over long passages, transformers could attend to an entire passage at once, recognising relationships and meanings across words and sentences no matter how far apart they appeared.
GPT-1 (2018) was the proof of concept — trained on vast text corpora to predict the next word in a sequence.
GPT-2 (2019) showed startling fluency, capable of writing paragraphs that sounded human, but OpenAI initially withheld its full release due to concerns about misuse.
GPT-3 (2020) scaled this architecture dramatically — 175 billion parameters — and became the foundation for what the world would soon know as ChatGPT.
Then, in late 2022, OpenAI released ChatGPT, a conversational interface built on GPT-3.5. It allowed anyone — not just programmers — to interact with an LLM using plain language. Overnight, millions of users began asking it to write emails, explain concepts, draft essays, and even summarise financial reports.
This was the beginning of Generative AI as a productivity tool, and the start of a profound shift across industries, including accounting.
Before ChatGPT, AI tools felt distant and technical. They required structured commands or specialised integrations. ChatGPT changed that by allowing natural conversation. Users could ask follow-up questions, correct mistakes, and guide the AI step by step, refining its output through iterative dialogue.
For the first time, professionals could simply type:
“Summarise this balance sheet and highlight areas of risk,”
and receive a coherent, human-like explanation in seconds.
This accessibility transformed perception. AI was no longer a backend process; it was an assistant. Accountants, auditors, consultants, and students found themselves empowered by a tool that could generate draft reports, summarise regulations, or explain complex standards in plain English.
Within months of ChatGPT’s release, early adopters in accounting began exploring practical uses.
One of the most immediate applications was financial summarisation. Accountants uploaded excerpts of annual reports or financial statements and asked ChatGPT to produce executive summaries. Instead of spending hours crafting narratives for board packs or client presentations, they could generate high-quality first drafts almost instantly.
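The same task can also be scripted rather than typed into a chat window. The sketch below uses the OpenAI Python SDK to draft commentary from a small statement extract; the model name, prompt wording, and figures are illustrative assumptions, and the output is only a first draft for human review.

```python
# Minimal sketch: drafting board-pack commentary from a financial statement extract.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name, prompt, and figures are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

extract = """
Revenue: $42.1m (prior year $39.8m)
Operating expenses: $31.6m (prior year $28.2m)
Net profit after tax: $6.9m (prior year $7.4m)
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You draft concise board-pack commentary for accountants."},
        {"role": "user", "content": "Summarise this extract and highlight areas of risk:\n" + extract},
    ],
)

print(response.choices[0].message.content)  # a first draft, to be reviewed before use
```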
Another early use case was policy interpretation. Many professionals asked ChatGPT to explain accounting standards in simpler terms — for instance, “Explain IFRS 16 lease accounting like I’m a new graduate.” The responses provided clear summaries, often supported by analogies.
Firms also used ChatGPT for educational purposes. Junior accountants could quickly clarify technical topics, from deferred tax calculations to revenue recognition rules, without waiting for senior staff. The model became a real-time mentor — always available, endlessly patient, and capable of tailoring explanations to the learner’s level.
In 2023, Deloitte announced internal trials of generative AI tools to enhance productivity within audit and advisory teams. The pilot used GPT-based systems to draft audit planning documents and summarise client correspondence.
Auditors uploaded long email threads and asked the AI to extract key issues, deadlines, and action items. What once took an hour of reading could be completed in minutes. The AI didn’t replace professional judgment, but it eliminated the burden of administrative summarisation.
According to Deloitte’s innovation leads, the results were “transformative for time management.” Junior staff could spend more hours analysing data rather than managing documents, improving both engagement and efficiency.
This experiment highlighted an important truth: Generative AI did not remove human expertise; it multiplied its impact.
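A generic sketch of that kind of correspondence triage is shown below. It is not Deloitte’s internal tooling; the model name, prompt, and file name are assumptions, and the extracted list would still need a reviewer’s eye.

```python
# Generic sketch: pulling issues, deadlines, and action items out of a long email
# thread as structured JSON. Not any firm's actual system; the model name, prompt,
# and file name are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

with open("client_thread.txt") as f:  # an email thread exported to plain text
    thread = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    response_format={"type": "json_object"},  # ask for machine-readable output
    messages=[
        {"role": "system",
         "content": "Extract the key issues, deadlines, and action items from the email "
                    "thread. Respond as JSON with keys: issues, deadlines, action_items."},
        {"role": "user", "content": thread},
    ],
)

summary = json.loads(response.choices[0].message.content)
for item in summary.get("action_items", []):
    print("-", item)
```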
As OpenAI’s ChatGPT gained global traction, another player emerged: Anthropic, a company founded by former OpenAI researchers in 2021. Their model, Claude (reportedly named after Claude Shannon, the father of information theory), focused on safety, context length, and interpretability.
Claude’s early versions offered longer memory and context windows, allowing users to upload full documents or datasets for analysis. This made it particularly useful for professional fields like accounting, where users often needed to interpret long policies, contracts, or reports.
For example, a financial controller could upload a 30-page accounting policy and ask Claude to extract relevant clauses about asset impairment. This ability to process extensive text without losing coherence gave Claude an edge in corporate and academic settings.
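A minimal sketch of that kind of request, using the Anthropic Python SDK, might look like the following; the model name, file name, and prompt are assumptions, not a record of any firm’s workflow.

```python
# Minimal sketch: asking Claude to extract impairment-related clauses from a long
# accounting policy. Assumes the Anthropic Python SDK (pip install anthropic) and
# an ANTHROPIC_API_KEY environment variable; model and file names are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("group_accounting_policy.txt") as f:  # e.g. a 30-page policy exported to text
    policy_text = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative; any long-context Claude model
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "From the accounting policy below, extract every clause dealing with "
                   "asset impairment and quote its section number.\n\n" + policy_text,
    }],
)

print(message.content[0].text)  # draft extract, to be checked against the source document
```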
Anthropic positioned Claude as a model for responsible, explainable AI. Its emphasis on constitutional AI, a training approach in which a written set of behavioural principles guides the model’s responses, reassured many industries wary of regulatory risk. For accountants, that meant more reliable responses and reduced risk of inappropriate or speculative content.
As both ChatGPT and Claude matured, professionals found creative ways to apply them in everyday accounting practice.
Analysts began feeding financial tables into prompts and asking the AI to identify trends:
“What stands out in this profit and loss statement?” or “Explain why gross margins declined in Q2.”
These conversational analyses helped professionals frame insights before diving into deeper quantitative validation.
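As a sketch of how a table might be framed for that first conversational pass, the snippet below renders a small profit and loss extract as text and asks the questions from the example above; pandas and the OpenAI SDK are assumed, and the figures, model name, and prompt are illustrative.

```python
# Sketch: a first-pass trend question over a small P&L extract, before deeper
# quantitative validation. pandas and the OpenAI SDK are assumed; the figures,
# model name, and prompt wording are illustrative.
import pandas as pd
from openai import OpenAI

pnl = pd.DataFrame(
    {"Q1": [10.2, 6.1, 4.1], "Q2": [10.8, 7.3, 3.5]},
    index=["Revenue ($m)", "Cost of sales ($m)", "Gross profit ($m)"],
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{
        "role": "user",
        "content": "What stands out in this profit and loss extract, and why might "
                   "gross margins have declined in Q2?\n\n" + pnl.to_string(),
    }],
)

print(response.choices[0].message.content)  # conversational framing, not final analysis
```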
Audit teams used generative AI to produce initial drafts of audit programs, internal control descriptions, and sample risk assessments. Instead of starting from a blank document, they could generate a structured outline in seconds and refine it using their expertise.
AI tools became invaluable for scanning large bodies of regulation. ChatGPT could summarise new pronouncements from the Australian Accounting Standards Board (AASB) or international IFRS announcements, helping firms stay current.
Consultants and accountants used AI to draft professional emails, proposal summaries, and explanatory notes for clients. The tone could be adjusted — “make this sound formal” or “simplify this for a non-financial audience.” These capabilities saved time while maintaining professionalism.
In mid-2023, PwC launched a global initiative to upskill over 60,000 employees in generative AI. The firm trained staff to use ChatGPT-style assistants for internal tasks such as document summarisation, proposal drafting, and internal knowledge retrieval.
An internal pilot with finance professionals showed notable results: report preparation time dropped by nearly 40 percent, and employees reported greater satisfaction with their ability to focus on analytical work rather than repetitive documentation.
The key learning from PwC’s pilot was that AI literacy mattered as much as the tool itself. Staff who learned to structure clear prompts and critically evaluate AI outputs achieved the greatest productivity gains.
Generative AI did not benefit only large firms. Small and mid-sized businesses also gained powerful capabilities without expensive enterprise software.
A sole practitioner accountant in Melbourne described using ChatGPT to draft engagement letters, create website content, and explain tax concepts to clients in plain language. What once required a marketing consultant or copywriter could now be produced in minutes — at negligible cost.
Bookkeepers began using AI to write reconciliation summaries or generate commentary for client reports. Instead of sending plain figures, they could provide clients with readable, professional narratives generated automatically from data exports.
Generative AI succeeded where many previous technologies struggled: it was immediately useful. There was no systems integration to plan, no configuration to manage, and no model to train. A browser and a prompt were enough.
The productivity gains were both quantitative and qualitative. Tasks that previously required mechanical repetition (summarising, drafting, rewording) were now semi-automated. Professionals could move up the value chain — from preparing information to interpreting it.
Moreover, these tools supported knowledge amplification. A mid-level accountant could perform research and writing tasks once reserved for specialists. The AI acted as an instant reference assistant, a language editor, and an analytical co-pilot.
This democratisation of capability marked a turning point: AI became not just a back-office tool but a front-line partner.
Following ChatGPT’s success, a new ecosystem of models and interfaces emerged. OpenAI continued to innovate, releasing GPT-4 in 2023 with stronger reasoning, improved code generation, and multimodal input spanning text and images.
Anthropic released Claude 2 and Claude 3, extending context limits to entire books or audit reports. Google entered the field with Gemini (formerly Bard), and Microsoft embedded GPT directly into Office applications as Copilot.
For accountants, this convergence meant that generative AI was no longer a standalone chatbot — it became integrated into familiar tools like Excel, Word, and Outlook. Financial professionals could now generate commentary directly from spreadsheets or summarise email threads automatically within Outlook.
This seamless embedding of AI into daily workflows solidified its role as a core productivity layer rather than an experimental novelty.
While the enthusiasm was undeniable, the early adoption phase also revealed limitations. ChatGPT could occasionally generate inaccurate or overconfident responses — a phenomenon known as “hallucination.” It also struggled with numerical accuracy in complex calculations.
Accountants quickly learned an important principle: AI is a first draft, not a final authority. The best results came when professionals combined AI-generated content with human verification and contextual judgment.
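One simple habit that follows from this principle is to cross-check every figure an AI draft quotes against the source data before the draft goes anywhere. The sketch below does exactly that; the draft text and source figures are made up for illustration.

```python
# Sketch: flag any figure quoted in an AI-drafted summary that does not appear in
# the source data. The draft text and source figures are illustrative.
import re

source_figures = {"42.1", "31.6", "6.9"}  # figures taken from the underlying statements ($m)

draft = (
    "Revenue rose to $42.1m while operating expenses climbed to $31.6m, "
    "leaving net profit after tax of $6.8m."
)

quoted = set(re.findall(r"\$([\d.]+)m", draft))  # pull out every $X.Xm figure
unverified = quoted - source_figures

if unverified:
    print("Check these figures against the source:", sorted(unverified))  # flags 6.8
else:
    print("All quoted figures match the source data.")
```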
Firms began developing internal governance policies around AI usage, ensuring client confidentiality, citation of sources, and transparency in reporting. These frameworks laid the foundation for responsible and ethical AI adoption across the profession.
The birth of ChatGPT and Claude marks one of the most significant technological moments since the spreadsheet. For accountants, it bridged the final gap between data processing and narrative understanding.
Where ERP systems automated transactions and RPA automated actions, Generative AI automates communication and reasoning. It gives professionals a thinking partner capable of transforming complex data into clear insights.
In the broader story of accounting technology, this represents the dawn of cognitive collaboration — humans and machines working together to interpret, explain, and improve financial understanding.
From early pilots in global firms to independent practitioners in small offices, the message is clear: Generative AI is not the end of the accountant’s role. It is the expansion of it — a new partnership where human expertise and machine intelligence combine to deliver insight, speed, and clarity in ways once thought impossible.