Last year, the Wall Street Journal, the Journal of Accountancy, and Accounting Weekly published three pieces that on the surface offer competing views of accounting and generative artificial intelligence, a form of AI technology that identifies patterns in large quantities of training data and then generates original content – text, images, music, video, etc. – by recreating those patterns in response to user input. ChatGPT, which burst into the zeitgeist in November 2022, is the most well-known example of generative AI. The Wall Street Journal article discussed how accounting will be disrupted by the technology; in it, Mark D. McDonald, a senior director at Gartner, a technology research and advisory firm, is quoted as saying that a decade from now “accounting professionals will have a totally different set of skills than the experienced professional of today and will largely look more like data scientists and systems engineers.” In the Journal of Accountancy, Blake Oliver wrote that 300,000 accountants and auditors left the profession between 2020 and 2021 and there have not been enough new entrants to replace them; however, AI can increase productivity and save the accounting industry from its shrinking ranks and the burnout many experience inside it. Finally, Accounting Weekly reported that “AI will destroy more jobs than it creates.” So which is true? Will AI change, save, or destroy accounting as a profession?
The truth is that the three pieces, while appearing to make very different arguments about how accountants should view AI, are actually all saying the same thing: Generative AI will change accounting, and the firms that try to take advantage of that will be better positioned to thrive going forward. The Wall Street Journal article also noted that generative AI would allow accountants to “take on greater responsibility and decision-making authority than in the past,” making the profession more rewarding for those involved and more attractive for young professionals making career decisions. The Journal of Accountancy piece described at least one accounting professional who relies on ChatGPT to write application programming interfaces (APIs) and who has been learning Python coding through ChatGPT to get better at his job, while also reporting that AI “can help the profession tackle pressing challenges related to production capacity, staffing shortages, and accountant burnout.” Accounting Weekly quotes Dean Furman, an AI consultant, as advising: “There’s a lot of opportunity, and I think that’s the exciting part…There’s opportunity for smaller firms to take advantage and become the accounting firm of the future.”
Those opportunities come with risks. For example, ChatGPT has produced well-publicized “hallucinations,” the industry’s term for the confident, seemingly correct answers a generative AI application provides that are in fact incorrect and not supported by any outside sources. Other prominent generative AI applications – including Google’s Gemini (which also produces natural language), DALL-E 2 (images), and Synthesia (videos) – have been similarly unreliable.
To properly plan for the future of accounting with AI, it is useful for CPAs to consider how generative AI can help their operations, the pitfalls of adopting generative AI applications, and the need for appropriate thought and precautions.
What Will AI Do for CPAs?
CFOs and industry consultants have identified numerous functions that CPAs and accounting firms can push to generative AI software. These include (i) preparing the first drafts of financial reports and related documents (client handbooks, tax workpapers, etc.), which accountants or other employees can later revise and improve; (ii) reviewing and summarizing companies’ financial statements as part of the audit process; (iii) drafting communications to third parties, including clients; (iv) conducting research; (v) translation, not only between languages, but from complex ideas to simpler explanations; (vi) coding; (vii) customer service and potential customer communications via chatbots; (viii) pulling IRS publications; and (ix) preparing marketing materials, including graphics and articles. Assigning these tasks to AI platforms could reduce the time required to perform them, saving money for firms and clients. That would also, as the articles above note, free up CPAs for higher-level functions and client relationships, which are more intellectually challenging and rewarding, while also helping alleviate exhaustion in the profession.
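To make the first of these opportunities concrete, consider item (iii), drafting communications to third parties. One plausible workflow is to send a prompt to a generative AI provider’s API and hold the resulting draft for a CPA’s review before anything is sent. The sketch below is a minimal illustration only, not a recommendation of any particular vendor: it assumes the openai Python package (version 1 or later), an API key stored in the OPENAI_API_KEY environment variable, and an illustrative model name, and it deliberately contains no client-specific data, for the confidentiality reasons discussed later in this article.

# Minimal sketch: ask a generative AI API for a first draft of a client email,
# then hold that draft for human review before it is sent.
# Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set;
# the model name below is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_client_email(topic: str) -> str:
    """Return an AI-generated first draft for a CPA to review and edit."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "You draft short, plainly worded emails for a CPA firm."},
            {"role": "user",
             "content": f"Draft a brief client email explaining: {topic}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_client_email("this year's extended filing deadline")
    print("DRAFT FOR HUMAN REVIEW - do not send without a CPA's sign-off:")
    print(draft)

The important design choice is the last step: the output is treated as a draft for a human reviewer, not as a finished client communication.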
What Are the Dangers for Accountants Using AI?
I frequently say that we overestimate what AI will do in the next two years and underestimate what it will do in the next ten. That is relevant here because the technology is not yet capable of reliably providing all the support described above. The potential for hallucinations makes research performed by generative AI suspect. Ashley Francis, a CPA interviewed for the Journal of Accountancy piece, says she can envision soon using AI software to write tax memos, projects that currently take her 40 to 80 hours and that AI could substantially shorten. “That technology is coming,” she says, but it is “in its infancy.”
CPAs should also be concerned about maintaining the confidentiality of their clients’ information when they use generative AI applications. The privacy policies and terms of use of most widely available generative AI applications, including the most common ones like ChatGPT and Gemini, make it virtually impossible for accountants to provide any client data to those platforms. The companies behind them retain broad rights to use the data users submit, meaning accountants cannot share client information through those platforms without violating their confidentiality obligations.
Firms should also be careful about how they rely on AI to communicate with clients and potential clients. In my practice, I encounter two issues most frequently. First, the marketing email messages that AI applications currently draft can be problematic if not monitored carefully. The quality of AI writing varies greatly, and firms that use AI to draft email blasts should have someone read those messages before they go out so that recipients are not turned off by them. Although generative AI prepares marketing materials more reliably than it performs research (given hallucination concerns), firms should still carefully review any messages to confirm the information is correct. There is also a chance that AI-generated text or art could be similar enough to copyrighted works to produce an infringement claim. Second, many companies that rely on chatbots for potential client intake or customer service deploy the bots in a way that lets users easily believe they are interacting with a human. In my experience, this is a mistake. Some potential clients may believe they have formed an engagement that creates fiduciary duties, which, depending on the circumstances, could give rise to actual obligations owed to those people. Additionally, there are a growing number of examples in which human-appearing AI alienated users once they realized the “person” they had developed a relationship with was AI. To avoid this, I advise clients to be transparent about their AI usage.
AI Use Policies
Each organization will take a different approach to AI. I recommend that leaders educate themselves about the technology and form an opinion about how, and the extent to which, they want to incorporate AI into their operations now and in the future. Some companies may look at the risks identified above (and others) and decide to wait until the technology improves and more options are available. We will almost certainly see enterprise-level generative AI applications designed to interact with an individual firm’s data to produce research and document drafts while also treating the firm’s data as private and confidential. Other organizations may decide that the benefits of early adoption outweigh the risks and choose to engage with existing generative AI solutions now.
During this process, leaders should also seek input from employees. How do they use AI? What issues have they encountered? How do they hope to use the technology in the future?
If it decides to develop an AI strategy, the organization should work with counsel to prepare an AI use policy, which can be a standalone document or incorporated into existing policies. Depending on the company’s needs, an AI use policy may detail which applications are acceptable, which uses are acceptable, whether employee use will be monitored, what training is required, and so on. The goal should be to protect the organization from the risks associated with AI while also giving employees room to experiment, which can help the company in the long run by allowing it to identify and adopt useful AI tools more quickly.
AI and the Future of Accounting
The Wall Street Journal, the Journal of Accountancy, and Accounting Weekly all make clear that AI will change the nature of accounting as a profession, although the technology still needs to develop further before CPAs can reliably incorporate all its functions. People have been leaving the profession, and AI is likely to accelerate that migration. However, AI has the potential to perform the duller, more routine parts of an accountant’s work, giving him or her more time for counseling human clients and the more intellectually rewarding parts of the job. Some AI researchers have suggested that, in this way, AI could be re-humanizing rather than de-humanizing. If that’s true, accountants, like many other professionals, may have a lot to look forward to from AI.