June 5, 2024 by James Hinebaugh

The risks and rewards of artificial intelligence

It's rare to get through a day in 2024 without considering the state of artificial intelligence (AI) and where it is likely to go next. Just a few short years ago, it played a limited role in our daily lives; now it seems to be everywhere. At Baker Tilly Windsor, we've experimented with building and hosting our own large language model (LLM) tools to understand how much useful work can be accomplished in a fully secure environment. We see the future as a combination of the two approaches: advanced tools built from cutting-edge, third-party components, with client data protected in secure environments.

In my work as a senior manager focused on the tech sector, I've seen AI revolutionize the way many companies do business, but it's too early to be entirely enthusiastic about these tools, as risk is still an important part of the equation. While they can already achieve amazing things, we need to supervise them closely, verify their findings and think carefully about who (and what) might have access to our data. With that balanced perspective in mind, this article offers a closer look at a few of the risks and rewards of AI, so we can embrace its potential without neglecting its pitfalls.

Reward: Achieving excellence

AI tools enable staff to work more effectively and bring their work closer to its final form sooner. AI can make it easier for junior staff to achieve excellence by accurately and consistently applying guidance from senior staff to the workflow. It also allows junior staff to take on more responsibility, flex their muscles and make a more powerful impact. As they grow more comfortable working with these tools, the overall quality of their work rises. In essence, AI has the power to turn junior staff into superheroes, while giving senior staff the breathing room to keep their focus on the big picture of engagement excellence.

Risk: Quality control

Anyone with experience working with AI will understand that it offers some incredible strengths, but also some frustrating weaknesses. With familiarity, we have developed an intuitive sense of how AI makes mistakes. LLM tools, for example, are trained to please the person using them, and that sometimes means they answer a question confidently even when they don't have the correct answer. Most people expect the predictable performance of a traditional computer program, but AI is far more erratic. This means you need to scrutinize every conclusion it reaches. These tools tend to be correct more often than not, but even a tool that is accurate nine times out of 10 will, across the dozens of judgments in a typical engagement, almost certainly make at least one mistake, and one mistake can carry dire consequences.
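To put that compounding risk in perspective, here is a quick back-of-the-envelope calculation. The 90 per cent accuracy figure is simply the illustrative number from above, and the judgment counts are hypothetical:

```python
# Chance of at least one error when a tool that is right 90% of the time
# is applied independently across many judgments in an engagement.
per_judgment_accuracy = 0.9

for judgments in (1, 10, 40):
    p_error = 1 - per_judgment_accuracy ** judgments
    print(f"{judgments:>3} judgments -> {p_error:.1%} chance of at least one error")
```

At 40 judgments, the chance of at least one error is roughly 98.5 per cent. Even modest per-answer error rates compound quickly, which is why human review of every conclusion remains non-negotiable.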

Reward: Onboarding efficiency

As mentioned above, AI will soon enable our junior staff members to bring a file much closer to completion with substantially fewer hours of involvement from senior staff. This applies equally to our onboarding process. For example, our team handles claims under the Scientific Research and Experimental Development (SR&ED) program, a tax incentive intended to encourage businesses to develop new technologies in Canada. To ensure our technical reports adhere to the highly nuanced requirements of the SR&ED rules, onboarding new team members has required an enormous amount of one-on-one knowledge transfer. The process is so time-consuming for senior staff that we have historically avoided expanding the team during our busy season from March to June, even though that is when we often need the most help. In recent months, however, we have been able to distil our expertise into AI tools that automate much of this knowledge transfer. We can teach the AI what makes a quality report, which allows it to act as a preliminary reviewer for junior staff, suggesting what information to emphasize and how to avoid known red flags, as sketched below. With these tools in place, we can grow our team when we need to, knowing new hires will be up and running in significantly less time.
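For readers curious what such a preliminary reviewer might look like in practice, here is a minimal sketch. It assumes a self-hosted, OpenAI-compatible model server; the endpoint, model name and checklist items are illustrative placeholders, not our production configuration:

```python
# A minimal sketch of an AI "preliminary reviewer" for draft SR&ED technical
# reports. Assumes an in-house, OpenAI-compatible server, so client data never
# leaves an access-controlled environment. All names here are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical self-hosted endpoint
    api_key="not-needed-locally",
)

QUALITY_CHECKLIST = """\
- Is a specific technological uncertainty identified?
- Is the systematic investigation (hypotheses, tests, results) described?
- Is each claimed activity clearly linked to eligible SR&ED work?
"""

def preliminary_review(draft_report: str) -> str:
    """Ask the model for reviewer-style suggestions on a draft report."""
    response = client.chat.completions.create(
        model="local-llm",  # whatever model the in-house server exposes
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a senior reviewer of SR&ED technical reports. "
                    "Flag weaknesses against this checklist and suggest "
                    "what to emphasize:\n" + QUALITY_CHECKLIST
                ),
            },
            {"role": "user", "content": draft_report},
        ],
    )
    return response.choices[0].message.content

# Example: review a (placeholder) draft before it goes to a human reviewer.
print(preliminary_review("Draft technical report text goes here..."))
```

The key design point is the base_url: the same few lines work against a fully local model, so the workflow gains the review step without sensitive report content ever leaving the firm's servers.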

Risk: Confidentiality concerns

With the AI tools currently available, confidentiality is still a significant concern. Any information you enter into the ChatGPT prompt field, and any response you receive, is logged on OpenAI and Microsoft servers, where it can be accessed by their employees as they work to improve the user experience and even to train the next generation of models. If you paste sensitive information into the platform, including anything that might compromise someone's intellectual property, you risk that information eventually reaching their competitors. Microsoft is working on strategies to address this issue with its enterprise tools, but extreme care must be taken while the technology is in its infancy. At this point, my advice is to trust only tools operated on your own, access-controlled servers.

The future of AI

As we wait for more secure and dependable tools to emerge, it's essential we don't bury our heads in the sand. If we take the appropriate precautions, we can safely use these tools to our advantage, putting us in a better position to maximize their potential for our clients as AI makes exciting leaps forward in the years ahead.

Meet the Author

James Hinebaugh
Windsor, Ontario
D 226.774.3924