Overcoming the obstacles to AI adoption

Chatbots in e-commerce, diagnostics in healthcare, recruitment in HR, and surveillance in policing can do an efficient job, but they may be subject to AI bias, experts warn.
When ChatGPT was launched in November 2022, it appeared to be only a matter of months before AI would be in widespread use across the business world. That has turned out to be far from the case, with the initial hype fading in the harsh light of business realities.
“While Generative AI tools hold enormous transformative potential, several significant barriers continue to hinder their widespread adoption across industries and fields,” says Dr Alessia Paccagnini, associate professor at the University College Dublin School of Business.
“First, a fundamental lack of digital literacy and technical expertise poses a major obstacle. For instance, many organisations do not yet have the internal knowledge or confidence to explore or implement GenAI meaningfully.”
This challenge is compounded by the broader shortage of workers trained in STEM fields, especially those with a deep understanding of AI, data science, or machine learning, she adds. “Without skilled professionals, businesses struggle to identify relevant use cases, assess risks, or integrate GenAI tools into existing workflows.”
There are also concerns around data privacy and security, while the risk of information manipulation remains a critical issue. “Organisations fear exposing sensitive information when using third-party AI tools and remain cautious about potential data leaks, breaches, or unauthorised access — especially in highly regulated sectors,” Paccagnini explains.

The risk of information being misused or misrepresented by GenAI systems adds another layer of complexity, she continues. “Experts in this sector will be required to provide consultancy and help to avoid misunderstanding.”
Ethical concerns around bias, misinformation, hallucinations, and accountability also contribute to hesitation.
“Businesses are wary of reputational damage or unintended consequences resulting from GenAI-generated outputs that lack human oversight or transparency,” she notes. “For example, AI bias can amplify existing biases and discrimination, prejudice, and stereotyping. Chatbots in e-commerce, diagnostics in healthcare, recruitment in HR, and surveillance in policing can do an efficient job, but they could be subject to AI bias.”
She also points to a growing fear of using GenAI tools improperly, which in itself prevents their use.
“This creates a paradox where the absence of use becomes a form of misuse,” she notes. “When teams avoid engaging with GenAI out of uncertainty or fear, the resulting lack of experimentation or understanding can lead to poor decision-making, misinterpretation of its capabilities, or missed opportunities.”
According to Chartered Accountants Ireland president Barry Doyle, many businesses face difficulties in finding appropriate use cases for the technology. “Businesses can spend quite a lot of money adopting AI, but do they truly need it?” he asks.
“For example, manufacturing companies are probably using robotics on the production line and there might be use cases for AI there. But that is not relevant to a services business like a marketing agency. You have to ensure that the use case is appropriate for the type of business concerned.”
Lack of AI literacy is another obstacle to adoption.
“There is a huge cyber threat to be considered,” says Doyle. “If people across a firm are using public ChatGPT, is that a cyber risk or a potential data leak? The AI literacy piece is key. The Government needs to continue to enhance the skillsets of the working population through Skillnet Ireland and by releasing funds from the National Training Fund for upskilling and retraining.”
The EU AI Act can also present challenges in terms of the cost of adoption.
“If a small company finds itself within scope of the AI Act, compliance costs are between €6,000 and €7,000 initially and about €8,000 every year after that,” he points out. “Not only do they have the implementation cost, but they have ongoing compliance costs after that. Simple things that SMEs might use like a chatbot can fall in scope of the Act.”
There is a role to be played by accountants in advising firms on adopting the technology in terms of minimising risk and costs, he adds.
“It can be difficult for people to find the space and time to test and find the right use cases for the technology. There is a need to give companies the support to do that, and not just financial support.
“Many businesses are under resource pressures and even when they have an external consultant developing the strategy, they don’t have the resources to implement it. We are still at the early stages of AI adoption, and we haven’t defined what good looks like yet in many cases. Businesses don’t want to spend too much time on something that doesn’t produce value. That’s where accounting professionals come in.”

Salesforce director of solutions engineering, Glenn Sheridan, says AI adoption has been a struggle for a few reasons. One of them is overinvestment in the use of basic AI support tools.
Organisations have been using the technology for narrow use cases such as helping to write emails, but these haven’t translated into real value. Similarly, many pilot projects have focused on use cases that applied to only a limited number of people in the organisation and therefore failed to scale afterwards.
“At Salesforce we have built AI into the flow of work,” Sheridan points out. “It’s at people’s fingertips all the time and is easy to take advantage of whenever it is needed.”
Agentic AI offers a way forward, he adds. “In Salesforce we have an agentic AI layer called Agentforce which can take actions on your behalf.”
The platform allows organisations to build and deploy autonomous AI agents to automate tasks across a variety of business functions. The agents can operate independently, retrieving data, creating action plans, and executing tasks without human intervention, thereby augmenting employees and improving efficiency.
There is no set playbook for AI adoption, according to Stephen Noonan, head of accountancy body ACCA Ireland. “Part of the challenge is that there is not a one-size-fits-all approach,” he says. “Finding success with AI depends on a range of variables from organisational strategy to technical capabilities to workforce management to governance, amongst other things.
“As part of our Smart Alliance report we published some best practices that we learned through our research and case studies, and these can serve as building blocks, but they need to come together through a clear strategy and a willingness to experiment and learn.”
The key is that this cannot remain static, he continues. “AI solutions need regular review against the organisation’s strategy, policies, and risk appetite to validate ongoing adherence and alignment as well as to measure ongoing success against selected metrics.”
Eliminating the barriers to GenAI adoption will require a multi-faceted, proactive approach, Paccagnini adds. “Most importantly, companies will need to invest in digital skilling so that existing employees have the knowledge and tools to engage with GenAI both confidently and responsibly. This means providing targeted programmes in AI literacy, prompt engineering, data awareness, and ethical AI use.”
Companies will also need to invest in attracting new talent to their workforce by listing required competencies for future roles, she adds. “Businesses will need to partner with universities, technical institutes, and schools to align learning on emerging industry skills to develop a new generation of skilled workers for an AI-augmented workforce.”
A strong and transparent framework for data privacy, security, and governance is also critical.
“Organisations must develop internal data policies while simultaneously advocating for a stronger long-term regulatory framework from governments,” Paccagnini contends. “This builds trust and ensures a shared understanding of how sensitive data is handled in AI systems.”
Finally, she says organisations must adopt and enforce ethical principles related to AI and GenAI, including guidelines for responsible data use, transparency, human oversight, and fairness.
“Incorporating these principles into the day-to-day operation of an organisation and, more importantly, fostering a culture of experimentation and continuous learning around AI use will be an important part of unlocking GenAI’s potential while still controlling its risks.”