Highlights –
- The percentage of completed or nearly completed AI implementation projects grew to 63% today from just 6% a year ago.
- 44% of the leaders report that they have already established AI ethics and Responsible AI standards and processes.
Artificial Intelligence (AI) has featured heavily in recent tech hype, but far fewer discussions cover the challenges it brings. A new survey estimates that the share of complete or nearly complete AI projects has grown roughly tenfold in just the past year. That is excellent news, but the survey also shows that this growth has left IT teams struggling to keep pace. Companies need to hire more people with the required skill sets to manage the process, as well as executives who can ensure AI runs smoothly and business needs are met on time.
The report, published by Juniper Networks, surveyed 700 IT managers and executives and found integration challenges, talent shortages, and governance requirements to be the most pressing issues.
The most striking finding was that the percentage of completed or nearly completed AI implementation projects grew to 63% from just 6% a year ago. Enthusiasm for embedding AI fully into processes has also increased compared with last year's survey, which registered fewer use cases. Twenty-seven per cent of IT leaders said they are looking to deploy fully AI-enabled use cases with widespread adoption in the future, up from 11% last year.
The longstanding buy-versus-build debate has resurfaced with new AI projects, and companies are split between deploying ready-made AI solutions and building their own. Almost four in 10 executives (39%) said their firm uses a mix of in-house and ready-made solutions, while three in 10 said their firm relies on either in-house or off-the-shelf solutions alone. Building and running AI solutions in-house brings its own challenges: 53% of the IT leaders surveyed said the reliability of in-house AI applications is a significant issue, followed by integration with existing systems (46%), development time (44%), and finding AI-capable talent (44%).
One issue challenging many IT leaders is finding or nurturing the right talent to develop, operate, and leverage AI. The survey found three top areas of investment intent (21% each) – hiring people who can effectively operate and develop AI capabilities, further training the AI models, and expanding current AI capabilities into new business units.
Forty-two per cent of the participants said their existing data and analytics departments are taking the lead in incorporating AI technology. A similar share of respondents has established an AI center of excellence.
Some of the essential steps to tackle these challenges and enable workforce adoption of AI are as follows –
- Provide tools and opportunities to apply newly acquired AI skills (43%)
- Update performance metrics to include AI (40%)
- Expand the workforce plan by identifying new skills and roles (39%)
- Make changes in existing learning and development frameworks (39%)
- Implement AI-enabled development tools that require low or no coding (39%)
- Adopt AI modeling automation tools (34%)
In 2021, companies' AI-related challenges centered on developing models and standardizing data. Those challenges continue in 2022, but issues related to creating governance policies (35%) and maintaining AI systems (34%) have become more critical.
Greater AI capability demands greater responsibility. Only nine per cent of IT leaders consider their AI governance and policies, such as responsible AI standards and processes and the appointment of a company-wide AI leader, to be fully mature. There has been a shift in thinking, with more leaders now treating AI governance as a priority: 95% agree that proper AI governance is the key to staying ahead of future legislation, up from 87% in 2021. Some 48% of respondents said companies need to take more action to govern AI effectively.
Around 44% of the leaders reported that they have already established AI ethics and Responsible AI standards and processes. A similar percentage reported having appointed a company-wide AI leader to oversee AI strategy and governance.
When asked about the top risks of inadequate AI oversight, 55% named accelerated hacking, also known as "AI terrorism", as the leading concern. The other top concerns were privacy (55%), regulatory compliance (49%), and loss of human agency (48%).