Highlights:
- The study shows that many organizations with AI projects in production lack dedicated AI analysts or developers to oversee project development.
- The report suggests that 37% of retailers and 35% of financial services firms have AI platforms in production.
A new report has unveiled gaps in the importance enterprises place on security, compliance, fairness, bias, and ethics. The report, carried out by O’Reilly, indicates that AI adoption is struggling to reach maturity today and that these areas lack prioritization.
O’Reilly’s annual survey on enterprise AI adoption concluded that only 26% of enterprises have AI projects in production, a figure unchanged from last year. Furthermore, approximately 31% of enterprises do not use AI in their work processes at all, up from 13% the previous year.
Organizations depend on SaaS vendors to integrate the latest AI functionality into their software, applications, platforms, and tools, and to scale their teams’ ability to gain valuable insights from AI. As per Gartner, the struggle to adopt AI is evident for many organizations: only 53% of projects make it from the pilot to the production phase, and on average they take approximately eight months or longer to design scalable models.
Why is the growth of AI stagnant?
AI’s growth is flat this year. O’Reilly’s survey shows that many organizations with AI projects in production lack dedicated AI analysts or developers to oversee project development. An online magazine that conducted email interviews with leading financial services and insurance firms reported that AI projects built on well-defined business cases and designed to overcome data quality challenges have the highest survival rates.
However, the CIOs also warned that it is crucial to keep other C-suite executives and board members enthusiastic about projects by monitoring progress and conducting short design reviews. O’Reilly’s annual survey suggests that 37% of retailers and 35% of financial services firms have AI platforms in production.
According to the CIOs of financial services firms, real-time risk management models that rely on supervised Machine Learning (ML) algorithms and random forest strategies are a top priority in the DevOps queue today. The CIO of a leading financial services and insurance firm wrote in an email, “We’re seeing the immediate impact of price increases, and it’s making AI- and ML-based financial modeling an urgent priority today.”
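To make that concrete, here is a minimal sketch of a supervised risk-scoring model built on a random forest. The synthetic data, feature names, and the 0.8 flagging threshold are illustrative assumptions, not details taken from the survey or the CIO interviews.

```python
# Minimal sketch: a supervised risk model using a random forest.
# All data here is synthetic; features and threshold are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features: transaction amount, account age, prior alerts.
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=5000) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=0)
model.fit(X_train, y_train)

# Score held-out transactions and flag anything above a hypothetical threshold.
risk_scores = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, risk_scores))
print("Flagged transactions:", int((risk_scores > 0.8).sum()))
```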
A few enterprises offer tuition reimbursement to motivate IT teams to learn Artificial Intelligence (AI) and ML modeling. The aim is to give internal teams, already acquainted with the current IT, database, and systems infrastructure, the skills to create, evaluate, and promote models into production. The CIOs surveyed reported that overcoming these challenges requires a commitment to larger IT budgets.
How do data science and ML tools minimize the risks?
Approximately seven out of 10 interviewed enterprises, about 68%, consider unexpected outcomes and predictions from models their greatest risk. The next most significant risks reported are model interpretability and transparency, and model degradation (each at 61%). Meanwhile, security vulnerabilities were considered a risk by approximately 42% of respondents, safety by 46%, and fairness, bias, and ethics by 51%.
DevOps teams require data science and machine learning (DSML) tools that support the complete scope of the Machine Learning Development Lifecycle (MLDLC) with autopilot features. O’Reilly’s study cites autopilot features and their rapid advances in AI-generated coding. However, there is also a need for an autopilot that automatically audits raw data, chooses the most relevant features, and identifies the best algorithms. For instance, Amazon SageMaker Autopilot, an integrated component of SageMaker Studio, is used by DevOps teams today to improve model tuning and accuracy.
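As a rough illustration of what launching such a job looks like, the sketch below starts an Autopilot run through the SageMaker Python SDK’s AutoML class. The S3 paths, role ARN, and target column are placeholders, not values from the article.

```python
# Hedged sketch: launching a SageMaker Autopilot job via the Python SDK.
# Role ARN, bucket names, and the label column are placeholders.
import sagemaker
from sagemaker.automl.automl import AutoML

session = sagemaker.Session()

automl = AutoML(
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
    target_attribute_name="default_flag",  # assumed label column in the CSV
    output_path="s3://my-bucket/autopilot-output/",  # placeholder bucket
    max_candidates=10,  # cap how many candidate models Autopilot explores
    sagemaker_session=session,
)

# Autopilot audits the raw data, engineers features, and evaluates
# candidate algorithms and hyperparameters automatically.
automl.fit(inputs="s3://my-bucket/training/risk.csv", wait=False)
```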
SageMaker’s architecture is designed to flex with evolving model development, training, validation, and deployment scenarios. SageMaker integrates seamlessly with the AI services, ML frameworks, and infrastructure across the AWS ML stack. In interviews with an online magazine, CIOs said SageMaker delivers greater flexibility in managing notebooks, training jobs, tuning, debugging, and deploying models. It provides the model interpretability and transparency enterprises require to see AI as a smaller risk.
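For context, a typical train-then-deploy flow through the SageMaker Python SDK looks roughly like the sketch below; the entry-point script, role ARN, instance types, and sample feature vector are illustrative placeholders rather than anything the CIOs described.

```python
# Hedged sketch: SageMaker's train-then-deploy flow with the Python SDK.
# train.py, the role ARN, and instance types are placeholders.
from sagemaker.sklearn.estimator import SKLearn

estimator = SKLearn(
    entry_point="train.py",  # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    framework_version="1.2-1",
)

# Train against data staged in S3, then deploy behind a managed endpoint.
estimator.fit({"train": "s3://my-bucket/training/"})
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")

print(predictor.predict([[0.3, 120, 2]]))  # hypothetical feature vector
predictor.delete_endpoint()  # tear the endpoint down to avoid charges
```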
SageMaker relies on the AWS Shared Responsibility Model, an AWS framework that spells out the extent of AWS’ security support versus what customers must provide. AWS protects the stack up to the software level; protecting client-side data, handling server-side encryption, and securing network traffic are the customer’s responsibility.
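In practice, those customer-side obligations surface as configuration choices. The sketch below, with placeholder KMS keys, subnets, and security groups, shows how encryption and network controls might be set on a SageMaker estimator.

```python
# Hedged sketch: customer-side security settings under the shared
# responsibility model. Key ARNs, subnets, and security groups are placeholders.
from sagemaker.sklearn.estimator import SKLearn

estimator = SKLearn(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    framework_version="1.2-1",
    # Encrypt the training volume and the model artifacts written to S3.
    volume_kms_key="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",
    output_kms_key="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",
    # Encrypt network traffic between training containers.
    encrypt_inter_container_traffic=True,
    # Run the job inside the customer's own VPC.
    subnets=["subnet-0abc123"],
    security_group_ids=["sg-0abc123"],
)
```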
Amazon delivers an initial level of support for Identity and Access Management (IAM) as part of its AWS instances. AWS’ IAM support includes config rules and AWS Lambda functions to generate alerts. Additionally, AWS’ native IAM includes APIs that can integrate with corporate directories and deny access to ex-employees or users who violate access rules. Though the shared responsibility model is just an initial step, it’s a valuable framework for designing an enterprise cybersecurity strategy. In interviews, the CIOs said that they supplement native IAM support with Privileged Access Management (PAM) and develop their cybersecurity initiatives using the framework as a reference point.
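As a small example of what denying access to an ex-employee can look like in code, the sketch below deactivates a departed user’s credentials through boto3’s IAM API; the user name is hypothetical, and a real setup would more likely be driven by a corporate directory sync than a hardcoded name.

```python
# Hedged sketch: revoking an ex-employee's AWS access with boto3's IAM API.
# The user name is a placeholder.
import boto3

iam = boto3.client("iam")
user = "former.employee"  # hypothetical IAM user

# Deactivate every programmatic access key the user holds.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    iam.update_access_key(
        UserName=user,
        AccessKeyId=key["AccessKeyId"],
        Status="Inactive",
    )

# Remove console access as well, if a login profile exists.
try:
    iam.delete_login_profile(UserName=user)
except iam.exceptions.NoSuchEntityException:
    pass
```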
AI adoption will bridge the gaps
O’Reilly’s recent AI adoption survey spots gaps in the importance enterprises place on security, compliance, fairness, bias, and ethics. For instance, only approximately 53% of AI projects proceed from pilot to production, showing the gap in integration, visibility, and transparency across MLDLCs. Enabling DevOps teams, data scientists, and researchers to efficiently create, test, validate, and release models is one of the primary design aims of SageMaker. It is a case study in how a DSML system can minimize model risks and allow AI to deliver more business value over time.