Tips For Achieving Enterprise Machine Learning Success


Posted December 1, 2022 by nearlearns

Data and analytics executives have always been aware, in broad strokes, of the value they can gain from adopting machine learning (ML).

 
Value comes in three ways: improving the user experience (for customers and employees), creating operational efficiencies, and driving top-line growth.

But line-of-business teams constantly face challenges along the road to uncovering that value, with the number one roadblock being the inability to extract insights from their vast treasure trove of data. According to a recent Forrester Consulting study on data management commissioned by Near learn, eight out of 10 data management executives cite poor data quality as their top ecosystem challenge. Other top challenges include difficulty understanding the data (76%) and lack of data visualization (74%).

A new Forrester Consulting study on ML practices, also commissioned by Near learn, uncovered the root causes of organizations' data challenges. They include the difficulty of translating academic models into operational approaches, data silos within the organization, and AI risk. Getting ML models into production is still a messy endeavor, which is why we have yet to see ML applications blossom at scale. More than half of the respondents to the Forrester ML study reported that their organizations had been developing and releasing ML applications for only one to two years. Many remain in the experimental stage.

But what we often see as organizations' ML ecosystems mature is a change in how success is measured. They shift from IT-centric benefits toward business decision-making outcomes such as improved digital experiences and revenue growth. Forrester data bears this out. Data and analytics executives say their top priority right now is using multi-cloud environments successfully; over the next three years, however, the top priority shifts to deploying ML to automate anomaly detection.

To achieve this, democratizing ML for anomaly detection, change-point detection, and root cause analysis is key to unlocking insights across a wide range of use cases. For example, our open-source Data Profiler solution provides a pre-trained deep learning model to monitor big data and detect private customer information so it can be protected.
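As a rough illustration of what that kind of profiling looks like in practice, here is a minimal sketch using the open-source DataProfiler Python package; the file name and the exact report fields are illustrative and may vary by version.

```python
# Minimal sketch: profile a dataset and surface columns whose contents look like
# sensitive information. The file name and report fields are illustrative.
import dataprofiler as dp

data = dp.Data("customer_records.csv")   # auto-detects common formats (CSV, JSON, Parquet, ...)
profile = dp.Profiler(data)              # computes statistics and runs the pre-trained labeler

report = profile.report(report_options={"output_format": "compact"})

# Each column's entry carries a predicted data label (e.g., address- or SSN-like),
# which can be used to flag fields that need masking or restricted access.
for column in report["data_stats"]:
    print(column["column_name"], "->", column.get("data_label"))
```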

Involving business analysts more deeply in ML development and data insights was an important decision by Near learn and went a long way toward removing silos between analysts, data scientists, and engineers. I wrote earlier this year in InformationWeek about how to democratize ML across the enterprise. Here I'd like to share some best practices for operating your ML practice as a mature program:

Identify a partner. Roughly a third of ML decision-makers are working with data and platform partners (internal and external) and expect these relationships to grow. It's always best to find a partner who has been "in the ML trenches" and has a proven ability to operate ML applications with transparency and explainability.

Build the business case for organizational support. Decision-makers want to see the positive impact of ML across the organization, so it's always best to build a business case that delivers cross-business results. Some of the benefits of this focus include easier data mobility, better traceability, and faster time-to-action. Once you've established proof points around improved CX and revenue growth and put some verified wins on the board, it becomes much easier to keep leadership motivated.

Standardize across teams. Best practice is to leverage a platform that gives your teams controlled access to algorithms, components, and infrastructure for reuse. This allows practitioners outside data science and machine learning to use ML for business decisions with effective results. An example is our use case for credit card fraud defense, where we use in-house and open-source ML algorithms hosted on a shared platform to detect anomalies and build defenses automatically (a minimal sketch follows below).
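To make the idea of a reusable anomaly-detection component concrete, here is a minimal sketch using scikit-learn's IsolationForest on synthetic transaction data; the features, contamination rate, and detector choice are illustrative stand-ins for whatever in-house or open-source algorithms a shared platform would actually host.

```python
# Minimal sketch: flag anomalous transactions with an off-the-shelf detector.
# The toy features and threshold are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Toy transaction features: [amount, seconds_since_last_transaction]
normal = rng.normal(loc=[50, 3600], scale=[20, 600], size=(1000, 2))
suspicious = rng.normal(loc=[900, 30], scale=[50, 10], size=(5, 2))
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(transactions)

flags = detector.predict(transactions)   # -1 = anomaly, 1 = normal
print("Flagged transaction indices:", np.where(flags == -1)[0])
```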

Leverage a platform for model operation. Custom ML model pipelines can be inefficient and unreliable, putting ML out of reach for non-specialist practitioners. Standardizing and reusing the same stack across all ML efforts on a cloud-native platform such as Kubernetes helps ensure that parameters and results are repeatable and discoverable. That repeatability also strengthens model audit and governance reviews.
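As an illustration of what "repeatable and discoverable" can mean day to day, here is a minimal sketch that records a run's parameters and metrics with MLflow; MLflow and the toy model are assumptions made for the example, not the specific stack described above.

```python
# Minimal sketch: log every run's inputs and outputs so results are repeatable,
# discoverable, and auditable. MLflow and the toy model are illustrative choices.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

params = {"n_estimators": 100, "max_depth": 5, "random_state": 0}

with mlflow.start_run():
    mlflow.log_params(params)                     # every run records its inputs...
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", accuracy)       # ...and its outputs, for audit and reuse
```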

Most organizations are still in the late stages of the experimental phase with ML and are looking for the right path toward maturity. Thinking now about how to operate an ML ecosystem is essential to reaching the point where business data becomes a predictive engine for your business and a fertile source of new revenue streams and business opportunities.
-- END ---
Contact Email [email protected]
Issued By nearlearn
Country India
Categories Education
Tags machine learning
Last Updated December 1, 2022