AI is terrific and amazing – except when it’s not.
While AI can drive cost efficiencies, elevate customer experience, and reduce fraud, significant concerns remain around hallucinations, bias, a general lack of common sense, and difficulty in explaining how AI reaches conclusions. Thus, many financial institutions and other organizations are now exploring something called “hybrid AI” to mitigate these issues.
Each of the currently popular AI models has significant strengths, but each also has weaknesses or limitations. Before discussing some of these models, a caveat: I'm not attempting a comprehensive taxonomy of AI models, nor will I draw the finer-grained distinctions among sub-models that a data scientist would.
Generative AI (GenAI) and its language-oriented subset, large language models (LLMs), have amazing abilities to generate new text, images, and audio. Their value in automating content creation is huge. However, the rate of incorrect or misleading GenAI content creation, generally referred to as hallucinations, is deeply concerning.
Hallucination rates vary by benchmark test. One independent researcher found that even with a relatively easy request to summarize specific news articles, chatbots generated incorrect information between 3% and 27% of the time. One of OpenAI's benchmarks shows a 79% hallucination rate!
GenAI is also very limited in its ability to explain how it arrives at a response or to provide sources and citations. It operates as a black box, which makes it impossible even for data scientists to explain the basis for correct responses or for hallucinations.
Machine Learning (ML) uses statistical models to make predictions or forecasts — without rules being specified in advance. ML systems learn from historical training data and can adapt based on experience and exposure to new training data. This ability to learn is great for areas ranging from loan reviews to fraud identification and medical diagnosis. At the same time, ML output can present ethical concerns based on biases present in the data used to train its models.
AI neural networks can conduct elaborate pattern-matching analysis of unstructured data with a model that is somewhat similar to biological neural networks. For instance, they are particularly good at image, video, and audio recognition. However, like GenAI, neural networks operate as a black box, and it is impossible to determine the basis for output responses or recommendations.
Symbolic AI derives outputs from explicitly defined rules and logic, which are typically provided up front by experts. It is particularly good for structured data where rules are clear and concise, such as is usually the case for compliance and legal requirements.
Although symbolic AI provides a good means of capturing expert knowledge, it is necessarily very limited in scope. As there is no learning function, it does not automatically adapt to environmental changes without new programming, nor does it deal well with ambiguity.
Per the classic definition, hybrid AI integrates two or more complementary AI models into a single solution to mitigate the limitations of the standalone models. The combination draws on each model's strengths while significantly reducing its weaknesses. Some analysts expand this definition to include solutions that combine a single AI model with integrated human experts.
Finally, the term hybrid AI is sometimes used to refer to the integration of in-house, custom AI solutions with commercial AI systems from AI vendors and industry-specific IT service providers, such as FinTechs, InsurTechs, and HealthTechs.
Many analysts consider hybrid AI the ‘next big thing’ in artificial intelligence, following the current emphasis on agentic AI.
Having defined hybrid AI, let’s make it real by looking at some of the top model integrations.
Integrating LLMs with ML can help both models in a variety of ways. Explainability can be enhanced by making ML output more human readable and by making LLM output more precise and sometimes more up to date. Both LLMs and ML can assist each other with sentiment analysis, the determination of emotions. This can be particularly helpful in customer service chatbots.
Another approach used to deal with GenAI’s limitations is to integrate it with a team of human experts. For example, an LLM-based chatbot used for customer support might reach out to the first available human expert when issues are too complex for the LLM or when sentiment analysis makes clear that an angry customer needs to talk to a human being to help defuse the customer’s negative emotions.
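The escalation pattern described above can be sketched as a simple routing layer. This is a minimal illustration, assuming hypothetical sentiment and confidence scores supplied by upstream models; it is not tied to any particular chatbot framework, and the thresholds are placeholders.

```python
from dataclasses import dataclass

@dataclass
class BotTurn:
    reply: str         # the LLM's draft answer
    confidence: float  # 0.0-1.0, LLM's self-assessed confidence (hypothetical)
    sentiment: float   # -1.0 (angry) to 1.0 (happy), from a sentiment model

SENTIMENT_FLOOR = -0.5   # escalate if the customer sounds angry
CONFIDENCE_FLOOR = 0.6   # escalate if the LLM is unsure of its answer

def route(turn: BotTurn) -> str:
    """Decide whether the bot replies or a human agent takes over."""
    if turn.sentiment < SENTIMENT_FLOOR:
        return "human"   # defuse negative emotions with a person
    if turn.confidence < CONFIDENCE_FLOOR:
        return "human"   # issue too complex for the LLM
    return "bot"
```

In a real deployment, both scores would come from trained models and the thresholds would be tuned against actual support transcripts.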
The combination of ML and expert systems is frequently used for fraud detection/prevention and for explicitly putting compliance and ethical constraints on ML output. The rules-based logic of expert systems can also improve ML explainability.
For example, financial services organization HSBC uses a hybrid AI system to combine the ML identification of unusual customer spending with predefined symbolic AI rules for flagging potentially fraudulent transactions. Through this, experts can follow up on questionable transactions with an investigation. This dual-model approach is helpful for preventing fraud and avoiding false positives.
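The dual-model pattern described above can be sketched generically: an ML model supplies an anomaly score, and explicit symbolic rules decide what gets flagged for investigation. Everything here, including the scoring stub and the rule thresholds, is an illustrative assumption and not HSBC's actual system.

```python
def anomaly_score(amount: float, customer_avg: float) -> float:
    """Stand-in for an ML model: how far does this transaction deviate
    from the customer's usual spending? (Real systems use trained models.)"""
    if customer_avg <= 0:
        return 1.0
    return min(amount / (customer_avg * 10), 1.0)

def flag_transaction(amount: float, customer_avg: float,
                     country: str, home_country: str) -> bool:
    """Symbolic AI layer: explicit, auditable rules an expert can read,
    adjust, and cite when explaining why a transaction was flagged."""
    score = anomaly_score(amount, customer_avg)
    if score > 0.8:                              # rule 1: extreme deviation
        return True
    if score > 0.5 and country != home_country:  # rule 2: unusual AND foreign
        return True
    return False
```

Because the final decision runs through readable rules rather than the ML model alone, investigators can explain each flag, which helps with both explainability and false-positive review.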
A practical example of hybrid AI involving neural networks is autonomous vehicles. While the neural network handles image recognition and most of the driving, the road maps on which the driving is based are effectively an expert system.
Few organizations can afford totally in-house AI development, but many are concerned about the security and privacy of using commercial, cloud-based platforms. For instance, organizations are particularly concerned about the leakage of confidential information through AI queries, internal training data, and AI results.
Hybrid AI provides a means to leverage external AI platforms with their cost and time-to-market advantages while keeping internal information local and secure. It can also leverage external market platforms while making strategic internal AI investments that represent intellectual property and potential competitive advantage.
Through this approach, organizations can gain the advantages of both the buy and the build models. Furthermore, to the extent that the in-house and external AI platforms use different AI models, they can also gain complementary AI model benefits.
While hybrid AI is relatively new and many implementations are custom, some commercial implementations are available. Given the diversity of AI models, no commercial system encompasses all types of hybrid AI. However, commercial hybrid AI systems do exist for particular AI model combinations.
Let me provide two examples.
Beyond Limits offers a hybrid AI system that promises to “transform your existing data and ML investments into an intelligent No to Low Code AI platform that combines GenAI with Neuro Symbolic reasoning to provide automated guidance and actions that enhance business results.”
The Celsior Explainable AI/ML Solution provides ML-based predictive modeling while providing strong explainability for business and other non-technical people. It can query information returned by an LLM and compare it to internal training data to help ensure the accuracy of the response. It also provides citations for the response so that experts can verify the accuracy of the content for themselves.
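The verification pattern just described, checking an LLM's answer against trusted internal data and attaching a citation, can be sketched generically. This is not Celsior's implementation; the knowledge base, the substring matching, and the response format are all illustrative assumptions.

```python
# Trusted internal reference data (illustrative entries only).
KNOWLEDGE_BASE = {
    "wire_transfer_limit": ("$50,000 per day", "Policy doc FIN-203"),
    "dispute_window": ("60 days from statement date", "Reg E summary"),
}

def verify_claim(topic: str, llm_answer: str) -> dict:
    """Compare an LLM's answer to internal data; return a verdict and a
    citation so a human expert can check the source directly."""
    if topic not in KNOWLEDGE_BASE:
        return {"verdict": "unverifiable", "citation": None}
    truth, source = KNOWLEDGE_BASE[topic]
    matches = truth.lower() in llm_answer.lower()
    return {"verdict": "supported" if matches else "contradicted",
            "citation": source}
```

A production system would use semantic matching rather than substring checks, but the structure is the same: the LLM generates, the internal data verifies, and the citation makes the result auditable.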
In addition to existing commercial hybrid AI systems, several organizations like Intel, IBM, and Mythic AI have hybrid AI chips under development.
As individual AI models continue to evolve, their limitations remain a concern — especially in high-stakes sectors like banking and financial services. Hybrid AI offers a strategic path forward, combining complementary strengths while mitigating risk. Financial institutions and other organizations need to start getting experience with hybrid AI and putting it into production.
Suffice it to say, those that adopt hybrid AI now can gain early benefits in competitive advantage, efficiency, compliance, and customer trust.
To explore the full set of strategic AI recommendations for financial institutions — including explainability, governance, and adoption frameworks — download our white paper, Banking on AI.