
Hybrid approaches in smart sensing for detecting buying intent: performance, reasoning, and real-world deployment

Open Access | Jan 2026

Figures & Tables

Figure 1: Visualization associated with Table 1.

Figure 2: Task vs model overview.

Figure 3: Conceptual architecture.

Figure 4: Unified taxonomy of buying intent detection methods, integrating DL, KG, RL, and MAS, based on prior surveys of the intent-modeling literature. DL, deep learning; KG, knowledge graphs; MAS, multi-agent systems; RL, reinforcement learning.

Figure 5: Chronological timeline showing how buying intent detection evolved across four method families (classical machine learning, DL, KG, and agentic LLM systems), with representative milestones marked at the year they appeared. DL, deep learning; KG, knowledge graphs; LLMs, large language models.

Figure 6: Visual taxonomy of AI techniques, mapping DL, KG, RL, and MAS. AI, artificial intelligence; DL, deep learning; KG, knowledge graphs; MAS, multi-agent systems; RL, reinforcement learning.

Figure 7: F1 score comparison across different approaches.

Figure 8: Radar chart comparing four AI techniques on accuracy, scalability, interpretability, and cost efficiency. AI, artificial intelligence.

Figure 9: Normalized performance comparison of DL, KG, MAS, and RL across F1 score, Hits@k, and Success@1 metrics. DL, deep learning; KG, knowledge graphs; MAS, multi-agent systems; RL, reinforcement learning.

Figure 10: Performance trade-off chart: accuracy vs cost/latency curve.

Figure 11: Research gap heatmap: techniques vs challenges.

Deployment-realism metrics

| Metric | Definition | Example |
|---|---|---|
| p95 latency (ms) | 95th-percentile inference time per query | Transformer = 240 ms; hybrid RL = 410 ms |
| Cost/1k tasks ($) | Total GPU + API cost per 1,000 predictions | $0.38 vs $0.62 |
| Grounded-answer rate (%) | Outputs verifiably supported by source data | 92% |
| Tool-success rate (%) | Successful external API/tool invocations | 88% |
| Success@1 (%) | Correct decision on the first attempt | 84% |
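
These metrics are straightforward to compute from per-query logs. Below is a minimal Python sketch, assuming latency, cost, and first-attempt correctness have been recorded per query; the arrays are illustrative placeholders, not the paper's measurements.

```python
# Sketch of the deployment-realism metrics above from hypothetical query logs.
import numpy as np

latencies_ms = np.array([180, 210, 240, 195, 260, 230, 450, 205])   # per-query latency
costs_usd = np.array([4e-4, 3e-4, 5e-4, 4e-4, 6e-4, 4e-4, 9e-4, 3e-4])  # per-query cost
first_try_correct = np.array([1, 1, 0, 1, 1, 1, 0, 1])              # 1 = correct on first attempt

p95_latency = np.percentile(latencies_ms, 95)   # p95 latency (ms)
cost_per_1k = costs_usd.mean() * 1000           # cost / 1k tasks ($)
success_at_1 = 100 * first_try_correct.mean()   # Success@1 (%)

print(f"p95 latency: {p95_latency:.0f} ms")
print(f"cost per 1k tasks: ${cost_per_1k:.2f}")
print(f"Success@1: {success_at_1:.0f}%")
```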

Quantitative ablation results for hybrid components

| Configuration | F1 score | AUPRC | Notes |
|---|---|---|---|
| Baseline (transformer only) | 0.78 | 0.81 | No external structure or policy learning |
| + Retrieval module | 0.82 | 0.85 | Improves context grounding |
| + KG module | 0.83 | 0.87 | Enhances reasoning and link precision |
| + KG + RL modules (full hybrid) | 0.86 | 0.89 | Best trade-off between accuracy and robustness |
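
For reference, the F1 and AUPRC columns can be reproduced for any configuration from ground-truth labels and model scores. A minimal scikit-learn sketch, using placeholder arrays rather than the paper's data:

```python
# F1 needs thresholded predictions; AUPRC works directly on the scores.
from sklearn.metrics import f1_score, average_precision_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # buying-intent labels
y_score = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3]   # model probabilities
y_pred = [int(s >= 0.5) for s in y_score]            # thresholded decisions

print("F1:", f1_score(y_true, y_pred))                     # F1 score column
print("AUPRC:", average_precision_score(y_true, y_score))  # AUPRC column
```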

Techniques and training details of different models

| Model | Training method | Loss function | Hardware used |
|---|---|---|---|
| LLaMA2, GPT-J, GPT-3.5-turbo | Supervised pre-training on large corpora | Cross-entropy loss | NVIDIA A100/V100 GPUs |
| TransE, DistMult, ComplEx, RotatE | KG embedding training | Margin ranking loss / logistic loss | NVIDIA Tesla V100 / GTX 1080 Ti |
| GNN + MAPPO | MARL (graph-structured environments) | Policy gradient loss + value loss | NVIDIA A100 GPU; CPU cluster for environment simulation |
| Graph transformer | Supervised/contrastive learning on graphs | Cross-entropy loss / contrastive loss | NVIDIA V100/A100 GPU |
| GCN-based anomaly detector | Supervised/semi-supervised GCN training | Binary cross-entropy / MSE loss | NVIDIA Tesla V100 / RTX 3090 |
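
To illustrate the margin ranking loss listed for the KG embedding models, here is a minimal PyTorch sketch in the style of TransE; the entity/relation counts, embedding dimension, and toy triples are assumptions for demonstration, not the surveyed setups.

```python
# One training step of a TransE-style KG embedding with margin ranking loss.
import torch
import torch.nn.functional as F

n_entities, n_relations, dim = 1000, 50, 64
ent = torch.nn.Embedding(n_entities, dim)
rel = torch.nn.Embedding(n_relations, dim)
opt = torch.optim.Adam(list(ent.parameters()) + list(rel.parameters()), lr=1e-3)

def distance(h, r, t):
    # TransE scores a triple (h, r, t) by how well h + r approximates t.
    return (ent(h) + rel(r) - ent(t)).norm(p=2, dim=-1)

h = torch.tensor([0, 1]); r = torch.tensor([0, 1]); t = torch.tensor([2, 3])
t_neg = torch.randint(0, n_entities, t.shape)   # corrupted tails as negatives

pos, neg = distance(h, r, t), distance(h, r, t_neg)
# Push positive triples at least `margin` closer than negatives.
loss = F.margin_ranking_loss(neg, pos, target=torch.ones_like(pos), margin=1.0)
opt.zero_grad(); loss.backward(); opt.step()
```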

Summary of recent approaches and models across different graph, knowledge, and robustness tasks

| Ref. | Approach | Model | Evaluation (Acc./Prec./Rec./F1) |
|---|---|---|---|
| [18] | CounterFact benchmark for paraphrased prompts | LLaMA2, GPT-J, GPT-3.5-turbo | Acc.: N/A; Prec.: 0.74; Rec.: 0.72; F1: 0.73 |
| [20] | Adaptive contrastive learning for KGE | TransE, DistMult, ComplEx, RotatE | FB15k: Prec. 0.36, Rec. 0.38, F1 0.37; WN18RR: Prec. 0.48, Rec. 0.50, F1 0.49; YAGO3: Prec. 0.55, Rec. 0.56, F1 0.555 |
| [3] | MAGEC | GNN + MAPPO | Prec.: 0.81; Rec.: 0.84; F1: 0.825 |
| [2] | KGTN (graph transformer + contrastive learning) | Graph transformer | Book: Prec. 0.68, Rec. 0.69, F1 0.6876; ML: Prec. 0.86, Rec. 0.87, F1 0.8559; Last.FM: Prec. 0.78, Rec. 0.77, F1 0.7753 |
| [31] | AGT | Graph transformer | Acc.: 0.982, 0.976; Prec.: 0.98; Rec.: 0.98; F1: 0.98 |
| [27] | FRGL | GCN-based anomaly detector | Acc.: 0.932; Prec.: 0.93; Rec.: 0.92; F1: 0.925 |
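
The F1 values in this table are harmonic means of precision and recall, so the composite figures are easy to verify; for example, the YAGO3 row (Prec. 0.55, Rec. 0.56) yields F1 of roughly 0.555:

```python
# F1 as the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.55, 0.56), 3))  # 0.555
```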

Task vs model overview: Models applied to different tasks and performance highlights

| Task | Models applied | Performance highlights |
|---|---|---|
| Counterfactual reasoning / NLP | LLaMA2, GPT-J, GPT-3.5-turbo | F1: 0.73; robust to paraphrased prompts; captures textual consistency and reasoning patterns |
| KG completion / link prediction | TransE, DistMult, ComplEx, RotatE | MRR: 0.355–0.557; Hits@10 improved; effectively predicts missing links in KG datasets (FB15k, WN18RR, YAGO3) |
| Multi-agent coordination | GNN + MAPPO | Improved average and worst-case node idleness in MARL scenarios; handles agent interactions and graph-based decision making |
| Graph-based recommendation / multi-intent prediction | Graph transformer | F1: 0.6876–0.8559; AUC: 0.79–0.93; captures global graph dependencies for multi-intent recommendation tasks |
| Anomaly detection in graphs / federated learning | GCN-based anomaly detector | F1: 0.925; AUC: 0.95; robust detection of anomalous nodes in federated or distributed networks |
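
For the ranking metrics quoted for KG completion, here is a minimal sketch of MRR and Hits@k, assuming each test triple yields the rank of the true entity among all candidates; the ranks below are made up for illustration.

```python
# Mean reciprocal rank and Hits@k over per-query ranks of the correct entity.
def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k=10):
    return sum(r <= k for r in ranks) / len(ranks)

ranks = [1, 3, 12, 2, 7, 1, 25, 4]   # rank of the true link per test triple
print(f"MRR: {mrr(ranks):.3f}")
print(f"Hits@10: {hits_at_k(ranks, 10):.3f}")
```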

Unified benchmark: Accuracy vs latency/cost vs practical success

| Model type | Representative paper | F1 | AUPRC | p95 latency (ms) | Cost/1k tasks ($) | Success@1 (%) |
|---|---|---|---|---|---|---|
| Transformer (BERT/LLM) | Zou et al. [2] | 0.86 | 0.88 | 240 | 0.38 | 82 |
| KG transformer | Wang et al. [8] | 0.78 | 0.80 | 310 | 0.42 | 85 |
| RL path-reasoner | Ma et al. [6] | 0.81 | 0.83 | 400 | 0.60 | 87 |
| Multi-agent system (MARL) | Goeckner et al. [3] | 0.83 | 0.84 | 450 | 0.64 | 88 |
| Hybrid KG + RL + LLM agent | Zhou et al. [23] | 0.85 | 0.86 | 480 | 0.70 | 91 |
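
Charts such as Figures 8, 9, and 10 typically min-max normalize each column so the techniques become comparable, inverting latency and cost so that higher is always better. A sketch using this table's own values:

```python
# Min-max normalization of the benchmark rows for a radar / trade-off chart.
import numpy as np

# columns: F1, AUPRC, p95 latency (ms), cost/1k ($), Success@1 (%)
rows = np.array([
    [0.86, 0.88, 240, 0.38, 82],  # Transformer (BERT/LLM)
    [0.78, 0.80, 310, 0.42, 85],  # KG transformer
    [0.81, 0.83, 400, 0.60, 87],  # RL path-reasoner
    [0.83, 0.84, 450, 0.64, 88],  # Multi-agent system (MARL)
    [0.85, 0.86, 480, 0.70, 91],  # Hybrid KG + RL + LLM agent
])
lower_is_better = [False, False, True, True, False]

norm = (rows - rows.min(axis=0)) / (rows.max(axis=0) - rows.min(axis=0))
for j, invert in enumerate(lower_is_better):
    if invert:
        norm[:, j] = 1 - norm[:, j]   # flip latency/cost so 1.0 = best
print(norm.round(2))
```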

Advantages and disadvantages of different techniques

| Technique | Advantages | Disadvantages |
|---|---|---|
| DL | Learns complex patterns automatically from large data; high accuracy in vision, NLP, and speech; scalable with GPUs/TPUs | Requires large labeled datasets; computationally expensive; often a “black box” with poor interpretability |
| KG | Provides structured, interpretable relationships; useful for reasoning and explainability; integrates heterogeneous data sources | Building and maintaining graphs is costly; struggles with incomplete/noisy data; hard to scale to massive, dynamic data |
| Multi-agent | Models distributed systems with autonomy; supports collaboration and competition; scalable for complex real-world tasks (e.g., traffic, robotics) | Coordination overhead; emergent behaviors can be unpredictable; reward structures are complex to design |
| RL | Excels in sequential decision-making; learns optimal policies via trial and error; achieved breakthroughs in games (Go, Atari) and robotics | Sample inefficient (needs many training episodes); sensitive to reward design; poor transferability across tasks |

Overview of AI/graph models: Usage and applications

| Model | Why used | Where used |
|---|---|---|
| LLaMA2, GPT-J, GPT-3.5-turbo | Pretrained language models with strong NLP capabilities; handle reasoning, paraphrasing, and text generation efficiently | Counterfactual reasoning, question answering, NLP tasks, and knowledge consistency evaluation |
| TransE, DistMult, ComplEx, RotatE | KG embedding models; capture relationships in graphs with low-dimensional embeddings | KG completion, link prediction, recommendation systems |
| GNN + MAPPO | Graph neural network combined with multi-agent proximal policy optimization; models agent interactions in graph-structured environments | MARL, traffic optimization, robotics, resource allocation in networks |
| Graph transformer | Leverages attention mechanisms on graph structures; captures global dependencies and relational patterns efficiently | Multi-intent recommendation, graph-based prediction tasks, link prediction, recommendation systems |
| GCN-based anomaly detector | Graph convolutional networks for learning node embeddings and detecting unusual patterns in graph data | FRGL, anomaly detection in networks, cybersecurity, fraud detection |
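
As a rough illustration of the GCN machinery behind the anomaly detector row, here is a minimal NumPy sketch of one graph convolution layer, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), on a toy graph; the weights and features are random placeholders, not the FRGL model or a trained detector.

```python
# One GCN propagation step: normalized adjacency times features times weights.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],   # toy 4-node adjacency matrix
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = rng.normal(size=(4, 8))   # node features
W = rng.normal(size=(8, 4))   # layer weights

A_hat = A + np.eye(4)                             # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)   # symmetric degree normalization
H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)  # ReLU

# An anomaly detector would stack such layers and score nodes, e.g. with a
# sigmoid head trained under binary cross-entropy (see the training table above).
print(H_next.shape)  # (4, 4)
```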
Language: English
Submitted on: Sep 15, 2025 | Published on: Jan 29, 2026

© 2026 Kuldeep Vayadande, Smita Sanjay Ambarkar, Viomesh Kumar Singh, Rahul Prakash Mirajkar, Sonali P. Bhoite, Amolkumar N. Jadhav, Rakhi Bharadwaj, Sanket Sunil Pawar, Yogesh Bodhe, Ganesh B. Dapke, published by Professor Subhas Chandra Mukhopadhyay
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.