
ANFIS-Toolbox: A Python Package for Adaptive Neuro-Fuzzy Inference Systems

Open Access | Mar 2026

(1) Overview

Introduction

Modeling complex, ill-defined, or uncertain systems presents significant challenges for conventional mathematical approaches, such as differential equations. Fuzzy Inference Systems (FIS), which utilize fuzzy “if–then” rules, have emerged as a powerful alternative capable of modeling qualitative aspects of human knowledge and reasoning processes without relying on precise quantitative analyses [1]. The Adaptive Neuro-Fuzzy Inference System (ANFIS), introduced by [1], represents an influential hybrid architecture that implements an FIS within the framework of adaptive neural networks. ANFIS combines the interpretability of fuzzy logic with the learning capabilities of neural networks, allowing the construction of input–output mappings based on both prior human knowledge (in the form of fuzzy rules) and input–output data. This capability has made ANFIS a valuable tool for modeling nonlinear functions, identifying systems, and predicting time series across various domains [1].

Recent studies continue to demonstrate the enduring relevance of the ANFIS across a wide range of scientific and engineering applications. In the field of optimization and hybrid modeling, research combining ANFIS with evolutionary algorithms has achieved remarkable improvements in predictive accuracy and model robustness [2, 3, 4]. In control systems and fault detection, ANFIS remains a preferred approach for capturing nonlinear system dynamics and enhancing adaptive control strategies [5, 6, 7, 8]. In cybersecurity and information technology, ANFIS is increasingly leveraged for robust classification and intelligent resource management [9, 10, 11]. Likewise, applications in forecasting and decision support demonstrate the model’s capacity to integrate data-driven and rule-based reasoning for effective decision-making in uncertain environments [12, 13, 14]. The sustained research interest reflected in this recent literature confirms that ANFIS continues to be a robust and versatile paradigm in intelligent systems. Within this context, the ANFIS-Toolbox provides an essential open-source framework that enables researchers and practitioners to implement, extend, and benchmark ANFIS-based models efficiently, thereby fostering transparency, reproducibility, and further innovation in this active field.

Despite the proven effectiveness of ANFIS, its practical adoption can be hindered by the scarcity of implementations that are simultaneously accessible, flexible, and well-integrated into modern software ecosystems. Historically, many implementations have been tied to proprietary environments like MATLAB, whose Fuzzy Logic Toolbox [15] provides established ANFIS functionality but requires specific licensing and operates outside the rapidly growing Python ecosystem. While powerful, reliance on such environments can limit accessibility and integration possibilities for researchers primarily working in Python [16]. Furthermore, existing implementations, whether proprietary or open-source, may require complex dependencies or lack high-level interfaces that facilitate rapid prototyping and integration into contemporary machine learning workflows. Specifically within the Python ecosystem, which dominates data science and machine learning, there is a need for an ANFIS tool that is native, free from heavy external dependencies, and offers a familiar user experience for practitioners accustomed to libraries like scikit-learn [17]. The lack of such a tool represents a barrier for researchers and engineers who wish to apply ANFIS within their preferred Python environment without incurring licensing costs, dependency overhead, or steep learning curves associated with less integrated tools.

While the effectiveness of ANFIS is well-established, the landscape of Python implementations is fragmented, with libraries differing in features, dependencies, and levels of software verification. Libraries such as X-ANFIS [18] and ANFISpy [19] utilize deep learning frameworks as their base, offering gradient-based optimization, but this can introduce significant computational overhead. The quality of testing varies: X-ANFIS possesses basic unit tests with reported coverage around 70%, whereas ANFISpy has few visible tests and no coverage report. Other implementations like lazuardy-anfis [20] appear to have less active maintenance and lack documented tests. Several older libraries have been discontinued, although some contained simple unit tests. Recent initiatives like Scikit-ANFIS, while adopting a scikit-learn compatible API, currently provide limited documentation and no evidence of formal testing. This fragmentation, along with differing dependencies, APIs, and, crucially, variability in software quality, underscores the need for a Python solution that is actively maintained, offers a familiar API, supports flexible training methods, and demonstrates a strong commitment to testing and verification.

To address this gap, we introduce ANFIS-Toolbox, a Python package designed to offer an accessible, flexible, and modern implementation of ANFIS. Unlike alternative libraries that depend on heavy frameworks or exhibit limited or no test coverage, ANFIS-Toolbox is lightweight, has minimal external dependencies, and emphasizes software quality. The package provides high-level estimators (ANFISRegressor, ANFISClassifier) with an API consistent with scikit-learn conventions, ensuring a smooth learning curve for users familiar with that ecosystem.

A key strength of ANFIS-Toolbox lies in its modular architecture and flexible optimization options: it supports the classic hybrid learning procedure [1], which combines gradient descent and least-squares estimation (LSE), as well as modern gradient-based optimizers such as Root Mean Square Propagation (RMSProp) [21], Adaptive Moment Estimation (Adam) [22], and Stochastic Gradient Descent (SGD) [23]. Additionally, it allows more advanced strategies, including particle swarm optimization (PSO) [24] and hybrid schemes that combine Adam with LSE, providing users with multiple avenues to balance convergence speed, stability, and predictive performance. The project also prioritizes reproducibility and reliability through extensive unit test coverage and detailed documentation, offering a level of verification and maintainability often absent in other implementations.
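As a concrete illustration of how such gradient-based trainers adjust parameters, the sketch below implements the standard Adam update rule [22] in NumPy. It is an illustration of the published algorithm, not the toolbox's internal implementation; all names are hypothetical.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moment estimates, bias correction, scaled step."""
    m = b1 * m + (1 - b1) * grad        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2     # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1**t)             # bias correction for the zero initialization
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0, -2.0])
m = v = np.zeros_like(theta)
theta, m, v = adam_step(theta, np.array([0.5, -0.5]), m, v, t=1)
# After the first step, each parameter moves by ~lr in the direction opposing its gradient.
```

In a premise-parameter context, `theta` would hold membership function parameters (e.g., Gaussian centers and widths) flattened into one vector, with `grad` obtained from the model's backward pass.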

Implementation and Architecture

ANFIS-Toolbox is implemented in Python, utilizing NumPy [25] for numerical operations to remain lightweight and avoid dependencies on large deep learning frameworks. The architecture is designed around modularity, separating concerns between high-level user-facing estimators and the underlying ANFIS computational graph. The core philosophy emphasizes providing familiar scikit-learn-style interfaces built on a flexible and extensible ANFIS implementation. The high-level architecture facilitates a clear workflow from user configuration to model training, as illustrated in Figure 1.

Figure 1

Training and evaluation flow in an estimator. The estimator orchestrates model building and optimization during .fit(), optionally using validation data, and evaluates performance on test data via .evaluate().

User code interacts primarily with the estimators (ANFISRegressor and ANFISClassifier). These estimators translate user-defined configurations (such as membership function types and training parameters) into a low-level ANFIS model using helper utilities. The ANFISBuilder class handles the creation of membership functions and the assembly of the rule base. Subsequently, the selected optimizer, implemented within the anfis_toolbox.optim module, interacts directly with the low-level model (TSKANFIS or TSKANFISClassifier) to perform the training iterations. Supporting modules such as estimator_utils, metrics, and losses provide shared infrastructure for input validation, performance evaluation, and objective function definition, respectively.

Model orchestration is handled by TSKANFIS (for regression) and TSKANFISClassifier (for classification) in anfis_toolbox.model. These classes connect the ANFIS layers, expose forward and backward methods for training, manage internal gradient buffers, and provide methods to access, configure, and update parameters. This abstraction allows optimizers to interact with the model in a generic way, without requiring knowledge of the internal layer structure.

Builders (anfis_toolbox.builders) and configuration objects (anfis_toolbox.config) bridge the gap between the high-level estimators and the low-level model. ANFISBuilder centralizes the logic for creating membership functions based on various strategies (e.g., grid partitioning, Fuzzy C-Means clustering, random initialization) and maps string identifiers (like "gaussian", "gbellmf") to specific classes defined in anfis_toolbox.membership. It also handles the configuration of explicit rule sets and produces a fully configured TSKANFIS or TSKANFISClassifier instance via its build() method. ANFISConfig provides a serializable representation of the model’s input specifications and training defaults, aiding reproducibility and deployment.
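To illustrate what a grid-partitioning strategy produces, the sketch below spaces Gaussian centers evenly over an input's range and picks a width so that adjacent functions cross at a membership of 0.5. This is a conceptual example of the general technique; ANFISBuilder's actual heuristics may differ.

```python
import numpy as np

def grid_partition(x_min, x_max, n_mfs):
    """Evenly spaced Gaussian centers over [x_min, x_max], with sigma chosen
    so that adjacent membership functions cross at a membership of 0.5."""
    centers = np.linspace(x_min, x_max, n_mfs)
    spacing = (x_max - x_min) / (n_mfs - 1)
    # Solving exp(-(spacing/2)^2 / (2 sigma^2)) = 0.5 gives:
    sigma = spacing / np.sqrt(8.0 * np.log(2.0))
    return centers, sigma

centers, sigma = grid_partition(-2.0, 2.0, 3)  # centers: [-2., 0., 2.]
```

The crossing-at-0.5 convention is a common default because it guarantees that every input value activates at least one membership function appreciably.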

Membership functions are implemented as classes within anfis_toolbox.membership (e.g., GaussianMF, BellMF, SigmoidMF). Each class inherits from MembershipFunction and provides methods for evaluation and derivative calculation, along with parameter management interfaces used during backpropagation.
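The sketch below shows the general shape of such a class for the Gaussian case. The class name, constructor, and method names are illustrative assumptions, not the toolbox's exact API; only the evaluate/derivative responsibility mirrors what the text describes.

```python
import numpy as np

class GaussianMFSketch:
    """Illustrative Gaussian membership function exposing the evaluation and
    derivative interface described in the text (names are assumptions)."""

    def __init__(self, center, sigma):
        self.center, self.sigma = center, sigma

    def evaluate(self, x):
        # mu(x) = exp(-(x - c)^2 / (2 sigma^2))
        return np.exp(-((x - self.center) ** 2) / (2.0 * self.sigma**2))

    def derivative(self, x):
        # d(mu)/dx, used when gradients are propagated through Layer 1
        return self.evaluate(x) * (self.center - x) / self.sigma**2

mf = GaussianMFSketch(center=0.0, sigma=1.0)
mu = mf.evaluate(0.0)  # 1.0 at the center
```

A real implementation would additionally expose partial derivatives with respect to the parameters (center, sigma) so that premise parameters can be updated during backpropagation.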

The high-level estimators (ANFISRegressor, ANFISClassifier) orchestrate the entire process. Their fit methods utilize helpers from anfis_toolbox.estimator_utils for input validation, instantiate the ANFISBuilder, build the low-level model, select and instantiate the appropriate trainer from anfis_toolbox.optim based on user input (e.g., “hybrid,” “adam,” or a custom BaseTrainer subclass), and invoke the trainer’s fit method. Training history is stored for inspection. Persistence is handled via save and load methods, which use pickle to serialize the fitted estimator and its underlying model state. Opt-in logging provides feedback during training.


The core ANFIS computational graph—defined in the model and its layers—follows the layered architecture originally proposed by [1], as illustrated in Figure 2.

Figure 2

ANFIS model with layers.

Each layer performs a specific function in the fuzzy inference process, as summarized below:

  • Layer 1 (Fuzzification): Implemented by MembershipLayer, which wraps the concrete membership function objects (defined in membership). This layer computes the degree of membership of each input with respect to each linguistic label. It caches raw inputs and activations to facilitate efficient gradient calculation during backpropagation.

  • Layer 2 (Rule Antecedent): The RuleLayer calculates the firing strength of each rule by applying a T-norm operator (product by default) to the membership degrees from the previous layer. It expands the Cartesian product of membership indices or uses a user-defined subset of rules. Derivatives are propagated back by reusing cached activations.

  • Layer 3 (Normalization): The NormalizationLayer computes the ratio of each rule’s firing strength to the sum of all firing strengths, yielding normalized firing strengths. It uses numerically stable calculations and provides a Jacobian-vector product in its backward method for efficient gradient handling by trainers.

  • Layer 4 (Rule Consequent): Implemented by ConsequentLayer (for regression) and ClassificationConsequentLayer (for classification). These layers compute the output of each rule, typically a linear combination of the inputs plus a constant term (Takagi-Sugeno type). The layers augment inputs with biases and aggregate the rule outputs weighted by the normalized firing strengths. The classification variant maps outputs to logits for subsequent probability calculation.

  • Layer 5 (Aggregation): Usually a single node that sums the outputs of all consequent layer nodes, producing the final ANFIS output. This is handled within the TSKANFIS and TSKANFISClassifier model classes.
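The five layers can be traced end-to-end in a compact NumPy sketch for a single sample. This is an illustrative re-derivation of the standard TSK-ANFIS forward pass, not the toolbox's internal code.

```python
import numpy as np
from itertools import product

def tsk_forward(x, centers, sigmas, theta):
    """One forward pass through the five ANFIS layers for a single sample x.
    centers/sigmas: (n_inputs, n_mfs); theta: (n_rules, n_inputs + 1)."""
    # Layer 1: fuzzification -- Gaussian membership degree of each input in each MF
    mu = np.exp(-((x[:, None] - centers) ** 2) / (2.0 * sigmas**2))
    # Layer 2: rule firing strengths -- product T-norm over the Cartesian product of MFs
    rules = list(product(range(centers.shape[1]), repeat=centers.shape[0]))
    w = np.array([np.prod([mu[i, j] for i, j in enumerate(r)]) for r in rules])
    # Layer 3: normalization -- each strength divided by the sum of all strengths
    w_bar = w / w.sum()
    # Layer 4: TSK consequents -- linear function of the inputs plus a bias term
    f = theta @ np.append(x, 1.0)
    # Layer 5: aggregation -- sum of rule outputs weighted by normalized strengths
    return np.dot(w_bar, f)

x = np.array([0.5, -0.5])
centers = np.array([[-1.0, 1.0], [-1.0, 1.0]])
sigmas = np.ones((2, 2))
theta = np.zeros((4, 3))
theta[:, 2] = 1.0  # every rule's consequent is the constant 1
y = tsk_forward(x, centers, sigmas, theta)  # ≈ 1.0, since the normalized weights sum to one
```

With two inputs and two membership functions each, the full Cartesian product yields four rules, matching the rule-base expansion described for Layer 2.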

The training infrastructure resides in the anfis_toolbox.optim package. BaseTrainer defines the common interface (fit, evaluate). Gradient-based trainers (HybridTrainer, HybridAdamTrainer, AdamTrainer, RMSPropTrainer, SGDTrainer) implement various optimization algorithms by interacting with the model’s forward, backward, and update_parameters methods. Notably, the hybrid trainers implement the classic ANFIS learning approach [1], alternating between gradient descent for premise parameters (membership functions) and LSE for consequent parameters, often accelerating convergence for regression tasks. PSOTrainer offers a gradient-free alternative using PSO. All trainers record metrics in a TrainingHistory object.
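The LSE half of the hybrid step exploits the fact that, with the premise parameters frozen, the model output is linear in the consequent coefficients, so they can be recovered in closed form. Below is a minimal sketch of that idea under simple assumptions (batch least squares via `numpy.linalg.lstsq`); the trainer's actual implementation may differ, e.g., by using recursive LSE.

```python
import numpy as np

def lse_consequents(X, y, w_bar):
    """Solve for TSK consequent coefficients in closed form.
    X: (n_samples, n_inputs); y: (n_samples,); w_bar: (n_samples, n_rules)
    normalized firing strengths. Each rule contributes w_bar * [x, 1] as regressors."""
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])  # inputs augmented with a bias column
    # Design matrix: one block of strength-weighted regressors per rule
    A = np.hstack([w_bar[:, [r]] * Xb for r in range(w_bar.shape[1])])
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta.reshape(w_bar.shape[1], d + 1)

# With a single always-active rule, LSE reduces to an ordinary linear fit:
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 2.0 * X[:, 0] + 3.0 * X[:, 1] + 1.0
theta = lse_consequents(X, y, np.ones((50, 1)))  # ≈ [[2., 3., 1.]]
```

Because this closed-form solve replaces many gradient steps on the consequent parameters, alternating it with gradient descent on the premise parameters is what gives the hybrid procedure its characteristic fast convergence on regression tasks.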

The architecture inherently supports variants and extensions. The distinction between regression and classification models allows for task-specific consequent layers and loss functions. Users can customize membership functions per input, either by specifying parameters or providing explicit MembershipFunction instances. The rule base can be pruned from the full Cartesian product by supplying a specific list of rule indices. Furthermore, the modular design allows users to implement and pass custom BaseTrainer subclasses or custom loss functions to the estimators for advanced optimization strategies.
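Pruning the rule base from the full Cartesian product amounts to selecting a subset of membership-function index tuples. The selection criterion below is purely hypothetical; in practice users would pass whichever rule indices their application requires.

```python
from itertools import product

# Full rule base for 2 inputs with 3 MFs each: the Cartesian product of MF indices
full_rules = list(product(range(3), repeat=2))  # 9 rules: (0, 0), (0, 1), ...

# Pruned rule base: keep only rules whose MF indices agree (a hypothetical criterion)
pruned = [r for r in full_rules if r[0] == r[1]]  # [(0, 0), (1, 1), (2, 2)]
```

Pruning matters because the full rule base grows exponentially with the number of inputs, so restricting it keeps both the consequent parameter count and the inference cost manageable.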

Quality Control

The ANFIS-Toolbox package underwent rigorous testing to ensure correctness, robustness, and compatibility. Quality control included unit and integration tests, compatibility checks with scikit-learn, and adherence to modern Python coding standards.

Testing Levels and Tools

A comprehensive test suite was implemented using pytest to validate individual components (membership functions, layers, optimizers) as well as the integrated functionality of the high-level estimators (ANFISRegressor, ANFISClassifier). Tests cover correct outputs, gradient computations, and parameter updates, ensuring that the estimators operate as expected within scikit-learn pipelines. Execution and environment management were handled using hatch. Code coverage was measured with coverage.py, achieving 100% line coverage, demonstrating thorough validation of the codebase.
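A typical gradient-correctness test of the kind described compares an analytic derivative against a central finite difference. Below is a minimal pytest-style sketch; the functions are stand-ins for illustration, not the toolbox's actual test code.

```python
import numpy as np

def gaussian(x, c, s):
    """Gaussian membership value mu(x) = exp(-(x - c)^2 / (2 s^2))."""
    return np.exp(-((x - c) ** 2) / (2.0 * s**2))

def gaussian_dx(x, c, s):
    """Analytic derivative d(mu)/dx."""
    return gaussian(x, c, s) * (c - x) / s**2

def test_gaussian_gradient_matches_finite_difference():
    x, c, s, h = 0.7, 0.0, 1.0, 1e-6
    # Central difference has O(h^2) error, so the two should agree very closely
    numeric = (gaussian(x + h, c, s) - gaussian(x - h, c, s)) / (2.0 * h)
    assert abs(gaussian_dx(x, c, s) - numeric) < 1e-8

test_gaussian_gradient_matches_finite_difference()
```

Tests of this shape catch sign errors and mismatched chain-rule factors early, which is particularly valuable in a layered model where a wrong derivative in one layer silently corrupts all upstream updates.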

Testing Environments

Compatibility was verified across multiple Python versions (3.10–3.14) and operating systems (Ubuntu, Windows, macOS). Automated workflows using GitHub Actions ensure consistent testing across platforms.

Code Quality

High code quality is enforced via ruff, providing comprehensive linting, formatting, and static analysis to maintain adherence to modern Python best practices. Static type checking is performed using mypy, and security vulnerabilities are monitored with bandit. The Google docstring convention is enforced through ruff’s integration with pydocstyle. All checks are automated using pre-commit hooks and integrated into the CI/CD pipeline, ensuring consistent style and readability throughout development.

User Verification

Users can quickly validate installation and basic functionality using the examples included in the documentation. These examples demonstrate core tasks such as model fitting and prediction. A minimal usage example:


import numpy as np
from anfis_toolbox import ANFISRegressor
 
X = np.random.uniform(-2, 2, (100, 2))
y = X[:, 0]**2 + X[:, 1]**2
 
model = ANFISRegressor()
model.fit(X, y)
metrics = model.evaluate(X, y)

Expected output:


ANFISRegressor evaluation:
  mse: 0.000408
  rmse: 0.020205
  mae: 0.016884
  median_absolute_error: 0.014638
  mean_bias_error: -0.001231
  max_error: 0.047849
  std_error: 0.020167
  explained_variance: 0.999854
  r2: 0.999853
  mape: 1.447688
  smape: 1.347030
  msle: 0.000060
  pearson: 0.999927

Successful execution and reasonable output confirm basic operation. Developers can run the full test suite locally using hatch commands (e.g., hatch test), providing thorough validation of the package’s integrity.

(2) Availability

Operating System

The software is developed in Python and has been tested on Ubuntu, Windows, and macOS. It is expected to run on any operating system that supports the required Python version.

Programming Language

The software is developed using Python v3.10 or later, a high-level, interpreted language known for its readability, versatility, and extensive ecosystem of libraries [16]. Python provides robust support for scientific computing, data analysis, and machine learning, making it well-suited for implementing adaptive and intelligent systems like ANFIS models.

Additional System Requirements

Standard requirements for running Python and NumPy applications. No specific high-performance hardware (e.g., GPU) is required. Sufficient RAM is needed, depending on the size of the dataset being processed.

Dependencies

The software depends on NumPy (version ≥ 1.23, <3.0), a fundamental Python library for numerical computation. NumPy provides efficient array operations, linear algebra routines, and mathematical functions that form the backbone of data manipulation and numerical analysis within the software [25].

List of Contributors

All authors contributed to the software.

Software Location

Archive

Code repository

Language

The repository, software code, and documentation are in English.

(3) Reuse Potential

ANFIS-Toolbox is designed for researchers, students, and practitioners seeking a flexible and accessible tool for applying ANFIS within the Python ecosystem. Its primary use cases involve developing models for non-linear regression and classification tasks where the interpretability of fuzzy rules combined with the learning capability of neural networks is advantageous. The scikit-learn-like API of its high-level estimators (ANFISRegressor, ANFISClassifier) makes it particularly suitable for users familiar with standard machine learning workflows in Python, allowing for rapid prototyping and integration into existing data analysis pipelines.

Beyond its core machine learning applications, the toolbox has potential for reuse in various fields. Researchers in control systems can leverage it for system identification and adaptive control, building on the original applications of ANFIS. It can also be applied to time series prediction, signal processing, and pattern recognition tasks. Furthermore, due to its relatively simple, dependency-light nature and clear API, ANFIS-Toolbox serves as a valuable educational tool for teaching concepts of fuzzy logic, neural networks, and hybrid intelligent systems in university courses or workshops.

The software is designed with modularity and extensibility in mind. Researchers can easily modify or extend its functionality. For instance, new membership function families can be added by subclassing MembershipFunction and implementing the required methods. Similarly, users can implement custom training strategies by subclassing BaseTrainer or providing bespoke loss functions to the estimators. The separation between the high-level estimators and the low-level ANFIS model allows advanced users to interact directly with the core model components if needed.
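As a sketch of the custom-trainer extension point, the toy class below follows the fit contract described above with a gradient-free random search. All names and signatures here are illustrative assumptions (including the stand-in base class), not the toolbox's actual BaseTrainer API.

```python
import numpy as np

class BaseTrainerSketch:
    """Stand-in for the trainer interface described in the text;
    the real BaseTrainer's signatures may differ."""
    def fit(self, model, X, y):
        raise NotImplementedError

class RandomSearchTrainer(BaseTrainerSketch):
    """Toy gradient-free strategy: perturb parameters, keep improvements."""
    def __init__(self, n_iter=200, step=0.1, seed=0):
        self.n_iter, self.step = n_iter, step
        self.rng = np.random.default_rng(seed)

    def fit(self, model, X, y):
        params = model["params"]
        best = np.mean((model["predict"](params, X) - y) ** 2)
        for _ in range(self.n_iter):
            trial = params + self.rng.normal(scale=self.step, size=params.shape)
            loss = np.mean((model["predict"](trial, X) - y) ** 2)
            if loss < best:  # greedy: accept only improving perturbations
                params, best = trial, loss
        model["params"] = params
        return best

# A toy "model" with one slope parameter, fit to y = 2x
X = np.linspace(-1.0, 1.0, 20)
y = 2.0 * X
model = {"params": np.zeros(1), "predict": lambda p, X: X * p[0]}
final_loss = RandomSearchTrainer().fit(model, X, y)
```

A real custom trainer would instead operate on the TSKANFIS parameter accessors described earlier; the point of the sketch is the shape of the extension, a self-contained class that owns the optimization loop and reports its final loss.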

Contributions from the community are welcome. Potential contributors interested in extending the software, fixing bugs, or improving documentation should engage through the project’s repository. The preferred methods for contact and contribution are opening issues for bug reports or feature requests, participating in discussions, or submitting pull requests following the guidelines outlined in the repository.

Support for ANFIS-Toolbox is primarily provided through the documentation [26], including the Developer Guide, and examples demonstrating usage. While there is no formal, dedicated support service, users can report issues or ask questions via the GitHub repository’s issue tracker. The maintainers will address these as availability permits.

Acknowledgements

The authors would like to acknowledge the Digital Video Applications Research Center – NPE-LAVID, Federal University of Paraíba for their institutional and technical support during the development of this project. We also extend our gratitude to our colleagues and collaborators for their valuable insights, feedback, and encouragement throughout the research and software development process.

Competing Interests

The authors have no competing interests to declare.

DOI: https://doi.org/10.5334/jors.638 | Journal eISSN: 2049-9647
Language: English
Submitted on: Nov 4, 2025 | Accepted on: Feb 10, 2026 | Published on: Mar 2, 2026
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Daniel França, Manuella Aschoff, Tiago França, Danniel Macedo, Vitor França, Lucidio Cabral, Alisson Brito, Clauirton Siebra, Tiago Araujo, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.