Emotion Learning: Solving a Shortest Path Problem in an Arbitrary Deterministic Environment in Linear Time with an Emotional Agent

Open Access | Oct 2008

References

  1. Botelho L. M. and Coelho H. (1998). Adaptive agents: Emotion learning, Association for the Advancement of Artificial Intelligence, pp. 19-24.
  2. Bozinovski S. (1995). Consequence Driven Systems: Teaching, Learning and Self-Learning Agents, Gocmar Press, Bitola.
  3. Bozinovski S. (1982). A self-learning system using secondary reinforcement, in: R. Trappl (Ed.), Cybernetics and Systems Research, North-Holland Publishing Company, pp. 397-402.
  4. Bozinovski S. and Schoell P. (1999a). Emotions and hormones in learning. GMD Forschungszentrum Informationstechnik GmbH, Sankt Augustin.
  5. Bozinovski S. (1999b). Crossbar Adaptive Array: The first connectionist network that solved the delayed reinforcement learning problem, in: A. Dobnikar, N. Steele, D. Pearson, R. Albrecht (Eds.), Artificial Neural Nets and Genetic Algorithms, Springer, pp. 320-325. DOI: 10.1007/978-3-7091-6384-9_54
  6. Bozinovski S. (2002). Motivation and emotion in anticipatory behaviour of consequence driven systems, Proceedings of the Workshop on Adaptive Behaviour in Anticipatory Learning Systems, Edinburgh, Scotland, pp. 100-119.
  7. Bozinovski S. (2003). Anticipation driven artificial personality: Building on Lewin and Loehlin, in: M. Butz, O. Sigaud, P. Gerard (Eds.), Anticipatory Behaviour in Adaptive Learning Systems, LNAI 2684, Springer-Verlag, Berlin/Heidelberg, pp. 133-150.
  8. Glaser J. (1963). General Psychopathology, Narodne Novine, Zagreb, (in Croatian).
  9. Bang-Jensen J. and Gutin G. (2001). Digraphs: Theory, Algorithms and Applications, Springer-Verlag, London.
  10. Koenig S. and Simmons R. (1992). Complexity Analysis of Real-Time Reinforcement Learning Applied to Finding Shortest Paths in Deterministic Domains, Carnegie Mellon University, Pittsburgh.
  11. Peng J. and Williams R. (1993). Efficient learning and planning with the Dyna framework, Proceedings of the 2nd International Conference on Simulation of Adaptive Behaviour: From Animals to Animats, Hawaii, pp. 437-454.
  12. Petruseva S. and Bozinovski S. (2000). Consequence programming: Algorithm "at subgoal go back". Mathematics Bulletin, Book 24 (L), pp. 141-152.
  13. Petruseva S. (2006a). Comparison of the efficiency of two algorithms which solve the shortest path problem with an emotional agent, Yugoslav Journal of Operations Research, 16(2): 211-226. DOI: 10.2298/YJOR0602211P
  14. Petruseva S. (2006b). Consequence programming: Solving a shortest path problem in polynomial time using emotional learning, International Journal of Pure and Applied Mathematics, 29(4): 491-520.
  15. Sutton R. and Barto A. (1998). Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA. DOI: 10.1109/TNN.1998.712192
  16. Whitehead S. (1991). A complexity analysis of cooperative mechanisms in reinforcement learning, Proceedings of AAAI, pp. 607-613.
  17. Whitehead S. (1992). Reinforcement learning for the adaptive control of perception and action, Ph.D. thesis, University of Rochester.
  18. Wittek G. (1995). Me, Me, Me, the Spider in the Web. The Law of Correspondence, and the Law of Projection, Verlag DAS WORT, GmbH, Marktheidenfeld-Altfeld.
DOI: https://doi.org/10.2478/v10006-008-0037-4 | Journal eISSN: 2083-8492 | Journal ISSN: 1641-876X
Language: English
Page range: 409 - 421
Published on: Oct 6, 2008
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2008 Silvana Petruseva, published by University of Zielona Góra
This work is licensed under the Creative Commons License.

Volume 18 (2008): Issue 3 (September 2008)