Facial Composite System Using Real Facial Features

References

  1. SOLOMON, Ch., et al. 2008. EFIT-V: Interactive evolutionary strategy for the construction of photo-realistic facial composites. In: Genetic and Evolutionary Computation Conference. Available at: <https://www.researchgate.net/publication/220739679_EFIT-V-interactive_evolutionary_strategy_for_the_construction_of_photo-realistic_facial_composites>
  2. FROWD, Ch., et al. 2004. The process of facial composite production. Available at: <http://www.evofit.co.uk/wp-content/uploads/2014/03/Frowd-et-al.-2004.-The-process-of-facial-composite-production.doc>
  3. FACES: The ultimate composite software. Available at: <http://www.facesid.com/>
  4. FROWD, Ch., et al. 2004. EvoFIT: A holistic, evolutionary facial imaging technique for creating composites. ACM Transactions on Applied Perception (TAP), 1(1), pp. 19-39. Available at: <http://dl.acm.org/citation.cfm?id=1008725>. DOI: 10.1145/1008722.1008725
  5. SOLOMON, Ch., et al. EigenFIT: The generation of photographic-quality facial composites. Available at: <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.182.9356>
  6. LI, S.Z., JAIN, A.K. (eds). 2011. Handbook of Face Recognition. New York: Springer, 699 p. ISBN: 978-0-85729-931-4
  7. ZAHRADNIKOVA, B., DUCHOVICOVA, S., SCHREIBER, P. 2014. Facial composite system using genetic algorithm. In: Proceedings of the 9th International Doctoral Seminar (IDS 2014), pp. 270-274. ISBN 978-80-8096-195-4
  8. BAGHERIAN, E., RAHMAT, R.W.O. 2008. Facial feature extraction for face recognition: a review. In: International Symposium on Information Technology (ITSim 2008). IEEE. Available at: <http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4631649&tag=1>. DOI: 10.1109/ITSIM.2008.4631649
  9. TURK, M., PENTLAND, A. 1991. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), pp. 71-86.
  10. BELHUMEUR, P., et al. 1997. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/34.598228
  11. RYU, Y. 2001. Automatic extraction of eye and mouth fields from a face image using eigenfeatures and multilayer perceptrons. Pattern Recognition. DOI: 10.1016/S0031-3203(00)00173-4
  12. CRISTINACCE, D., COOTES, T. 2003. Facial feature detection using AdaBoost with shape constraints. In: 14th British Machine Vision Conference, Norwich, UK. DOI: 10.5244/C.17.24
  13. WISKOTT, L., et al. 1997. Face recognition by elastic bunch graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1007/3-540-63460-6_150
  14. TOYAMA, K., et al. 2002. Hierarchical wavelet networks for facial feature localization. In: IEEE International Conference on Automatic Face and Gesture Recognition.
  15. COOTES, T., et al. 2001. Active appearance models. IEEE Transactions on Pattern Analysis and Machine Intelligence. DOI: 10.1109/34.927467
  16. XIAO, J., et al. 2004. Real-time combined 2D+3D active appearance models. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
  17. SAYEED, A., et al. Detection of facial feature points using anthropometric face model.
  18. RITTER, G., WILSON, J. 1996. Handbook of Computer Vision Algorithms in Image Algebra. USA: CRC Press.
  19. VIOLA, P., JONES, M. 2001. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), vol. 1, pp. I-511-I-518.
  20. BRADSKI, G. 2000. The OpenCV Library. Dr. Dobb's Journal of Software Tools.
  21. HAAR, A. 1910. Zur Theorie der orthogonalen Funktionensysteme [On the theory of orthogonal function systems]. Mathematische Annalen, 69, pp. 331-371.
Language: English
Page range: 9 - 15
Published on: Feb 6, 2015

© 2015 Soňa Duchovičová, Barbora Zahradníková, Peter Schreiber, published by Slovak University of Technology in Bratislava
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.