Logical-Combinatorial Approaches in Dynamic Recognition Problems
DOI: https://doi.org/10.51408/1963-0063

Keywords: Classification, logical-combinatorial approach, supervised reinforcement learning

Abstract
The research objective is a pattern recognition scenario in which, instead of classifying objects into the classes given by the learning set, the algorithm aims to move all objects into a single, so-called "normal" class. Given the learning set L, the class K0 is called "normal", and the remaining l classes K1, K2, ..., Kl of the environment K are called "deviated". The classification algorithm is intended for recurrent use in a "classification, action" format: an action Ai is defined for each "deviated" class Ki, and applied to an object x ∈ Ki, it produces an updated object Ai(x). The goal is to construct a classification algorithm A that, applied repeatedly (a small number of times) to the objects of L, moves these objects (and, correspondingly, the elements of K) into the "normal" class. In this way, the static recognition setting is transferred to a dynamic domain.
This paper continues the discussion of the "normal"-class classification problem, its theoretical foundations, possible use cases, and the advantages of logical-combinatorial approaches for solving such dynamic recognition problems. Connections to related topics such as reinforcement learning and recurrent neural networks are also briefly outlined.
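The "classification, action" loop described in the abstract can be sketched as follows. This is a minimal illustration only: the names `classify`, `actions`, and `normalize`, and the toy integer objects, are hypothetical stand-ins, and the paper's actual logical-combinatorial algorithm A is not reproduced here.

```python
# Sketch of the recurrent "classification, action" loop (hypothetical names).
# classify(x) returns a class index: 0 for the "normal" class K0,
# i >= 1 for a "deviated" class K_i. actions[i] plays the role of A_i.

def normalize(x, classify, actions, max_steps=10):
    """Repeatedly classify x; while it falls into a deviated class K_i,
    apply the corresponding action A_i and re-classify the update."""
    for _ in range(max_steps):
        k = classify(x)
        if k == 0:              # x has reached the "normal" class K0
            return x, True
        x = actions[k](x)       # A_i(x): the action updates the object
    return x, False             # not normalized within the step budget

# Toy illustration: objects are integers, "normal" means x == 0,
# and the two deviated classes (positive/negative) have actions
# that move the object one step toward 0.
classify = lambda x: 0 if x == 0 else (1 if x > 0 else 2)
actions = {1: lambda x: x - 1, 2: lambda x: x + 1}
print(normalize(3, classify, actions))   # → (0, True)
```

The loop terminates as soon as the object is recognized as "normal"; the `max_steps` bound reflects the requirement that the algorithm succeed in a small number of applications.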
References
M. M. Bongard, The Problem of Recognition, Moscow, Fizmatgiz, 1967 (in Russian).
F. Rosenblatt, “The perceptron: A probabilistic model for information storage and organization in the brain”, Psychological Review, vol. 65, no. 6, pp. 386–408, 1958.
Yu. I. Zhuravlev, Selected Scientific Works, Moscow, Magistr, 1998 (in Russian).
V. N. Vapnik and A. Ya. Chervonenkis, Theory of Pattern Recognition, Moscow, Nauka, 1974 (in Russian).
M. Mohri, A. Rostamizadeh and A. Talwalkar, Foundations of Machine Learning, MIT Press, 414 p., 2012.
I. Goodfellow, Y. Bengio and A. Courville, Deep Learning (Adaptive Computation and Machine Learning Series), MIT Press, 2016.
A. G. Arkadev and E. M. Braverman, Teaching a Machine to Recognize Patterns, Moscow, Nauka, 1964 (in Russian).
M. L. Minsky and S. A. Papert, Perceptrons: An Introduction to Computational Geometry, Cambridge, MIT Press, 1969.
A. B. Novikoff, “On convergence proofs on perceptrons”, Symposium on the Mathematical Theory of Automata, Polytechnic Institute of Brooklyn, vol. 12, pp. 615–622, 1962.
L. Aslanyan and J. Castellanos, “Logic based pattern recognition ontology content (1)”, Proceedings iTECH-06, Varna, Bulgaria, pp. 61-66, 2006.
L. Aslanyan and V. Ryazanov, “Logic based pattern recognition ontology content (2)”, Information Theories and Applications, vol. 15, no. 4, pp. 314-318, 2008.
Q. Leng, H. Qi, J. Miao, W. Zhu and G. Su, “One-class classification with extreme learning machine”, Mathematical Problems in Engineering, Hindawi Publishing Corporation, vol. 2015, 11 pages, 2015.
L. A. Aslanyan, “The discrete isoperimetric problem and related extremal problems for discrete spaces”, Problemy Kibernetiki, vol. 36, pp. 85–128, 1979 (in Russian).
L. A. Aslanyan, “On a recognition method based on the separation of classes by disjunctive normal forms”, Kibernetika, vol. 5, pp. 103–110, 1975 (in Russian).
V. V. Ryazanov, “Logical regularities in pattern recognition (parametric approach)”, Computational Mathematics and Mathematical Physics, vol. 47, no. 10, pp. 1793–1808, 2007 (in Russian).
Yu. L. Vasilev and A. N. Dmitriev, “A spectral approach to the comparison of objects characterized by a set of features”, Doklady AN SSSR, vol. 206, no. 6, pp. 1309–1312, 1972 (in Russian).
T. Dietterich and G. Bakiri, “Solving multiclass learning problems via error-correcting output codes”, Journal of Artificial Intelligence Research, vol. 2, pp. 263–282, 1995.
Yu. I. Zhuravlev, V. V. Ryazanov, L. H. Aslanyan and H. A. Sahakyan, “On a classification method for a large number of classes”, Pattern Recognition and Image Analysis, vol. 29, no. 3, pp. 366–376, 2019.
Yu. I. Zhuravlev, V. V. Ryazanov, L. H. Aslanyan and H. A. Sahakyan, “Comparison of different dichotomous classification algorithms”, Pattern Recognition and Image Analysis, vol. 30, no. 3, pp. 303–314, 2020.
A. Arakelyan, L. Aslanyan and A. Boyajyan, “High-throughput gene expression analysis concepts and applications”, Sequence and Genome Analysis II – Bacteria, Viruses and Metabolic Pathways, iConcept Press Ltd, USA, pp. 71–95, 2013.
A. I. Dmitriev, Yu. I. Zhuravlev and F. P. Krendelev, “On mathematical principles of the classification of objects and phenomena”, Diskretny Analiz, Novosibirsk, IM SO AN SSSR, issue 7, pp. 3–17, 1966 (in Russian).
M. N. Vaintsvaig, “The pattern recognition learning algorithm ‘Kora’”, in Pattern Recognition Learning Algorithms, V. N. Vapnik, Ed., Moscow, Sovetskoe Radio, pp. 110–116, 1973 (in Russian).
Yu. I. Zhuravlev and V. V. Nikiforov, “Recognition algorithms based on the computation of estimates”, Kibernetika, no. 3, 1971 (in Russian).
L. Aslanyan, V. Ryazanov and H. Sahakyan, “Testor and logic separation in pattern recognition”, Mathematical Problems of Computer Science, vol. 44, pp. 33-41, 2015.
L. Aslanyan, V. Ryazanov and H. Sahakyan, “On logical-combinatorial supervised reinforcement learning”, International Journal “Information Theories and Applications”, vol. 27, no. 1, pp. 40-51, 2020.
Z. Zhang, “Reinforcement learning in clinical medicine: a method to optimize dynamic treatment regime over time”, Annals of Translational Medicine, vol. 7, no. 14, pp. 1-10, 2019. doi: 10.21037/atm.2019.06.75
G. L. Miller, “The revolution in graph theoretic optimization problems”, In Proceedings of the 27th ACM on Symposium on Parallelism in Algorithms and Architectures, SPAA 2015, Portland, OR, USA, June 13-15, pp. 181, 2015.
G. Neu and C. Szepesvári, “Apprenticeship learning using inverse reinforcement learning and gradient methods”, In: Proc. 23rd Conf. Uncertainty in Artificial Intelligence, pp. 295-302, 2007.
C. C. Aggarwal, C. Chen and J. Han, “The inverse classification problem”, Journal of Computer Science and Technology, vol. 25, pp. 458-468, 2010.
R. Sutton and A. Barto, Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA, 1998.
V. Gimenes, L. Aslanyan, J. Castellanos and V. Ryazanov, “Distribution functions as attractors for recurrent neural networks”, Pattern Recognition and Image Analysis, vol. 11, no. 3, pp. 492–497, 2001.
License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.