Record created 2019-06-25; last updated 2024-04-20. Language: Russian.
ISBN: 978-5-369-02011-1
Classification: Общие вопросы математических и естественных наук (УДК 50); Кибернетика (ББК 32.81); Вычислительная техника (ББК 32.97); Компьютерные и информационные науки (ОКСО 02.07.01; 02.06.01); Физико-математические науки (ТБК 61); Искусственный интеллект (ГРНТИ 28.23)
Author: Шумский, Сергей Александрович (Московский физико-технический институт (государственный университет))
Title: Машинный интеллект. Очерки по теории машинного обучения и искусственного интеллекта: монография
Published: Москва: ООО "Издательский Центр РИОР", 2019
Extent: 340 p.
Abstract: This book is about the nature of mind, both human and artificial, from the standpoint of machine learning theory. Its focus is the problem of creating strong artificial intelligence. The author shows how the working principles of our brain can be used to build an artificial psyche for robots. How will this increasingly powerful artificial intelligence fit into our lives? What awaits us in the next 10-15 years? And what should those who want to take part in the new scientific revolution, the creation of a science of mind, be working on?
Keywords: машинное обучение, искусственный интеллект
DOI: 10.29039/02011-1
Note: There is an electronic copy.
Online access: riorpub.com